
It's very hard to find people with both deep domain knowledge and deep math/statistics knowledge, in the same way that it's often hard to find people with deep programming knowledge and deep business knowledge.

We solve the latter problem by having business analysts or product managers who "get" the technology enough to provide direction, even if they wouldn't be effective implementing it themselves. I think there's a next phase where, as we try to do data science at scale, we look for a similar role that deeply understands the business and knows enough about the analytical techniques to define the problem and work with a team of specialists to figure out the best analytical approach.

People talk about data science teams being cross-functional - with programmers, data engineers, data scientists, and designers - but we always leave out the role for someone with deep business expertise and shallow but meaningful data science expertise.


As the author of the OP, I must say this is very well put. Part of the problem is that there is no fixed 'role' for the person with the 'deep business expertise and shallow but meaningful data science expertise'. In my experience, it could be a bunch of different people. When I was in a network security startup, this expert would typically be a malware analyst. In other companies, depending on the project, it could be someone from Product, Sales or Marketing. Similar to designers, a data scientist is expected to figure out who the main stakeholders are and get them engaged in the process, instead of the business stakeholder being part of the data science team per se.


The best business practices to emulate come from successful companies in boring and fiercely competitive industries, where you can see whether those practices are really making a difference. When a company has a single massively profitable product that's mostly protected from competition, it's hard to know if anything they're doing makes sense at all. But boring companies don't make exciting Fast Company headlines.


I think this is a fairly pervasive, semi-conscious but unspoken feeling throughout many marketing and communications measurement organizations.


>>> the change is mostly about moving this risk (and potential reward related to it).

This is exactly right! And the theory behind it is actually one of the papers behind this year's economics Nobel Prize.

Different billing models aren't about paying for outcomes - they're about how to incentivize and compensate when 1) there is uncertainty about the effort and ability required, 2) there is uncertainty about the degree to which effort and ability will lead to outcomes, and 3) outcomes are costly or difficult to measure.

Different ways of billing are really about who takes on the risk (and gets both the upside and the downside), as well as how to align incentives between the service provider and the recipient. Oftentimes risk-sharing and incentive-alignment are in conflict with each other, which is why this isn't an easy topic.
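
To make the tradeoff concrete, here's a minimal sketch of the standard linear-contract model from that literature (Holmstrom-Milgrom); the quadratic cost, normal noise, and notation below are textbook simplifications I'm assuming, not anything taken from the prize papers themselves:

    % The provider picks effort e at private cost e^2/2; the client
    % only observes a noisy outcome x = e + eps, eps ~ N(0, sigma^2),
    % and pays a linear fee:
    \[ w = \alpha + \beta x \]
    % beta = 0: pure fixed fee (hourly-style billing) -- the provider
    %           bears no outcome risk but also has no outcome incentive.
    % beta = 1: pure pay-for-outcomes -- full incentive, but the
    %           provider bears all the measurement noise.
    % For a provider with risk-aversion coefficient r, the optimal
    % slope balances the two:
    \[ \beta^{*} = \frac{1}{1 + r\sigma^{2}} \]
    % The noisier or costlier outcomes are to measure (large sigma^2),
    % the further the optimal contract moves away from outcome-based
    % pay -- exactly the risk-sharing vs. incentive-alignment tension
    % described above.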


This echoes some of my recent frustrations with customer success organizations to a T. I'll also add in that many customer success organizations feel like the Post-Sell Upsell Sales Team rather than the Make My Customer Successful Team (upsells might be a goal, but it shouldn't feel that way to customers).

At most SaaS companies, customer success is designed as a slightly-better support function, rather than a value-added consultative function. This is actually an evolution from the old model, where you'd have a more expensive professional services function that accompanied enterprise software purchases, usually because the implementation itself required a great deal of technical sophistication that the cloud has made obsolete.

Customer success managers tend to be lower paid than consultants who have domain expertise and strategic thinking skills. They also typically handle a much larger client load, which makes it hard to invest time in relationships, and have automated their work to the point of annoyance, which makes it hard for them to individualize.

On the other hand, it's not always such a great idea for a company, especially a SaaS company looking to make an exit, to have their own professional services function, let alone a paid services function. Consultants are expensive and have much lower margins than software. They also add headcount and bring down valuations when it's time to sell the business. That's why most software companies build a partner ecosystem around their software, rather than trying to do it in house.

Based on my experience, there are a handful of customer success organizations that get this right. But as a discipline, customer success is still meandering around trying to figure out what it really is. At most companies, the legacy is in a customer support function, not a professional services, consultative sales, or account management function. So that's the level of service you get. I'll be interested to see if they respond to feedback like the poster's and evolve towards a more consultative model.


In my experience "expertise in making sense of data" is only one piece of the puzzle, and often not even the most important one.

Domain expertise is hugely important in making sense of data. Self-service allows domain experts to quickly look at data themselves. They may have to learn skills in data sense-making, but the data expert will have to learn about the specific domain (often much harder).

I'm noticing that more and more people in a variety of fields have at least a passable understanding of how to make sense of data. For quick questions, self-service access to data makes the process much faster with little risk.

I've been in organizations that tried to put data behind gatekeepers who would protect users from making mistakes. In those cases, we made a lot more mistakes because not enough analysis was done, or people didn't have access to data.

I've been in other organizations where we let everyone look at the data. Sure, some people made mistakes, but we used that as an opportunity to teach.

If I had to bet on which type of firm would win, I'd bet on the latter. I'm deeply skeptical of the promises made by BI vendors, but self-service analytics isn't one of them.


As a user more than an engineer, most of the apps I use have 'one feature,' but the one feature that's important to me is different than the one feature that's important to other people. My 'one feature' may also be a unique combination of smaller features that, when brought together, solve one very important problem for me, and some different combination of smaller features in the same tool will solve a different problem for someone else.


Agreed. When I seek out a new program, I usually have some specific goal that my current software doesn't cover, which usually means that it's something fairly "out there", and not even necessarily a core feature of whichever software I find.

Still, a program needs a guiding light to its design. It has at least one purpose, and that purpose should be the central focus of its design. That doesn't mean there can't be ancillary features, of course.


There's actually a lot of research about confidence (and an entire pop-psychology book on the subject).

The basic finding is similar to what the article says: too much confidence is bad because it makes you miss things, but so is too little confidence if it makes it hard for you to do anything. The trick is calibrating confidence and how you react to under/over-confidence.

In a situation of low confidence, you want to take action to grow your confidence by getting feedback, learning new skills, or collaborating to get new ideas. When you have too much confidence, you need to figure out what you're missing or get a more realistic perspective on where you are.

For better or worse, there's also social value in projecting confidence (not arrogance, but confidence) in certain situations.


I feel like my confidence is usually in the right balance. I have enough to feel like I can build something, but not so much that I refuse to test and check that it works. On new projects where my confidence is lower, my first few PRs can take me forever as I examine every little detail for correctness.


Sounds like a lot of optimization problems when training neural networks.


"2. Internal employees - Stack Overflow said this has been available internally for a bit, but when employees find out what others are making they are inclined to compare their own efforts/abilities vs others. It can lead to people either asking for raises to match their co-workers, or perhaps feeling slighted and seeking other employers."

One nice thing about being transparent and consistent with salaries is that you can have an objective conversation with someone about the reasons that they're making less, versus having to rely on vague, irrelevant, or harmful explanations like "he was making more at another company," or "he negotiated harder." If someone thinks they should be making what another developer is being paid, they need to make the case based on clearly laid out criteria.

There's no compensation system that makes everyone happy, and there shouldn't be. You want a system that leaves people knowing where they stand, what it takes for them to make more, and management that encourages them to grow into that amount.


>One nice thing about being transparent and consistent with salaries is that you can have an objective conversation with someone about the reasons that they're making less, versus having to rely on vague, irrelevant, or harmful explanations...

Those objective conversations are a great benefit of salary transparency in theory, and I can't imagine you can open up your numbers without being at least somewhat prepared for those conversations to take place. I'd be curious whether companies that provide salary transparency end up scrambling to make salary adjustments in the days before the data becomes public.

The issue lies in the fact that developer contribution is more than just commits and LOC stats, and it's tough to measure objectively. You're likely to get into some rather vague explanations, even if they aren't as nefarious as negotiating ability or salary history.

It's a step in the right direction, though; until we have clear and widely accepted methods of measuring contribution, we'll still have disagreements on individual employee value.


It sounds a bit too idealistic. How about if someone is hired at a higher rate because "well, we needed someone with his skill set ASAP, so we agreed to pay him a premium" (but we can't afford to pay everyone at that rate), or "we desperately needed someone quickly with skill set X and he was the only person available"?

And sometimes comparing people and their skill sets is really apples to oranges. If one guy is an expert on some very specific topic, and that's important for your business (and he is therefore an expensive hire), it doesn't mean you should create an incentive for other employees to learn his skill set (maybe you only need one statistician or expert in COBOL or whatever).

"There's no compensation system that makes everyone happy"

Sure, pay people way above market rate and don't allow them to compare wages. They won't feel ripped off and they won't develop a sense of inadequacy.


> Sure, pay people way above market rate and don't allow them to compare wages

That sounds illegal. Can you really not allow your employees to compare wages?


In the US at least (I can't speak for elsewhere), preventing this is illegal. Companies discourage it through various ways and it's practically an embedded culture thing at this point throughout the country. But: you cannot legally prevent it.

The National Labor Relations Act of 1935 is what you're looking for here - it provides employee protections for such discussions.


Yeah, it's interesting. They can't prevent it, but if you do it, you can guarantee you'll be fired. They'll just eliminate your position for some other reason.


You don't want to work for a company that operates that way anyway, if you can avoid it.


No, you cannot.


I don't know about the comparing bit, but paying way above market rate doesn't even seem to work. I read some Netflix reviews on Glassdoor saying that people inflate their standard of living to match their high income and become dependent on it. The anxiety and stress this causes has a negative impact on the culture. Totally ridiculous that people screw themselves over like this, IMHO. :-(


Money doesn't buy you happiness, but at some point you prolly stop blaming it on your employer haha


What sucks about this is that the people who can't handle the high salary ruin it for everyone.


"He's more skilled than you" sounds far more toxic than either of those "bad" reasons you gave as examples.


>> Good HR means three things: a clear management structure, a way for people to talk about workplace issues and concerns, and pathways for people to evolve in their careers.

I've seen a number of startups that think the only part of HR where they need to invest resources is recruiting, right up to the point where the wheels start coming off. Once you're feeling the pain points - disgruntled employees, people leaving, communication problems, etc. - it takes a lot more time to get things working again.

It's hard because these issues aren't the things recruiters are good at solving, so startups need more specialized knowledge, but at 20-50 people they don't have the capacity to bring on the right expertise.

