
"One key: continued experimenting with AI, even if an initial project doesn’t yield a big payoff. The authors say the most successful companies learn from early uses of AI and adapt their business practices based on the results. Among those that did this most effectively, 73 percent say they see returns on their investments. Companies where employees work closely with AI algorithms—learning from them but also helping to improve them—also fared better, the report found.

“The people that are really getting value are stepping back and letting the machine tell them what they can do differently,” says Sam Ransbotham, a professor at Boston College who coauthored the report. He says there is no simple formula for seeing a return on investment, but adds that “the gist is not blindly applying” AI to a business’s processes."

As Robin Hanson puts it, 'automation as colonization wave'. Just like electricity or computers or telephones or remote working: they always underperform initially because of the stickiness of organizations and bureaucracies.

Per Conway's law, no organization wants to reorganize itself to use a new technology like software, they want to make the new technology an imitation of itself, old wine in 5% more efficient new skins. It takes either intense near-death-experiences (see: remote working/corona) to force a shift, or starting up new businesses or units to be born-digital.

Those who undergo the birth trauma, like Google, are sitting on a river of AI gold and putting AI into everything; those who fail will whine on surveys about how AI is a scam and overhyped and an AI winter will hit Real Soon Now Just You Wait (it will pop any second now, just like how the media has regularly reported since 2009 about how the Big Tech bubble would pop)...



>those who fail will whine on surveys about how AI is a scam and overhyped and an AI winter will hit Real Soon Now Just You Wait

Isn't this very similar to the logic you used a paragraph above, though, when you spun it as "all great things have birth problems, just wait"? The born-again AI company rhetorically sounds more like conversion to a Christian cult than a business strategy.

This issue of huge promises of the digital revolution followed by very meagre productivity gains has actually played out not just in 'AI' but in a lot of sectors over the last three decades at this point.

Even for self-proclaimed AI companies like Google, how much of Google's financial bottom line is this new-agey wave of AI, and how much of it is PageRank, a ton of backend engineering, and selling ads?


Google's search results and ad targeting are now entirely powered by machine learning. So basically all of their revenue is now from ML. PageRank is ancient stuff.


>all of their revenue is now from ML. PageRank is ancient stuff.

Seems like they would be making the same money either way, so the ML advantage might not be the major factor.

Conway's Law really does seem to show here.

If you replace occurrences of "AI" in the article with "Natural Intelligence", you get what so many companies really need to implement more of beforehand.

That way any decision-making that is delegated to AI afterward will not have the same limitations that the organization already had.

You usually don't want to bake a business model's anti-growth patterns into an even more opaque and unchangeable system.

Problem is, deep application of the NI is going to get you most of the way you want to go business-wise; after that, the AI might be even more difficult to justify.

On the industrial side, with unique & complex equipment & data where an operator gains skill through familiarity with both, a machine can be trained to acquire some of that skill.

But there will always need to be someone better trained than the machine in order to get the most out of the machine.

The best investment is often going to be a system that leverages their ability, rather than one that tries to operate without them at all.


I think that Google would be a memory without them aggressively developing and leveraging new tech (AI) for search and other adjoining services (Maps, Translate, AdWords).

AdWords exploits automated auctions (from multi-agent systems; see the toy sketch after this list)

Maps uses planning

Translate uses deep networks

Search uses ??? (dunno, but it ain't MapReduce for sure)
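
Since the auction point is the most concrete of these, here's a toy sketch of the generalized second-price (GSP) mechanism AdWords popularized: bidders are ranked by bid, and each winner pays the next bid down. Real ad auctions also weight bids by predicted click-through rate; the function name and example numbers here are purely illustrative.

    # Toy generalized second-price auction: rank by bid, pay the next bid down.
    def gsp(bids, slots):
        """bids: {advertiser: bid}; returns [(advertiser, price_paid)] per slot."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        results = []
        for i in range(min(slots, len(ranked))):
            winner = ranked[i][0]
            # Winner pays the bid of the next-ranked advertiser (0 if none).
            price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
            results.append((winner, price))
        return results

    print(gsp({"a": 4.0, "b": 2.5, "c": 1.0}, slots=2))
    # -> [('a', 2.5), ('b', 1.0)]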


"Googles search results and ad targeting are now entirely powered by machine learning. So basically all of their revenue is now from ML. PageRank is ancient stuff."

I wonder if that is the reason I feel Google search has gotten so much more dull.


And yet people try other search engines and consistently come back to Google. Because they are good at learning from data.


The born-again AI company rhetorically sounds more like conversion to a Christian cult than a business strategy.

Why a Christian cult? How is AI hype more related to Christianity than any other cult?


The Taiping Heavenly Kingdom.

It's just cultish insanity that also happens to be one of the bloodiest civil wars ever.


Because the born-again stuff made me think of Evangelicals, but I guess if you want to go deep into this, I think there are actually a lot of unironic parallels. The singularity is kind of like the rapture for nerds, AI people get weirdly dualistic about uploading their minds to clouds, and the overlap between SV capitalism and Protestantism goes back to Weber. I think it was Charles Stross who also pointed to similarities between Russian Orthodox Cosmism and Western sci-fi. Secular Western people in particular get weirdly religious about AI, like some sort of psychological replacement.


The US started with puritanical Protestants.


I'd posit that those are all coincidental, and certainly not an exhaustive survey of cults or cult-like practices, Christian or otherwise.


For that matter, it seems that the analogy is to Evangelicals, who aren't generally considered to be a cult. They're too numerous, and while they do have many charismatic leaders, a cult usually has one particular top leader.

They do, however, practice the zeal of the converted, which is, I think, what they were really aiming at. And that by itself would qualify as "cultlike" by some definitions.


It's because the ML solutions to the optimization problems relevant to most businesses are garbage. Look at the attempts to use RL to solve the traveling salesman problem with just 100 nodes (a problem pretty much solved by the operations research community) [1].
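
For scale, a 100-node instance is small enough that the classical toolchain handles it almost instantly. A minimal sketch using Google's own OR-Tools routing solver (assumes "pip install ortools"; the random instance and the 5-second time limit are illustrative):

    import random
    from ortools.constraint_solver import pywrapcp, routing_enums_pb2

    random.seed(0)
    points = [(random.random(), random.random()) for _ in range(100)]

    def dist(i, j):
        (x1, y1), (x2, y2) = points[i], points[j]
        # The routing solver works on integer arc costs, so scale up.
        return int(1000 * ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5)

    manager = pywrapcp.RoutingIndexManager(len(points), 1, 0)  # 100 nodes, 1 vehicle, depot 0
    routing = pywrapcp.RoutingModel(manager)

    def cost(from_index, to_index):
        return dist(manager.IndexToNode(from_index), manager.IndexToNode(to_index))

    transit = routing.RegisterTransitCallback(cost)
    routing.SetArcCostEvaluatorOfAllVehicles(transit)

    params = pywrapcp.DefaultRoutingSearchParameters()
    params.first_solution_strategy = routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC
    params.local_search_metaheuristic = routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH
    params.time_limit.FromSeconds(5)

    solution = routing.SolveWithParameters(params)
    print("tour cost:", solution.ObjectiveValue())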

In other applications, like forecasting, there is so much hype, and when you put the (non-interpretable) solutions to the test, you end up disappointed versus a pure stats approach.
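
To make the "pure stats approach" concrete, a minimal sketch of two interpretable baselines that fancier forecasting models are routinely benchmarked against, seasonal-naive and Holt-Winters exponential smoothing (assumes statsmodels is installed; the synthetic monthly series is illustrative only):

    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(0)
    t = np.arange(120)
    # Synthetic monthly series: trend + yearly seasonality + noise.
    series = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 120)
    train, test = series[:108], series[108:]

    # Seasonal-naive: just repeat the last observed seasonal cycle.
    naive_fc = train[-12:]

    # Holt-Winters: additive trend + seasonality, a handful of interpretable parameters.
    hw = ExponentialSmoothing(train, trend="add", seasonal="add", seasonal_periods=12).fit()
    hw_fc = hw.forecast(12)

    print("seasonal-naive MAE:", np.mean(np.abs(test - naive_fc)))
    print("Holt-Winters MAE:  ", np.mean(np.abs(test - hw_fc)))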

There are of course fields where ML is pretty much the only viable solution, but blaming corporate bureaucracy for its failures in other fields does not help.

[1] https://research.google/pubs/pub45821/


At this point another AI winter seems impossible. AI has such a strong foothold in functionality core to big tech products, like voice recognition and image manipulation, that it wouldn't make sense for those companies to give up on the whole paradigm like what happened in the '80s.


While I don't think we'll have another AI winter for the reasons you point out, I do expect a large contraction in the AI market where a few of the successful companies in AI dominate and you won't see people throwing AI at problems that don't need it anymore.

In the 90s every washing machine had "fuzzy logic"; it was the hyped new thing. Of course there are legitimate applications of fuzzy logic, but you don't have to apply it everywhere. The hype quickly died down once people noticed this fact.
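
For the curious, a toy sketch of roughly what those controllers amounted to: triangular membership functions over a sensor reading, and a weighted-average defuzzification into a wash time. The variable names and rule outputs are made up for illustration.

    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def wash_time(dirtiness):  # dirtiness sensor reading in [0, 100]
        low = tri(dirtiness, -1, 0, 50)
        med = tri(dirtiness, 0, 50, 100)
        high = tri(dirtiness, 50, 100, 101)
        # Defuzzify: weighted average of each rule's output (10/30/60 minutes).
        return (10 * low + 30 * med + 60 * high) / (low + med + high)

    print(wash_time(70))  # moderately dirty load -> 42.0 minutes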

Right now we're in the phase of the hype cycle where clueless managers ask their engineers to apply AI to anything, because they read about the AI revolution every day and fear being left behind. But some problems have good non-AI solutions where AI won't reap much benefit, while the data collection, the experts, the development time, and the restructuring necessary to become an AI company all cost a lot.


> Those who undergo the birth trauma, like Google, are sitting on a river of AI gold and putting AI into everything; those who fail will whine on surveys about how AI is a scam and overhyped and an AI winter will hit Real Soon Now Just You Wait

I wouldn't describe Google as sitting on a river of AI gold.

It is using AI to make more gold from the same mine: instead of the obvious gold, it scratches and extracts gold from the dirt and rocks. But it's still working that same mine.

And that's exactly what AI is: garbage in, garbage out. I used to be a machine learning engineer at a company about as old as the dinosaurs. They wanted to apply that gold-digging AI machine Google had to a landfill, which didn't quite pan out the way they liked.


Not a machine-learning engineer per se, but I work with some machine-learning technologies at Google. Perhaps that's when you know that AI has truly become embedded in the fabric of a company: when it's become just another tool that ordinary engineers are expected to know and use to solve problems.

Anyway, I think AI is a sustaining innovation, not a disruptive innovation. It makes existing businesses work better, but it doesn't create new markets where none previously existed like the Internet did. Google makes a ton of money off AI; the core of the ads system is a massive machine-learning model that optimizes ad clicks like no human could. But that only works because they already have the traffic and the data, which they got by positioning themselves as the center of the Internet when it was young.

I do agree that companies need to adjust their processes to take full advantage of AI rather than expecting it to be a magic bullet, but I don't know if I really think "birth trauma" is the right metaphor. More like adolescence; it's a painful identity shift, and some people never successfully make the leap. Those who can't don't die, though, they just become emotionally stunted adults that never reach their full potential.


Could you elaborate more on the widespread use of AI in Google?

Does search actively use AI as well? Is it fully dependent on an NN, without manual algorithms?


Search was pretty actively against AI usage at the time I left it in 2014. Much of this was because of Amit Singhal, though: he had a strong belief that search should be debuggable, and if there was a query that didn't work you should be able to look into the scores and understand why. There were AI prototypes that actually gave better results than the existing ranking algorithms, but weren't placed into production because of concerns on maintainability. I have no idea if this changed after Amit left.

I work on Assistant now, since recently rejoining Google, and it uses AI for the usual suspects: speech and NLP.


> it uses AI for the usual suspects: speech and NLP.

Is there a place that doesn't do that? That's an entry requirement, right?


Yes, which is why I feel comfortable revealing that Assistant uses AI for speech and NLP. ;-)


Google uses a ton of AI because they have a ton of data that's easily categorized and has clear success criteria. Without those things, it's doubtful that AI would've done jack squat for Google.


References on Hanson/electricity/computers: https://www.gwern.net/newsletter/2020/06#automation-as-colon...


I have a whole smattering of thoughts based on the article and your comment...

From the article: "The authors say the most successful companies learn from early uses of AI and adapt their business practices based on the results."

This sounds exactly like the advice that ERP companies used to give organizations: don't try to customize the ERP; adapt your business to the out-of-the-box best practices. Those who don't often face huge costs and pain trying to adapt the tool to the business, rather than the other way around.

Interestingly, this might say more about organizational culture that encourages adaptation than anything else. It has been said (I think Stephen Hawking may have been the one) that "adaptability is intelligence" [1].

From your comment: "Per Conway's law, no organization wants to reorganize itself to use a new technology like software, they want to make the new technology an imitation of itself, old wine in 5% more efficient new skins. It takes either intense near-death-experiences (see: remote working/corona) to force a shift, or starting up new businesses or units to be born-digital."

Conway's Law is about how the design of an organization's information systems often mirrors its communication patterns. I have seen this myself to be quite realistic, anecdotally... although, isn't this really kind of sad in a way? First of all, I see it used as an excuse to write off groups of people as "legacy" and "outdated" - for example, digital transformation seems to be as much about a generational shift as a technological one. Or perhaps it is just showing the way - change culture/communication - value adaptability - to change systems.

Laws were meant to be broken, so why the belief that because of Conway's law, it always has to be this way, or that Conway's law is a constraint that cannot be broken? Certainly, we won't change things if we aren't willing to adapt.

Second, regarding the recognition of it taking near-death experiences to force change - doesn't that seem true for the broader human population, not just organizations? For that matter, why do we seem to be so stubborn and unable to flip these odds in our favor? That part is frustrating.

Next, on starting new businesses or units to be "born-digital" - of course, I understand why people think this way, why it probably works, and why it is a go-to strategy. But some part of me feels this is part of disposable culture, that it is partly exactly what is wrong with how we approach things - instead of incremental improvement, we burn things down or start from scratch. Because we can't adapt the designs we implement, we have to start all over... and it seems like a wasteful exercise. Where is our logical "exnovation" (the opposite of innovation)?

Lastly, about the gap between those who become "AI-enabled" companies and actually achieve success, and those who do not. In full recognition of the reality that, when it comes to technology, those who do not get with the times often get left behind - this gap worries me more. I'm thinking here of intellectual property, and that the best AI models will most likely always be locked up and controlled by the profit motive - and thinking, what then?

I suppose this is why some thinkers are so worried about an AI-enabled future (e.g., Elon Musk or Stephen Hawking) - it's not the positive potential that scares them, it's what happens to humans [2].

[1] http://www.actinginbalance.com/intelligence-is-adaptability/

[2] https://www.washingtonpost.com/news/innovations/wp/2018/04/0...



