I hope one day we can drop the corporate facade so these blog posts can be two to three sentences of "Our company sucked. Microsoft wanted to give me a fuck ton of money. I decided to take it. Some other guy is in charge now. Good luck!"
Does getting acqui-hired by MS imply your company sucked?
There are things I like about where I’m at but if MS really wanted to they could send enough dump trucks full of cash to ruin the local traffic situation. Is it worth staying at a place if you can’t even get out of the driveway to get to your favorite little cafe?
It means your people are good and your product isn’t. That requires a kind of cognitive dissonance that some people would call “suckage”. Certainly had that conversation many, many times.
Just be careful you aren’t Boeing buying McDonnell Douglas, where the cutthroat culture infected the host and killed it, rather than being muted and managed out.
> startup culture injected into Microsoft is a good thing
Companies like Microsoft have consumed countless startups and startup employees over the decades and are practiced at digesting them without being changed all that much.
The prompt "summarise this in 2-3 sentences in the least charitable way possible" returned the following on chat.mistral.ai
>Inflection is changing its business model again, now trying to sell its AI services to commercial clients after realizing people can't replicate their AI model. They're losing two co-founders to Microsoft and bringing in some new CEO to run the show. They claim nothing will change for users, but who knows what will happen with all these changes going on.
Pi (their chat bot) is pretty nice to talk to. It’s really good with the whole para-social aspect of chatting without being weird. Is that useful? Maybe for research, probably less so for a direct product. Microsoft can probably get something useful from that.
More importantly though, Microsoft is now hosting a whole suite of LLMs on Azure. This is the lesson they learned after OpenAI had that acute leadership crisis. This is another hedge against OpenAI. I wouldn’t be surprised to see Microsoft start distancing themselves from OpenAI in the future.
Pi is nice to talk to because unlike other chatbots, it always, always, always asks a follow-up question. I'd love to see the system prompt. That's what's driving their claim of high engagement—the bot just keeps drilling you with questions, and for some people, the bait is irresistible.
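Pi's actual system prompt isn't public, but as a rough illustration of the point above, that always-one-follow-up-question habit can be baked in with a single system instruction against any chat-completion API. Everything below (the prompt wording, the model name) is a hypothetical sketch, not Inflection's implementation:

    # Minimal sketch: forcing the "always ask a follow-up question" behavior via a
    # system prompt. Pi's real prompt and models are not public; this is illustrative.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set; any chat-completion backend would do

    COMPANION_PROMPT = (
        "You are a warm, emotionally attentive companion. Keep replies short and "
        "conversational, never lecture, and always end every single reply with exactly "
        "one open-ended follow-up question about what the user just said."
    )

    def chat(user_message: str) -> str:
        # One-turn call; a real companion bot would keep the full message history.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model, not Inflection-2.5
            messages=[
                {"role": "system", "content": COMPANION_PROMPT},
                {"role": "user", "content": user_message},
            ],
        )
        return resp.choices[0].message.content

    print(chat("I had kind of a rough day at work."))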
Not only follow-up questions; it also gives straight answers without the bs: no lecturing the user, no claims that the model is somehow limited, and no flat-out refusals, which are Gemini's specialty.
He has ZERO technical skills. ZERO hardcore STEM background. He was just friends with Demis at the right time and rode that hype train.
At Google he had a similar role, VP of AI products or something, and he contributed nothing (except all the garbage ethics and safety crap that didn't help Google and probably kneecapped it, actually).
AI safety and ethics are a big deal. Nobody wants to buy a big heap of linear algebra. A big heap of linear algebra that is so close to becoming sentient that we have to plan what its ethics should be? Where do I put my money?!?
If you ask engineers to sell linear algebra, they invent things like control systems, which are too tricky to sell.
I have some serious issues with this company's PR.
They say their Inflection-2.5 model is the world's best personal AI[0], which is a dumb claim to make considering it's based on automated benchmarks we know are flawed; and even if you assume automated benchmarks are good enough to award that title, it would be held by dozens of other open-weight models on HF, not Inflection-2.5.
They said their Inflection-2 model was the second best in the world[1] while comparing it to Palm-2, which no one considers close to the best in the world. Again, they based their claim on automated benchmarks, which anyone knowledgeable in the space knows can be gamed and are not representative of actual conversational performance. (Take a look at the LMSYS Arena Leaderboard for a better metric.)
They list other models they consider good while failing to mention or compare to Mixtral 8x7b, the best open model that exists.
And they introduce buzzwords no one in the area uses like IQ and EQ as if they are innovative concepts.
Making big, bold claims without evidence is the exact kind of manipulative PR speak I'd expect from a company with little to no substance.
"After just 3 days of being incorporated and just 2 days of actual work performed, we have now created the best AI on the planet with trillions of active subscribers, per second.
The quality of our product is so high, it is being compared to a perfect diamond the size of the entire solar system. But, we are humble, and while we think our tech is pretty darn good, we know that we can do better. That's why we are introducing v2 in just 2 business days from today, which will render all other forms of intelligence (human and artificial) irrelevant."
It really captures what I felt as I started to read this PR bullshit. I fed it through an LLM to summarize it, and there was no substantive content in that summary.
Lampooning aside, the sobering reality is that a tiny number of people are acquiring O($10^8) wealth (or more) via such shenanigans, and that is the reason we will see a lot more of this.
Dunno, it's pretty damn good and quite useful. I asked Mistral 7b and Claude Opus to count the banned substances on the WADA anti-doping list. Both avoided a straight response, giving back loads of bs. Claude was next to useless. Mistral gave a somewhat acceptable answer, counting substances in every chapter after much persuasion. Inflection's Pi gave me a straight answer to the first question without any bs. I like its casual tone and the fact that it does not use needlessly superfluous language like the GPT models.
Also, and I'm going to put this real bluntly, a "personal AI" isn't worth crap to a lot of people until it's allowed to talk about sex.
I'm not even talking for porn purposes. Sex is a basic and healthy part of most peoples' lives, any number of relationship issues revolve around it, and for plenty of people it's literally their profession, legal or not. Anything that pretends it can 'personally' help a broad spectrum of people while treating sex as verboten is just bullshit.
My firm belief is that there's no moral objection here by the companies involved. They just want to avoid being cut off by credit card processors because porn, and are cowardly enough to act like doing so is somehow protecting users.
They learned that most people can't tell the difference between actual technical concepts and PR speak and it worked to sell NFTs, so now that AI is all the hype, guess what.
They had raised a massive amount, and not from good, patient investors. No traction means Mustafa got fired. That's not surprising, though; what is surprising is that MSFT picked him up. The guy is not technical, is not even visionary, and just got lucky hanging out with Demis. I would think Satya had better taste.
He also left DeepMind because of allegations of bullying employees (https://en.wikipedia.org/wiki/Mustafa_Suleyman). Between that, what you just brought up, and the strange PR blitz he went on to promote his book, I kind of predicted Inflection would run into major trouble well before OpenAI and Anthropic did.
There’s a whole extremely famous book series on this very topic, you might have heard about it. The first volume’s title is ‘Paul is bad’ but it’s more widely known as ‘Dune’.
I actually found the portrayal of Paul in the Villeneuve films to be closer to what Frank Herbert described as his intent in interviews when compared to the book.
That is, sincere with terrible consequences for bystanders. Not bad per se, but not conducive to the common good.
I also felt that the books undermined the warnings of tyranny by leaning on prescience to provide an ends-justify-the-means argument. More so with God Emperor, where the Golden Path seemed to be the ultimate end justifying any act.
Well, as Paul Atreides himself explains in the sequel to Dune, his Fremen Jihad causes the death of over 60 billion people throughout the empire. You can safely guess that the vast majority of them are innocent ordinary people, just not important enough to pause over on the path to grand prophetic dreams. Basically, from the moment he takes power again, he becomes a genocidal monster in charge of an army of murderous fanatics, and then he spreads this right across the empire, well beyond Arrakis itself, where he has already caused all kinds of chaos.
The story is a classic narrative of hoped-for ends being used to justify horrible actual means, but cloaked in space opera.
No, it's in the second book actually (spoiler alert), which covers the time in which Paul's rule has been firmly established following his enormously bloody jihad across the Empire, and the time after his death, after which his son Leto succeeds him and lets himself be symbiotically covered in baby sandworms to turn into the eventual God Emperor of the Imperium, ruling for 3,000 years. (If I remember correctly.)
That would apply probably to most famous entrepreneurs. Whatever I have read about Jobs, Musk, Gates and others, they all are very willing to abuse people to achieve their goals.
It's crazy there's zero accountability for bad behavior in tech. I went through my own story at Google, and seeing them say it was vaguely bad before promoting him to VP mirrors exactly the "intervention" I saw.
The deck is completely stacked against you based on hierarchy. Behavior that a fast food manager would proactively solve in 30 seconds gets ignored in white collar tech. No one above you will even mention it - they know you can't win and they just hope you'll quietly give up.
If someone above you in the informal hierarchy is messing with you, there's massive confirmation bias if you complain. They'll spin it to whoever you complain to, making you the bad guy. HR never helps - their job is to investigate and then hand the results to someone 2-3 steps above you to do something with.
The higher-ups control the outcome, and since they designed the power structure in the first place, their confirmation bias is to accept the spin.
If you want to survive, avoid conflict 100% of the time. Let people blame you, fail reviews undeservedly.
My Google career ended from just doing exactly what I was supposed to do in order to get a 3 year delayed project done, that 4 separate VPs had been asking for all those years. I spent 6 months warning my manager fuckery was afoot. Didn't matter. TPM witnessed and defended me, didn't matter. Guy who led it hired his unqualified childhood buddy to replace me. Didn't matter. All on me. Everyone wanted to do it, and gee whillakers, refulgentis went mad and dropped the ball completely for some reason.
Of course, 6 months later they delayed the project a 4th year because they could, documenting the only downside being a strained relationship with a less influential partner team. (My org's managers didn't realize their...unvarnished...takes were in a doc shared with all of Google.)
At the end of the day, HR will funnel you into taking mental health leave -- 6 months worth, exactly long enough that an EEOC complaint can no longer be filed. (took me 6 years to realize why "disgruntled Google employee" news articles always included a bit referencing leave/6 months off as if it was a bad thing. go/mh-leave if you're at Google. You don't actually need to talk to HR, and I don't recommend going to them ever. I didn't for this, but they wouldn't have helped.)
>Guy who led it hired his unqualified childhood buddy to replace me. Didn't matter. All on me. Everyone wanted to do it, and gee whillakers, refulgentis went mad and dropped the ball completely for some reason.
How is that possible at Google, which should have a hiring committee? Managers aren't allowed to just hire a rando.
Did your director or VP not like you or something? I'm curious if there's more to this story.
Why didn't higher ups care? I didn't bother going up higher than skip. My core interlocutor was my skip's peer's report's report, I didn't expect my skip to go to war over slow-drip white collar bullying. Honestly, I was done and planning my exit once year 3.5 of "not this year" hit, going crying to VPs you see once a month / once a quarter felt insane & would have just devolved to he said/she said.
To your point re: seems like a lot. I worked with a couple counselors at same level as my skip over my last year there. (highly recommend G2G if anyone reading is at Google, kept me sane.) #1 said they dealt with less after kissing their VP's wife at an off-site - which is why I got #2, wasn't sure if that one was too skeezy at first.
What would I have done differently? I was honest the whole time, which didn't help because the fact I wasn't happy and it was escalating was clear. ex. with the hiring friend thing, told my manager that I was shocked and didn't expect that kind of thing at a startup.
How do you hire a friend without domain experience as a manager?
There's a core principle that once you've made it through Google interviews, domain doesn't matter; all Google SWEs will excel. Having loose rules with kind intent is awesome, but they're double-edged. That gives you rationale, and combined with moving recruiters into individual orgs, lets you put your thumb on the scale and bring in who you want. Also, through June 2022, Google was desperate to hire, so this would have been justified as an awesome referral, and what's a friend, anyway?
> lets you put your thumb on the scale and bring in who you want. Also, through June 2022, Google was desperate to hire, so this would have been justified as an awesome referral, and what's a friend, anyway?
Ok, I guess you're saying a manager brought in his friend, who was already hired by google (or passed HC), but they didn't get them hired at Google, right?
>#1 said they dealt with less after kissing their VP's wife at an off-site
Lol please tell me this is an official story I can find somewhere???
Anyway, I've heard Google is extremely slow with firing folks, even ones who are abusive. But I do see people get fired, though usually after many years of a pattern.
> Ok, I guess you're saying a manager brought in his friend, who was already hired by google (or passed HC), but they didn't get them hired at Google, right?
Exactly, you're right, they still went through interviews.
One person’s “politics” is another person’s “demonstrate basic empathy and understand that your point of view is not universal.”
Yes, there are creatures who have no real skills other than navigating political currents. But there are also creatures who can’t understand that technical brilliance is no excuse for utter social cluelessness.
You seem to think that people are either good self-promoters or totally socially clueless. There are a lot of very good people who have normal social skills but aren't good at self-promotion (or just don't want to do it). These people won't go very far in most organizations. Self-promoters without real skills will win over them. The best is to have real skill and also be a good self-promoter.
Of course there's a spectrum here. But when I read someone complain "I did great technical work but POLITICS," there are several possible scenarios:
1. The engineer is reasonable and the people they complain about are craven self-promoters with no real skills
2. The engineer is unreasonable and the people they complain about have normal, regular, business-normative expectations of engineer conduct.
3. Both - the engineer is bad and the self-promoting people are bad.
Your scenario lines up with #1, I think.
I see all three happening. I see many engineers who are gruff, entitled, lack the ability to talk about anything other than their own work, cultivate perceptions of technical status, and seem to actively want to make everyone avoid them. I also see social climbers (#2), but they are easy to spot and not that common in my environment (though others are of course different, including at previous companies).
That's the secret of higher education! Forget about the coursework, forget about the social aspect, forget about exposure to ideas. The actual value of universities is that you keep kids busy during the time when they are potentially destructive, and when they pop out they are now magically older and can handle some basic adulthood-on-training-wheels expectations.
And yet a few hundred years ago we had 14-year-olds in the mines of Cornwall supporting whole families.
The interesting emphasis to me is commercial customers. They are acknowledging that the competition to be the nth gpt vendor is too stiff for them, and they aren’t successful as an independent venture funded research lab.
I can think of 4-10 other large vc funded operations in this boat.
Between this and the Mistral deal (that’s currently under investigation by the EU), Microsoft looks to be really trying to get back into the monopoly business with AI.
I don't know about Inflection, but their new CEO is a great leader who I've worked with in the past. Just having him on board raises their status significantly in my mind.
How does Inflection's "Public Benefit Corporation" legal status play here, and how are the lawyers dancing around that entity type (Delaware registration, by the looks of it)?
"This is why we decided to make Inflection a Public Benefit Corporation (PBC). It means we have a legal obligation to run our AI studio in a way that balances the financial interests of stockholders, the best interests of people materially affected by our activities, and the promotion of our specific public benefit purpose. That purpose is to “develop products and technologies that harness the power of AI to improve human well-being and productivity, whilst respecting individual freedoms, working for the common good and ensuring our products widely benefit current and future generations”.
> https://inflection.ai/an-inflection-point
This space is moving so fast and it’s hard to keep track of SotA. How are management even keeping track? I suppose MSFT has enough cash to buy ‘em all.
What's your source on this? They just very recently reached 100K downloads on Android, and according to various SEO tools, they get maybe ~4M visits per month (and these tend to overestimate; plus it's monthly visits, not DAU).
It would be extremely poor form for founders to intentionally leave their startup at this stage to take another job. In all likelihood they were pushed out by the board.
My guess is that the founders were all in on Pi, their conversational language model, and investors lost confidence in their ability to build a sustainable venture-scale company on the ~4th best AI chatbot.
Could this be related to the recent news that Apple is in advanced stages of choosing Google as its AI provider? Maybe the last hope for Inflection was to become Apple's AI provider and when that fell through it was time to put an end to the misery.
This has only helped their small odds of becoming Apple’s provider.
You can’t serve the LLM for every iOS device in the world without a big pile of inference machines and the only companies that fit that bill are Microsoft, Google, and Meta.
You don't train GPT-3+ models or serve millions of users off Amazon spot instances, that's for sure! And whatever arrangement they have is likely transferrable or salable.
Yeah I’m aware I just didn’t know if they had something in house or leased from an HPC provider (not spot instances - although honestly that sort of thing wouldn’t surprise me from some other company like Stability).
It appears that Inflection AI made no sense to begin with and Pi was quite frankly a performative research demo and didn't generate enough money for the VCs to justify another fundraising round. How is Inflection AI worth $4BN?
What can Pi do that is unique over the best of cloud LLMs and the hundreds of $0 free LLMs out there?
It appears that it is a vehicle for VCs to quickly run this company into the ground for a quick exit, knowing that this company is extremely overvalued.
Probably after this acqui-hire, the value of Inflection AI is now down to its real value of $200M at most.
My guess is that Mustafa wanted to sell to MSFT at 10X but MSFT didn’t want to pay that kind of money. Mustafa was OK with a fire sale but the VCs were greedy. Mustafa then quit in a rage.
The series of events is sketchier and smells worse than the OpenAI / Altman thing. This feels like a corporate war strike / competitor assassination. Pi is AMAZING, and its latest 2.5 model was only on the market for like a week before this shit went down. I say something in the new tech made massive stakeholders at M$ panic and deploy a rapid strike team NOW, before people actually caught wind of how amazing it is.
> As part of this, we’re thrilled to announce that we will now host Inflection-2.5 on Microsoft Azure helping us get it into the hands of creators everywhere.
Hahah. You know, the word 'creator' has truly become co-opted by consumerism and greedy bigcorps. What a lot of posts like this never mention is one of the primary uses of AI: to make more efficient the system of trying to get us to buy even more things we don't need. I mean, who are we kidding? In a well-balanced life, we shouldn't need or even interact with personal AIs. We should slow down and appreciate what we have and seek for simplicity.
If there's too much data to be handled in business, it means business is not going in the right direction, not that we need new tools to handle it.
If we are pressured to write more, then we are creating things not of true value but merely to amuse.
If we feel like becoming more efficient is a good thing, it simply is an extension of the original psychological manipulation of advertising demanding MORE for the industrial machine.
Let's not fool ourselves into thinking this garbage is something good.
Pretty hard agree here, but I think there’s space for a gripe.
Our default state as humans is pretty awful and materially deprived, so I think making technology to improve our conditions is good. Making technology that does powerful things in the real world is difficult partly because of the vast quantity of data involved. If we make powerful AI systems equipped to handle that scale and built to make our lives better, that would be good, but we’re failing to do that, which is the issue.
> Our default state as humans is pretty awful and materially deprived, so I think making technology to improve our conditions is good.
It's not a dichotomy! When I rail against technology, I am not saying that everyone need go back to the stone age. But don't you think there might be a happy medium?
At least personally, I've found that giving up a fair amount of technology in many cases has actually improved life in ways that I didn't think possible. Not everyone needs to be ascetic, but we seem to be headed to a life very tightly integrated with technology, and I think it's arguable that THIS POINT is past the point of that happy medium.
"It is difficult to get a man to understand something when his salary depends upon his not understanding it".
This quote summarizes much of the discourse in HN and the tech industry at large.
As you point out, it is in fact a false dichotomy but it is hard to run counter to hundreds of years of cultural conditioning that has run amok since the start of the industrial revolution. Hobbesian thought is common in these threads!
Luckily, we've had recent thinkers like Ellul, E.F. Schumacher, and Ivan Illich to challenge these entrenched views.
I think much of our technology today is just about consuming more crap, and all of that could go and we’d be fine, but I also think that our, say, medical technology could be infinitely better than it is. There are also cool possibilities like space travel or intelligence augmentation that I wouldn’t want to concede.
It's hard to agree with most of what you said without starting from the big assumption that "capitalism is bad" and "being anti-capitalism is a virtue"
Becoming more effective in and of itself simply means becoming more productive, which means creating goods and services with less, which means more "welfare" for people.
If people are being convinced to buy shit they don't need, that's a separate issue. But don't throw the baby out with the bathwater
In my well balanced life, I love interacting with LLMs. I also appreciate the things I have all of the time, without needing to slow down technological progress to do so
I find it curious that your reaction to this is to reduce OP's point to the "capitalism is bad" trope when it is in fact more nuanced than that. His point isn't anti-capitalist per se - it critiques the traits of the dominant economic system that many of us currently live in.
I won't assume bad faith on your part, but your argument also relies on a big assumption with regards to productivity gains, and who reaps them, and at the detriment of whom and what.
> In my well balanced life, I love interacting with LLMs. I also appreciate the things I have all of the time, without needing to slow down technological progress to do so
Okay, that's fine, but you are not the rest of the world. While you enjoy the ultimate fruits of capitalism, there are people in developing countries suffering because of it, through a lack of freedom and the destruction of the landbases that they could once depend on. When climate change comes because of consumerism, it is you and other well-off people who will be able to move first.
OF COURSE, you will disagree with me, because you are probably at the apex. (Just the very fact that you have a well-balanced life means you are close to it, proportionally speaking to the entire human population.)
I don't claim that all capitalism is bad BTW. Rather, what I claim to be bad is global capitalism where other considerations have been eliminated or decimated.
People in developing countries are better off with capitalism than without it... speaking as someone born and raised in a developing country. No, capitalism isn't perfect, but then again nothing is
Speaking of Brazil specifically since that's in your nickname, the deforestation of the Amazon to make room for cattle isn't caused by developed countries running LLMs. It's caused by Brazilians who are negligent, complacent, happy, even, with that choice.
Brazilians are causing deforestation. Capitalism is not needed. It's not like other economic systems protected the environment. Capitalism is just concurrent with deforestation, but not its inherent cause. You might as well say sunrises are causing deforestation because with every new sunrise, more trees are being chopped down.
That is not my argument against LLMs. Please don't put words in my mouth. My actual argument against LLMs is that:
1. They will replace so many jobs en masse that they will cause a large loss of meaning in many lives, that they will concentrate wealth in the hands of the tech companies, that they will enable even more efficient propaganda, and that they will be used as a drug to distract the populace from increasingly destructive practices enacted by global capitalism
2. LLMs will isolate us by providing more and more services that were once provided by humans: AI therapists, girlfriends/boyfriends, solvers of problems that we once turned to our fellow human beings to solve. This vast increase in individual autonomy, pushed by psychological manipulation to THINK that we need such autonomy, will destroy more of the local communities and connections that people have with each other. In conclusion, we will become isolated and miserable, relying completely on the machine to keep our lives balanced, without any autonomy to seek out our own way of doing things.
Is that still cliche enough for you, or would you like to make more comments in bad faith about my position?
>That is not my argument against LLMs. Please don't put words in my mouth.
That was the most people could read from your previous comments. Now that I've pointed it out, you've written some actual arguments, even though I do not share your doomerish vision of the future.
Wouldn't a personal AI help you live a more well-balanced life, allowing us to offload tasks to it so we can appreciate the world around us and simplicity, rather than having to do it all ourselves?
So far it seems that other automations have not done so. We've offloaded many tasks to computers, but now we spend more time than ever in front of a screen (see https://www.statista.com/statistics/645644/north-america-dai... for example). More people live in cities, with fewer opportunities for fresh air. And let's not forget that the system itself is so complex now, with so many dependencies, that we surround ourselves with anything but simplicity.
There might be a world where what you say is possible of course, but I don't see it in this one when it comes to more advanced forms of automation.
In fact, sometimes the joys of life can be found in the most simple of mechanical tasks. Of course, not everyone thinks so.
The problem is that none of them are actually good enough for that yet. The closest I've seen are calendar-managing LLM services, and even those only really work because it's a tightly-constrained task set and easy to confirm the results as good or not.
Edit: I still can't even trust Siri to be fully accurate 100% of the time about turning the lights on and off, let alone anything more complicated than that.
Pretty mind-boggling how ChatGPT wrapper apps built by dev houses in Poland are absolutely crushing it, while Pi failed to gain any traction despite raising billions.
The only inflection I see here is Microsoft single-handedly dominating the global tech industry and its future after years of dormancy. The US DoD must have been really worried about the US Empire’s continued dominance.
A plausible theory with less coordination required: Microsoft is a large company with lots of money attempting to regain competitive advantages against their rivals through acquisition.
While that's a true story today, Satya is brutally outplaying Google, Amazon is a utility company and Apple's domain is hardware and will likely continue to be. Mark is making some good strategic plays to keep the field even, but Microsoft really is poised to clean up as AI takes off.
Microsoft is poised to do well, sure, but they're currently beholden to Nvidia for hardware and OpenAI for models, and their main surface area for products is Office, Azure, and to a lesser extent, Outlook.
Comparatively, Google controls their hardware, Google builds their models, and Google has Search, Workspace, Gmail, Android, Cloud, and YouTube, all of which could have significant AI investments.
I'm not saying Satya isn't doing a great job, he really is, and I have many criticisms of Google's approach, but I do think Google is at least equally well positioned, and possibly better positioned.
I agree that TPUs are a huge competitive edge. That and Hugging Face integration are going to let Google do very well serving open models until other competitive hardware comes online. The problem is that I think the window to capitalize on TPUs isn't going to be long enough to cement any sort of dominance, and they're fumbling so badly now that I don't have faith in the leadership.
Outlook is a part of Office. Also, I’m not sure ‘vertical integration’ is the only way to succeed at business. Google is trying to do that here - with a fast-changing landscape, one would want to use the best downstream provider.
I was trying to think of user-perceived surfaces. I'd expect most Outlook.com users are not office users, as they originated as Hotmail or MSN users.
One would want to use the "best" downstream provider, but part of being best is the cost. Would you accept a chip that's 30% slower and 80% cheaper? Yeah probably, especially if you're serving at scale to non-paying traffic. I don't think Google/Amazon/MS need to make chips as fast as Nvidia, or as scalable, as long as they work out for serving costs at scale and have the enabling technology (mostly about sufficient memory).
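To make that tradeoff concrete, here's a back-of-the-envelope calculation with made-up normalized numbers (illustrative only, not real pricing for any vendor):

    # Illustrative only: normalized throughput and price, not actual chip specs.
    nvidia_throughput = 1.0              # tokens/sec, normalized
    nvidia_price = 1.0                   # $/hour, normalized

    alt_throughput = 0.7 * nvidia_throughput    # 30% slower
    alt_price = 0.2 * nvidia_price              # 80% cheaper

    nvidia_cost_per_token = nvidia_price / nvidia_throughput   # 1.00
    alt_cost_per_token = alt_price / alt_throughput            # ~0.286

    print(f"cost per token vs. Nvidia: {alt_cost_per_token / nvidia_cost_per_token:.2f}x")
    # -> 0.29x, i.e. roughly 3.5x cheaper per unit of work served

So even a substantially slower in-house chip can win decisively on serving cost, which is the whole point when most of the traffic is non-paying.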
Is tech hiring great? No. Is it the absolute worst it can be? Also no. Internships or apprenticeships instead of tech interviews are just take-home tests that last multiple months, and that is a huge waste of time.
Yes, the broken interview system currently rewards people who interview more, but when you combine it with other signals like resumes, project presentations, and other things, the chances of hiring bad candidates are low. Yes, you miss out on a lot of great candidates, but that is not a problem that big tech companies need to solve. Besides, not knowing the status of your employment 3-6 months down the line is not a great thing for candidates.
Internships are definitely not a waste of time. First of all, they pay well in and of themselves (at least in tech). Second of all, most internships are filled by students on their summer break. What better use of that time than getting an inside view of a company they might want to work for? From the company's perspective, it gives them a much better idea of how well the candidate performs and how they fit in at the company, giving much higher confidence in a hiring decision that could lead to significant future impact.
I think you misunderstood. Internships for students during the summer definitely make sense. However, I was talking about internships/probation as a way to evaluate candidates instead of interviews. Meaning if you are a staff engineer with 10+ years of experience, you’re still going to be hired conditionally for 3-6 months to see if you’re good enough for the team. I personally find that very prone to abuse.