The typical job of a CTO is nowhere near "finding out what business needs and translating that into pieces of software". The CTO's job is to maintain a tech stack that is at least remotely coherent in the grand scheme of things, to develop the company's technological vision, to anticipate larger shifts in the global tech world and project them onto the locally used stack, constantly distilling that into the next steps to take with the local stack in order to remain competitive in the long run. And of course to communicate all of that to the developers, to set guardrails for the less experienced, and to allow and even foster experimentation and improvements by the more experienced.
The typical job of a Product Manager is also not to perform this mapping directly, although the PM is much closer to that activity. PMs mostly need to enforce coherence across an entire product in how business needs are mapped to the software features being developed by individual developers. They still usually involve developers to do the actual mapping and rarely do it themselves. But the Product Manager must "manage" this process, hence the name, because without anyone coordinating the work of multiple developers, they will quickly construct mappings that work and make sense individually but don't fit together into a coherent product.
Developers are indeed the people responsible for finding out what business actually wants (which is usually not what they say they want) and mapping that onto a technical model that can be implemented as a piece of software - or multiple pieces, if we're talking about distributed systems. Sometimes they get help from business analysts, a role very similar to a developer's that puts more weight on the business side of things and less on the coding side - but in a lot of team constellations developers are single-handedly responsible for the entire process. Good developers excel at this task and find solutions that really solve the problem at hand (even if they don't follow the requirements to the letter or have to fill in gaps), fit well into the existing solution (even if that means bending some requirements again, or changing parts of the solution), are maintainable in the long run, and maximize the chance of being extendable in the future when the requirements change. Bad developers just churn out code that might satisfy some tests and may even roughly do what someone else specified, but that fails to be maintainable, impacts other parts of the system negatively, and often fails to actually solve the problem, because what business described they needed once again turned out not to be what they actually needed. The problem is that most of these negatives don't show their effects immediately, but only weeks, months or even years later.
LLMs are currently on the level of a bad developer. They can churn out code, but not much more. They fail at the more complex parts of the job, basically all the parts that make "software engineering" an engineering discipline and not just a code generation endeavour, because those parts require adversarial thinking, which is what separates experts from everyone else. The following article was quite an eye-opener for me on this particular topic: https://www.latent.space/p/adversarial-reasoning - I highly recommend that anyone working with LLMs read it.
Would you consider cute animal videos that are not AI generated to be so much more worthy of your time? Because I don't really care whether cute animal videos are AI generated or filmed - I simply don't want to spend even a second on them.
And most people I know who love spending time on this kind of content would not care either - because they don't care whether they waste time on real or AI animal videos. They just want something to waste time with.
> Would you consider cute animal videos that are not AI generated to be so much more worthy of your time?
Yes indeed. I do love me some cat and bunny videos. But I hate getting fed slop - and it's not just cat videos, by the way. I'm (as evidenced by my comment history) into mechanics, electronics and radio stuff, and there are so damn many slop channels spreading outright BS with AI-hallucinated scripts that it eventually gets really, really annoying. Sadly, YT's algorithm keeps feeding me slop in every topic that interests me, and frankly it's enraging; since some of my favorite legitimate creators like Shorts as a format, I don't want to hide Shorts completely.
> And most people I know who love spending time on this kind of content would not care either - because they don't care whether they waste time on real or AI animal videos. They just want something to waste time with.
The problem is, these channels build up insane amounts of followers. And it would not be the first time that these channels then suddenly pivot (or get sold from one scam crew to the next) and spread disinformation, crypto scams and other fraud - it was and is a hot issue on many social media platforms.
No one buys into Elon's firms because they're expecting dividends.
His investors are not investing because of his success rate in delivering on his promises. His investors are investing exclusively because they believe that stock they buy now will be worth more tomorrow. They all know that's most likely not because Elon delivers anything concrete (because he only does that in what, 20% of cases?), but because Elon rides the hype train harder tomorrow. But they don't care if it's hype or substance, as long as numbers go up.
Elon's investors are happy with his success rate only in terms of continuously generating hype. Which, I have to admit, he's been able to keep up longer now than I ever thought possible.
Theranos was also hyping a lot while trying to build some stuff. There is some threshold (wherever exactly it lies) past which something is more fraud than hype.
Also, these days the stock market doesn't have much relation to the real state of the economy - in many ways it's a casino.
Not sure who determines that threshold. He certainly goes to court more than your average person, but these are not startups; they are large companies under a lot of scrutiny. I don't think the comparison is valid.
>he certainly goes to court more than your average person
Yes because he sues a lot of entities for silly things such as some advertisers declining to buy ads that display next to pro-hitler posts, or news outlets for posting unaltered screenshots of a social media site he acquired.
If Theranos had promised ten amazing innovations or useful products and gotten seven of them to market to great success while revolutionizing their industry, I'd forgive them if the other three turned out to be hype.
> The hype to substance ratio isn't quite as important as some choose to believe
Musk's ratio is such that his utterances are completely free from actionable information. If he says something, it may or may not happen and even if it does happen the time frame (and cost) is unlikely to be correct.
I don't get why anyone would invest their money on this basis.
Remember when people said that Starlink would never happen? Or when "experts" said that a private space company would never launch rockets? Or that no one would buy an electric car made by an upstart company? Or when people said that the downsizing at Twitter would cause the company to collapse, and that we could expect it to be defunct and dead within a year?
Some combination of the two, for sure. That doesn't mean Musk can't keep doing it. However you describe or define it, it's a proven strategy at this point. I'm not sure Larry knew how Musk would make good on Twitter for him, but he knew enough about Musk to be confident it would happen.
I think this is why he gets away with it. A "win" is a product delivered years late for 3x the promised MSRP with 1/10th the expected sales. With wins like these, what would count as a loss?
He gets away with it for one reason only: he consistently delivers good returns on capital.
Most of Tesla's revenue derives from Model Y and FSD subs. I agree that Cybertruck was a marketing ploy. Don't think it was ever intended to be materially revenue generating.
Revenue has flatlined, but investors' confidence comes from Musk's track record of delivering good returns to investors. I think we can agree Musk succeeded from 2020 to 2025 in this regard. Whether you are confident he can do it again over the next five years is the key question.
I'm personally more persuaded by the argument that Tesla is a meme-stock at this point - like much of crypto, it runs on "vibes", not solid fundamentals.
But even if share price is the metric for success, 33.6% over 5 years works out to about 6% compounded annually, which is okay, I guess? [0]
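The annualization above is just the geometric mean of the cumulative return. A quick sketch of the arithmetic (the 33.6% and 5-year figures come from the comment; everything else is plain math):

```python
# Convert a cumulative multi-year return into a compound annual growth rate.
# total_return is final value / starting value: a 33.6% gain -> 1.336.
total_return = 1.336
years = 5
annualized = total_return ** (1 / years) - 1  # geometric mean per year
print(f"{annualized:.1%}")  # roughly 6% per year
```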
Tesla's sales have suffered, yes, and Elon's image is a significant contributor to that, besides all the reasons directly related to the cars themselves.
But Tesla's stock price is still stuck in irrational heights, not even remotely justifiable by the company's performance.
It just seems that people reconsider purchasing a physical object much more quickly than they reconsider a stock investment. Maybe because the stock investment, especially in TSLA, is treated more like a gamble: "as long as others also think this stock will skyrocket, even if just because they think that others like me think it will skyrocket - as long as that's the case, I'm good with buying shares".
Actually, discount grocers operate on razor-thin margins of 2-4%. If your inaccuracy is geared to the benefit of your customer (because otherwise the regulatory bodies will put you out of business) and thus removes just one percentage point of that, you suddenly lose a quarter to half of your earnings! And that comes ON TOP of the additional cost incurred by all that computer vision tech.
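To make that arithmetic explicit (the 2-4% margins and the 1-point loss are from the comment; the revenue figure is hypothetical, since only the ratio matters):

```python
# Effect of losing 1 percentage point of revenue on a thin-margin grocer.
revenue = 100.0  # hypothetical revenue; the ratio is what matters
for margin in (0.02, 0.04):
    profit_before = revenue * margin
    profit_after = revenue * (margin - 0.01)
    drop = 1 - profit_after / profit_before
    print(f"{margin:.0%} margin: earnings fall by {drop:.0%}")
# 2% margin: earnings fall by 50%
# 4% margin: earnings fall by 25%
```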
In addition to that, you'll have the problem of inventory discrepancies, which are often cited as an even bigger problem with store theft than the value of the stolen goods itself. If the inventory numbers on your books differ too much from the inventory actually on the shelves, all your replenishment processes will suffer, eventually causing out-of-stock situations and thus loss of revenue. You may eventually be able to counter that by estimating losses due to billing inaccuracies, but that's another bit of complexity that won't be free to tackle, so the 1% inaccuracy is going to cost you money on the inventory-discrepancy front no matter what.
And to add to that, it's not a neutral environment. If there's 1% of scenarios that are incorrect, people will figure out they haven't been billed for something, figure out why, and then tell their friends. Before you know it every teenager is walking into Amazon Fresh standing on one foot, taking a bag of Doritos, hopping over to the Coca Cola stand, putting the Doritos down, spinning 3 times, picking it up again and walking out of store, safe in the knowledge that the AI system has annotated the entire event as a seagull getting into the shop.
Which is mostly the result of clever engineers who produced a machine no other company in the world can assemble, but that is absolutely crucial to businesses valued at double-digit trillions of dollars.
You don't really need an army of sales managers to sell such a product. Going lean on management and more heavy on engineering is therefore a good idea if you want to keep the lead you have.
No, but ASML's product is so complicated that they do need a lot more than just engineers - they have 5000 suppliers apparently, coordinating that takes a lot more than clever engineers.
Clever engineers are usually able to pick up basic supply chain management capabilities. At least as long as it's about suppliers of things in their technical domain.
For non-technical supply chain managers, picking up enough technical chops to understand the stuff whose supply chain they are supposed to manage is comparatively difficult.
Especially when fierce negotiations to push the price down are not the highest priority, but robustness of supply chains, having alternative options that technically work, and ensuring quality according to tight specs are paramount. Which is how I assume ASML supply chain management to work.
I can third this observation. I've even had my flat above one of these for 10 years. Small company, privately-owned, five employees or so. They have a few pick-and-place machines (SIMATICs as far as I have seen) located in a small factory building and manufacture small production runs with them.
They don't have a real website advertising their services, but they seem to do well, probably their customers know them. They've run their business continuously for at least those 10 years I've lived at that spot. I could smell the soldering oven running constantly.
For an example with a website, see Waterott: it's run by one person with a single Siplace SMT machine who does his stenciling manually, and he has no trouble earning money.
We know how each of the "parts" works, but there are a gazillion parts (especially once you take the model weights into account, which are far larger than the code that generates them or uses them to generate stuff), and we have found that together they do something without us really understanding why they do it.
And inspecting each part is not enough to understand how, together, they achieve what they achieve. We would need to understand the entire system in a much more abstract way, and currently we have nothing more than ideas of how it _might_ work.
Normally, with software, we don't have this problem: we start at the abstract level with a fully understood design and construct the concrete parts afterwards. So we obviously have a much better understanding of how the entire system of concrete parts works together to perform some complex task.
With AI, we went the other way: concrete parts were assembled with only vague ideas, at the abstract level, of how they might do some cool stuff when put together. From there it was basically trial and error, iterating to the current state, but always with nothing more than vague ideas of how all the parts work together at the abstract level. And even if we stopped development right now and tried to gain a full, thorough understanding of a current LLM at the abstract level, we would fail: they have already reached a complexity that no human can understand anymore, even by devoting an entire lifetime to it.
However, while this is a clear difference from most other software, it is not an uncommon thing per se. (One has to be careful with the biggest projects like Chromium, Windows or Linux: even though these were constructed abstract-first, they have been in development for so long and have gained so many moving parts in the meantime that someone trying to understand them fully at the abstract level will probably run into the limits of a human lifetime as well.) We also don't "really" understand how the economy works, how money works, how capitalism works. Very much like with LLMs, humanity has somehow developed these systems through the interaction of billions of humans over a long time, there was never an architect designing them at an abstract level from scratch, and they have shown emergent capabilities and behaviors that we don't fully understand. Still, we obviously try to use them to our advantage every day, and nobody would say that modern economies are useless or should be abandoned because they're not fully understood.