Even though Tesla has only 2 models, I would still consider it for a new car, if not for Elon Musk. I have a Model Y, and it does everything I want it to do. Drives nicely, lots of (cargo) space, no friction charging when driving in Europe. Just plug it into a Supercharger and it charges fast. No hassle with subscriptions and cards. Very reliable.
With the 3 and the Y they're already catering for a large part of the market demand, but a smaller model and a station wagon might help get it up to 80%+ of all demand.
TÜV inspection failures are not a good indicator of reliability. The lack of Tesla dealers and no need for yearly servicing means issues get caught at the inspection step for Teslas, whereas for other brands they are caught at the pre-inspection servicing step.
Also, you need a breakdown of the failures, as wear and consumables (low washer fluid, splits in wipers, headlight alignment, mobile phone holder in the wrong location) can cause a failure but are not a good indicator of a lack of quality.
That is bad. One issue seems to be that the brakes of electric cars can develop problems over time because they are not used enough (regenerative braking is used instead of the friction brakes).
Good news, though: if you are in an accident, Teslas are the safest cars one can buy:
> The Tesla Model Y achieved the highest overall weighted score of any vehicle assessed by ANCAP in 2025, recording strong performance across all areas of occupant protection and active safety technology.
"Most of the issues involve critical components like brakes, lights, and suspension. Many cars fail because of play in the steering or faulty axles. These are problems rarely seen at the same level in competitors like Volkswagen or Hyundai."
I just received a consumer association (consumentenbond) test on car reliability and this time the Tesla Y is the most reliable EV in the overview. It scored 9.3 out of 10. The Toyota Aygo is the most reliable ICE with a score of 9.7. Looks like Tesla has massively improved their reliability.
Palantir's mission is exactly to solve the problem you're describing: breaking through data silos to get better information. The core of the platform is data pipelines that can move data from any silo into the Palantir data lake, where it can be analysed. Their forward-deployed engineering approach probably enables them to bypass the organisational boundaries between departments, and their top-down selling approach ensures management helps bypass those boundaries.
> break through data silos to get better information
This is the pitch of every consulting company ever.
In this case, Palantir is doing VLOOKUP on healthcare records to get suspects’ addresses. They then put that in a standalone app because you can’t charge buttloads of money for a simple query.
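To make the scale of the "simple query" claim concrete, here is a toy sketch in Python/SQLite; every table and column name is invented for illustration and reflects nothing about any real schema or product:

```python
import sqlite3

# Hypothetical tables: a suspect list joined against healthcare
# records on a shared person ID -- the "VLOOKUP" being described.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE suspects (person_id INTEGER, name TEXT);
CREATE TABLE health_records (person_id INTEGER, address TEXT);
INSERT INTO suspects VALUES (1, 'J. Doe');
INSERT INTO health_records VALUES (1, '12 Example St');
""")

# The entire "product", as the comment characterises it:
rows = conn.execute("""
    SELECT s.name, h.address
    FROM suspects s
    JOIN health_records h ON h.person_id = s.person_id
""").fetchall()
print(rows)
```

The point of the snark is that the join itself is trivial; the expensive part is the app wrapped around it.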
Something I see often in technical circles (and I'm not accusing you) is the manufacturing of consent for ghoulish behaviour by describing it in a reductive way. I think there's a bias to consider sophisticated violations of civil rights as more nefarious than mundane ones.
I think we should embrace AI to craft better software. You have a lot of control over the code generated by AI, so all your designs, patterns, best practices can be used in the generated code. This will make us better software craftsmen.
A nice example is guitar building: there's a whole bunch of luthiers who stick to traditional methods to build guitars, or even limit themselves to Japanese woodworking tools.
But that is not the only way to build great guitars. Excellent luthiers can build high-quality guitars with state-of-the-art tools. For example, Ulrich Teuffel uses all sorts of high tech, like CAD systems and CNC machines, to craft beautiful guitars: https://www.youtube.com/watch?v=GLZOxwmcFVo
Unfortunately, craftsmanship does not come cheap, so most customers will turn to industrially created products. Same for software.
But your comparison is a bit off: you mention CNC machines and the like to build guitars, but those are tools that are still deterministically programmed by humans. LLMs, on the other hand, are probabilistic: you prompt "write me a set of G-code instructions for a CNC to build a guitar body" and wait and hope.
Sure, LLMs as a tool probably have a place in software development, but the danger lies in high volume, low oversight.
But there are people using them at large scale to build large applications; time will tell how those work out in the end. Software engineering is programming over time, and the "over time" for LLM-based software engineering hasn't been long enough yet.
You have a lot of control over what the LLM creates: the way you phrase your requirements, the guidance you give it over architecture, testing, UX, and which libraries to use. You can build your own set of skills to outline how you want the LLM to automate your software process. There's a lot of craftsmanship in making the LLM do exactly what you think it needs to do. You are not a victim at the mercy of your LLM.
You are a lead architect, a product manager, and a lead UX designer. You don't have 100% control over what your LLM devs are doing, but more than you think. Just like normal managers don't micromanage every action of their team.
> You have a lot of control over what the LLM creates.
No, you don't, you have "influence" or "suggestion".
You can absolutely narrow down the probability ranges of what is produced, but there is no guarantee that it will stick to your guidelines.
So far, at least, it's just not how they work.
> You don't have 100% control over what your LLM devs are doing, but more than you think. Just like normal managers don't micromanage every action of their team.
This overlooks the role of genuine reasoning and interpretation found when dealing with actual people.
While it might seem like directing an LLM is similar in practice to managing a team of people, the underlying mechanisms are not the same.
If you analyse based on comparisons between those two approaches, without understanding the fundamental differences in what's happening beneath the surface, then any conclusions drawn will be flawed.
---
I'm not against LLMs, I'm against using them poorly and presenting them as something they are not.
I think I have enough control, probably more than when working with developers. Here's something I recently had Claude Code build: https://github.com/ako/backing-tracks
This is probably just a disagreement about the term "control", so we can agree to disagree on that one, I suppose.
The rest of the reply doesn't really relate to any of the points I mentioned.
That it's possible to successfully use the tool to achieve your goals wasn't in dispute.
I'll try to narrow it down:
---
> You are not a victim at the mercy of your LLM.
Yes, you absolutely are, it's how they work.
As I said, you can suggest guidelines and directions, but it's not guaranteed they'll be adhered to.
To be clear, this also applies to people.
---
Directing an LLM (or LLM based orchestration system) is not the same as directing a team of people.
The "interface" is similar in that you provide instructions and guidelines and receive an attempt at the wanted outcome.
However, the underlying mechanisms of how they work are so different that the analogy you were trying to use doesn't make sense.
---
Again, LLMs can be useful tools, but presenting them as something they aren't only serves to muddy the waters of understanding how best to use them.
---
As an aside, IMO, the sketchy-salesman approach of over-promising on features and obscuring the limitations will do great harm to the adoption of LLMs in the medium to long term.
The misrepresentation of terminology is also contributing to this.
The term AI is intentionally being used to attribute a level of reasoning and problem solving capability beyond what actually exists in these systems.
Looks like we just have different expectations: I don't want to micromanage my coding agents any more than I micromanage the developers I work with as a product manager. If the output does what it is supposed to do, and the software is maintainable and extendable by following certain best practices, I'm happy. And I expect that goes for most business people.
And in practice I have more control with a coding agent than with developers, as I can iterate over ideas quickly: "build this idea", "no, change this", "remove this and replace it with this". Within an hour you can iterate an idea into something that works well. With developers this would have taken days, if not more. And they would've complained that I needed to better prepare my requirements.
If it's working for you, great, but presenting it like it's a general direct replacement for development teams is disingenuous.
---
> Looks like we just have different expectations: I don't want to micromanage my coding agents any more than I micromanage the developers I work with as a product manager. If the output does what it is supposed to do, and the software is maintainable and extendable by following certain best practices, I'm happy. And I expect that goes for most business people.
None of what I said implied any expectations about the process of using the tools, but if you've found something that works for you, that's good.
On the subject of maintainability and extensibility: those are usually bound to the complexity of the project, and the growth in requirements is not generally linear.
I agree that many business people would love what you've described; very few are getting it.
> And in practice I have more control with a coding agent than with developers, as I can iterate over ideas quickly: "build this idea", "no, change this", "remove this and replace it with this". Within an hour you can iterate an idea into something that works well. With developers this would have taken days, if not more. And they would've complained that I needed to better prepare my requirements.
Up to a point, yes.
If your application of this methodology works well enough before you hit the limitations of the tooling, that's great.
There is, however, a threshold of complexity where this starts to break down. The threshold can be pushed out somewhat with experience and a better understanding of how to utilise the tooling, but it still exists (currently).
Once you reach this threshold, the approaches you are talking about start to work less effectively and can even actively hinder progress.
There are techniques and approaches to software development that can further push this threshold out, but then you're getting into the territory of having to know enough to be able to instruct the LLM to use these approaches.
> You have a lot of control over what the LLM creates: the way you phrase your requirements, the guidance you give it over architecture, testing, UX, and which libraries to use. You can build your own set of skills to outline how you want the LLM to automate your software process
Except for the other 50% of the time where it goes off the rails and does what you explicitly asked it not to do.
I am a craftsman of fine puzzles made from wood and CNC machined metal. I use LLM in lots of ways to help on individual parts of bigger puzzle design projects, like for example to create custom puzzle solver software which can search through large sets of possible notching patterns on wooden sticks in order to find ones that meet some criteria or are optimized in whatever manner I find aesthetically pleasing.
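The kind of single-purpose solver described above can be sketched minimally. Everything here is a made-up stand-in: the stick length, the encoding of notch patterns as 0/1 tuples, and the "interesting" criterion are invented for illustration, not the real design constraints:

```python
from itertools import product

STICK_LEN = 6  # hypothetical stick divided into 6 notchable positions

def candidates():
    # Enumerate every notching pattern as a tuple of 0/1 flags
    # (1 = notch cut at that position).
    return product((0, 1), repeat=STICK_LEN)

def is_interesting(pattern):
    # Invented criterion standing in for whatever mechanical or
    # aesthetic constraint the real solver checks: exactly three
    # notches, no two of them adjacent.
    if sum(pattern) != 3:
        return False
    return all(not (a and b) for a, b in zip(pattern, pattern[1:]))

# Brute-force search over the full pattern space.
hits = [p for p in candidates() if is_interesting(p)]
print(len(hits), hits[0])
```

For realistic stick counts the search space explodes combinatorially, which is exactly where a custom solver (rather than manual enumeration) earns its keep.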
I’ve been writing various single-purpose software tools of these sorts for decades. I would not want to go back to hand-writing them now that I can have agents (Cursor, Claude Code, etc.) lay down the algorithmic architecture that I vibe at them, now that I know how to “speak that language” and reliably get the software outcomes that I seek.
I find this similar to how I would not want to spend all day turning the crank handles on a manual milling machine when I can have a CNC mill do it, now that I know how to use various CAM systems well and have the proper equipment.
Given that my overall craft is not limited to just writing code or turning crank handles, I readily embrace any improvements of my workshop “technology stack” so that I can produce higher quality artwork.
I agree. The article's logic is incoherent. It conflates the choice of tools with the decision of what product to make and what level of quality to aim for.
If AI can be used to make bad (or good enough) software more cheaply, I have no problem with that. I'm sure we will get a huge amount of bad software. Fine.
But what matters is whether we get more great software as well. I think AI makes that more likely rather than less likely.
Less time will be spent on churning out basic features, integrations and bug fixes. Putting more effort into higher quality or niche features will become economically viable.
I wonder if that's only really true for "pre-LLM" engineers, though. If all you know is prompting, maybe there's no higher-quality, more focused work that can really be achieved.
It might just all meld into a mediocre soup of features.
To be clear, I'm not against AI-assisted coding; I think it can work pretty great. But I'm thinking about the implications for future engineers.
>If all you know is prompting, maybe there's no higher-quality, more focused work that can really be achieved.
That's true of any particular individual but not for a company that can decide to hire someone who can do more than prompting.
>It might just all meld into a mediocre soup of features
I don't think the relative economics have changed. Mediocre makes sense for a lot of software categories because not everyone competes on software quality.
But in areas where software quality makes a difference, it will continue to make a difference. It's not a question of tools.
And long before that I built an MS Access UI on top of an Oracle 6 database, so yes, MS Access was one of the client-server-era low-code tools. Like PowerBuilder, Oracle Forms, etc. (Hi Jouke!)
Depends mostly on efficiency: GraphQL (or OData, a REST-compliant alternative with more or less the same functionality) provides the client with more control out of the box to tune the response it needs. It can control the depth of the associated objects it needs, filter out what it doesn't need, etc. This can make a lot of difference for client performance. I actually like OData more than GraphQL for this purpose, as it is REST-compliant and has standardized more of the protocol.
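As an illustration of that client-side response shaping, here is a minimal sketch using the standard OData system query options (`$select`, `$expand`, `$filter`, `$top`); the endpoint and entity model are made up for the example:

```python
from urllib.parse import urlencode

# Hypothetical OData endpoint and entity model, for illustration only.
base = "https://example.com/odata/Orders"

# OData system query options let the client shape the response:
params = {
    "$select": "Id,Total",                # only the fields the client needs
    "$expand": "Customer($select=Name)",  # control depth of associations
    "$filter": "Total gt 100",            # server-side filtering
    "$top": "10",                         # page size
}
url = f"{base}?{urlencode(params)}"
print(url)
```

The GraphQL equivalent would express the same shaping in the query body instead of the URL; the point either way is that the client, not the server, decides how deep and how wide the response is.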
So you don't think the free market will force manufacturers to compete on better batteries? I always thought the benefit of the free market was that it forced companies to compete on product quality... /s
To be honest with you, the free market does work when incentives are aligned.
If you get maximum profit from the maximum social good, people will do that (or find a way to cheat). But as it stands, there's money to be made in not doing this, and the consumer won't care too much whether their car lasts 9 years or 10, so not fixing this doesn't hurt sales (even if fixed perfectly, it would take 10 years to prove, after all!).
I think I’m dreaming: the investment would have to be enormous, and who wants to hold stock of so many batteries? Who will convince manufacturers to integrate standardised battery packs instead of the more profitable built-in, phone-style packs used today? And the automotive marketing machine is really strong and will (correctly) point out that making the battery replaceable would require less rigid car bodies, so their current incentive is to fight this initiative, and they would probably lead with the safety angle.
The anti-EV propaganda already works pretty well with the very little it has to work with (farming batteries is harmful), so, imagine what they could do with something of actual substance.
I recently did a day trip of 800 km while it was freezing and snowing. Yes, the range is impacted, so I never did more than 200 km in one go, then a quick 15-minute break to recharge and continue. It takes a bit longer, but not bad enough to go back to an ICE car. EVs drive so much nicer.