Hacker News | sodafountan's comments

Because they're a corporation that makes money. They have incentives to employ people, and the vendor lock-in around Windows is far too deep to change now or anytime in the foreseeable future. Turning Windows into a Linux-based distro would be a massive corporate undertaking, and Microsoft isn't in the business of pleasing tech-minded people. They're a business that makes money.

Linux isn't a corporation; it's really more of an idea. They don't have marketing departments or people trying to sell you licenses. They don't have vendor lock-in, Active Directory, or cloud-based infrastructure. They don't have an entire advertising division or a search engine. There aren't any shareholders to please or paid employees to keep on payroll for government kickbacks. They're not targeting the casual, media-focused, average computer user like Microsoft, which makes a lot of money by doing so.

In my last job, I worked in a mid-sized suburban office. There weren't any "Linux reps" knocking on our door, making sure we were getting the most out of Ubuntu.


I think stranger things have happened, but I don't really believe this is all that likely. Windows has sucked for 30 years now; tacking on another 15 probably won't change all that much about the current state of things.

Microsoft is an enterprise, and enterprises will continue to crank out enterprisey stuff. Linux is free and open source, developed by people with passion - some of it, I assume, is out of necessity. Unless the working world dramatically changes over the next 15 years, Microsoft is still going to Microsoft.

Windows sucks, Azure sucks, Office sucks. Microsoft is a corporation designed to make money, and it has a lock on the market. From an investor's point of view, they're doing just fine. From a shareholder's point of view, uprooting the entire Windows base to make tech people happy isn't worth the investment. Microsoft hasn't been about making tech people happy since it went public. Microsoft makes money and employs people. People go to work half-heartedly to earn a living, and they produce enterprise-grade software. Enterprise software makes money. That's all the investor cares about.

In fact, having Windows around to drive the continued development of Linux might be a good thing. I know Windows sucks, and I know virtually anything technical is dramatically easier on Linux, but anything without competition eventually stagnates. Even if Windows exists simply as a "what not to do" for Linux, it's probably good that it sticks around.

Currently typing this on a machine that dual-boots Windows and Linux. Why? Because my laptop came with Windows preinstalled.


Wow, this is interesting to see. I thought jQuery was dead.

My next question would be: is this something that OpenAI and Anthropic would train their models on? If I ask Claude Code to write an app that uses jQuery, will it reach for the previous version until a newer model is trained?


Sure, the world still needs software. Perhaps even more so now with the advent of AI.

There's still a ridiculously minuscule chance that you'll end up profitable, let alone successful.

I've read that software, from a business perspective, is almost always treated as a liability rather than an asset. There just isn't much profitable software in this world, and the software that does turn a profit likely subsidizes its business with advertising. The only saving grace is that profitable software can typically scale. See Facebook, TikTok, Google.

Games sell but are risky and creatively demanding.

Software is a tough business.


This was an interesting application of AI, but I don't really think this is what LLMs excel at. Correct me if I'm wrong.

It was interesting that the poster vibe-coded (I'm assuming) the CTL from scratch; Claude was probably pretty good at doing that, and that task could likely have been completed in an afternoon.

Pairing the CTL with the CLI makes sense, as that's the only way to gain feedback from the game. Claude can't easily do spatial recognition (yet).

A project like this would entirely depend on the game being open source. I've seen some very impressive applications of AI online with closed-source games and entire algorithms dedicated to visual reasoning.

I'm still trying to figure out how this guy (https://www.youtube.com/watch?v=Doec5gxhT_U) was able to have AI learn to play Mario Kart nearly perfectly. I find his work very impressive.

I guess because RCT2 is more data-driven than visually challenging, this solution works well, but having an LLM try to play a racing game sounds like it would be disastrous.


Not sure if you clocked this, but the Mario Kart AI is not an LLM. It's a randomized neural net that was trained with reinforcement learning. Apologies if I misread.

Yeah, that was the point of my post. LLMs traditionally aren't used in gaming like this.

I wonder if the future of AI is that we all just create our own programs out of thin air like this. Like if I need something, I just describe it to AI, and within seconds, it's generated and ready to use.

Operating systems become redundant; you open any digital device, and it's just a portal into the most advanced LLM on the planet.

Obviously just spitballing here.

I wonder how far AI will advance.


Operating systems, no. You still have to talk to what will remain standardized hardware and make the analog bits behave digitally at low power.

Applications, yeah, 100%.


I found it interesting that the OP defaulted to using an AI agent for his voice recording software rather than doing a Google search. Perhaps a sign of things to come? I would've chosen Google, but maybe I'll be falling behind in the future.

Aside from getting an LLM up and running on a device, what's stopping AI from creating an operating system? I admittedly don't know much about operating system development, but aren't most operating systems written primarily in C?

I guess what I meant by that is it would be interesting if the AI prompt itself were the OS, and all software would be generated via prompting the agent. No downloads, just a "What do you need?" prompt with the AI generating everything on the fly.

Perhaps becoming so fast that you wouldn't even notice it thinking. Just: "I need to edit a document that was sent to my email." The AI would then retrieve the email, download the document, and generate its own text editor to display it in. All within a few milliseconds.

Call it AIOS


>AI from creating an operating system?

Nothing really... Creating a working operating system and understanding all the hardware bugs it could run into is a different story.

Simply put, if you look at the combined energy expenditure it took to create something like Windows or Linux, the numbers would likely stagger a person: hundreds of gigawatt-hours, hell, probably terawatt-hours. That energy expenditure is reduced by us sharing the code. It's the same reason we don't have that many top-end AI models: the amount of energy you need to spend on one is massive.

Intelligence doesn't mean you should do everything yourself. Sharing and stealing are strategies used throughout the animal kingdom as alternate answers to the limited-fuel problem.


Hardware bugs can be documented for an LLM to learn from; it's really just a chicken-and-egg problem. There are plenty of open-source, working operating systems for LLMs to learn from as well.

And yes, I understand code re-use and distribution are valuable, and that's a good point. Having an LLM generate everything on the fly is definitely energy-intensive, but that hasn't stopped the world from building massive data centers to support it, regardless.

I guess the theory of my past few posts would work something like rolling updates. Using the text editor as an example: you'd prompt the AI agent in the hypothetical OS to open a document, and it would generate a word processor on the fly, referencing the dozens of open-source repos for word processors and pushing its own contributions back out into the world for other LLMs to reference (computationally expensive, yes). It would then learn from your behavior while you use the program, and the next time you prompted the OS for a word-processor-like feature (I'm imagining an MS-DOS-like prompt), it would iterate on that existing idea or program, which is less computationally expensive because ideally the bulk of the work has already been learned. Perhaps it would add new features or key bindings as it sees fit. I understand that hard-disk space is cheap and you'd probably want some space to store personal files, but the OS could theoretically load your program directly into RAM once it's compiled from AI-generated source code, removing the need to save the programs themselves to disk.

Since LLMs are globally distributed, they'd be learning from all human interactions and actively developing cutting-edge word processors tailored specifically to the end user's needs. More of a Vim-style user? The LLM can pick up on that. Prefer something more like MS Word? The LLM is learning that too. AIOS slowly becomes geared directly to you, the end user.

That really has nothing to do with intelligence; you're just teaching a computer how to compute, which is what AI is all about.

Just some ideas on what the future might hold.


https://joshsiegl-251756324000.northamerica-northeast1.run.a...

I still need to map a domain. I used to maintain a personal blog years ago, but let it expire. I just recently created this new one.


Can someone explain to me how this was allowed to happen? Wasn't Siri supposed to be the leading AI agent not ten years ago? How was there such a large disconnect at Apple between what Siri could do and what "real" AI was soon to be capable of?

Was this just a massive oversight at Apple? Were there not AI researchers at Apple sounding the alarm that they were way off with their technology and its capabilities? Wouldn't there be talk within the industry that this form of AI assistant would soon be looked at as useless?

Am I missing something?


Source: while I don’t have any experience with the inner workings of Siri, I have extensive experience with voice-based automation in call centers (Amazon Connect) and with Amazon Lex (the AWS version of Alexa).

Siri was never an “AI agent”. With intent-based systems, you give the system phrases to match on (intents), and to fulfill an intent, all of the “slots” have to be filled. For instance: “I want to go from $source to $destination”, and then the system calls an API.

There is no AI understanding; it’s a “1000 monkeys” implementation. You just start giving the system a bunch of variations and templates you want to match on, in every single language you care about, and map the intents to an API. That’s how Google and Alexa also worked pre-LLM. They just had more monkeys dedicated to creating matching sentences.
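To make that concrete, here is a minimal Python sketch of an intent definition in that style. The structure is illustrative only (not the actual Lex or Siri schema), and the BookTrip intent and book_trip_api back end are made up:

    # Hypothetical intent definition in the spirit of pre-LLM assistants
    # (Lex, Alexa, Siri). Names and schema are illustrative, not a real API.
    BOOK_TRIP_INTENT = {
        "name": "BookTrip",
        # Every phrasing you want to match has to be enumerated by hand,
        # per language -- this is the "1000 monkeys" part.
        "sampleUtterances": [
            "I want to go from {source} to {destination}",
            "book me a trip from {source} to {destination}",
            "get me to {destination} from {source}",
        ],
        # All slots must be filled before the intent can be fulfilled; the
        # assistant reads each slot's prompt until the user supplies a value.
        "slots": [
            {"name": "source", "type": "City", "prompt": "Where are you leaving from?"},
            {"name": "destination", "type": "City", "prompt": "Where do you want to go?"},
        ],
    }

    def book_trip_api(source: str, destination: str) -> None:
        """Stand-in for the plain back-end API the assistant ultimately calls."""
        print(f"Booking trip: {source} -> {destination}")

    def fulfill(slots: dict) -> None:
        # Once pattern matching fills every slot, there is no "understanding"
        # left to do; it's a straight API call.
        book_trip_api(slots["source"], slots["destination"])

    fulfill({"source": "Boston", "destination": "Denver"})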

Post-LLM, you tell the LLM what the underlying system is capable of and the parameters the API requires to fulfill an action, and the LLM can figure out the user’s intentions and ask follow-up questions until it has enough info to call the API. You can specify the prompt in English, and it works in all of the languages that the LLM has been trained on.
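And a sketch of the post-LLM version of the same capability. The tool-definition shape below follows the widely used OpenAI-style function-calling format; the book_trip tool itself is still invented for illustration:

    # The same capability exposed as an LLM "tool". Instead of enumerating
    # utterances, you describe what the API does and what it needs; the model
    # infers the user's intent, asks follow-ups for missing parameters, and
    # emits a structured call. (OpenAI-style schema; the tool is invented.)
    TOOLS = [{
        "type": "function",
        "function": {
            "name": "book_trip",
            "description": "Book travel for the user between two cities.",
            "parameters": {
                "type": "object",
                "properties": {
                    "source": {"type": "string", "description": "Departure city"},
                    "destination": {"type": "string", "description": "Arrival city"},
                },
                "required": ["source", "destination"],
            },
        },
    }]
    # You hand TOOLS to the model alongside the user's message; when the model
    # returns a book_trip call, your code runs it against the same back end as
    # before. One English description covers every language the model knows.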

Yes, I’ve done both approaches.


I appreciate the response, but that doesn't really answer my question.

I want to know why the executive leadership at Apple failed to see LLMs as the future of AI. ChatGPT and Gemini are what Siri should be at this point. Siri was one of the leading voice-automated assistants of the past decade, and now Apple's only options are to bolt an existing solution onto their product's name or let it go defunct. So now Siri is just an added layer for accessing Gemini? Perhaps with a few hard-coded solutions to automate specific tasks on the iPhone, and that's their killer app for the world of AI? That's pathetic.

Is Apple already such a bloated corporation that it can no longer innovate fast enough to keep up with modern trends? It seems like only a few years ago they were super lean and able to innovate better than any major tech company around. LLMs were being researched in 2017. I guess three years was too short a window to change the direction of Siri. They should have seen the writing on the wall here.


According to everything that has been reported, both the Google Assistant and Alexa are less reliable now that they are LLM based.

I don’t know why; in my much smaller-scale experience, converting from the intent-based approach to an LLM “tools”-based approach was much more reliable.

Siri was behind pre-LLM because Apple didn’t throw enough monkeys at the problem.

Everything that an assistant can do is “hardcoded” even when it is LLM based.

Old way: voice -> text -> pattern matching -> APIs to back-end functionality.

New way: voice -> text -> LLM -> APIs to back-end functionality.

How often have you come across a case where Siri understood something and said “I can’t do that”? That’s not an AI problem. That’s Apple not putting people on the intent -> API mapping. An LLM won’t solve the issue of exposing the APIs to Siri.


I don't really want to continue this discussion, as AI in general can be absolutely infuriating; it's one of those buzzwords that's just thrown around without a care in the world at this point. But do you have any links to those reports? I'd be willing to bet that if Google Assistant and Alexa were being run properly, they wouldn't be less reliable when backed by an LLM.

I don't think Apple had too few people working on Siri; I think they had too many people working on the wrong problems. If they'd had any eye on the industry, like they did in their heyday when Jobs was at the helm, they would've been all over LLMs the way Sam Altman was with his OpenAI startup. This report of Siri using Gemini going forward is one of the biggest signs that Apple is failing to innovate, not to mention the constant rehashing of the iPhone and iOS. They haven't been innovative in years.

And yes, that's the point I was trying to make: AI assistants shouldn't be hardcoded to do certain things; that's not AI. But with Apple's marketing, they'd have you believe Siri is what AI should be, except now everyone's wiser. Everyone and their grandmother has used ChatGPT, which is really what Siri should have been. Changes to the iOS API should roll out, and an LLM-backed AI assistant should be able to pick up on those changes automatically. Siri should be an LLM trained on Apple data, its APIs, your personal data (emails, documents, etc.), and a whole host of publicly available data. That would actually make Siri useful going into the future.

Again, if Apple's marketing team were to be believed, Siri would be the most advanced LLM on the planet, but from a technical standpoint, they haven't even started training an LLM at all. It's nonsense.


AI assistants can’t magically “do stuff” without “tools” exposed. A tool is always an API that someone has to write and expose to the orchestrator, whether it’s AI or just a dumb intent system.

And ChatGPT can’t really “do anything” without access to tools.

You don’t want an LLM to have access to your entire system without deterministic guardrails and limits on what the tools are permitted to do, just like you wouldn’t expose your entire database with admin privileges to the web.

You also don’t want to expose too many tools to the system. For every tool you expose, you also have to provide a description of what the tool does, the parameters it needs, etc. That will both blow up your context window and make the model start hallucinating. I suspect that’s why Alexa and Google Assistant got worse when they became LLM-based, and why my narrow use cases don’t suffer those problems now that I’ve started implementing LLM-based solutions.

And I am purposefully yada-yada-yadaing some of the technical complexities, and I hate the entire “appeal to authority” thing, but I worked at AWS for 3.5 years until 2 years ago, and I was at one point the second-highest contributor to a popular open-source “AWS Solution” dealing with voice automation that almost everyone in the niche had heard of. I really do know this space.


Yeah, there's just really nothing left to discuss. Apple could have been a real leader in the AI space had they hired the right researchers to implement LLMs and beaten OpenAI to the punch.

I understand that AI assistants need access to tools in order to do anything on a computer. I've been working with AI-augmented development for a few months now, and every time a prompt needs to run a tool, it asks for permission first or just straight up gives me the command to paste into a terminal.

Ideally this would have been abstracted away if Siri were an LLM, with Apple controlling which APIs Siri has access to and bypassing user confirmation altogether.

It would have been neat if I were able to say, "Hey Siri, send a text to John Smith in playfully angry prose thanking him for not inviting me to the party," which would have the LLM automatically craft the message and send it upon confirmation, perhaps with a "made with AI" disclaimer at the bottom of the text or something along those lines.

"Hey, Siri: What's the weather in Los Angeles, California" would fallback to a web api endpoint.

"Hey, Siri: How do I compile my C# application without Visual Studio" would provide step-by-step instructions on working with MSBUILD.

Different prompts would fall back on different APIs that only Apple would expose, obviously without allowing the user to gain root access to the system, which is what you would expect from Apple. A toy sketch of what that routing might look like is below.
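Here's a toy Python sketch of that routing idea. Everything in it is hypothetical: the tool names, the exposed-API table, and the keyword checks that stand in for the LLM's actual tool choice (so the example runs on its own):

    # Hypothetical prompt router for the Siri-as-LLM idea sketched above.
    # A real system would let the LLM pick the tool; the keyword checks
    # below are a crude stand-in so this example is self-contained.
    from typing import Callable

    def send_text(contact: str, body: str) -> str:
        return f"(would text {contact}): {body}"

    def weather_lookup(location: str) -> str:
        return f"(would hit a weather API for {location})"

    def answer_in_prose(question: str) -> str:
        return f"(LLM answers directly): {question}"

    # Only Apple decides which APIs appear in this table; nothing here
    # grants the user root access to the system.
    EXPOSED_APIS: dict[str, Callable[..., str]] = {
        "messages.send": send_text,
        "weather.lookup": weather_lookup,
        "general.answer": answer_in_prose,
    }

    def route(prompt: str) -> str:
        lowered = prompt.lower()
        if "send a text" in lowered:
            return EXPOSED_APIS["messages.send"]("John Smith", "(LLM-drafted message)")
        if "weather" in lowered:
            return EXPOSED_APIS["weather.lookup"]("Los Angeles, California")
        return EXPOSED_APIS["general.answer"](prompt)

    print(route("Hey Siri, what's the weather in Los Angeles, California?"))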

I guess from a purely technical standpoint, you'd train two models, one "safe" and the other "unsafe". "Safe" is what the end user would get, allowing access to safe data, APIs, messaging, web... you name it. "Unsafe" would be used internally at Apple and would have system-wide access: unlimited user data, root privileges, perhaps unsafe image generation and web search... basically no limit to what the LLM could achieve.


And spend billions and billions of dollars to get - a better Siri?

That's my point: Apple has invested so much into Siri that there's no reason it shouldn't be the most advanced LLM in the world. They missed the mark completely. Why? If Jobs were still in charge, the entire team would have been gone years ago.

This was my experience using Stack Overflow. I've commented, asked questions, and received answers. Aside from a few questionable downvotes I received occasionally, I never felt like the community was toxic.

Ideally, you'd train them on the core documentation of the language or tool itself.

Hopefully, LLMs lead to more thorough documentation at the start of a new language, framework, or tool. Perhaps to the point of the documentation being specifically tailored to read well for the LLM that will parse and internalize it.

Most of what Stack Overflow offered was just a regurgitation of knowledge that people could acquire from documentation or research papers. It obviously became easier to ask on SO than to dig through documentation. LLMs (in theory) should be able to do that digging for you at lightning speed.

What ended up happening was that people would turn to the internet and Stack Overflow to get a quick answer and string those answers together into a solution, never reading or internalizing documentation. I was definitely guilty of this many times. I think in the long run it's probably good that Stack Overflow dies.

