
This is no AGI. An AGI is supposed to be the cognitive equivalent of a human, right? The "AI" being pushed out to people these days can't even count.


The AI is multiple programs working together, and they already pass math problems on to a data analyst specialist. There's also an option to use a WolframAlpha plugin to handle math problems.

The reason it didn't have math from the start is that math was a solved problem on computers decades ago, and they are specifically demonstrating advances in language capabilities.

Machines can handle math, language, graphics, and motor coordination already. A unified interface to coordinate all of those isn't finished, but gluing together different programs isn't a significant engineering problem.
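
To make the "glue" idea concrete, here's a minimal sketch in Python. Everything here is hypothetical (call_llm and route_query are stand-ins, not ChatGPT's or WolframAlpha's actual plugin API): plain arithmetic goes to an exact evaluator, everything else falls through to the language model.

    # Sketch of routing: arithmetic -> exact evaluator, everything else -> LLM.
    # call_llm and route_query are made-up names, not any vendor's real API.
    import ast
    import operator

    OPS = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
        ast.Pow: operator.pow,
        ast.USub: operator.neg,
    }

    def eval_arithmetic(expr: str):
        """Safely evaluate a plain arithmetic expression (the 'specialist')."""
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.operand))
            raise ValueError("not plain arithmetic")
        return walk(ast.parse(expr, mode="eval"))

    def call_llm(prompt: str) -> str:
        """Placeholder for whatever language model backs the chat interface."""
        return f"[LLM answer to: {prompt}]"

    def route_query(query: str) -> str:
        """Send math to the exact evaluator, everything else to the LLM."""
        try:
            return str(eval_arithmetic(query))
        except (ValueError, SyntaxError):
            return call_llm(query)

    print(route_query("12345 * 6789"))                  # handled exactly, no LLM involved
    print(route_query("Summarize the plot of Hamlet"))  # handled by the LLM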


You know what's not a "unified interface" in front of "different programs glued together"? A human.

By your own explanation, the current generation of AI is very far from AGI as it was defined in the GP.


The brain does have specialized systems that work together.


> The AI is multiple programs working together, and they already pass math problems on to a data analyst specialist. There's also an option to use a WolframAlpha plugin to handle math problems.

Is the quality of this system good enough to qualify as AGI?


I guess we will know it when we see it. It's like saying computer graphics got so good that we have a holodeck now. We don't have a holodeck yet. We don't have AGI yet.


The duality of AI's capability is beyond comical. On one side you have people who can't decide whether it can even count; on the other side you have people pushing for UBI because of all the jobs it will replace.


Jobs are being replaced because these systems are good enough at bullshitting that the C-suites see dollar signs: they can stop paying people and use the aforementioned bullshitting software instead.

Like that post from Klarna that was on HN the other day, where they automated 2/3 of all support conversations. Anyone with a brain knows they're useless as chat agents for anyone with an actual inquiry, but that's not the part that matters with these AI systems; the amount of money psycho MBAs can save is the important part.


We're at full employment with a tight labor market. Perhaps we should wait until there's some harder evidence that the sky is indeed falling instead of relying on fragmented anecdotes.


Either clueless or in denial. GPT-4 is already superior to the average human at many complex tasks.


I would agree, but the filing is at pains to argue the opposite (seemingly because such a determination would affect Microsoft's license).


The only reason humans can count is that we have short-term memory, which is trivial to add to an LLM, to be honest.
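
"Trivial" in roughly this sense, as a toy sketch (call_llm here is a made-up stand-in, not any real API): bolt short-term memory onto a stateless model by replaying recent turns into every prompt.

    # Toy "short-term memory" wrapper: replay the last few exchanges into each prompt.
    # call_llm is a hypothetical stand-in for whatever model you're using.
    from collections import deque

    class ChatWithMemory:
        def __init__(self, call_llm, max_turns: int = 10):
            self.call_llm = call_llm
            # last max_turns exchanges (2 messages per turn) -- the "short-term memory"
            self.history = deque(maxlen=2 * max_turns)

        def ask(self, user_message: str) -> str:
            prompt = "\n".join(self.history) + f"\nUser: {user_message}\nAssistant:"
            reply = self.call_llm(prompt)
            self.history.append(f"User: {user_message}")
            self.history.append(f"Assistant: {reply}")
            return reply

    # usage with a dummy model standing in for the LLM
    chat = ChatWithMemory(call_llm=lambda p: f"(model saw {len(p)} chars of context)")
    print(chat.ask("Remember the number 42"))
    print(chat.ask("What number did I ask you to remember?"))  # this prompt includes the first exchange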


LLMs already have short-term memory: the context window they condition on when predicting the next token?
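
Right, and you can illustrate that point in a few lines (numbers here are made up and tiny on purpose): the window is the memory, and anything that scrolls out of it is simply gone for the next prediction.

    # The context window as short-term memory: tokens outside the window are forgotten.
    CONTEXT_WINDOW = 8  # tokens; real models use thousands

    def visible_context(tokens: list) -> list:
        """What a fixed-window model can actually condition on for the next token."""
        return tokens[-CONTEXT_WINDOW:]

    conversation = "remember the number 42 . now talk about something else entirely for a while".split()
    print(visible_context(conversation))
    # The early tokens ("remember the number 42") have already fallen out of view.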



