Hacker News | throwaway122024's comments

These problems are trivial when you consider that we're already living in a sci-fi timeline, where this technology was largely considered impossible 10 years ago. You're asking how it's going to know to ask the right questions, and to whom, when getting a system to ask questions in a general sense was the hard part. If you can't imagine humanity overcoming this hurdle, you're lacking in imagination.


There isn't a hurdle here. These problems are fundamental to the technology.

If you told me 20 years ago that you could, by connecting a nuclear power plant to a virtually infinite number of GPUs, produce a machine capable of predicting the next token in a sequence, I'd have believed you, but I also would have said that it would be an astoundingly huge waste of resources to get to that result.

People seem to ascribe some magic here that isn't happening, because they're surprised by the results.


> People seem to ascribe some magic here that isn't happening, because they're surprised by the results.

I believe this is the result of drinking the kool-aid. At this point, though, I'm really confused about how AI has any jobs left to do, because blockchain was going to solve all the problems.


I don't think that's a useful way to frame it. Model training is very compute-intensive. Generation isn't. You can run it locally on consumer hardware. It's just not very monetizable, so we're converging on a black-box "in the cloud" approach.

If we're focusing on energy consumption, using an LLM to generate a newspaper article probably uses less energy than a living journalist would use. The morning commute alone is probably "worth" a dozen articles.

The problem is different. First, that journalist is still alive and consuming resources, just out of a job. Second, because you now have a very cheap way to generate an infinite number of articles, and there are commercial incentives to do so, the "dead internet theory" has a good chance of coming true.


A major failing in futurism is not accounting for both quality and cost scaling, especially when they're supra-linear.

If you'd told someone with one of the first automobiles that they'd eventually be massively personally owned, have access to the majority of countries via paved road networks, and be refuelable along the way, they'd have laughed at you.

And yet, that's what we've all lived in.

It happened because automobiles were massively useful.

It's difficult to imagine a scenario where basic-level cognition doesn't also scale, because a lot of the mental tasks we do every day are dumb and low-value.


Nearly every example of futurism I've seen in my 40+ years of looking at future technology has been people over-accounting for scaling rather than ignoring it.

For example they see a prototype and then assume in 10 years everyone will have one.

Or they look at a proof of concept and don't realise that the last 10% of refinement actually takes 90% of the effort.

What people usually miss is that scaling is the problem, not the solution. Prototypes are easy in comparison.


The quip that stuck in my head (Kurzweil?) is that adoption is based on utility over existing practice.

Hence why cell phones took off (versus landlines) but flying cars didn't (versus ground cars).

Adding basic-level creativity to automation seems like the former.

It's not going to change everything overnight, but it is going to impact everything into a new normal.


"sci-fi timeline" :rollseyes. Sonny, I had autocomplete > 10 years ago!


I am not joking when I say that the autocorrect on the Microsoft Zune HD in 2010 is the single best autocorrect implementation that I have EVER experienced. You could half pay attention to a conversation, hit the right QUARTER of the keyboard with your giant thumb, and it would ALWAYS guess right. It was literally magic, in a way that no technology I have experienced since has compared to.

Meanwhile, my modern "AI"-based Google keyboard autocorrect can't even handle the average case where only the first letter of the word was a mistake. It will only suggest corrections (and then auto-choose those corrections, based on rules that seem to change monthly) that use the same first letter that I accidentally hit.
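The first-letter complaint is easy to illustrate: a plain edit-distance ranker over a vocabulary happily corrects a wrong first character, so nothing forces a corrector to anchor on it. This is just a toy Python sketch of that idea (the vocabulary and functions are invented for illustration, not how Gboard actually works):

```python
# Toy spelling corrector: rank candidate words by Levenshtein edit
# distance, with no special anchoring on the first letter.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(typo: str, vocab: list[str]) -> str:
    """Return the vocabulary word closest to the typo."""
    return min(vocab, key=lambda w: edit_distance(typo, w))

vocab = ["hello", "world", "keyboard", "magic"]
print(correct("jello", vocab))  # first letter wrong -> "hello"
```

A real keyboard would weight distances by key adjacency and word frequency, but even this naive version handles the "wrong first letter" case that the shipped product refuses to consider.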

Also, my mother had a dirt-cheap LG flip phone in 1998 that, after 10 minutes of training, could voice-dial any number and even call any of your contacts by name.

In the mid-2000s, Microsoft had dictation BUILT IN to Windows XP. You can see it do poorly in a demo, which is funny, but in actual use, without any cheating or prompting or massive compute resources, it correctly understood you 80% of the time. The state of the art in the industry at the time could do much better than that without "AI", and it cost a pretty penny, mostly serving people with disabilities.

However, that dictation API is STILL IN WINDOWS; Microsoft just kinda hides it because they want you to use the Azure-based one. It is super simple, with a powerful yet understandable API, available to any application that runs on Windows (like, say, games), and completely free to use. More importantly, if you have ANY knowledge about the expected use case for your app, you can feed the dictation engine an easy-to-generate "grammar" describing the structure of commands, and its accuracy can jump up to near perfection.

When I played with it, I was able to build a voice-controlled app in a single page of code and about an afternoon that easily turned my voice into text. It was literally harder to wrangle the legacy Win32 APIs to inject input into queues for my use case: voice-controlled game input.
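The grammar trick described here is conceptually simple: instead of transcribing open-ended speech, the engine only has to pick the best match from a small set of legal commands, so the hypothesis space collapses from "any English sentence" to a handful of phrases. This is NOT the actual Windows speech API, just a toy Python sketch of why that constraint lifts accuracy (the grammar, function names, and noisy "recognizer output" are all made up):

```python
from itertools import product

# Toy command grammar: each rule is a sequence of slots, and each
# slot lists its allowed words. Expanding the rules yields every
# legal phrase the "recognizer" is permitted to output.
GRAMMAR = [
    (["open", "close"], ["door", "map", "inventory"]),
    (["attack"], ["goblin", "dragon"]),
]

def legal_phrases(grammar):
    """Expand every rule into its full set of legal command strings."""
    phrases = []
    for slots in grammar:
        for words in product(*slots):
            phrases.append(" ".join(words))
    return phrases

def edit_distance(a: str, b: str) -> int:
    """Dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def snap_to_grammar(hypothesis, grammar):
    """Replace a noisy recognizer hypothesis with the closest legal command."""
    return min(legal_phrases(grammar), key=lambda p: edit_distance(hypothesis, p))

print(snap_to_grammar("attak dragon", GRAMMAR))  # -> "attack dragon"
```

With only eight legal phrases, even a badly garbled acoustic guess snaps to the right command, which is the same reason a constrained game-command grammar made the XP-era engine feel near-perfect while open dictation hovered around 80%.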

