
on, off, and the other thing


hi-z is one choice. Though I don't know how well that does past a certain speed.

It works poorly at any speed. Hi-Z is an undriven signal, not a specific level, so voltage-driven logic like (C)MOS can't distinguish it from an input that's whatever that signal happens to be floating at. In current-driven logic like TTL or ECL, it's completely equivalent to a lack of current.

I wasn't pitching it as a solid commercial idea. Just that you can get (perhaps fiddly) three states out into the real world with something cheap that already exists. Like: https://idle-spark.blogspot.com/2015/04/low-cost-n-ary-dacs-...

Using 8 way quadrature and high-z, you have 16 values packed into 1 quasi-hexalogic gate. May your hydration prove fruitful.
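As a rough back-of-the-envelope on the information density being discussed here (a purely illustrative Python sketch; the symbol counts come from the comments above, nothing else is assumed):

    import math

    # Bits of information carried by one symbol that can take n distinct values.
    def bits_per_symbol(n: int) -> float:
        return math.log2(n)

    print(bits_per_symbol(2))   # plain binary pin: 1.0 bit
    print(bits_per_symbol(3))   # on / off / hi-Z "trit": ~1.585 bits
    print(bits_per_symbol(16))  # the 16-value symbol above: 4.0 bits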

On, off, and ooh shiny!

null :)

Assistants can be taught

And these models get upgraded -- at a much faster average rate than humans. Continual vs punctuated improvement :)

Predatory AI-generated site feeding off parents' anti-screen anxiety, no thanks.

We need Klutz to come back https://en.wikipedia.org/wiki/Klutz_Press


I remember having the juggling one! Thanks for reminding me of Klutz. I'm hoping to finally have kids in the near future and, while I don't want to completely shield them from all tech, I do want to ground them in reality with "real" activities. I may order a bunch of their books in the future.

These time-to-complete estimates seem reallyyyyyy aggressive to me. I dunno maybe I'm just the slowest programmer in the world.


yeah, it's a good post overall, but the humblebrag factor undermines it


Why is it important to force reindustrialization?


Let’s axiomatically presume that vertical integration (of your supply chain, manufacturing, software development, labor force, etc.) is the key to “innovation” in the abstract. You can see this starting to happen with BYD, the Chinese electric car company, for example.

With the above in mind, let’s say your goal as a country is to develop your industries so you can achieve broad vertical integration.

Soft economic assets (financial markets, legal structures, software, pharma, IP, etc.) are easier to bootstrap once you have hard economic assets (commodities, energy, manufacturing, transportation, logistics, etc.). If you lose your hard economic assets and only have soft ones, you are at a strategic disadvantage because your opponent (in this case China) can bootstrap their soft assets relatively quickly. By comparison, it will take you much longer to get your hard assets back (through the mythical process of “reshoring/reindustrialization”).

More in my comment here: https://news.ycombinator.com/item?id=43589566.


What makes you think the United States has lost hard economic assets? Also, have you considered that there are different types of innovation, which are affected by the structure of an economy and its research institutions?

The US has dominated radical innovations, China has excelled in incremental innovations. We're potentially throwing away our advantage in the former by decimating our research capacity (something conspicuously absent in your concept of what drives innovation), and there is no clear plan to take the necessary steps to rebuild our capacity in the latter.


I’ve never said I agree with how the current administration is going about their business. But it’s instructive to understand their perspective.

My comments in this thread are mostly informed by everything I read about Bessent.


The Pandemic exposed the reasons why pretty clearly. Having e.g. critical medicines manufactured by a geopolitical and military adversary is a very bad idea.


Isn't this just a form of next token prediction? i.e., you'll keep your options open for a potential rhyme if you select words that have many associated rhyming pairs, and you'll further keep your options open if you focus on broad topics over niche ones.
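A toy sketch of what "keeping your options open" could look like, with an entirely made-up rhyme table (hypothetical, just to make the idea concrete):

    # Toy illustration: prefer a line-ending word that leaves many possible
    # rhymes for a later line. The rhyme table is made up; a real system
    # would build it from a pronunciation dictionary.
    RHYMES = {
        "cat":    ["hat", "mat", "bat", "flat", "sat"],
        "orange": [],                       # famously few options
        "day":    ["way", "say", "play", "stay", "gray", "may"],
    }

    def options_left(word: str) -> int:
        # More available rhymes means more freedom when the rhyming line arrives.
        return len(RHYMES.get(word, []))

    candidates = ["cat", "orange", "day"]
    print(max(candidates, key=options_left))  # -> day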


Assuming the task remains just generating tokens, what sort of reasoning or planning would you say is the threshold before it's no longer "just a form of next token prediction"?


This is an interesting question, but it seems at least possible that as long as the fundamental operation is simply "generate tokens", it can't go beyond being just a form of next-token prediction. I don't think people were thinking of human thought as a stream of tokens until LLMs came along. This isn't a very well-formed idea, but we may require an AI for which "generating tokens" is just one subsystem of a larger system, rather than the only form of output and interaction.


But that means any AI that just talks to you can't be AI by definition. No matter how decisively the AI passes the Turing test, it doesn't matter. It could converse with the top expert in any field as an equal, solve any problem you ask it to solve in math or physics, write stunningly original philosophy papers, or gather evidence from a variety of sources, evaluate them, and reach defensible conclusions. It's all just generating tokens.

Historically, a computer with these sorts of capabilities has always been considered true AI, going back to Alan Turing. Also of course including all sorts of science fiction, from recent movies like Her to older examples like Moon Is A Harsh Mistress.


I don't mean that the primary (or only) way that it interacts with a human can't be just text. Right now, the only way it interacts with anything is by generating a stream of tokens. To make any API calls, to use any tool, to make any query for knowledge, it is predicting tokens in the same way as it does when a human asks it a question. There may need to be other subsystems that the LLM subsystem interfaces with to make a more complete intelligence that can internally represent reality and fully utilize abstraction and relations.


I have not yet found any compelling evidence that suggests that there are limits to the maximum intelligence of a next token predictor.

Models can be trained to generate tokens with many different meanings, including visual, auditory, textual, and locomotive. Those alone seem sufficient to emulate a human to me.

It would certainly be cool to integrate some subsystems like a symbolic reasoner or calculator or something, but the bitter lesson tells us that we'd be better off just waiting for advancements in computing power.



I think one of the massive hurdles to overcome when trying to achieve AGI, maybe, is how you solve the issue of doing things without being prompted, you know, curiosity and such.

Let's say we have a humanoid robot standing in a room that has a window open, at what point would the AI powering the robot decide that it's time to close the window?

That's probably one of the reasons why I don't really see LLMs as much more than algorithms that give us different responses just because we keep changing the seed...


I'm not sure if this is a meaningful distinction: Fundamentally you can describe the world as a "next token predictor". Just treat the world as a simulator with a time step of some quantum of time.

That _probably_ won't capture everything, but for all practical purposes it's indistinguishable from reality (yes, yes, time is not some constant everywhere).
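A minimal sketch of that framing in Python (everything here is a toy stand-in, not a claim about how any real model works):

    from typing import Callable

    State = dict  # toy stand-in for "the state of the world"

    def run(step: Callable[[State], State], state: State, steps: int) -> State:
        # Advance the world one time quantum per iteration: structurally the
        # same loop as a model emitting one token at a time, each step
        # conditioned on everything produced so far.
        for _ in range(steps):
            state = step(state)
        return state

    # Trivial stand-in "physics": a clock that ticks forward each step.
    print(run(lambda s: {"t": s["t"] + 1}, {"t": 0}, steps=10))  # {'t': 10}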


Yeah, I'd agree that for that model (certainly not AGI) it's just an extension/refinement of next token prediction.

But when we get a big aggregate of all of these little rules and quirks and improvements and subsystems for triggering different behaviours and processes - isn't that all humans are?

I don't think it'll happen for a long ass time, but I'm not one of those individuals who, for some reason, desperately want to believe that humans are special, that we're some magical thing that's unexplainable or can't be recreated.


It doesn't really explain it, because then you'd expect lots of nonsensical lines trying to make a sentence that fits the theme and rhymes at the same time.


In the same way that human brains are just predicting the next muscle contraction.


Potentially, but I'd say we're more reacting.

I will feel an itch and subconsciously scratch it, especially if I'm concentrating on something. That's a subsystem independent of conscious thought.

I suppose it does make sense - that our early evolution consisted of a bunch of small, specific background processes that enable an individual's life to continue; a single-celled organism doesn't have neurons, but it has exactly these processes - chemical reactions that keep it "alive".

Then I imagine that some of these processes became complex enough that they needed to be represented by some form of logic, hence evolving neurons.

Subsequently, organisms comprised of many thousands or more of such neuronal subsystems developed higher order subsystems to be able to control/trigger those subsystems based on more advanced stimuli or combinations thereof.

And finally us. I imagine that at the next step, evolution found that consciousness/intelligence, an overall direction of the efforts of all of these subsystems (still not all consciously controlled), and therefore of the individual, was much more effective; anticipation, planning and other behaviours of the highest order.

I wouldn't be surprised if, given enough time and the right conditions, sustained evolution would result in any or most creatures on this planet evolving a conscious brain - I suppose we were just lucky.


I feel like the barrier between conscious and unconscious thinking is pretty fuzzy, but that could be down to the individual.

I also think the difference between primitive brains and conscious, reasoning, high level brains could be more quantitative than qualitative. I certainly believe that all mammals (and more) have some sort of an internal conscious experience. And experiments have shown that all sorts of animals are capable of solving simple logical problems.

Also, related article from a couple of days ago: Intelligence Evolved at Least Twice in Vertebrate Animals


Great points, but my apologies I meant to say "sentience". Certainly many, many animals are already conscious.

I'm not sure about the quantitative thing, seeing as there are creatures with brains physically much larger than ours, or brains with more neurons than we have. We currently have the most known synapses, though that also seems to be because we haven't estimated the count for so many species.


Except that's not how it works...


To be fair, we don't actually know how the human mind works.

The surest things we know are that it is a physical system, and that it does feel like something to be one of these systems.



recursive predestination. LLM's algorithms imply 'self-sabotage' in order to 'learn the strings' of 'the' origin.


Should everyone wear the same size shoes?


Next you'll tell me people should deadlift from different heights


I think the equivalent would be the same model of shoes, and I wouldn’t be against that.


marcan addresses this argument in an adjacent thread:

> Out of tree doesn't work. People, both in the kernel and out of the kernel, want things in tree.

https://lore.kernel.org/lkml/1e8452ab-613a-4c85-adc0-0c4a293...


"arrowlike entities"


I read that and my mind filled in "...from outer space?"


Sounds terrible!


At least the benevolent supreme leader cares! Just look at the corpses!


Vlad the Impaler, without any irony, cared a lot.

