MakeAJiraTicket's comments | Hacker News

The author explicitly states that there are other types of thought; this is about structured reasoning.


The Telexistence demo isn't so bad, but I have no idea why we're trying to make humanoid robots generally. The human shape sucks at most things, and we already have people treating Roombas and GPT like their boyfriends or pets...


What form factor would be better at going up and down stairs? Reaching to a high shelf? Getting between the refrigerator and counter to grab a key?


Because human work is designed for humans. If you want a drop-in replacement for human workers, humanoid robots are your best bet.


That doesn’t even remotely follow. Human work is designed for humans, so if you want human work done, you need a human to do it.

If you want to replace the human the best bet is to redesign the work so that it can be done with machine assistance, which is what we’ve been doing since the industrial revolution.

There’s a reason the motor car (which is the successful mass market personal transportation machine) doesn’t look anything like the horse that it replaced.


<HN voice>Technically, the motor car replaced the coach. A more accurate and enjoyable analogy would be that car engines don’t look like horses.


hence horseless CARriage


I don't think that it's human work this would target, but instead work in shared human spaces.


We already have robots that work in shared human spaces, and our experience in that domain has shown that you need to put a lot of thought into how to do this safely, and specifically into how to prevent the robot from accidentally harming the humans. Ask anyone with a robotic CNC machine how they would feel about running the machine without its protective housing, for example. I expect they will start to throw up just a little bit.

Flexibility is exactly the opposite of what you need until we have a CV and controller combination that can really master its environment. I could foresee a lot of terrible accidents if you brought a humanoid robot into a domestic environment without a lot of care and preparation, for example.


Sure but robots don’t join unions or ask for a pay raise or benefits.


I have a function that compares letters to numbers for the Major System. It's about 40 lines of code, and Copilot keeps trying to add "guard rails" for "future proofing", as if we're going to add more numbers or letters in the future.

It's so annoying.
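For context, the mapping in question really is tiny and closed. A minimal sketch (this is not the commenter's actual function, and it uses letters rather than phonetic sounds, which is a simplification of the real Major System):

```python
# Conventional Major System digit -> consonant mapping, approximated
# by letter rather than by sound. The key set is fixed: digits 0-9.
MAJOR = {
    "0": "sz",
    "1": "td",
    "2": "n",
    "3": "m",
    "4": "r",
    "5": "l",
    "6": "jg",   # soft g; the sh/ch sounds also land here
    "7": "kcq",  # hard c, k, q
    "8": "fv",
    "9": "pb",
}

def digit_for_letter(letter):
    """Return the Major System digit for a consonant, or None."""
    letter = letter.lower()
    for digit, letters in MAJOR.items():
        if letter in letters:
            return digit
    return None  # vowels and unmapped letters carry no digit
```

There is nothing here to future-proof: the digit set can't grow, so guard rails against "new numbers" are pure noise.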


Defensive programming is considered "correct" by the people doing the reinforcing, and it's a huge part of the corpus that LLMs are trained on. For example, most Python code doesn't do manual index management, so when an LLM sees manual index management it is much more likely to freak out and hallucinate a bug. It will randomly promote "silent failure" even when a silent failure results in things like infinite loops, because it was trained on a lot of tutorial Python code, and "industry standard" gets more reinforcement during training.

These aren't operating on reward functions because there's no internal model to reward. It's word prediction; there's no intelligence.


LLMs do use simple "word prediction" in the pretraining step, just ingesting huge quantities of existing data. But that's not what LLM companies are shipping to end users.

Subsequently, ChatGPT/Claude/Gemini/etc. go through additional training: supervised fine-tuning, then reinforcement learning driven by reward functions, whether learned from human feedback (RLHF) or mechanically verifiable (RLVR, "verified rewards").

Whether that fine-tuning and those reward functions give them real "intelligence" is open to interpretation, but it's not 100% plagiarism.
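To make the RLVR idea concrete: a "verifiable" reward is one computed by mechanically checking the output, rather than by a human preference model. A toy sketch (names and the exact checking convention are illustrative, not any lab's actual API):

```python
def verifiable_reward(model_output: str, expected_answer: str) -> float:
    """Reward 1.0 if the final line of the output exactly matches the
    known answer, else 0.0. The check is mechanical, hence 'verified'."""
    final_line = model_output.strip().splitlines()[-1].strip()
    return 1.0 if final_line == expected_answer else 0.0
```

Real setups use richer checks (unit tests for code, exact-match or symbolic equality for math), but the shape is the same: output in, scalar reward out.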


You used the word reinforcing, and then asserted there's no reward function. Can you explain how it's possible to perform RL without a reward function, and how the LLM training process maps to that?


LLM actions are divorced from that reward function; it's not something they consult or consider. "Reward function" doesn't make sense in that context.


Reinforcement learning by definition operates on reward functions.
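The definitional point can be shown in a few lines. A minimal REINFORCE sketch on a two-armed bandit (a toy, not how LLM post-training is actually implemented): the reward function is literally an input to the update rule, so RL without one isn't RL.

```python
import math
import random

def reward(action):
    # Hypothetical reward: arm 1 pays off, arm 0 does not.
    return 1.0 if action == 1 else 0.0

def train(steps=2000, lr=0.1, seed=0):
    random.seed(seed)
    logits = [0.0, 0.0]  # the policy's preference for each arm
    for _ in range(steps):
        exps = [math.exp(x) for x in logits]
        probs = [e / sum(exps) for e in exps]  # softmax policy
        action = 0 if random.random() < probs[0] else 1
        r = reward(action)
        # REINFORCE: nudge the log-prob of the taken action, scaled by reward.
        for a in (0, 1):
            grad = (1.0 if a == action else 0.0) - probs[a]
            logits[a] += lr * r * grad
    exps = [math.exp(x) for x in logits]
    return [e / sum(exps) for e in exps]
```

After training, the policy strongly prefers the rewarded arm; zero out `reward` and the logits never move.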

