Hacker News | abstractbg's comments

Okay, I'll bite. For the record, I own Tesla stock and I am generally bullish about AI.

I'll try to provide some counter-points specifically regarding the rate of progress.

3. It's much easier to catch up in capability (e.g., LLMs) than it is to achieve a new capability (e.g., replacing human laborers with humanoid robots). You can hire someone from a competitor, secrets eventually leak out, the search space is narrowed, etc.

4(c). To me, what's most important is whether truly autonomous humanoid robots arrive in 3 years, 5 years, or 10 years, rather than merely at some point in our lifetime.

These timelines will be tied to AI development timelines, which are largely outside the control of any one player like Tesla. I believe the world is bottlenecked on compute and that current compute is not sufficient for physical AI.

It's extremely easy to be too early (e.g., many of the self-driving car companies of the past decade), so for Tesla there is a risk of over-investing in manufacturing robots before the core technology is ready.


Thanks, these are fair arguments!

Re: both 3 and 4(c) - agree that compute (or maybe even the power for that compute) is likely to be a bottleneck in the next 3-5 years. However, I think Tesla/xAI are better positioned than many competitors, as Tesla is a manufacturing company first and foremost, and this expertise (which is shared freely between Musk's companies) can help it build its own data centers, power generation (e.g., solar), or, in the most bullish case, even fab capacity.


Fixed! HexWiki has a full section on the controversy surrounding Nash's claim of rediscovering Hex independently https://www.hexwiki.net/index.php/History_of_Hex


Apologies! Are you able to load it now? The activity finally calmed down, and I just pushed a new version a few hours ago. I'm not sure if that's related to what you experienced.

If it's still down for you, I'm happy to debug further; you can reach me on Discord: https://discord.gg/cSmaVrJMYy


Very cool website! Also, check out 5D Chess with Multiverse Time Travel XD


Wow, beating a 100 bot is still impressive in my book.

Thanks for the feedback and suggestions!

Yes, the networks are size-dependent right now. It's a great idea to copy-paste and then adapt the KataGo network architecture, since it isn't size-dependent and has been proven to reach superhuman strength.


I'm about 1950 on littlegolem, but trying again, I just lost a couple of games in a row against the 100 bot.

A couple more questions/remarks:

- It seems everything is happening on the client? My CPU (I'm on a laptop without a real GPU) goes wild during analysis. But also I don't notice any big bag of neural-net-weights being downloaded. Mind sharing how it works?

- Care to share more about the networks? How long did you train the networks for and on what GPU? Circa how many params? Any and all details you'd like to share :)

- This Tumbleweed game is fun! This game seems somewhat inspired by both Hex and Go, and the author lives in Warsaw. Interestingly, I lived in Warsaw in 2020, am very active in the Go scene (4 dan), and at least know the names of the people who play Hex, and yet somehow never heard about Michał or Tumbleweed before...


The analysis happens on the AI server.

Sans proper profiling, I would guess that the CPU going wild during analysis is due to a combination of (1) the analysis being streamed live to the client in 20-simulation intervals, (2) some post-processing on the client side, and (3) the fact that I use a global context and reducer in React, which causes the entire page to re-render each time an update happens.
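For illustration, the streaming pattern described above can be sketched as a generator that emits an analysis snapshot every 20 simulations. This is a minimal toy sketch; all names here (`stream_analysis`, the move labels) are hypothetical and not taken from the actual server code:

```python
import random

def stream_analysis(moves, total_sims=100, interval=20):
    """Toy MCTS-style loop: accumulate visit counts and yield a
    snapshot of the current analysis every `interval` simulations."""
    visits = {m: 0 for m in moves}
    for i in range(1, total_sims + 1):
        # stand-in for one real simulation / playout
        visits[random.choice(moves)] += 1
        if i % interval == 0:
            yield dict(visits)  # snapshot streamed to the client

# Each yielded dict would be one live update pushed to the client.
snapshots = list(stream_analysis(["a1", "b2", "c3"]))
print(len(snapshots))  # 5 updates for 100 sims at 20-sim intervals
```

In a real server this generator would feed a WebSocket or server-sent-events endpoint, and the client-side re-rendering cost would then scale with how often these snapshots arrive.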

The networks are simple ResNets with a value head and a policy head. Each is 20 layers with 128 channels per layer. I trained for several days on 2x 4090s. However, I recently trained a few networks (Hex 14x14, Amazons 10x10, Breakthrough 8x8) on a GH200, and it was 2x faster: roughly 100 checkpoints per 24 hours for Hex 14x14. I'm not sure about the number of parameters, but the .pt and .ts files are on the order of 30-90 MB. There's definitely room for improvement using tricks like quantization during self-play inference.
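As a rough sanity check on the model size: assuming "20 layers" means 20 residual blocks of two 3x3 convolutions each (an assumption on my part, not confirmed above), back-of-the-envelope arithmetic puts the tower in the single-digit millions of parameters:

```python
# Back-of-the-envelope parameter count for a hypothetical
# 20-block, 128-channel residual tower (input stem and heads ignored).
channels = 128
blocks = 20

conv_params = 3 * 3 * channels * channels   # one 3x3 conv: 147,456 weights
bn_params = 2 * channels                    # batchnorm scale + shift
per_block = 2 * (conv_params + bn_params)   # two conv+BN pairs per block
tower_params = blocks * per_block           # ~5.9M parameters

fp32_mb = tower_params * 4 / 1e6            # ~23.6 MB as raw float32 weights
print(tower_params, round(fp32_mb, 1))      # 5908480 23.6
```

That is the same ballpark as the quoted 30-90 MB files once heads, the input stem, and .pt/.ts serialization overhead are added, so the estimate seems plausible.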

I'm very happy you like Tumbleweed! If you're curious there's a Tumbleweed community run by Michał (the creator) https://discord.com/invite/wu6Xdtt497 They are currently playing through their 2025 World Championship.


Thank you so much for the detailed answer!


Sorry, thanks for pointing that out! Yes Nash rediscovered it. Will update once things calm down


Thanks, single device multiplayer is a great idea.

Someone else recently mentioned to me that Virus Wars is their favorite game! I'm glad to see it getting some love.


Here's the rules video used by the Tumbleweed community https://www.youtube.com/watch?v=mjA_g3nwYW4

Also, in case you are curious, Tumbleweed has a discord https://discord.com/invite/wu6Xdtt497

They are currently playing through the 2025 Tumbleweed World Championship. Lots of strong players there!


You should put that video in the info link, super helpful. I am getting crushed by the bot lol. Fun game!


Yeah, that's another nice one. I've got a Quoridor set myself.


Thanks! Will try to figure out a version that I can get permission to add to the website.


You're welcome to "Capture Pentominoes" if you want it!

https://club.cc.cmu.edu/~ajo/free-software/pento/pento.html


The Usenet thread was a fun nostalgic read - something about the character of the discussions was different from the ones you get on web-based fora like Reddit today. (Also, I recognise your name from rec.puzzles, which was another nice nostalgia moment :))


Perfect, thank you!

