I'll look into what's going on with some of the other browsers.
To clarify, the game actually runs a quick check when the timer runs out to see if your word is valid. If it is, the ball returns automatically, so you don't have to hit Enter or Space, but doing so early gives you a speed bonus.
As for getting rid of Enter/Space entirely, auto-submitting can be tricky with compound words (e.g., should it submit 'REGULAR' or wait for 'REGULARLY'?).
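For the curious, here's a rough sketch (TypeScript, not the game's actual code; the function name and dictionary set are made up for illustration) of why auto-submit gets ambiguous: a typed word can be valid on its own and also a prefix of a longer valid word, so the game can't tell whether you're finished.

    // Hypothetical sketch: auto-submit only works cleanly when the typed word
    // isn't also a prefix of a longer dictionary word. `shouldAutoSubmit` and
    // `dictionary` are illustrative names, not the game's real ones.
    function shouldAutoSubmit(typed: string, dictionary: Set<string>): boolean {
      const word = typed.toUpperCase();
      if (!dictionary.has(word)) return false; // not a valid word yet, keep waiting

      // Is the typed word also the start of some longer valid word?
      const extendsToLongerWord = Array.from(dictionary).some(
        (entry) => entry.length > word.length && entry.startsWith(word)
      );

      // If it is (REGULAR vs REGULARLY), we can't know whether the player is
      // done typing, so an explicit Enter/Space is still needed.
      return !extendsToLongerWord;
    }

    // shouldAutoSubmit("REGULAR",   new Set(["REGULAR", "REGULARLY"])) -> false
    // shouldAutoSubmit("REGULARLY", new Set(["REGULAR", "REGULARLY"])) -> true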
I suspected when I first coded this that it needed a better way to introduce the rules. I've gone ahead and buffed the HINT so that time doesn't resume until you get a chance to type out your first word, and I've added a background hint on the return phase instead of just leaving it blank.
Another comment mentioned that the instructions I put on itch.io (and forgot to post on HN, whoops!) made things clearer, so I've pasted them below.
-------------------
HOW TO PLAY
1. Type HIT to serve
2. Type your opponent's word to line up your character with the ball, then type your word to send a volley back
3. Submit your word before the time runs out. The faster you submit, the faster your hit!
I recently rewatched a Tested Q&A where Adam Savage discussed his post-Mythbusters life; his framing of that duality was very similar: https://youtu.be/2tZ0EGJIgD8?t=322.
It aligns with a common design principle: constraints often make a problem space easier to navigate. I suspect life is similar. Having limited time creates a "specialness" that is easily lost when you suddenly have an infinite amount of time at your disposal.
The only thing I remember about Peek is how they sold "lifetime service" with the device for an extra $300 or so, and a couple of years later went "sike! Your device isn't supported on our network anymore."
Watching the announcement, I felt like every feature was something my phone already does, only better.
With glasses, you have to aim your head at whatever you want the AI to see. With a phone, you just point the camera while your hands stay free. Even in Meta’s demo, the presenter had to look back down at the counter because the AI couldn’t see the ingredients.
It feels like the same dead end we saw with Rabbit and the Humane pin—clever hardware that solves nothing the phone doesn’t already do. Maybe there’s a niche if you already wear glasses every day, but beyond that it’s hard to see the case.
If executed well, I think this could remove a lot of friction from the process. I can unlock my phone and hold it in one hand while I prep and cook, but that's annoying. If my glasses could monitor progress and tell me what to do next while I'm doing it, that's far more convenient. It's clearly not there yet, but in a few years I have no doubt it will be. And this is just the start: with the screens they'll be able to offer AR. Imagine working on electronics or a car with the instructions overlaid on the display while the AI gives verbal guidance.
I'm oldish, so maybe I'm biased, but this sort of product seems like something no one outside a few technophiles will want, yet something the industry desperately needs you to want. It's like 3D TV: a solution in search of a problem, because the manufacturers need a next big thing with the associated high margins.
To me the phone is a pretty good form factor: convenient enough (especially with voice control), unobtrusive, socially acceptable, and I need to own one anyway because it's a phone. I'm a geek, so I think this tech is cool, but I see zero chance I would use one, even if it were a few steps better than it is.
I get why it feels bleak—low-effort AI output flooding workflows isn’t fun to deal with. But the dynamic isn’t new. It only feels unprecedented because we’re living through it. Think back: the loom, the printing press, the typewriter, the calculator.
When Gutenberg’s press arrived, monks likely thought: “Who would want uniform, soulless copies of the Bible when I can hand-craft one with perfect penmanship and illustrations? I’ve spent my life mastering this craft.”
But most people didn’t care. They wanted access and speed. The same trade-off shows up with mass-market books, IKEA furniture, Amazon basics. A small group still prizes the artisanal version, but the majority just wants something that works.
I'm not sure it's so much that most people don't care as that hand-crafted items are more expensive. As evidence of popular interest, "craftwashing"[1] mass-produced goods with terms like "artisanal" and "small-batch" can be an effective marketing strategy. To stick with the Bible example, a 1611 King James facsimile still commands a hefty premium[2] over a regular print. Or for paintings, who would prefer a print over the original?
There's also the "Cottagecore" aesthetic that was popular a few years ago, which is conceptually similar to the Arts and Crafts movement or the earlier Luddites.
It's been interesting reading this thread and seeing that others have also switched to using Codex over Claude Code. I kept running into a huge issue with Claude Code creating mock implementations and general fakery when it was overwhelmed. I spent so much time tuning my input prompt just to keep it from making things worse that I eventually switched.
Granted, it's not an apples-to-apples comparison since Codex has the advantage of working in a fully scaffolded codebase where it only has to paint by numbers, but my overall experience has been significantly better since switching.
Other systems don't have a bespoke "planning" mode, so you have to "tune your input prompt" there too; otherwise they rush straight into implementation by guessing what you wanted.