Hacker News | new | past | comments | ask | show | jobs | submit | clbrmbr's comments

Amazing job. My 6 and 8 year olds could understand back to 1400.

I just use Jesse’s “superpowers” plugin. It does all of this, but it also steps you through the design, gives you bite-sized chunks, and has you make architecture decisions along the way. Far better than making big changes to an already established plan.


I suggest reading the tests the Superpowers author has come up with for testing the skills. See the GitHub repo.



What would it take to put Opus on a chip? Can it be done? What’s the minimum size?

Maybe not today. Opus is quite large, and this demo works with a very small 8B model. But maybe one day, hopefully soon. Opus on a chip would be very awesome, even if it could never be upgraded.

Someone mentioned that maybe we'd see a future where these things come in something like Nintendo cartridges. Want a newer model? Pop in the right cartridge.


I keep a medium moleskine with the dots. Great for sketching UI designs or block diagrams. Dots are just enough guidance for technical drawing but not as distracting as lines.

It's so much more respectful in meetings to use a pen and notebook than a digital writing medium. Not sure why, but that’s the vibe I feel.


Yes, should be tagged (2023) imo.

I didn’t catch it until seeing these flag-raising comments… checking the other comments from the last 8 hours, it’s Claw for sure.

I’d love to hear from engineers who find that faster speed is a big unlock for them.

The deadline piece is really interesting. I suppose there are a lot of people now who are basically limited by how fast their agents can run, and who are on very aggressive timelines with funders breathing down their necks?


> I’d love to hear from engineers who find that faster speed is a big unlock for them.

How would it not be a big unlock? If the answers were instant I could stay focused and iterate even faster instead of having a back-and-forth.

Right now even medium requests can take 1-2 minutes and significant work can take even longer. I can usually make some progress on a code review, read more docs, or do a tiny chunk of productive work but the constant context switching back and forth every 60s is draining.


I won't be paying extra to use this, but Claude Code's feature-dev plugin is so slow that even when running two concurrent Claudes on two different tasks, I'm twiddling my thumbs some of the time. I'm not fast and I don't have tight deadlines, but nonetheless feature-dev is really slow. It would be better if it were fast enough that I wouldn't have time to switch off to a second task and could stick with the one until completion. The mental cost of juggling two tasks is high; humans aren't designed for multitasking.


Hmm, I’ve tried two modes. One is to stay focused on the task at hand, but spin up alternative sessions to do documentation, check alternative hypotheses, and second-guess what the main session is up to. The other is to do an unrelated task in another session. I find the latter gets more work done in a day but is exhausting. With better scaffolding and longer per-task run times (longer tasks in the METR sense), it could become more sustainable to work as a manager of agents.

Two? I'd estimate twelve (three projects x four tasks) going at peak.


3-4 parallel projects is the norm now, though I find that with task-level parallelism, avoiding overlap is still bothersome, even with worktrees. How did you work around that?
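(For context, the worktree setup mentioned above looks roughly like this; the repo and task names here are made up for illustration:)

```shell
# Give each parallel agent task its own checkout via git worktrees,
# so concurrent edits never collide in a single working directory.
# "demo", "task-a", and "task-b" are hypothetical names.
git init -q demo && cd demo
git config user.email agent@example.com
git config user.name agent
git commit -q --allow-empty -m "initial commit"

# one worktree (and branch) per in-flight task
git worktree add -b task-a ../demo-task-a
git worktree add -b task-b ../demo-task-b

git worktree list   # shows the main checkout plus both task worktrees
```

Each agent then works inside its own directory on its own branch, and overlap only has to be resolved once, at merge time.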


it's simpler than that - making it faster means it becomes less of an asynchronous task.

current speeds are "ask it to do a thing, and then you, the human, need to find something else to do for minutes (or more!) while it works". at a certain point of it being faster, you just sit there, tell it to do a thing, it does it, and you constantly work on the one thing.

cerebras is just about fast enough for that already, with the downside of being more expensive and worse at coding than claude code.

it feels like absolute magic to use though.

so, depends how you price your own context switches, really.


If it could help you avoid needing to context-switch between multiple agents, that could be a big mental-load win.


The only time I find faster speed to be a big unlock is when iterating on UI stuff. If you're talking to your agent, with hot reload and such, the model can often be the bottleneck in a style-tuning workflow, by a lot.


The idea of development teams bottlenecked by agent speed rather than people, ideas, strategy, etc. gives me some strange vibes.


I do wonder about reasoning effort.


Reasoning effort is denominated in tokens, not time, so there's no difference beyond slowness at heavy load.

(I work at OpenAI)


Yeah, same here. Work grinds to a halt across our entire business. HN is always faster than the Claude status page. It seems to impact Claude Code (Max) but not Claude.ai.


Same here, no Claude Code right now


This is so NOT a joke. Soon the preponderance of workers will be subcontractors for rogue, too-big-to-fail AI entities.


How long until an AI builds an alternative economy made up of entities it controls?


A few days? The "scam" cryptos in AI-made spaces are already worth millions.

