Hacker News | Stwerner's comments

Yep, and the primary transport method is JSON-RPC over stdio.
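
For anyone unfamiliar, here's a minimal sketch of what that looks like from the client side, in Ruby (the "my-server" command and "tools/list" method are placeholders, and I'm assuming newline-delimited framing; some transports frame with Content-Length headers instead):

    require "json"

    # Spawn the server and speak JSON-RPC 2.0 over its stdin/stdout.
    IO.popen(["my-server"], "r+") do |io|
      request = { jsonrpc: "2.0", id: 1, method: "tools/list", params: {} }
      io.puts(JSON.generate(request))  # one message per line
      response = JSON.parse(io.gets)   # read the reply line
      puts response["result"]
    end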


Yeah, that's definitely been my experience too. It's just as exhausting, compressed into a much shorter amount of time. I've gotten more endurance as I keep pushing; it kind of reminds me of how pair programming felt the first few months of doing it all day.


I look at it kind of similarly to what the rise of online poker did for Texas hold'em. You had people who spent decades playing in person to learn enough to reach the highest tier, but when online poker came about, people were able to play 4+ tables at once basically 24/7, at a higher hourly hand rate per table than was possible in person (let alone having access to analysis tools like Poker Tracker). People were able to get very good in a much shorter amount of time.

I suspect we're going to see something similar with junior talent across the board. A lot of the barriers to getting to the core of software engineering, for example, are going away, and you're going to be able to get orders of magnitude more trial-and-error attempts in than you previously could in the same amount of time.


You'll have to forgive me because I know literally nothing about competitive poker. Are there players whose experience is primarily online who show up at in-person tournaments and lack the "soft skills" necessary to excel in that setting, like preventing themselves from exhibiting tells?

I'm not trying to relate this back to the AI/junior/senior developer question; I'm just curious about the dynamic in poker, since you seem to know what you're talking about.


Yes, but it goes both ways. Online and in-person play involve the same calculations for value, but lacking physical tells, you learn to rely on those calculations more. As a result you'll likely see a lot stronger players online (better at knowing and playing the odds) than in an in-person game. This doesn't even touch on the number of "cheaters" who use assistance online for calculations and bet placement.

Another analogy that might work is chess. I've only ever played "classical" chess, and when my son got interested in playing I would crush him every time. That lasted up until the point he got into bullet chess and was literally grinding out dozens of games a day, while I'd casually play a game of classical chess maybe once or twice a month. His confidence and ability skyrocketed, and I'm not even a challenge for him anymore. Now, I'm not a real chess player, and there are areas of his game that are definitely weak compared to classical players who have played as many games as he has. But to turn around so quickly, from not being able to win a game against me to dominating me in every game, was impressive.


One striking thing about Gen Alpha and young Zoomers is how RAPIDLY they learn. Being young is like being on learning-focused anabolic steroid #1, and online programs and AI are like learning-focused anabolic steroid #2. It's really impressive. Take any "time to learn" estimate you have, like "2 years to become good at chess", and today's young people can cut it by a factor of 10.


I agree with that, but one of the core lessons I've learned over my career in technology is that iteration rates are critical for learning. The shorter you can get the feedback loop, the faster you'll learn and advance. Companies that release software once a year or every other year are objectively terrible at it compared to companies that release weekly or even daily. Bullet chess and online poker drastically shorten the feedback loop for those games compared to the "traditional" way of playing.


I played online poker successfully for years back when it was booming.

There are definitely online players who lack the skills you're describing, but that's not as much of a problem as you think. You can hide tells just by shutting up and staying still while you're in the action.

The other half of that is reading other people's tells, and online poker is more helpful there than you'd think. Most of reading other people (especially at relatively low-mid levels) is about reading the story they're telling with their action rather than reading their face/words/etc.

A classic example: there are two hearts on the board on the flop, and the person calls your bet. The turn comes, not a heart; the person calls again. The river comes, not a heart, and the person suddenly bets big to try to get you to fold, because they had two hearts and failed to make their flush.

Bigger picture, you read their style of play. Are they playing a lot of hands or very few? Passive or very active? None of these things require reading the person's mannerisms, and you can practice all of them very well online (though online you also run tracking software that gives you stats on opponents, which helps when you're playing a bunch of tables at a time).

Writing this out makes me miss online poker. Shame the games are terrible now (and, to be fair, I also have a child and a business now, as opposed to the endless free time of my twenties).


I can't speak for everyone, but I've definitely seen people make the transition, or at least bring in-person tournaments into the mix of games they play. I suspect a lot just prefer online because of how convenient it is and don't really explore in-person events.

For me, there was definitely a high level of anxiety and nerves when I sat back down at a live table for the first time after playing online for a while. But it gets easier and easier to shake that off and just get into the flow of watching betting patterns (which is the main thing you have to work with online), and those were always the primary source of tells for me rather than anything physical. So maybe in my case the answer to your question is yes, haha :) though it didn't seem to impact me negatively much.


Yes, online poker players tend to be worse than players who learned in person. Pros love to play online players in person; they consider it free money.

To put it bluntly: at any tournament, 90% of the players left after the first round will be players who learned in person. Only a handful of online players (like Moneymaker) have made a successful transition to professional poker.



Gave it a shot real quick. Looks like I need to fix something about automatically running the migrations, either in the CI script or locally...

But if you're curious, task was this:

----

Title: Bug: Users should be able to add tags to a task to categorize them

Description: Users should be able to add multiple tags to a task but aren't currently able to.

Given I am a user with multiple tasks
When I select one
Then I should be able to add one or many tags to it

Given I am a user with multiple tasks, each with multiple tags
When I view the list of tasks
Then I should be able to see the tags associated with each task

----

And then we ended up with:

GPT-4o ($0.05): https://github.com/sublayerapp/buggy_todo_app/pull/51

Claude 3.5 Sonnet ($0.09): https://github.com/sublayerapp/buggy_todo_app/pull/52

Gemini 2.0 Flash ($0.0018): https://github.com/sublayerapp/buggy_todo_app/pull/53

One thing to note that I've found (I know you had the "...and you should be able to filter/group tasks by tag" part in the request): when you have a request that is "feature A AND feature B", you usually get better results by breaking it down into smaller pieces and applying them one by one. I'm pretty confident that if I spent the time to get the migrations running, we'd be able to build that request out story by story, as long as we break it into bite-sized pieces.


You can have a larger model split things out into more manageable steps and create new tickets, marked as blocked or not on each other, then have the whole thing run. A rough sketch of that is below.
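
Here's roughly what that orchestration could look like in Ruby (llm_complete and run_coding_agent are hypothetical helpers standing in for your actual model calls, and the JSON shape is just an assumption):

    require "json"

    feature = "Users should be able to add tags to tasks and filter/group by tag"

    # Ask a larger model to produce tickets with explicit dependencies.
    plan = JSON.parse(llm_complete(<<~PROMPT))
      Break this feature into small, independently shippable tickets.
      Return JSON: [{"id": 1, "title": "...", "blocked_by": []}, ...]
      Feature: #{feature}
    PROMPT

    # Run each ticket once everything it is blocked on has finished.
    # (No cycle detection; a real version would need a guard.)
    done = []
    until done.size == plan.size
      plan.each do |ticket|
        next if done.include?(ticket["id"])
        next unless (ticket["blocked_by"] - done).empty?
        run_coding_agent(ticket)
        done << ticket["id"]
      end
    end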


Thanks! And having another step where they review each other's code is a really cool extension to this; I'll give it a shot :) Whether it works or not, it could be really interesting for a future post!


Wonder if you could have the reviewer characterize any mistakes and feed those back into the coding prompt: “be sure to… be sure not to…”
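
As a sketch, that loop could be as simple as this (llm_complete, diff, and original_coding_prompt are hypothetical placeholders, not any particular library's API):

    # Have a second model distill its complaints about the diff into
    # one-line rules the coding model can follow on the next attempt.
    review = llm_complete(<<~PROMPT)
      Review this diff. List each mistake as a one-line rule starting
      with "Be sure to" or "Be sure not to".
      #{diff}
    PROMPT

    # Prepend the distilled rules to the original prompt and retry.
    improved = llm_complete("#{review}\n\n#{original_coding_prompt}")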


Yeah, I totally agree that something related to this will likely be the next paradigm. I've been putting together experiments in different directions trying to find the thing that's missing, but I haven't really found a killer use case yet to pull it all together.

That's a really cool idea: once you can get something generated somewhat reliably and consistently, you can kind of let your A/B tests start to run themselves, with just rough guidelines on what you're trying to optimize for...


Wow, cool to see this make it on to HN!

Author here, happy to answer any questions about this or chat about the ideas behind it :)


I love that this is more art piece than serious software... more like offering someone an expedition than a product.

(Though I'm not sure I'll get on the expedition, I am a little worried about sandboxing and setup and getting distracted...)

If I were to start the expedition, I'd probably try to overshoot by describing a site that I could not myself fully imagine, or using attributes that lack a single meaning. Like, "the artist's interactive portfolio, as though the artist is looking over your shoulder, the artist keeping a carefully neutral expression while seething inside." Then I'd probably continue by imagining just the outline of some site that satisfies some unarticulated desire, putzing around as I see a concrete articulation of that idea, reforming the idea in my head in response to those results as much as I articulate the idea in more detail.


Ahh I absolutely love this idea of trying to infuse more emotional and fuzzy attributes to see what the LLM comes up with!

When I broke out the layout and style components, I was thinking of being able to change the whole site aesthetic from something like "standard B2B" to "GeoCities fan page", but I'm excited to try getting fuzzier with the descriptions!


Very interesting to see this here!

I've been having a lot of conversations recently with people about LLMs, and how a prototypal-inheritance-like approach to working with them seems to be surprisingly effective, especially for writing code.

Also, this quote jumped out at me: "Insights from the dynamic world can have application in the static."


Could you say a bit more about the link between prototypal inheritance and LLMs? Not obvious to me what it is.


Ahh yeah - so I've found prototypal inheritance (or what Hofstadter calls the Prototype Principle) is a useful way to think about what's going on with in-context learning and few-shot prompting, and about ways to take it further.

If you provide a "prototype" to a model, you can operate on that prototype and create new instances or variations of it. This is useful for working with code: you define an abstraction or a class, and then have the LLM work with and modify it, generating new code that fits, or conceptually inherits from, that original prototype. A sketch of what I mean is below.
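
As a concrete (purely illustrative) sketch in Ruby, you hand the model a known-good class as the prototype and ask for a variation, rather than describing the desired class from scratch (llm_complete is a hypothetical helper, and the serializer names are made up):

    # A known-good "prototype" class pulled from your codebase.
    prototype = <<~RUBY
      class TaskSerializer
        def initialize(task)
          @task = task
        end

        def to_h
          { id: @task.id, title: @task.title, tags: @task.tags.map(&:name) }
        end
      end
    RUBY

    # The model derives a new instance that conceptually inherits
    # the prototype's shape and conventions.
    new_code = llm_complete(<<~PROMPT)
      Here is an example serializer from our codebase:
      #{prototype}
      Write a ProjectSerializer in the same style.
    PROMPT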


I've been working with Ruby + AI for the last year and couldn't agree more with this post. It feels like there are so many brand-new ways to build software with LLMs that have yet to be discovered, and I find Ruby's flexibility makes it easy to try out new ideas almost as quickly as you can think of them.

Obie also mentions my company's product, Blueprints. Blueprints lets you capture existing, known-good patterns in your codebase and then use them as a base for an LLM to generate variations from. We've got plugins for the major editors, and we're also starting to roll out downloadable packages, like this DaisyUI-styled Phlex component package: https://blueprints.sublayer.com/packages/phlex-daisyui

Happy to answer any questions about it!


I've been using Blueprints extensively. I see a future that includes servers hosting curated collections of Blueprints for many different programming languages and niches.

