yoavsha1's comments | Hacker News

Well, thanks to all of the humans LARPing as evil bots in there (which will definitely land in the next generation's training data), next time it'll be real.


Both the CC API and their website are down -- hopefully related to the rumored Sonnet 5 release.


That would be one strange way to release a model.


I mean, can you expect a vibecoding company to do anything with zero downtime? They brought the models down and are now panicking at HQ, since there's no one to bring them back up.


This made me laugh only because I imagine there could be some truth to it. This is the world we are in. Maybe they all loaded Codex to fix their deploy? ;)


It is not -- it sounds like an issue with AWS.


OpenClaw agents on Anthropic API taking an unscheduled coffee break.


I had that exact same feeling during the US holidays, when I got to enjoy 2x usage limits and everything just seemed to work well.


I had terrible results during the holidays -- it wasn't slow, but it was clear they were dealing with the load by quantizing in spots. There were entire chunks of days when the results were so terrible I gave up and switched to Gemini or Codex via opencode.


I find that if I have my rabbit's foot and lucky socks on, I win working code ~1.2x more often.


Why base this on time? A simple HOTP, which uses a rolling counter instead of the time value, seems like a much better choice for humans.
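The counter-based scheme mentioned here (HOTP, RFC 4226) can be sketched with the stdlib alone; client and server share a secret plus a counter that increments on each use, rather than deriving the moving factor from the clock as TOTP does:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte big-endian counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window.
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

# RFC 4226 test-vector secret; counters 0 and 1 yield 755224 and 287082.
print(hotp(b"12345678901234567890", 0))
print(hotp(b"12345678901234567890", 1))
```

The trade-off is that both sides must keep their counters in sync (with a look-ahead window for missed attempts), which is exactly what makes it friendlier for humans: no clock skew to worry about.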


I know it's on me for thinking this -- since the domain is fly.io -- but I was really hoping this was some local solution. Not self-hosted, but just local. A thin command-line wrapper around something (Docker? bubblewrap?) that gave me a sort of containerized "VM" experience on my local machine using copy-on-write.


Check out LXC and the wider Incus set of projects: https://linuxcontainers.org/incus/.

Running IncusOS on some local hardware with ZFS underneath is a phenomenally powerful sandbox.


Yeah, I can make an LXC container called "ai" that has a read-only SSH key and a few pre-cloned projects. When I want to work, I can clone and start it, getting the same effect on my own hardware and for free. I'd just need a small wrapper to make this a bit more streamlined.
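A minimal sketch of such a wrapper in Python (the base container name `ai` and the dry-run default are assumptions; on a ZFS or btrfs storage pool, `lxc copy` gives a cheap CoW clone of the pre-provisioned base):

```python
import subprocess

BASE = "ai"  # hypothetical pre-provisioned container: SSH key + cloned repos

def work_session(name: str, dry_run: bool = True) -> list[list[str]]:
    """Plan (and optionally run) the lxc calls for a disposable workspace."""
    cmds = [
        ["lxc", "copy", BASE, name],          # CoW clone of the base container
        ["lxc", "start", name],               # boot the throwaway copy
        ["lxc", "exec", name, "--", "bash"],  # drop into a shell inside it
    ]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds

# Dry run: inspect the planned commands without touching lxc.
for cmd in work_session("ai-work1"):
    print(" ".join(cmd))
```

Tearing down is the mirror image (`lxc delete --force ai-work1`), so every session starts from the same clean state.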


If you are on a Mac, you can use Coderunner[1]. It runs locally on your machine and executes any AI-generated code in an Apple container.

1. Coderunner - https://github.com/instavm/coderunner


I strongly agree with this. At $WORK I usually work on projects comprising many small bits in various languages -- some PowerShell here, some JS there -- along with a "build process" that minifies each and combines them into the final product. After switching from shell scripts to Just and dealing with a ton of issues along the way (how does quoting work in each system? How does argument passing? Environment variables?), I simply wrote a small Python script with a uv shebang and PEP 723 dependencies. Typer takes care of the command-line parsing, and each "build target" is a simple, composable, readable Python function that takes arguments and can call other targets if it needs to. Can't be simpler than that, and the LLMs love it too.
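A stripped-down sketch of that pattern, using stdlib argparse in place of Typer so it stays self-contained (the target names and the toy "minifier" are made up); the real script would carry a `#!/usr/bin/env -S uv run` shebang and a PEP 723 `# /// script` metadata block declaring its dependencies:

```python
import argparse

def minify(src: str) -> str:
    """Stand-in for a real minifier: just strip blank lines."""
    return "\n".join(line for line in src.splitlines() if line.strip())

def build_js(src: str = "console.log('hi');\n\n") -> str:
    """One build target: a plain function taking plain arguments."""
    return minify(src)

def build_all() -> str:
    """Targets compose by calling each other directly -- no shell quoting."""
    return build_js()

def main(argv=None) -> str:
    parser = argparse.ArgumentParser(description="toy build script")
    parser.add_argument("target", choices=["js", "all"])
    args = parser.parse_args(argv)
    return {"js": build_js, "all": build_all}[args.target]()

if __name__ == "__main__":
    print(main())
```

Because each target is an ordinary function, argument passing, quoting, and environment handling are all just Python semantics rather than three different shells' rules.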

