nathants's comments | Hacker News

coding agents, co-agents, and coco-agents.


Just exchange json.

Backend in python/ruby/go/rust.

Frontend in javascript/typescript.

Scripts in bash/zsh/nushell.

Once upon a time there was a small amount of friction and boilerplate with this approach, but with Claude and Codex it’s gone from low to none.
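For illustration, the whole contract between the pieces can be as small as this (hypothetical endpoint):

    # any backend that speaks json over http looks the same
    # from any frontend or script
    curl -s https://api.example.com/todos \
        -H 'content-type: application/json' \
        -d '{"title": "ship it"}'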


I really like having a good old RESTful API (well, maybe kinda faking the name, because you don't usually need HATEOAS)!

Except I find most front-end stacks lead to endless configuration (e.g. Vue with Pinia, a router, translation, Tailwind, maybe PrimeVue, and a bunch of logic for handling sessions, redirects, toast messages, and whatnot). So I feel the pull to just go use Django or Laravel or Ruby on Rails with server-side templates - I much prefer that simplicity, even if it feels a bit icky to couple your front end and back end like that.


400k - 128k = 272k. Codex CLI source.


If you want to be able to generate up to 128k tokens in one go successfully, then yes, that math checks out.


The usable input limit has not changed, and remains 400k - 128k = 272k. Confirmed by checking the Codex CLI source for any changes: nope.


Location: USA Remote

Remote: Yes

Relocate: For better remote timezone

Tech: All

Resume: https://nathants.com

Email: me@nathants.com


Just have SES put the email in s3, then do stuff.
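A minimal sketch of that with the AWS CLI (hypothetical rule-set and bucket names; assumes the rule set already exists and is set active):

    # deliver inbound mail to s3 via an ses receipt rule
    aws ses create-receipt-rule \
        --rule-set-name inbound \
        --rule '{"Name": "save-to-s3", "Enabled": true,
                 "Actions": [{"S3Action": {"BucketName": "my-inbound-mail"}}]}'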


Oh yeah, I'd love to hold on to people's emails and be responsible if they got leaked.


TTL=1day
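i.e. a plain S3 lifecycle rule on the bucket, something like (hypothetical bucket name):

    # expire stored emails after one day
    aws s3api put-bucket-lifecycle-configuration \
        --bucket my-inbound-mail \
        --lifecycle-configuration '{"Rules": [{"ID": "ttl-1-day",
            "Status": "Enabled", "Filter": {"Prefix": ""},
            "Expiration": {"Days": 1}}]}'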


Do something simpler. Backups shouldn’t be complex.

This should be simpler still:

https://github.com/nathants/backup


Cool, but it looks like it's going to miss file capabilities, so it's not suitable for a full OS backup (see https://github.com/python/cpython/issues/113293)


Interesting. I'm not trying to restore bootable systems, just data. Still, probably worthwhile to rebuild in Go soon.


An index of files stored in git, pointing to remote storage. That sounds exactly like Git LFS. Is there any significant difference, in particular in terms of backups?


Definitely similar.

Git LFS is 50k loc, this is 891 loc. There are other differences, but that is the main one.

I don't want a sophisticated backup system. I want one so simple that it disappears into the background.

I want to never fear data loss, or fear for my ability to restore with broken tools and a new computer while floating on a raft down a river during a thunderstorm. This is what we train for.


Is this a joke?

I don't see what value this provides that rsync, tar, and `aws s3 cp` (or the AWS SDK equivalent) don't.


How do you version your rsync backups?


I use rsync's --link-dest

abridged example:

    rsync --archive --link-dest 2025-06-06 backup_role@backup_host:backup_path/ 2025-06-07/

The actual invocation is this huge hairy furball of an rsync command that appears to use every single feature of rsync, accumulated as I worked on my backup script over the years:

    rsync_cmd = [
      '/usr/bin/rsync',
      '--archive',
      '--numeric-ids',
      '--owner',
      '--delete',
      '--delete-excluded',
      '--no-specials',
      '--no-devices',
      '--filter=merge backup/{backup_host}/filter.composed'.format(**rsync_params),
      '--link-dest={cwd}/backup/{backup_host}/current/{backup_path}'.format(**rsync_params),
      '--rsh=ssh -i {ssh_ident}'.format(**rsync_params),
      '--rsync-path={rsync_path}'.format(**rsync_params),
      '--log-file={cwd}/log/{backup_id}'.format(**rsync_params),
      '{remote_role}@{backup_host}:/{backup_path}'.format(**rsync_params),
      'backup/{backup_host}/work/{backup_path}'.format(**rsync_params) ]


This is cool. Do you always --link-dest to the last directory, and does that traverse links all the way back as far as needed?


Yes. This adds a couple of nice features: it is easy to go back to any version using only normal filesystem access, and because the snapshots are hard links, it only uses space for changed files, and you can cull old versions without worrying about losing the backing store for a diff.
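A quick way to see the hard linking at work (hypothetical paths and dates; a relative --link-dest resolves against the destination directory):

    rsync --archive --link-dest=../2025-06-06 src/ snapshots/2025-06-07/
    # unchanged files share an inode across snapshots,
    # so the new snapshot costs almost no extra space
    ls -li snapshots/2025-06-06/some_file snapshots/2025-06-07/some_file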

I think it works sort of like Apple's Time Machine, but I have never used that product, so... (shrugs)

Note that it is not, in the strictest sense, a very good "backup", mainly because it is too "online". To solve that I have a set of removable drives that I rotate through; with three drives, each ends up with every third day's snapshot.


Sounds like “rsnapshot”:

https://rsnapshot.org/


Dirvish


Perl still exists?


Uh, who has the money to store backups in AWS?!


Glacier Deep Archive is the cheapest cloud backup option at about $1 USD per TB per month.
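(That comes from its roughly $0.00099/GB-month list price, us-east-1, last I checked: about $0.99 for 1,000 GB.)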

Google Cloud Storage's Archive tier is a tiny bit more.


To quote the old MongoDB video: if you don't care about restores, /dev/null is even cheaper, and it's webscale.


Both would be pretty expensive to actually restore from, though, IIRC.


Quite expensive, but it should only ever be a last resort after your local backups have all failed in some way or another. For $1/mo/TB you purchase the opportunity to pay an exorbitant amount to recover from an otherwise catastrophic situation.


If you don't test your backups, they don't exist.


There is a free tier that accounts for testing: the first 100 GB of transfer out of AWS per month is free.


Yes, about $90 USD per TB.
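(Roughly: transfer out of AWS runs about $0.09/GB, so ~$90 per TB, plus a comparatively small bulk-retrieval fee on top, if I remember the pricing right.)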

But I weigh that against the cost of data recovery from failed disks, and against losing the data I put in Glacier (family photos, etc.). Then it's dirt cheap.


Depends how big they are. My high-value backups go into S3, R2, and a local 3x disk mirror [1].

My low-value backups go onto a cheap USB HDD from Best Buy.

1. https://github.com/nathants/mirror


Support for S3 means you can just have a MinIO server somewhere acting as backup storage (and MinIO is pretty easy to replicate). I have local S3 on my NAS replicated to a cheapo OVH server for backup.
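A minimal sketch of that replication with MinIO's mc client (hypothetical aliases, endpoints, and credentials):

    # continuously mirror the nas bucket to the offsite box
    mc alias set nas http://nas.local:9000 "$ACCESS_KEY" "$SECRET_KEY"
    mc alias set offsite https://s3.ovh.example "$ACCESS_KEY" "$SECRET_KEY"
    mc mirror --watch nas/backups offsite/backups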


when i read threads like this, it seems like no one has actually used o3-high. i’m excited to try 4-opus later.


the solution is obvious. stop grading the result, and start grading the process.

if you can one-shot an answer to some problem, the problem is not interesting.

the result is necessary, but not sufficient. how did you get there? how did you iterate? what were the twists and turns? what was the pacing? what was the vibe?

no matter if with encyclopedia, google, or ai, the medium is the message. the medium is you interacting with the tools at your disposal.

record that as a video with obs, and submit it along with the result.

for high stakes environments, add facecam and other information sources.

reviewers are scrubbing through video in an editor. evaluating the journey, not the destination.


Unfortunately, video is a far cry from carrying all the representative information: there is no way to capture your full emotions as you work through a problem, or where your "eureka" moments came from, unless you are particularly good at verbalising your thought process as you go through multiple dead ends and recognize how they led you in the right direction.

And reviewing video would be a nightmare.


there are only two options:

- have more information

- have less information

more is better.

you can scrub video with your finger on an iphone. serious review is always high effort; video changes nothing.


Not really: I love reading fiction where I can imagine characters the way I want to based on their written depictions. When I see a book cover replaced with a recent movie adaptation actor, it usually reduces the creative space for the reader instead of enlarging it.

Video in itself is not more information by definition. Just look at those automatically generated videos you get when you try to find a review of an unusual product.


are you trying to evaluate the author for some certification or test? this depends on the context of the evaluation.

books are great.

hundreds of hours of video of the author writing that book is strictly more information.


> reviewers are scrubbing through video in an editor. evaluating the journey, not the destination.

Let's be real... Multi-modal LLMs are scrubbing through the journey :P


just as there are low value students, there are low value reviewers. same as it ever was.

not every review is important.


username checks out.

