lessaligned's comments (Hacker News)

...it's useful to (over)generalize sometimes to get more explanatory power for things.

I mean, it probably says nothing useful about programming, but the other way around, thinking of "uncollapsed" wave-functions as lazy evaluation could be useful. I'm not up to date on theoretical physics, but I think there might be something like that in Deutsch's constructor theory.

In programming I'd prefer a language that makes it syntactically/visually obvious what's lazy and what's not and lets you pick (eg. like Rust does with &mut), with some sigil maybe, but that's probably low-prio for many language designers nowadays...

EDIT+: and you could say you practically get this already in mainstream languages... lazy vals are just functions, and it's probably good enough or better for most programmers to have them distinct/explicit.
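A minimal sketch of that "lazy vals are just functions" point, in TypeScript (the `lazy` helper and the names here are made up for illustration, not from any library):

```typescript
// A "lazy val" as an explicit memoizing thunk: nothing runs until the first
// call, and the result is cached afterwards. (Illustrative names only.)
function lazy<T>(compute: () => T): () => T {
  let done = false;
  let cached!: T;
  return () => {
    if (!done) {
      cached = compute(); // evaluated only on first access
      done = true;
    }
    return cached;
  };
}

let evaluations = 0;
const expensive = lazy(() => {
  evaluations += 1; // stand-in for a costly computation
  return 42;
});

expensive(); // computes: 42
expensive(); // cached: still 42, evaluations stays at 1
```

The call parentheses are exactly the kind of visible sigil meant above: a reader can't mistake `expensive()` for an already-computed value.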


There's an even deeper way to think about it: if you actually want to parallelize the simulation of multiple scenarios, or if you're running something that needs to compute something in >4d, "quantum mechanics + parallel universes" might be the computationally optimal way to do it!

...we don't think about it this way often because we'd be thinking about computational problems so huuuuge that we'd be like the quarks inside the atoms inside the transistors inside planet-sized clusters spanning galaxies to even fathom computing it ...and it's not necessarily a feel-good perspective :)

I mean, even the speed-of-light limit and general relativity seem like optimizations you'd do in order to better parallelize something you need to compute on some unfathomable "hardware" in some baseline-reality that might not have the same constraints...

...and to finish the coffee-high-rant: if you want FTL you probably can't get it "inside" because it would break the simulation, you'd need to "get out" ...or more like "get plucked out" by some-thing/god :P (ergo, when we see alien artifacts UFOs etc. that seemed to have done FTL... we kind of need to start assuming MORE than _their_ existence and just them being 'more advanced' than us)


There’s something about this that swallows its own tail: the optimal way to simulate a universe using the computing theory available in that universe.


exactly, the build step is there anyway, so why not get "what you're paying for"


Can you explain the choice of JS over TS?


Been using JS for 6+ years and never felt the need to upgrade, so I tried to keep the boilerplate minimal


Makes sense, if you're a 5+ years experienced JS dev.

But a more junior dev (or someone often switching between eg. JS / Python / Go all day who finds it hard to remember language details) would benefit a lot from the IDE suggestions and validations that TS enables, and would move faster at coding with all the extra help from autocomplete and such.
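A toy example of the kind of help meant here (the `User`/`greet` names are invented for illustration): with a type annotation, a typo'd field gets flagged in the editor before the code ever runs, while in plain JS it would only surface at runtime.

```typescript
// Hypothetical shape for illustration; not from any real codebase.
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name}`;
}

// greet({ id: 1, nmae: "Ada" }); // TS flags this right in the IDE:
//                                // 'nmae' does not exist in type 'User'
greet({ id: 1, name: "Ada" });    // OK: "Hello, Ada"
```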

So your implicit tradeoff is to optimize for more senior developers. That's OK, but I'm not sure you intended that.


what other good recommendations do you have for auth?


...this could make some people VERY angry :)


...nothing happened to the markets ~2017 when the transformer architecture (behind the current AI boom) happened. Nor in 2020 when it was pretty obvious shit scales and generalizes well. ONLY when it got built into an actual product (ChatGPT) did the needle start to move.

Same with this, until someone figures out how to make WIRES from LK-99-like-stuff that can be made into useful MAGNETS, it will stay flat.

Markets may be very smart... but they're hyyyyyper-conservative :P (and it makes sense to be so; there's the $$$ powering agriculture and healthcare and pensions floating on the seas of markets nowadays).


Transformers in 2017 basically were just better at computing similarities between blocks of text. OpenAI really did take them and turn them into something new.


"room-temp superconductors in 2023 were just small blocks of stuff levitating atop magnets without needing to be cooled. XYZ really did manage to form them into useful shapes that still remain superconductive when you push more than 0.00001 amps through them"


I arrived at the same wisdom, eg. the "Throw Away then Strategize/Plan" process, but... how the heck do you manage to sell/explain this to people at the same or higher levels?

Imo lots of people are very disgusted by this, mainly because (a) the conservatives/waterfall-heads are horrified by the idea of launching something not thoroughly engineered, while (b) the evolutionary-design folks never want a clean rewrite from scratch; they'll cling to that "throw-away version" and try to "evolutionarily" refactor it gradually into what they now know is needed (and this always fails).


Good point - to some degree I got lucky in the organizational area; the situation where it really proved itself was where I was one of four co-founders & I was in the CTO chair.

I can see how it could degrade in the ways you mentioned (and more!) in more mature orgs. Which, to me, makes it an even better idea there, but harder to politically navigate.


"the fork was very very bad for eating soup - this is a story about how we migrated to a spoon"

...firecracker does fine what it was designed for: short-running, fast-start workloads.

(oh, and the article starts by slightly misusing a bunch of technical terms, firecracker's not technically a hypervisor per se)


it's not that simple: many other companies running longer jobs, including their competition, use Firecracker

so while Firecracker was designed for things running just a few seconds, there are many places running it with jobs that run way longer than that

the problem is, if you want to make it work with long-running general-purpose images you don't control, you have to put a ton of work into making it work nicely on all levels of your infrastructure and code ... which is costly ... and which a startup competing on an online dev environment (compared to e.g. a VM hosting service) probably shouldn't waste time on

So AFAIK the decision in the article makes sense, but the reasons listed for it are oversimplified to the point where you could say they aren't quite right. Idk why; could be anything from the engineer believing that, to them avoiding issues with some shareholder/project lead who is obsessed with "we need to do Firecracker because the competition does so too".


...so is it more to support directly deploying functions to the cloud? Like, what AWS Lambda and CloudFront Functions might be built on?


I'm pretty sure firecracker was literally created to underlie AWS Lambda.

EDIT: Okay, https://www.geekwire.com/2018/firecracker-amazon-web-service... says my "pretty sure" memory is in fact correct.


That being said, firecracker also runs long-running tasks on AWS in the form of Fargate


As does the paper [1] with details in section 4.1.

[1]: https://www.usenix.org/system/files/nsdi20-paper-agache.pdf


yes, it was created originally for AWS Lambda

mainly it's optimized to run code only for a short time (init time max 10s, max usage 15min, and default max request time 130s AFAIK)

also it's focused on thin serverless functions, like e.g. deserialize some request, run some thin simple business logic and then delegate to other lambdas based on it. These kinds of functions often have similar memory usage per call, and if a call is an outlier it can just discard the VM instance soon after (i.e. at most after starting up a new instance, i.e. at most 10s later)


What pants? It only had a skirt with GDPR branded on it and nothing under...

