Hacker News | alooPotato's comments

@mnazzaro have you seen fly.io's new sprites.dev offering?

I have! It's pretty interesting and handles a lot of the problems discussed here, but is a little young for us. For one thing, it doesn't have fly replay, so we'd have to build a separate proxy again.

If we were starting from 0, I would definitely try it. My favorite thing about it is the progressive checkpointing: you can snapshot file system deltas and store them at S3 prices. Cool stuff!


Do you think IDEs, type checking, refactoring tools, and autocomplete make developers stupider too? Serious question.


not at all, I think these are valuable tools

would you agree that LLMs make developers stupider?

edit: answer my question


So what about Cursor's tab autocomplete? There seems to be a spectrum of tooling from raw assembly all the way to vibe coding, and I'm trying to see where you draw the line. Is it "if it uses AI, it's bad," or are you more against "hey, build me something and I'm not even gonna check the results"?


... they are not going to give you a satisfying answer to your totally reasonable line of inquiry.

Looking at the brief history of their account, I don't think anything they are saying or asking is in remotely good faith.


I have a latency-sensitive application - does anyone know of any tools that let you compare time to first token and total latency across a bunch of models at once, given a prompt? Ideally run close to the DCs that serve the various models, so we can take network latency out of the benchmark.
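A minimal sketch of the two measurements I mean (the stream source here is a stand-in for whatever streaming client a given provider offers - OpenAI SDK, fetch + SSE, etc. - nothing below is tied to a real API):

```typescript
// Hypothetical sketch: measure time-to-first-token (TTFT) and total latency
// for one streaming completion. Any client that yields chunks as they
// arrive can be plugged in as the AsyncIterable.

interface LatencyResult {
  ttftMs: number;  // elapsed ms until the first chunk arrived
  totalMs: number; // elapsed ms until the stream finished
}

async function measureStream(
  stream: AsyncIterable<string>,
  now: () => number = () => Date.now(),
): Promise<LatencyResult> {
  const start = now();
  let ttftMs = -1;
  for await (const _chunk of stream) {
    if (ttftMs < 0) ttftMs = now() - start; // first token observed
  }
  return { ttftMs, totalMs: now() - start };
}
```

Run the same prompt through each model concurrently from a box near the providers' DCs and compare the two numbers per model.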


why?


Apple is based in Cupertino. And Apple once had self-driving car ambitions.


Cupertino is in there, no?


It appears not to be. Here are the ones in Santa Clara County:

- Milpitas

- Mountain View

- Palo Alto Santa

- San Jose

- Sunnyvale

- Unincorporated Area (Lexington Hills area, overlapping Santa Clara and Santa Cruz Counties)

I don't know why it says "Palo Alto Santa"

Edit: I guess it's "Palo Alto Santa" to disambiguate between Palo Alto, which is in Santa Clara County, and East Palo Alto, which is in San Mateo County (BTW the westmost point of East Palo Alto is east of the westmost point of Palo Alto, but the eastmost point of East Palo Alto is not east of the eastmost point of Palo Alto).


It looks like the map includes north of 280, so you can use it to go to Gamba Karaoke and Tea Era. And really, what else could you need from Cupertino?


It might be that you can start a trip in one of the cities, but you can travel out of the city to anywhere in the highlighted area.


Looking at the city limits, I don't understand why East Palo Alto isn't called North Palo Alto instead.


I always assumed it was because San Francisco is "North", and East Palo Alto is on the "East" side of highway 101.


It's more East than North of El Palo Alto, the tree Palo Alto is named after.


why is there an approved map? like i get having a pilot somewhere, but once that goes well (and we're way past that point), why isn't it just blanket approval everywhere? why would one county be allowed Waymos but not another?

I get that they might not be approved in the high Sierras, but just make that a deny list, not an allow list. Or even just deny the specific conditions you're worried about (snow).


There's an approved map because the approval process requires the manufacturer to specify both the areas and conditions they are applying for, and to submit documents supporting that the vehicle is ready to be operated autonomously in those areas and conditions (which includes not just technical readiness, but also administrative readiness in the form of things like a law enforcement interaction plan, etc.).

> like i get having a pilot somewhere but once that goes well (and we're way past that point), why isn't it just blanket approval everywhere.

Because “everywhere” isn't a uniform domain (Waymo is way out in one tail of the distribution in terms of both the geographical range and the range of conditions they have applied for and been approved to operate in; other AV manufacturers are in much tinier zones, with narrow road/weather conditions). And because for some AV manufacturers, part of readiness to deploy (or test) in an area is detailed, manufacturer-specific mapping/surveying of the roads (if there is a manufacturer that can demonstrate they don't need this, they'd probably have an easier lift getting broader approvals).


My question is why they even have to apply for specific areas to begin with? Just approve the manufacturer for certain conditions and let them operate wherever they want.


> administrative readiness in the form of things like a law enforcement interaction plan


I suspect it's limited by what the request was for. Waymo has to create the high res map before they can offer service.


I think laypeople vastly overestimate how much the maps are a bottleneck compared to boring things like infrastructure to charge, people to clean the vehicles, integrating with local governments to allow things like disabling Pickup/Dropoff in certain areas at certain times, etc.

Even with local partners that all takes a lot of time.


Right, but what does that have to do with the DMV? Waymo should apply for certain weather conditions, the DMV says yes or no, and then they stay the hell out of the way. Let Waymo operate wherever they want and expand however they see fit, whenever they feel ready.

Like, is the DMV actually checking whether Waymo's map of a new area is good to go or not? It's just administrative burden.


More of the state is not allowed than is... at least by geography.

Also, there's a practical element. If I have to specify where they can't go, the default position is that they can go anywhere... if I inadvertently leave an area out of my black-list where it really ought to be, the default is "permission granted". With a white-list, the worst case is that a forgotten or neglected area can't be operated in by default, and the AV provider has an interest in getting that corrected.

But also politics. It's a very different message to say we're going to white-list a given AV operator in certain areas vs. black-listing them from certain areas.


so good.


From the post, they claim 8 times more solar energy and no need for batteries because the satellites are continuously in the sun. Presumably at some scale and some cost/kg to orbit, this starts to pencil out?


You're trading down to an 8x smaller, low-maintenance solid-state solar field, in exchange for a massive, probably high-maintenance liquid-based radiator field.


Can't be high maintenance if we just make it uncrewed, unserviceable and send any data center with catastrophically failed cooling to Point Nemo /s


If it can be all mostly solid-state, then it's low-maintenance. Also, design it to burn up before MTTF, like all the cool space kids do these days. Not gonna be worse than Starlink unless this gets massively scaled up, which it's meant to be (ecological footprint left as an exercise to the reader).


No infrastructure, no need for security, no premises, no water.

I think it's a good idea, actually.


> No infrastructure

A giant space station?

> no need for security

There will be if launch costs get low enough to make any of this feasible.

> no premises

Again… the space station?

> no water

That makes things harder, not easier.


This is not a giant space station ...

>There will be if launch costs get low enough to make any of this feasible.

I don't know what you mean by that.


> This is not a giant space station …

Fundamentally, it is, just in the form of a swarm. With added challenges!

> I don't know what you mean by that.

If you can get to space cheaply enough for an orbital AI datacenter to make financial sense, so can your security threats.


> Fundamentally, it is, just in the form of a swarm. With added challenges!

Right, in the same sense that existing Starlink constellation is a Death Star.

This paper does not describe a giant space station. It describes a couple dozen satellites in a formation, using gravity and optics to get extra bandwidth for inter-satellite links. The example they give uses 81 satellites, which is a number made trivial by Starlink (it's also in the blog post itself, so no "not clicking through to the paper" excuses here!).

(The gist: the paper seems to describe a small constellation as a useful compute unit that can be scaled indefinitely, basically replicating the scaling design used in terrestrial ML data centers.)


> Right, in the same sense that existing Starlink constellation is a Death Star.

"The cluster radius is R=1 km, with the distance between next-nearest-neighbor satellites oscillating between ~100–200m, under the influence of Earth’s gravity."

This does not describe anything like Starlink. (Nor does Starlink do heavy onboard computation.)

> The example they gave uses 81 satellites…

Which is great if your whole datacenter fits in a few dozen racks, but that's not what Google's talking about here.


> This does not describe anything like Starlink. (Nor does Starlink do heavy onboard computation.)

Irrelevant for spacecraft dynamics or for heat management. The problems of keeping satellites from colliding and of shedding the watts the craft gets from the Sun are independent of the compute that's done by the payload. It's like, the basic tenet of digital computing.

> Which is great if your whole datacenter fits in a few dozen racks, but that's not what Google's talking about here.

A data center is made of multiples of some compute unit. This paper is describing a single compute unit that makes sense for machine learning work.


> The problem of keeping satellites from colliding or shedding the watts the craft gets from the Sun are independent of the compute that's done by the payload.

The more compute you do, the more heat you generate.

> A data center is made of multiples of some compute unit.

And, thus, we wind up at the "how do we cool and maintain a giant space station?" again. With the added bonus of needing to do a spacewalk if you need to work on more than one rack.


> The more compute you do, the more heat you generate.

Yes, and yet I still fail to see the point you're making here.

Max power in space is either "we have x kWt of RTG, therefore our radiators are y m^2" or "we have x m^2 of nearly-black PV, therefore our radiators are y m^2".

Even for cases where the thermal equilibrium has to be human-liveable like the ISS, this isn't hard to achieve. Computer systems can run hotter, and therefore have smaller radiators for the same power draw, making them easier.
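To put rough numbers on "x m^2 of PV, therefore y m^2 of radiators" (all inputs below are illustrative assumptions of mine, not figures from the paper):

```typescript
// Back-of-envelope radiator sizing via the Stefan-Boltzmann law:
// P = ε · σ · A · T⁴, solved for A. Ideal one-sided radiator into deep
// space; absorbed sunlight and view-factor losses are ignored.

const SIGMA = 5.67e-8; // Stefan-Boltzmann constant, W·m⁻²·K⁻⁴

function radiatorAreaM2(wattsToReject: number, emissivity: number, tempK: number): number {
  return wattsToReject / (emissivity * SIGMA * tempK ** 4);
}

// 100 kW of compute waste heat, ε = 0.9, radiator held at 350 K
// (electronics-hot, per the point above):
const area = radiatorAreaM2(100_000, 0.9, 350); // ≈ 131 m²
```

The T⁴ dependence is the whole game: letting the radiator run hotter than a human-rated loop shrinks the area fast.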

> And, thus, we wind up at the "how do we cool and maintain a giant space station?" again. With the added bonus of needing to do a spacewalk if you need to work on more than one rack.

What you're doing here is like saying "cars don't work for a city because a city needs to move a million people each day, and a million-seat car will break the roads": i.e. scaling up the wrong thing.

The (potential, if it even works) scale-up here is "we went from n=1 cluster containing m=81 satellites, to n=10,000 clusters each containing m=[perhaps still 81] satellites".

I am still somewhat skeptical that this moon-shot will be cost-effective, but thermal management isn't why; the main limitation is Musk (or anyone else) actually getting launch costs down to a few hundred USD per kg on that timescale.


Mind answering the question here: https://news.ycombinator.com/item?id=45611301 ?


There is an open question about how file persistence works.

The docs claim they persist the filesystem even when they move the container to an idle state, but it's unclear exactly what that means: https://github.com/cloudflare/sandbox-sdk/issues/102


To me, the docs answer it pretty clearly. The defined directories persist until you destroy().

The part that's unclear to me is how billing works for a sleeping sandbox's disk, because container disks are ephemeral and don't survive sleep [2], yet the sandbox pricing points you to containers, which says "Charges stop after the container instance goes to sleep".

https://developers.cloudflare.com/sandbox/concepts/sandboxes...

https://developers.cloudflare.com/sandbox/concepts/sandboxes...

[2] https://developers.cloudflare.com/containers/faq/#is-disk-pe...


Yeah, that's basically the issue. If container disks are ephemeral, how are they persisting it? And however they are doing it, what's the billing for it?


Sandbox is built on top of their Durable Objects; the underlying storage is $0.20/GB-month.


You’re saying the file system in the container is persisted to the durable object storage? That doesn’t sound right.


Whilst it is in the idle state. Not whilst it is stopped.


You can easily set an alarm in the durable object to check if it should be killed and then call destroy yourself. Just a couple lines of code.
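A sketch of what that couple of lines might look like (the `destroy()` call stands in for the sandbox-sdk method discussed above, and the 30-minute TTL is an arbitrary assumption; the alarm pattern itself is standard Durable Objects):

```typescript
// Sketch of an idle-timeout reaper for a sandbox fronted by a Durable
// Object: every request slides the alarm forward; when the alarm finally
// fires with no recent activity, the sandbox is destroyed.

const IDLE_TTL_MS = 30 * 60 * 1000; // assumed: destroy after 30 min idle

interface AlarmStorage {
  setAlarm(scheduledTimeMs: number): Promise<void>;
}

interface Destroyable {
  destroy(): Promise<void>;
}

export class SandboxReaper {
  lastActivityMs = Date.now();

  constructor(private storage: AlarmStorage, private sandbox: Destroyable) {}

  // Call from fetch() on every request so the alarm keeps sliding forward.
  async touch(nowMs = Date.now()): Promise<void> {
    this.lastActivityMs = nowMs;
    await this.storage.setAlarm(nowMs + IDLE_TTL_MS);
  }

  // Durable Object alarm() handler: destroy if idle, otherwise re-arm.
  async alarm(nowMs = Date.now()): Promise<boolean> {
    if (nowMs - this.lastActivityMs >= IDLE_TTL_MS) {
      await this.sandbox.destroy();
      return true; // reaped
    }
    await this.storage.setAlarm(this.lastActivityMs + IDLE_TTL_MS);
    return false; // still active, re-armed
  }
}
```

In a real Worker you'd wire `storage` to `ctx.storage` and put the `alarm()` body in the Durable Object's alarm handler.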


Nice. Thanks for the tip. I did not know that this was a thing. I will look it up.

