I have! It's pretty interesting and handles a lot of the problems discussed here, but is a little young for us. For one thing, it doesn't have fly replay, so we'd have to build a separate proxy again.
If we were starting from zero, I would definitely try it. My favorite thing about it is the progressive checkpointing: you can snapshot file system deltas and store them at S3 prices. Cool stuff!
So what about Cursor's tab autocomplete? There seems to be a spectrum of tooling from raw assembly all the way to vibe coding, and I'm trying to see where you draw the line. Is it "if it uses AI, it's bad," or are you more against the "hey, build me something and I'm not even gonna check the results"?
I have a latency-sensitive application. Does anyone know of any tools that let you compare time to first token and total latency for a bunch of models at once, given a prompt? Ideally, run close to the DCs that serve the various models so we can take network latency out of the benchmark.
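I don't know of a hosted tool offhand, so for context, here's roughly the measurement I mean. A minimal sketch against OpenAI-compatible streaming endpoints (the base URLs, model names, and key are placeholders):

```python
import time
from openai import OpenAI  # assumes the openai>=1.x SDK and OpenAI-compatible endpoints

def measure(base_url: str, api_key: str, model: str, prompt: str):
    """Return (time-to-first-token, total latency) in seconds for one streamed request."""
    client = OpenAI(base_url=base_url, api_key=api_key)
    start = time.perf_counter()
    ttft = None
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        if ttft is None and chunk.choices and chunk.choices[0].delta.content:
            ttft = time.perf_counter() - start  # first content token arrived
    return ttft, time.perf_counter() - start

# Placeholder providers/models; swap in whatever you're actually comparing.
targets = [
    ("provider-a", "https://api.provider-a.example/v1", "model-a"),
    ("provider-b", "https://api.provider-b.example/v1", "model-b"),
]
for name, url, model in targets:
    ttft, total = measure(url, "YOUR_API_KEY", model, "Summarize TCP in one sentence.")
    print(f"{name}: TTFT={ttft:.3f}s, total={total:.3f}s")
```

The part this doesn't solve is the "close to the DCs" requirement: to strip out network latency you'd still have to run it from a VM in (or near) each provider's region, or at least subtract a measured RTT.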
It appears not to be. Here are the ones in Santa Clara County:
- Milpitas
- Mountain View
- Palo Alto Santa
- San Jose
- Sunnyvale
- Unincorporated Area (Lexington Hills area, overlapping Santa Clara and Santa Cruz Counties)
I don't know why it says "Palo Alto Santa"
Edit: I guess it's "Palo Alto Santa" to disambiguate between Palo Alto, which is in Santa Clara County, and East Palo Alto, which is in San Mateo County (BTW, the westernmost point of East Palo Alto is east of the westernmost point of Palo Alto, but the easternmost point of East Palo Alto is not east of the easternmost point of Palo Alto).
It looks like the map includes north of 280, so you can use it to go to Gamba Karaoke and Tea Era. And really, what else could you need from Cupertino?
Why is there an approved map? Like, I get having a pilot somewhere, but once that goes well (and we're way past that point), why isn't it just blanket approval everywhere? Why would one county be allowed Waymos but not another?
I get that they might not be approved in the High Sierra, but just make that a deny list, not an allow list. Or even just deny the specific conditions you're worried about (snow).
There's an approved map because the approval process requires the manufacturer to specify both the areas and the conditions they are applying for, along with documentation supporting that the vehicle is ready to be operated autonomously in those areas and conditions (which doesn't just mean technical readiness, but also administrative readiness in the form of things like a law enforcement interaction plan, etc.).
> Like, I get having a pilot somewhere, but once that goes well (and we're way past that point), why isn't it just blanket approval everywhere?
Because “everywhere” isn't a uniform domain (Waymo is way out in one tail of the distribution in terms of both the geographical range and the range of conditions they have applied for and been approved to operate in; other AV manufacturers are confined to much tinier zones and narrower road/weather conditions). And because for some AV manufacturers (if there is one that can demonstrate they don't need this, they'd probably have an easier lift getting broader approvals), part of readiness to deploy (or test) in an area is detailed, manufacturer-specific mapping/surveying of the roads.
My question is why they even have to apply for specific areas to begin with. Just approve the manufacturer for certain conditions and let them operate wherever they want.
I think laypeople vastly overestimate how much the maps are a bottleneck compared to boring things like charging infrastructure, people to clean the vehicles, integrating with local governments to allow things like disabling pickup/drop-off in certain areas at certain times, etc.
Even with local partners that all takes a lot of time.
Right, but what does that have to do with the DMV? Waymo should apply for certain weather conditions, and then the DMV says yes or no and stays the hell out of the way. Let Waymo operate wherever they want and expand however they see fit, whenever they feel ready.
Like the DMV is actually checking whether Waymo's map of a new area is good to go or not. It's just administrative burden.
More of the state is not allowed than is... at least by geography.
Also, there's a practical element. If I have to specify where they can't go, the default position is they can go anywhere... if I inadvertently leave an area out of my black-list where it really ought to exist, the default is "permission granted". With a white-list, the worst case is that a forgotten or neglected area can't be operated in by default, and the AV provider will have an interest in correcting that.
But there's also politics. It's a very different message to say we're going to white-list a given AV operator to operate in certain areas vs. black-listing them from certain areas.
From the post, they claim 8 times more solar energy and no need for batteries, because the satellites are continuously in the sun. Presumably at some scale and some cost/kg to orbit this starts to pencil out?
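The 8x part roughly checks out on the back of an envelope; a quick sanity check (the terrestrial capacity factor here is my assumption):

```python
SOLAR_CONSTANT = 1361          # W/m^2 above the atmosphere
GROUND_PEAK = 1000             # W/m^2, rough clear-sky peak at the surface
GROUND_CAPACITY_FACTOR = 0.18  # assumed year-round average for a decent terrestrial site

orbit_avg = SOLAR_CONSTANT                         # continuously lit, no night or weather
ground_avg = GROUND_PEAK * GROUND_CAPACITY_FACTOR  # ~180 W/m^2 averaged over the year
print(orbit_avg / ground_avg)                      # ~7.6, in line with the claimed ~8x
```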
If it can be all or mostly solid-state, then it's low-maintenance. Also, design it to burn up before MTTF, like all the cool space kids do these days. Not gonna be worse than Starlink unless this gets massively scaled up, which it's meant to be (ecological footprint left as an exercise to the reader).
> Fundamentally, it is, just in the form of a swarm. With added challenges!
Right, in the same sense that the existing Starlink constellation is a Death Star.
This paper does not describe a giant space station. It describes a couple dozen satellites in a formation, using gravity and optics to get extra bandwidth for inter-satellite links. The example they give uses 81 satellites, which is a number made trivial by Starlink (it's also in the blog post itself, so no "not clicking through to the paper" excuses here!).
(The gist is that the paper seems to be describing a small constellation as a useful compute unit that can be scaled indefinitely: basically replicating the scaling design used in terrestrial ML data centers.)
> Right, in the same sense that the existing Starlink constellation is a Death Star.
"The cluster radius is R=1 km, with the distance between next-nearest-neighbor satellites oscillating between ~100–200m, under the influence of Earth’s gravity."
This does not describe anything like Starlink. (Nor does Starlink do heavy onboard computation.)
> The example they gave uses 81 satellites…
Which is great if your whole datacenter fits in a few dozen racks, but that's not what Google's talking about here.
> This does not describe anything like Starlink. (Nor does Starlink do heavy onboard computation.)
Irrelevant for spacecraft dynamics or for heat management. The problems of keeping satellites from colliding and of shedding the watts the craft gets from the Sun are independent of the compute that's done by the payload. It's, like, the basic tenet of digital computing.
> Which is great if your whole datacenter fits in a few dozen racks, but that's not what Google's talking about here.
A data center is made of multiples of some compute unit. This paper is describing a single compute unit that makes sense for machine learning work.
> The problem of keeping satellites from colliding or shedding the watts the craft gets from the Sun are independent of the compute that's done by the payload.
The more compute you do, the more heat you generate.
> A data center is made of multiples of some compute unit.
And, thus, we wind up at the "how do we cool and maintain a giant space station?" again. With the added bonus of needing to do a spacewalk if you need to work on more than one rack.
> The more compute you do, the more heat you generate.
Yes, and yet I still fail to see the point you're making here.
Max power in space is either "we have x kWt of RTG, therefore our radiators are y m^2" or "we have x m^2 of nearly-black PV, therefore our radiators are y m^2".
Even for cases where the thermal equilibrium has to be human-liveable like the ISS, this isn't hard to achieve. Computer systems can run hotter, and therefore have smaller radiators for the same power draw, making them easier.
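A rough sketch of that radiator arithmetic, using the Stefan-Boltzmann law with assumed emissivity and temperatures (and ignoring absorbed sunlight and view-factor losses):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Ideal single-sided radiator area needed to reject power_w watts at temp_k."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

for t in (300, 350, 400):  # ~human-livable vs. hotter electronics-only radiators
    print(f"{t} K: {radiator_area(10_000, t):.1f} m^2 to shed 10 kW")
# ~24 m^2 at 300 K, ~13 m^2 at 350 K, ~8 m^2 at 400 K
```

Required area scales as 1/T^4, which is why an electronics-only platform that tolerates a hot radiator gets away with much less area than a human-livable one at the same power draw.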
> And, thus, we wind up at the "how do we cool and maintain a giant space station?" again. With the added bonus of needing to do a spacewalk if you need to work on more than one rack.
What you're doing here is like saying "cars don't work for a city because a city needs to move a million people each day, and a million-seat car will break the roads": i.e. scaling up the wrong thing.
The (potential, if it even works) scale-up here is "we went from n=1 cluster containing m=81 satellites, to n=10,000 clusters each containing m=[perhaps still 81] satellites".
I am still somewhat skeptical that this moon-shot will be cost-effective, but thermal management isn't why; the main limitation is whether Musk (or anyone else) can actually get launch costs down to a few hundred USD per kg on that timescale.
To me, the docs answer it pretty clearly. The defined directories persist until you destroy().
The part that's unclear to me is how billing works for a sleeping sandbox's disk, because container disks are ephemeral and don't survive sleep[2], but the sandbox pricing points you to the containers page, which says "Charges stop after the container instance goes to sleep".