Every lift at Snowbird has the map printed on the bar, so you can plan your route on the way up. I agree that when you get lost, that map won't save you, but I think an offline PDF is also fine.
Absolutely not true lol. I don't think any of their lifts have maps on them right now. The maps also aren't super helpful at Snowbird because the cliffs often come out of nowhere.
Wait, really? I haven't been up this season, but it's always been there! I understand removing the printed ones when the bars have them (and the big boards at the top). Is it all just ads now?
None of the bars have anything printed on them now, if I remember correctly. I have a pass and have been around 10 times this season. At least they all have footrests, unlike Alta, where they love foot pain.
You can get by pretty well with the ~$20/month plans for either Claude or Gemini. You don't need to be doing the $200/month ones just to get a sense of how they work.
That's what I was thinking, but could not find the link. Here it is working on some standard tasks.[1] Grasping the padlock and inserting the key is impressive.
I've seen key-in-lock before, done painfully slowly.
Finally, it's working.
That system, coupled to one of the humanoids for mobility, could be quite useful. A near-term use case might be in CNC machining centers. CNC machine tools now work well enough on their own that some shops run them all night. They use replaceable cutting tools which are held in standard tool holders. Someone has to regularly replace the cutting tools with fresh ones, which limits how long you can run unattended. So a robot able to change tool holders during the night would be useful in production plants.
See [2], which is a US-based company that makes molds for injection molding, something the US supposedly doesn't do any more. They have people on day shift, but the machines run all night and on weekends. To do that, they have to have refrigerator-sized units with tools on turntables, and conveyors and stackers for workpiece pallets.
A humanoid robot might be simpler than all the support machinery required to feed the CNC machines for unattended operation.
> A humanoid robot might be simpler than all the support machinery required to feed the CNC machines for unattended operation.
A humanoid robot is significantly more complicated than any CNC machine. Even with multi-axis machining, tool changers, and pallet feeding, these CNC machines are simpler in both control and environment.
These robots don't produce a piece by reasoning about how their tools will affect it; they produce it by cycling through fixed commands, with all of the intelligence of the design determined by the manufacturer before the operations.
These are also highly controlled environments. The kinds of things they have to detect and respond to are tool breakage, over-torque, etc., and they respond to those mainly by picking a new tool.
The gulf between humanoid robotics in uncontrolled environments and even advanced CNC machines like these (which are awesome) is vast. Uncontrolled robotics is a completely different domain, akin to the difference between solving a problem in P with a rote algorithm and approximating an NP-hard one with trained ML/heuristic methods. It's like saying any sorting algorithm may be more complex than a SOTA LLM.
Most flexible manufacturing systems come with a central tool storage (1000+ tools) that can load each individual machine's magazine (usually less than 64 tools per machine). The solution to the problem you mention is adding one more non-humanoid machine. The only difference is that this new machine won't consume the tools and instead just swaps the inserts.
There is literally no point in having a humanoid here. The primary reason you'd want a human is that hiring someone to swap tools is extremely cost-effective: they don't need any knowledge of operating the machines, just training on that one particular task.
How would you distinguish the article from an honest write-up about transistors? That is, you know about his crusade in ML, but if you didn't, how would you decide whether this article is written in bad faith or not?
I agree that context matters, and I had the same thought as you. But does that mean that anything he writes on the topic of "who was first" is inherently tainted?
While I was sure they'd note that Nitro doesn't have this vulnerability due to its design, it seems weird not to talk about Firecracker and Lambda and so on. Maybe those are always on Cascade Lake+ hardware? (I also haven't followed this space for 5 years, so maybe I'm asking the wrong question.)
We've only verified EC2 during our research, but you do make a good point here. Nitro wasn't vulnerable. Firecracker might have been, considering that it is also built on top of KVM. Firecracker was not specifically designed to defend against hardware vulnerabilities [1], so I don't see an immediate reason why the attack wouldn't have worked.
We had to limit the scope of the project somewhere unfortunately, but it would have been nice to check Firecracker and Lambda as well.
Ehh, PHP fits that bill and is clearly optimizable. All sorts of things worked well for PHP, including the original HipHop, HHVM, my own work, and the mainline PHP runtime.
Python has some semantics and behaviors that are particularly hostile to optimization, but as the Faster CPython and related efforts have suggested, the main challenge is full compatibility (including extensions) plus the historical desire for a simple implementation within CPython.
There are limits to retrofitting truly high performance to any of these languages. You want enough static, optional, or gradual typing to make it fast enough in the common case. That's why you also saw the V8 folks give up and make Dart, the Facebook folks make Hack, etc. It's telling that none of those gained truly broad adoption, though. Performance isn't all that matters, especially once you have an established codebase and ecosystem.
> Performance isn't all that matters, especially once you have an established codebase and ecosystem.
And this is no small part of why Java and JS have frequently pushed VM performance forward: there's enough code that people very much care about to justify the continued performance work. (Though the two mostly care about different things: Java cares much more about long-term performance, and JS cares much more about short-term performance.)
It doesn’t hurt they’re both languages which are relatively static compared with e.g. Python, either.
V8 still got substantially faster after the first team left to do Dart. A lot of runtime optimizations (think object model optimizations), several new compilers, and a lot of GC work.
It's a huge investment to make a dynamic language go as fast as JS these days.
I regret that we put my subdivision assignment last and that we allowed students to skip one assignment. Most students skipped it, but those who did the work thought it was super cool to have their own subdivision tool for making smooth meshes.
Sadly I have lots of code that exclusively uses the dereference operator because there are older versions of macOS that shipped without support for .value(); the dereference operator was the only way to do it! To this day, if you target macOS 10.13, clang will error on use of .value(). Lots of this code is still out there, either because its authors still support older macOS or because the code hasn't been touched since.
Just a cursory search on GitHub should put this idea to rest. You can do a code search for std::optional and .value() and see that only about 20% of uses of std::optional make use of .value(). The overwhelming majority of uses of std::optional use * to access the value.
It is correct with a prior .has_value() call; it's not correct without one. It's simple, and covered by static analysis. This is not an issue in real code; it's a pathological error that doesn't actually happen, like most anti-C++ examples.