How does it compare to Sidebery? I use the vertical tabs there and quite like them, but I only found them because of another feature: per-container SOCKS5 proxies (one for my local IP and a few pointing to strategically placed cheap VPSes, to override my network's default Mullvad VPN tunneling as needed).
Yeah, vertical tabs with tab groups have replaced Sidebery's hierarchy and panels for me, mostly because it feels slightly smoother and more performant as a built-in feature.
Sidebery is awesome, but as far as I know it doesn't sync the state of your tabs across browsers. TabStash is not as visually polished, but it achieves that by storing the tabs and groups as Firefox bookmarks.
I've been using the vertical tabs with Sidebery for a bit. The minified vertical tabs greatly declutter the top of the browser and feel pretty good, but I've got a lot of trees in Sidebery and several panels for organizational structure that the built-in vertical tabs can't yet handle, so for now most of my navigation is still in Sidebery. Tab Groups are something to watch that may cover some of what I use multiple panels for in Sidebery, but I think a native tree structure is more of what I'd particularly want.
Not a fan of vertical tabs, as truncated titles hurt the aesthetics of my field of view. These days I just pin a tab when I want to go back to it later, but even that has its limits.
After a certain number of open tabs, the titles are less truncated with vertical tabs than with horizontal tabs. You also have more of the titles in your center view if the text is distributed in a more rectangular shape rather than a technically-also-rectangular-but-much-more-elongated one.
I recently switched to Sidebery for this. Guess I'll have to do a comparison. I really enjoy Sidebery, though; it has made my workflow at work much more organized.
It's so much better than any of the extension-based XUL interface hacks. As soon as they can figure out when to auto-expand the sidebar it will be perfect.
Yes, because the APIs aren't perfectly fleshed-out. And they may never be, and yet that's still completely OK because the WebExtension model is obviously better along the performance, security, portability, and API stability axes.
This is blatantly false, and one of the most dishonest and manipulative claims that I've seen on HN.
Performance is performance. If one technology is more performant by removing features, useful or not, it is factually faster, and that performance absolutely does count. Features are completely irrelevant to performance measurements of a system.
If you have two cars, car A with a top speed of 160 MPH and a 0-60 of 3s, and car B with a top speed of 120 MPH and a 0-60 of 5s, some people may still prefer car B because it has better mileage or nicer features or is cheaper (which is the overall value judgement that you seem to be extremely confused about), but precisely zero sane people will tell you that car A "isn't clearly better from a speed standpoint" because it has fewer features than car B.
Crippling my ad blocker doesn't make my browser faster on average, even though dishonest benchmarks from advertising companies may claim otherwise. Removing XUL also didn't make TreeStyleTab faster; quite the opposite.
Aside from crippling ad blockers, are there any other theoretical performance improvements enabled by WebExtensions, or is it all about reducing opportunities for badly-written extensions to have an impact?
> Crippling my ad blocker doesn't make my browser faster on average, even though dishonest benchmarks from advertising companies may claim otherwise. Removing XUL also didn't make TreeStyleTab faster; quite the opposite.
OK, so now you're moving the goalposts, continuing to dishonestly redefine words, and cherry-picking specific instances of addons that support your point, while ignoring the fact that I soundly refuted your utterly insane previous argument.
> Aside from crippling ad blockers
No? WebExtensions clearly did not "cripple" ad blockers by any stretch of the imagination. Maybe you're conflating WebExtensions and Manifest v3?
> Removing XUL also didn't make TreeStyleTab faster; quite the opposite.
Cherry-picking items to try to support your point only proves that you don't have robust evidence to support it in general. This is the hasty generalization fallacy. As someone who lived through the WebExtensions transition, I didn't perceive any slowdown in any of my dozen or so extensions.
> are there any other theoretical performance improvements enabled by WebExtensions
Yes - if you had any knowledge at all of the old addon model, you'd know that the old XUL-based addons prevented Firefox's move to the multi-process Electrolysis architecture, which significantly improved performance.
> is it all about reducing opportunities for badly-written extensions to have an impact
Yes, that is (on top of everything else) a performance benefit. Humans are not robots - all humans write bad and buggy code, and the XUL model not only made it much easier to write buggy and slow code, but its lack of a well-defined interface resulted in ossification that massively inhibited Mozilla's ability to develop Firefox. Even if it didn't, making changes that help/force the lower 99% of programmers to write better code, while mildly inhibiting the top 1% of programmers, is absolutely worth it, and in practice has massively improved performance.
If you tried to run old Firefox on a modern CPU with a bunch of extensions, you'd very clearly see the performance difference due to the ability to actually take full advantage of more than one core, and due to the improvements that Mozilla was able to make by deprecating the old API.
Perhaps stop commenting unless you can stop committing numerous fallacies, making utterly insane statements, pretending that human factors don't exist, and making statements about things that you have no understanding of.
Firefox moved to multi-process before moving to WebExtensions. You've lost count of how many mass extinctions the Firefox extension ecosystem has been through, but there were XUL extensions that were updated to be compatible with multi-process, and then later had to be totally rewritten with completely new UIs when XUL was killed. And the usability hit that extensions like AdBlock Plus and NoScript suffered was crippling, even if it wasn't quite as bad as MV3, and NoScript lost features that went beyond just UI.
> Firefox moved to multi-process before moving to WebExtensions.
And? Removal of XUL addons was still a prerequisite for the multi-process architecture. Mozilla just realized that WebExtensions was a sane, performant extension API that worked well with e10s and would be useful for compatibility with Chrome.
> You've lost count of how many mass extinctions the Firefox extension ecosystem has been through
Two? Hardly a lot.
The XUL model is inferior to the Web Extensions model. No amount of trying to cherry-pick specific instances of extensions that had local functionality or performance losses will detract from the facts that (1) the XUL addon system was inferior and (2) had to be removed in order to make Firefox (both the browser and ecosystem as a whole) more performant, secure, stable, and easier to maintain.
I tried using vertical tabs and tab groups simultaneously, but there seems to be nothing like 'list all tabs' / Recent tab groups, so my tab groups are already lost amongst the other tabs. 'close duplicate tabs' is also missing from vertical tabs.
Since I keep having to go into that menu I just disabled vertical tabs.
Exactly. Since 'list all tabs' is a superset of vertical tab behavior, and I have to go there anyhow for things missing from vertical tabs, I disabled them.
Been using vertical tabs to avoid floating toolbar gymnastics while sharing screen on Microsoft Teams. It’s been working great and now I don’t miss the horizontal tabs.
Sure. In a similar way as when the moon is low on the horizon and I stand in my back yard facing it. There's the moon. It's right in front of me... :-)
As I said, the debris was likely at around 100 km in altitude. Commercial airliners fly at around 10 km. Appearing to be at a similar altitude to the plane and "in front" of it was an optical illusion: the debris was intensely bright, very far away, very high, and moving several times faster than a bullet. While we don't have exact data yet, I believe there was essentially zero chance of that plane ever hitting that debris given their relative positions. It couldn't have, even if the pilots hadn't been mistaken about how close the debris was and had intentionally tried to hit it. The debris was too far, too high and moving at hypersonic speeds (hence the metal glowing white hot from atmospheric heating).
Starship's flight paths are carefully calculated by SpaceX and the FAA to achieve this exact outcome. In the event of a RUD near orbit, little to no debris will survive reentry, and any that does survive won't reach the surface (or aircraft in flight) until it is far out into the Atlantic Ocean, away from land, people, flight paths and shipping lanes. For Starship launches the FAA temporarily closes a large area of the Gulf of Mexico to air and ship traffic because that's where Starship is low and slow enough for debris to be a threat to aircraft. These planes were flying in the Caribbean, where there was no FAA NOTAM closing their airspace, because by the time Starship is over the Caribbean it's in orbit. If there's a RUD over the Caribbean, it's already too high and going too fast for debris to be a threat to aircraft or people anywhere near the Caribbean. The only "threat" in the Caribbean today was from anyone being distracted by the pretty light show in orbit far above them (that looked deceptively close from some angles).
> the debris was likely at around 100 km in altitude. Commercial airliners fly at around 10 km
(Not wishing to ask the obvious, and depending on the size of the pieces) debris at 100km altitude pretty much always ends up being debris falling through 10km ... right?
At the incredible speed Starship was moving (>13,000 mph) by the time it was over the Caribbean, its debris is expected to burn up before reaching the surface. But you said "depending on the size," so let's imagine it's a different spacecraft carrying something that won't entirely burn up, like the Mir space station a couple of decades back.
In that scenario, debris from 100 km will survive to pass through 10 km. The point is: if the mass becomes debris at >143 km altitude traveling at >13,000 mph over the Caribbean, it doesn't pass through 10 km anywhere near the Caribbean. Even though atmospheric drag (the same thing making the metal glow white hot) is slowing it, the trajectory is ballistic, so by the time it slows enough to get that low (10 km) it's hundreds or thousands of miles east of where the explosion happened (and where that airplane was); a rough back-of-envelope below shows the scale.
It's weird because, given these orbital velocities and altitudes, our intuitions about up and down aren't very useful. Starship exploded in orbit over the Caribbean, so planes in the Caribbean were safe from falling debris; if it had been Mir instead of Starship, planes hundreds or thousands of miles to the east of the Caribbean would have been at elevated risk. My high school astronomy teacher once said something like, "Rockets don't go up to reach orbit. They go sideways. And they keep going sideways faster and faster until they're going so fast, up and down don't matter anymore." While that's hardly a scientific summary, it does give a sense of the dynamics. You'll recall that Mir was intentionally de-orbited so it would come down in a desolate part of the South Pacific. So, did they blow it up right over the South Pacific? Nope. To crash it there, given the altitude and speed, they "blew it up" on the other side of the Earth (I actually don't recall where the de-orbit burn began, but it had to be very far away).
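For a rough sense of scale, here's the back-of-envelope I have in mind: flat Earth, no drag, and debris assumed to start with purely horizontal velocity at an assumed 143 km breakup altitude. Every number is illustrative, not tracking data; real drag would bring the light pieces down shorter, while the reduced effective gravity at these speeds would carry the heavy pieces even farther.

    # Back-of-envelope only: flat Earth, no drag, purely horizontal initial
    # velocity. Every number here is an assumption for illustration.
    import math

    g = 9.81                       # m/s^2
    breakup_alt = 143_000          # m, assumed breakup altitude
    airliner_alt = 10_000          # m, typical airliner cruise altitude
    v = 13_000 * 0.44704           # 13,000 mph -> ~5,800 m/s horizontal

    t = math.sqrt(2 * (breakup_alt - airliner_alt) / g)   # ~165 s of free fall
    downrange_km = v * t / 1000                           # ~960 km

    print(f"time to fall to airliner altitude: {t:.0f} s")
    print(f"horizontal distance covered in that time: {downrange_km:.0f} km "
          f"(~{downrange_km * 0.621:.0f} miles)")

Even with those crude assumptions, anything falling from that breakup is hundreds of miles east of where the flash was seen before it ever gets down to where airliners fly, which is the whole point.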
> so by the time it slows enough to get that low (10km) it's hundreds or thousands of miles East from where the explosion was seen
Appreciate that. The question would be: do we know there won't be any aircraft at the right (wrong) altitude in that area?!
With aircraft regularly travelling thousands of miles, it would be interesting to know whether route choices are made to avoid being "under"* the track of a rocket's launch.
There are people on HN far better qualified than I to discuss both orbital mechanics and spacecraft safety assessments but I'll give it a layman's stab based purely on the high-level concepts (which is all I know).
They know there's little to no risk to aircraft or people hundreds or thousands of miles to the East of a Starship RUD in orbit because they know exactly what's inside Starship and how it's built. They model how it will break up when traveling at these insane speeds and how the metal masses will melt and burn up during re-entry. They actually test this stuff in blast furnaces. It's a statistical model so it's theoretically possible a few small bits could make it to the ground on rare occasion, so we can't say debris will never happen - but there's been a lot of history and testing and the experts are confident it's extremely safe.
The case of the Mir space station was very different from Starship. Mir was built a long time ago by the Soviet Union, and it used a big, heavily shielded power plant; that lead shielding was really the part that had a significant risk of not burning up fully on re-entry. Starship, Starlink satellites and other modern spacecraft are now usually designed to burn up on re-entry. However, there are still some things in orbit, and things we'll need to put in orbit in the future, that won't entirely burn up on re-entry, so there will always be a very small risk of an accidental uncontrolled re-entry causing a threat. These risks are vanishingly small, though, both because we design these spacecraft with redundant systems and fail-safes and because Earth is mostly uninhabited oceans and much of our landmass is unpopulated or sparsely populated. Even in the unlikely event that one of the few spacecraft with a large mass that won't entirely burn up fails and is de-orbiting out of control, we can still blow it up, and timing that at the right moment will still put it down in a safe place (as with Mir). There's no such thing as absolute 100% perfect safety, but you're far, far more likely to die from a great white shark attack than be injured by satellite debris.
More to the point, a huge number of meteorites hit Earth every year and it's estimated over 17,000 survive to hit the surface. There are a bunch listed right now on eBay. Do you know anyone injured by any of the 17,000 space rocks that crashed into our planet this year or any airliners hit by one?
That description of heavy lead shielding of a power plant on Mir surprises me since photos show it as having solar arrays. Wikipedia also gives the power source as solar with no mention of lead components. Can you add further details of this?
It was just my off-hand recollection. I could be mistaken, or possibly conflating Mir with some other spacecraft that was de-orbited in the past. I'm fairly confident that the Mir de-orbit was notable both because of its size and because it was expected to have an unusual degree of debris surviving re-entry and reaching the surface.
> At the incredible speed Starship was moving (>13,000 mph) by the time it was over the Caribbean, its debris is expected to burn up before reaching the surface.
Don't the heat tiles at least make it through? And possibly large hunks of metal like the thrust frame and engines.
As I said, I'm only familiar with the high-level concepts of vehicle launch safety and not qualified to assess detailed scenarios. I'm just a guy interested enough to read some technical articles and skim a few linked papers several years ago, when there was a lot of heat about launch safety at Boca Chica and not much light. When there's a lot of heated rhetoric in the mass media, I find it's better to check directly in scientific and engineering sources.
I dove deep enough to get a sense that these questions have been extremely well studied, and not just by the 2020s FAA and SpaceX but going back to the Shuttle and Apollo eras. The body of peer-reviewed engineering studies seemed exhaustive - and not just NASA-centric; the Europeans and Soviets did their own studies too.
Your question is reasonable and occurred to me as well. Components engineered to withstand the enormous heat and pressure of orbital re-entry should be more likely to survive a RUD scenario and subsequent re-entry burn for longer. From what I recall reading, this fits into a safety profile required to ensure very, very low risk because even if a tiny percentage of mass occasionally survives to reach the surface, the actual risk that surviving mass presents is a combination of its quantity, mass, piece size, velocity and, most importantly, where any final surviving bits reach the surface.
I recall seeing a diagram dividing the Boca Chica orbital launch trajectory into windows, like: right around the launch pad, out over the Gulf of Mexico, the Caribbean, the Atlantic, Africa, the Indian Ocean, and so on. The entire path until it's out over the empty Atlantic Ocean has minimal land, people and stuff under it. The Gulf of Mexico is by far the highest risk because the rocket is still relatively low and slow, so a RUD there could potentially mean a lot of stuff coming down. There's not a lot out there in the Gulf, just a few ships and planes, but the FAA closes a huge area because, while the statistical risk is very low in an absolute sense, it's still too high to take chances.
For later windows, they don't close the corridor underneath to plane and ship traffic, because the rocket's much greater speed and altitude later in the flight allows more precise modeling of where any debris field will come down. There was another diagram showing a statistical model of a debris field impact zone as an elongated oval with color-coded concentric rings dividing the debris mass into classes. The outermost ring is the debris that breaks up into smaller, lighter pieces; it's the widest and longest, but it's the stuff that's much lower risk because it's smaller and slower.
The smallest concentric ring in the middle is where the small amount of heavier pieces most likely to survive will come down, if any do survive. As you'd expect, that innermost ring is shifted toward the far end of the oval and is a much smaller area.

The headline I took away was that there's a very small amount of higher mass debris that both A) is less likely to break up into tiny, lower mass pieces, and B) is less likely to completely burn up. This is the higher-risk mass and, due to its mass, it tends to stay on trajectory, go fastest, farthest and not spread out much. In short, the statistical model showed a very high probability of any higher risk stuff which survives coming down in a surprisingly tiny area.

The overall safety model is based on a combination of factors working together so it meets the safety requirements in each window of the flight for each class of mass. The carefully chosen launch location, spacecraft design, component materials, flight path and a bunch of other factors all work together to put the small amount of higher risk stuff down somewhere that fits the safety profile of very, very low risk to people and property.

Disclaimer: I've probably got some details wrong and left some things out, but this is the sense I got from what I learned. I came away feeling that the safety work done on space launches is comprehensive, diligent and based on a long history of robust, peer-reviewed science backed up by detailed engineering tests as well as real-world data from decades of launches, RUDs and de-orbits.
A fun side story: a few months ago I was at the Hacker's Conference and Scott Manley ("Everyday Astronaut" on YouTube) was attending as he often does. He brought along some interesting space artifacts just to set out on a table for casual show and tell. I was able to pick up and examine a Starship heat tile that was fished out of the Gulf of Mexico. It was surprisingly lightweight, sort of like a thick wall piece from a styrofoam picnic cooler, with a very thin, hard shell on one side. That shell was clearly very brittle, as it had already been broken up and I was holding an index-card-sized shattered piece that weighed maybe a couple of ounces. This was clearly not something that was going to maintain structural integrity post-RUD: once it isn't packed tightly together into a smooth aerodynamic surface, it's gonna shred into tiny pieces. And that seemed to be by design - which apparently worked as intended, because even without a RUD, at the low and slow speeds over the Gulf and near the launch pad it did shatter into small, light pieces, assisted only by the rocket tipping over into the water followed by the relatively mild explosion of the remaining propellant (mild compared to an unimaginably violent orbital RUD, that is). Holding it, I remembered the debris field oval diagram and thought, "this is the smaller, slower, safer stuff from the outer zones."
Oops! I somehow conflated two space YouTubers who I sometimes watch and appreciate. Scott Manley is just Scott Manley on YouTube. Thanks for the correction.
I know this is just another Reddit post so it could be fake, but supposedly debris is already washing ashore in Turks and Caicos. Specifically in this post a heat shield tile, but the poster mentions other stuff on the beach. So at least some debris probably dropped nearby.
As I said above, the safety model predicts some lightweight stuff with a consistency not much stiffer than tin foil and coated styrofoam could fall not that far downrange. However, that's also the exact stuff that doesn't present serious risk to people or property.
So even if those claims are true, just finding a little debris doesn't invalidate the safety model or indicate there was ever unacceptable risk. The real question is whether any debris from a higher risk class fell somewhere the safety model didn't predict, and why. That would certainly be notable and worth incorporating into future safety models.
In the absence of solid confirmation, I'm going to stick with the model and the basic physics. If the debris is just the expected stuff, I'm sure SpaceX regrets littering the beaches and should definitely pay for some crews to pick that stuff up and trash it.
No pictures or reports of anything falling in the Caribbean. People just love adding to the drama; they will later backtrack and explain that by “rain down” they meant the light show.
It would be extremely unlikely due to the laws of physics, which, last time I checked, were still in effect.
> Unfortunately mods are not very good quality, or too difficult to enjoy.
Most of my playtime in Factorio is with overhaul mods, there's a lot of quality content there. I'm not sure how many of them have been brought to 2.0 or even will be. I'm curious where you think quality lacks.
Space Exploration has been my favorite, but Ultracube, Krastorio2, and Nullius were all quite fun. I'll admit I haven't finished Ultracube and Nullius, but I have restarted my Ultracube run in 2.0.
> Space age is interesting because new planets are being added by fans, but I don't think they're going to be as well designed and really bring anything interesting.
Check out Maraxsis and/or Cerys, these were the kinds of planet mods I was hoping for in Space Age. Maraxsis is an ocean planet and you build underwater and in deep trenches, and the planet's building has a base 50% quality. Cerys is a moon of Fulgora that's really quite small and it's more of a puzzle than anything.
What I mean is that the game design in mods is often poor and unbalanced. Only vanilla and Space Age seem like accessible games.
Generally, game design revolves around the concept of an effort-reward loop: the player must feel they are regularly advancing in the game, at a steady pace, with a difficulty that slowly increases.
Mods are often badly designed because they are made by hardcore Factorio fans who don't understand or follow this golden rule.
There was an FFF that talked about a game designer they hired, and Wube constantly had to make his designs simpler so the game could be a success. It requires a lot of work.
Making a great game is about "easy to learn, hard to master". A lot of people don't understand this, but a game must be attractive enough for casual to medium players, or it just will not sell.
The magic of vanilla Factorio and Space Age is that the difficulty is well designed. Mods often just don't follow that rule, or are lower quality.
There is more psychology behind game design than we want to admit.
I skipped it because it's 40 minutes long and I had lots of things to do.
I used the AI tool because I didn't expect anyone else to take the time to explain it, and the 19-page PDF seemed to meander without a coherent thesis.
So the answer to your question is 0 because that's not what happened.
Karpenter is great for managing spot nodes as well. Most of our clusters run a small AWS managed node group for Karpenter itself, and the rest of the nodes are spot instances managed by Karpenter.
I can't speak for the OP, but I've used a bare git repo for a number of years to manage my dotfiles, and in the few cases where I need to handle differences between machines, I've always been able to find a simple and straightforward, albeit ad-hoc, solution: for my shell, I `source local-config.${hostname}` when it exists; for my emacs config, I have a couple `cond` blocks in my config.el file; my preferred terminal emulator (kitty) can load multiple configuration files in sequence via passed arguments, so I can write a simple wrapper script and use a per-machine override .conf if necessary; etc etc. I don't currently put secrets in any text configuration files (nor can I envision myself doing so in the near future; I generally use a password manager, ssh/scp, or magic-wormhole to move secrets between workstations and servers where I have a user account), so I don't have a good answer for that one.
I imagine these solutions would scale poorly if I had a large number (dozens? hundreds?) of machines which all needed unique configurations, or if I used more tools which needed per-machine configuration, or if I needed to stick secrets in configuration files for some reason. Luckily for me, that's not the case, and my system has served me quite well as a result.
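Concretely, the shell part of what I described looks roughly like this; the paths, the `dot` alias name and the local-config file name are illustrative, not exactly what I use:

    # One-time setup: a bare repo plus an alias that drives it with $HOME as the work tree
    git init --bare "$HOME/.dotfiles"
    alias dot='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
    dot config status.showUntrackedFiles no   # keeps `dot status` readable

    # Per-machine overrides: near the end of ~/.bashrc or ~/.zshrc, source a
    # host-specific file if it exists, so host tweaks win over shared settings
    host_cfg="$HOME/.config/shell/local-config.$(hostname)"
    [ -f "$host_cfg" ] && . "$host_cfg"

The emacs and kitty cases are the same idea expressed in their own config mechanisms: load the shared config first, then layer the host-specific bits on top.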
It doesn't seem like most Nobel laureates do their work with the sole intent of winning. I think it serves more as one of the highest forms of recognition, not necessarily a place to stand on a podium.
They're known for being pretty willing to kick out unruly guests, and I definitely appreciate it. I actually haven't had any issues at any of their locations in Denver, CO.
There are obvious exceptions: big fan-service opening movies like Endgame were quite loud, but that was a whole-audience experience and it was fun. For typical movies, and after opening weekend, most people are respectful.
Went to John Wick 4 at the Alamo Drafthouse in NYC and had to wear AirPods the entire movie since it was so unreasonably loud. Totally bizarre experience; hated the extra 10 decibels.