I looked at the source of the original (like maybe many of you) to check how they actually did it, and it was... simpler than expected. I drilled myself so hard to forget tables as layout... and here it is. So simple it's a marvel.
A form of virtualization was first demonstrated with IBM's CP-40 research system in 1967, then distributed via open source in CP/CMS in 1967–1972, and re-implemented in IBM's VM family from 1972 to the present. Each CP/CMS user was provided a simulated, stand-alone computer.
VAX/VMS originally offered virtualization of the PDP-11 (the VAX-11's PDP-11 compatibility mode). Windows NT benefited from the cancellation of MICA/PRISM at DEC: it virtualized/isolated the once messy, unreliable, cooperatively multitasked Windows 3.1/9x world into something more isolated, reliable, and concurrent, with parallel processing, where the fundamental unit of isolated, granular execution was the process, as in UNIX.
The DOS-mode "VMs" that run within Windows 3.x/9x/NT aren't really isolated VMs, because they can't replace the DPMI server or launch another instance of (386-enhanced-mode) Windows. All they do is semi-isolate real-mode and DPMI client apps to allow multiple instances of them. They can still do bad things™ and don't have full control of the "system" the way a real system, an emulator, or a hardware-assisted type-1 or type-2 hypervisor does. They're "virtual" in the way DESQview was "virtual".
Consumerized enterprise virtualization happened in the PC world with VMware Workstation->GSX->Server->ESX->ESXi/vCenter in relatively quick succession around 2001-2005. Xen popped up about the time of ESX (pre-ESXi).
IBM keeps quietly churning out z-based mainframe processors like the z17. Software from the '60s and '70s still runs because they don't do the incompatibility churn that has been more and more widely adopted over the past 15 years: break everything, all the time, so that nothing "old" that relied on a long-lasting standard or compatibility ABI keeps working. I'm sure it's a lot of work, but churn is also work, especially when it breaks N users. Also, I don't think many folks from the PC-based enterprise server world appreciate the reliability, availability, and serviceability features mainframes have/had... although vMotion (moving VMs between physical machines linked to shared storage) was pretty cool when it came out.
> The basalt fibers typically have a filament diameter of between 10 and 20 μm which is far enough above the respiratory limit of 5 μm to make basalt fiber a suitable replacement for asbestos.
The source mentioned is a basalt fiber brand's website, so I'm not sure that's enough for confidence.
So does fiberglass. I would dislike working with the aforementioned basalt fiber; I suspect it's like fiberglass or carbon fiber in that you'll end up itchy later, unless you do a really good job with your PPE, e.g. taping gloves to your sleeves.
This is exactly the kind of behavior that facilitates conspiratorial thinking. You could have looked into it and found sources that cover the harmful effects of stone wool. Instead, this "just pointing out" that it might be problematic is lazy, dumb, and potentially destructive.
You want people to be curious and investigate? Then don't snap at them when they ask a question or express confusion. Respond and show your work and they'll learn by example. Snap at them and you'll raise the temperature of the discussion and make it more polarized and reflexive, exactly the opposite of your stated preference.
And they aren't wrong: inhaling basalt fibers is dangerous, and long-term exposure could injure or kill you. It's just a different mechanism than asbestos. https://en.wikipedia.org/wiki/Silicosis
> (NB: I do not know if or claim that basalt fibers are more dangerous than alternatives.)
For what it's worth, the ex-composite-shop guys I used to work with said that basically everything you can make a composite out of is horribly nasty: carbon fiber, fiberglass, basalt fiber, probably anything, period. After repeated exposure you develop contact dermatitis to that type of fiber, and the shop moves you on to working with something else, until it happens again. Contact dermatitis is just the first visible sign; it gets worse from there. Eventually you're probably going to want to get out of the shop entirely.
See uses here: https://en.wikipedia.org/wiki/Basalt_fiber
I am no materials scientist, so I can't comment on the actual reasons it might be better in specific cases than Kevlar, Dyneema, or carbon. But from experience there's a lot I don't know, and especially in engineering there's a lot to consider when putting materials under stressful conditions; that might make this material superior to those mentioned above in some specific niche.
I understand OP's sentiment fully - and the response is probably "it depends" :D
Culture and art are volatile things. Let's assume a game and its mods are a piece of culture and art; then an update of the original that disrupts the original aspects is basically the destruction of art.
In olden times, in the '90s, when games were offline, you could mod to your heart's desire and nobody could take it away. And by now it's recognized as cultural heritage - even though those old games become less and less appealing to an audience used to better game UX. (This is a bold statement by me. My generation grew up with those graphics and loves them; our grandchildren will ask us why we played like that, the same way they'll never understand why people used loud, noisy typewriters when you can just tell your phone to write the text up.)
Still - typewriters remain usable. But copyright law, online-only games, and forced updates really do destroy that game you played 10 years ago, as you cannot (legally) access it anymore. Mods can be updated, but that requires recreating that art - if it's even still possible with changed APIs.
But then, game developers need to live off something, and updating and improving games should always be within their rights; see No Man's Sky and how it changed over the years into a completely different game, in a way that would not have been possible otherwise.
IMHO it would be simple to keep significant old versions available to the general public, like WoW did with its Classic rollback (not sure if this is the best example) - or like System Shock: there's the remake and there's the original, and everyone can use whichever version they prefer, without preventing the original developer from publishing and improving.
I would really be interested in an actual comparison, where someone works out the full TCO of a MySQL server with backups, a hot standby in another data center, and admin costs.
On AWS, Aurora RDS is not cheap. But I don't have to spend time or money on an admin.
Is the cost justified? Because that's what cloud is. And that's not even talking about the level of compliance I get from having every layer encrypted, when my hosted box is just a screwdriver away from data getting out the old-school way.
When I'm small enough or big enough, self-managed makes sense and is probably cheaper. But when getting the right people, with enough redundancy and knowledge, becomes the expensive part...
But actually, I've never seen this in any of these arguments so far. Probably because the actual time required to manage a DB server is really unpredictable.
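Even a rough back-of-envelope would be a start. Here's the shape of the comparison I mean, as a sketch where every single number is a made-up placeholder (not a claim about real AWS or hosting prices; plug in your own quotes):

```python
# Back-of-envelope TCO sketch: self-managed MySQL vs. a managed service.
# Every number below is a hypothetical placeholder, not real pricing.

HOURS_PER_MONTH = 730

# Self-managed: primary + hot standby in another DC, backups, admin time.
server_cost = 2 * 150.0    # two dedicated boxes, $/month each (hypothetical)
backup_storage = 30.0      # offsite backup storage, $/month (hypothetical)
admin_hours = 10.0         # monthly admin effort -- the unpredictable part
admin_rate = 100.0         # loaded cost per engineer-hour (hypothetical)
self_managed = server_cost + backup_storage + admin_hours * admin_rate

# Managed: Aurora-style instance pricing plus storage and I/O.
instance_rate = 0.29       # $/hour for a mid-size instance (hypothetical)
managed_storage_io = 60.0  # storage + I/O, $/month (hypothetical)
managed = instance_rate * HOURS_PER_MONTH + managed_storage_io

print(f"self-managed: ${self_managed:,.0f}/month")
print(f"managed:      ${managed:,.0f}/month")
# Whichever way it comes out, the result hinges on admin_hours -- which is
# exactly the number nobody can predict, and why these writeups never exist.
```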
> Probably because the actual time required to manage a DB server is really unpredictable.
This, and also startups are quite heterogeneous. If you have an engineer on your team with experience hosting their own servers (or at least a homelab person), setting up that service with sufficient resiliency for your average startup will be done within one relaxed afternoon. If your team consists of designers and engineers who have hardly ever used a command line, setting up a shaky version of the same thing will cost you days - and so will any issue that comes up.
It's a skillset that is out of favour at the moment as well, but having someone who has done server ops and devops and can also develop is generally a bit of a money saver, because they open up possibilities that don't exist otherwise. I think it's a skillset that no one really hired for past about 2010, when cloud mostly took off; it got replaced with cloud engineers or pure devops/ops people, but there used to be people with this mixed skillset on most teams.
I've never had a server go down. Most companies don't need a hot standby because it's never going to be used.
AWS and Azure have each gone down with major outages more over the last 10 years than any of the servers at the companies I worked with in the 10 years before that.
And in comparable periods, not a single server or disk failed, or whatever.
So I get that SOME companies need hot standby servers; almost no company, no SaaS, no startup, actually does.
Because if it were that mission-critical, they would have already had to move off the cloud, given how frequently AWS/Azure/etc. have gone down over the last 10 years, often for half a day or so.
I've had a lot of servers go down. I've had data centers go down. For various reasons - though normally not a failed disk, but configuration errors due to human mistakes.
And I've seen enough cases where the company relied on just that one guy who knew how things worked - and when they retired or left, you had a lot of work ahead of you understanding the systems that guy maintained and never let anyone else touch. Yes, this might also be a leadership issue - but it's still an issue if you have no one else with that specific knowledge. So I prefer standardized, prepackaged, off-the-shelf solutions that I can hire replaceable people for.
Why do games on Windows ship their own C++ redistributable? Well, the same problem. And for the very same reason, macOS app bundles come with a lot of libraries, and we still see a lot of updates after every macOS release.
A lot of known issues can be avoided with more experience and cooperation before changes happen.
Before anybody mentions Proton (because somebody always mentions Proton):
Proton is WINE, but maintained by Valve, which takes a lot of Valve's resources (not the users'). But the key is Steam! Valve controls the Steam store.
It is still bad, and Valve should press hard for native ports (e.g. Linux-only Steam Awards), reducing the long-term workload for Valve. WINE is not a solution; it remains a workaround. That is why we use Inkscape and not Adobe.
PS: Remember when Apple dropped 32-bit iOS? And PPC? And the classic APIs? Microsoft is trying to remain bug-compatible. The problem? They're bug-compatible! My thinking is similar to Torvalds': Linux, GNU (glibc/libstdc++), systemd, and Wayland should strive for compatibility whenever possible. Users love compatibility. Programmers love compatibility. But it is hard work, and it becomes difficult when security implications are involved. As long as only recompilation is needed for compatibility, I'm fine. When we need to adapt code, I get unhappy.
People say that. They call out the bad examples (there are some!) but never mention the good examples:
ioQuake3 - still works
CS2 - still works
HL1, HL2, CS1, CSGO - still work
Unrailed - still works?
UT2003 - here it gets hard; unmaintained since ca. 2003. But it is doable if you want it.
Quake3 - same as above.
Most bad ports were made by inexperienced developers. And honestly, these people need to learn! Especially Windows developers who aren't Linux users cause the problems. Linking weird third-party libraries that aren't maintained themselves is a recipe for disaster, and it indicates planning mistakes in the early stages. A bad sign is when they start to package for specific distributions… run as fast as you can.
I would like to applaud the high-quality work of id, Valve, or Daedalic. Weirdly, Microsoft ships a port of Minecraft. Valve now ships the Steam Linux Runtime to ease ports. And Flatpak lets developers who want to do the packaging themselves (a weird hill to die on…) do it.
Most of these are actively maintained, though. Older ports, such as UT2004, still work, but a few upgrades give a much better experience: sdl12-compat (and now sdl2-compat) really helps, as it brings compatibility with newer APIs (PulseAudio, Wayland, newer controllers, etc.).
I'm not familiar with RetroShare, but from some quick research it seems you only receive content from people you're connected with and their connections. So it should be pretty straightforward to remove toxic connections.
And I can imagine it's possible, as with Adblock Plus et al., to have local blocklists in your client (a rough sketch of the idea below).
It's not like we have this situation elsewhere (mail, web), so...
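To be clear, I don't know RetroShare's actual API; this is purely a sketch of the concept, with every name hypothetical. The point is that filtering can happen entirely client-side, like an ad-block list:

```python
# Hypothetical client-side blocklist filter, Adblock-Plus style.
# None of this reflects RetroShare's real API; it's just the concept.
from dataclasses import dataclass

@dataclass
class Post:
    author_id: str   # peer/key identifier
    body: str

class Blocklist:
    def __init__(self) -> None:
        self.blocked_authors: set[str] = set()
        self.blocked_phrases: list[str] = []

    def load(self, path: str) -> None:
        # One rule per line: "author:<id>" or "phrase:<text>", so lists
        # could be shared and subscribed to like ad-block filter lists.
        with open(path) as f:
            for line in f:
                kind, _, value = line.strip().partition(":")
                if kind == "author":
                    self.blocked_authors.add(value)
                elif kind == "phrase":
                    self.blocked_phrases.append(value.lower())

    def allows(self, post: Post) -> bool:
        if post.author_id in self.blocked_authors:
            return False
        return not any(p in post.body.lower() for p in self.blocked_phrases)

def visible_feed(posts: list[Post], bl: Blocklist) -> list[Post]:
    # Filtering is purely local: the network still delivers everything
    # your connections forward; your client just hides what you blocked.
    return [p for p in posts if bl.allows(p)]
```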
I just had a use case the other day: a year ago my mom sent me a photo of a handwritten recipe from my great-grandmother. I only remembered asking about it, not the response, so I was happy to still have that pic in my history. Had I downloaded the pic, it would be lost among all the other crap I store all over the place. This way it was preserved with the context, and even a voice message from my grandmother (not great-grandmother) remarking on it.