I used multi-seat in Linux with systemd; I just threw some old graphics cards and sound cards into my gaming PC so that the children could play on separate monitors while I worked. Multi-seat is very cool. When upgrading to a new gaming PC, though, it was much cheaper to build 4 separate machines, because CPUs and motherboards with enough PCIe lanes are very expensive.
GPUs still run at decent performance with half the PCIe lanes available, so if you already have a gaming PC with many slots and don't need top performance, it could still be worth it to get two more cheap GPUs and use multi-seat - for those building a mini LAN gaming room at home.
One annoying thing is that Linux can't run many different GPU drivers at the same time, so you have to make sure all the cards work with the same driver.
Proprietary third-party multi-seat solutions also exist for Windows, but Linux has built-in support, and it's free.
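For anyone wanting to try it: the seat assignment is done with systemd's loginctl. Roughly, "loginctl attach seat1 <sysfs-path>" ties a GPU, USB controller, or sound device to a new seat, "loginctl list-seats" lists your seats, and "loginctl seat-status seat0" shows the device paths you can move. (Going from memory here, so check the loginctl man page for the exact usage and your own device paths.)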
I strongly recommend watching/reading the entire report, or the summary by Sal Mercogliano of What's Going On In Shipping [0].
Yes, the loose wire was the immediate cause, but there was far more going wrong here. For example:
- The transformer switchover was set to manual rather than automatic, so it didn't automatically fail over to the backup transformer.
- The crew did not routinely train transformer switchover procedures.
- The two generators were both using a single non-redundant fuel pump (which was never intended to supply fuel to the generators!), which did not automatically restart after power was restored.
- The main engine automatically shut down when the primary coolant pump lost power, rather than using an emergency water supply or letting it overheat.
- The backup generator did not come online in time.
It's a classic Swiss cheese model. A lot of things had to go wrong for this accident to happen. Focusing on that one wire isn't going to solve all the other issues. Wires, just like all other parts, will occasionally fail. One wire failure should never have caused an incident of this magnitude. Sure, there should probably be slightly better procedures for checking the wiring, but next time it'll be a failed sensor, actuator, or controller board.
If we don't focus on providing and ensuring defense in depth, we will sooner or later see another incident like this.
I'm still ambivalent about the rest of the AI features, but the AI translation is absolutely amazing. The quality isn't perfect, but being able to seamlessly translate 20+ languages 100% locally is remarkable.
> 1. Assemble a small council of trusted senior engineers
> 2. Task them with creating a recommended list of default components for developers to use when building out new services. This will be your Golden Path, the path of convergence (and the path of least resistance).
> 3. Tell all your engineers that going forward, the Golden Path will be fully supported by the org. Upgrades, patches, security fixes; backups, monitoring, build pipeline; deploy tooling, artifact versioning, development environment, even tier 1 on call support. Pave the path with gold. Nobody HAS to use these components … but if they don’t, they’re on their own. They will have to support it themselves.
The difference is that in a small company, it's the owner who is abusing you (or not). It all comes down to the qualities of that one person.
In a large company, it happens regardless of the qualities of the people involved, because it's baked into the processes. Good-natured people can mitigate it to some extent, but they cannot prevent it.
The data is just Binance's application logs for observability.
Typically what a smaller business would simply send to Datadog.
This log search infra is handled by two engineers who do that for the entire company.
They have a standardized log format that all teams are required to observe, but they have little control over how much data each service logs.
I still have my original PalmPilot in a box in the attic. Its existence was a huge life lesson for me.
I asked my boss to pay for it (he did), but he said: "Do you use anything to organize your life and projects right now? If you don't, I don't think a PalmPilot will help you."
As someone who quit Google in large part because of all the stuff like readability that I ran into there (red tape everywhere in sight, low productivity due to process, zero urgency because everyone is fat off the money-printer, no deadlines for the same reasons, etc.), I was about to strongly disagree with you and write yet another excoriating take on why readability is AlwaysBad(tm), etc., etc. I did already snark elsewhere here...
After taking a walk and reflecting, though, I'm remembering something that my manager said to me when I gave notice. Google is not for everyone, for a lot of reasons, and a lot of people who came up in startups especially have problems with it (which is ironic, since so many startups come out of people leaving Google). How you feel about readability may actually be a pretty good test of whether you will fit in at Google in the current era: it's not a small, scrappy company anymore that gets shit done quickly using whatever tool is most efficient RIGHT NOW and ships it as fast as possible to see if it gets product/market fit. It's a behemoth that runs one of the most prolific money-printing machines ever built, and fucking that up would be a DISASTER. It'd be better to have half the engineers at the company do literally nothing for 10 out of 12 months in the year than to let someone accidentally break the money-printing machine for a day while they figure out how to fix it.
And obviously, it's better if everyone is productive even as they're shuffled around from project to project (which they will be, a lot), which means that you want as little "voice" as possible in their code. At a lot of companies you can tell exactly who wrote a line of code just by the style (naming, patterns, etc.), without even checking git blame, but at a place like Google individual styles cause problems. So the goal is to erase as much individual voice/style/preference as possible, and make sure that anyone can slot in and take over at any point, without having to bother the person who originally wrote the code to explain it (they might be on another project, in another division, at another company, and even if they're still at Google there is a very strong sense that once a handoff is complete, you should not be bugging people to provide support for stuff they've moved on from).
In that sense coding at Google is a lot closer to singing in a choir than being the frontman in a band: you need to commit to and be happy with minimizing what makes you unique or quirky, rather than trying to accentuate it and stand out. Some top-tier singers just can't force their vibratos down, or hide their distinctive timbres, or learn to blend with a group, and are absolute trash in a choir; it's not their fault or some ego failure, it's just that there are some voice types that don't work in groups, and that's fine, you just don't add those people to a choir.
At least below director-level (or maybe L7 equivalent on an IC track), Google doesn't need individuals to come in and shake things up, bust apart processes and "10x" a codebase. That's startup shit, and even if it might sometimes be worth some risk for the high payoff, it's too dangerous for them to allow for the thousands upon thousands of (still quite senior, sometimes 15+ years of experience) L4 or L5s at the company. The same process that prevents that from happening also makes sure that the entire machine keeps humming along smoothly. If being a part of that smoothly functioning machine while painting within the lines is exciting, then Google can be one of the best places on the planet to work; if you would be driven crazy because you can't flex and YOLO a prototype out to users in a couple days, then it's really not going to be for you.
I'm in the latter camp: I couldn't handle almost anything about the process, and was so desperate to move quickly that I started talking to investors to line up my own funding a few months after I joined. But even as a quick-quit (<1 year), I have the utmost respect for the company and the people, and highly recommend it to almost everyone who applies there (the exception being people like me, who TBH should just be doing their own startups). Everything they do has a pretty well-thought-out reason, even if I don't like following those rules myself.
I really like the approach Netflix took 10 years ago when it was still small. They hired mature people so they could get rid of processes. Indeed, they actually tried to de-process everything. As a result, things just happened. "Non-events" were often mentioned and expected at Netflix at that time. Case in point: active-active regions just happened in a few months. A really easy-to-use deployment tool, Asgard, just happened. The VP of CDN at the time said Netflix would build its own CDN and partner with ISPs; well, it just happened in merely 6 months with 12 people or so. Netflix said it was going to support streaming and move away from its monolithic Tomcat app, and it just happened. And the engineers there? I can't speak for others, but I myself had just one meeting a week -- our team meeting, where we just casually chatted with each other, to the point that the team members stayed close and still meet regularly nowadays. I also learned that the managers and directors had tons of meetings to set the right context for the team so engineers could just go wild and be productive. At the time, I thought it was natural, but it turned out to be a really high bar.
The sad fact of software projects is that some part of the work is always a bit of bribery to keep skilled labor engaged. You’re always throwing them bones, letting them work on a few things that don’t really need to be done but make them happy. That’s lost sometimes when I’m lamenting NIH or wheel reinvention, but it’s a matter of degrees. Let them have one big thing and a bunch of small things.
Companies go off the rails and get trash-talked when they clamp down and go for zero fun, 100% financially justifiable task lists.
As a tech ages out, you can't find anyone good to work on it. And when you make them work on it anyway, morale craters.
The original "meet in person" model was so good. All the bank transfers, paypal, etc. never made any sense.
Just a shout out to LocalBitcoins. When I was living in Argentina during the almost-hyperinflation of 2014 or so, I got all of my living cash in pesos and USD by selling BTC to local traders who I met on Localbitcoins.com. Usually I would meet them in a coffeeshop, have a coffee, wave my phone and transfer some BTC and they would hand me an envelope with $1000 USD and ARS$9000 that would cover my rent and food for the next month. The cost was +5% for the BTC-to-cash trade, but the exchange rate was "blue" meaning, at the time, one peso was officially worth $0.33 but the blue market had it trading at somewhere closer to $0.06... and trading for BTC got you the live blue rate. One kid I dealt with a lot had a wonderful business model. He was a math student at the university and opened an office buying and selling Bitcoin. When I asked him where he got all this USD cash which was almost impossible to find in Argentina, he explained:
People gave it to him. Argentines who got 20% of their paycheck in dollars, or who had dollars stuffed under their mattresses that they couldn't get out of the country. They paid him 5% to move it out.
He took their cash and sold it to people like me - expats, tourists - for BTC, and sent the BTC to his partner in Miami, who changed it out to USD and put it into American bank accounts for them. So he got 5% on either side of the trade.
This was the moment, around 2013 or so, when I really believed BTC would be a unit of exchange that would break the artificial currencies around the world and serve as a leveler that would allow anyone with skill to work anywhere and go anywhere; it would demolish the old sclerotic monetary regimes.
Then it got regulated [edit: all its endpoints got pwnd], and LocalBitcoins long ago stopped showing any P2P meetups for unregistered, silent trading-for-cash. That was its original purpose, and it served it very well at the time.
I no longer work at Firebase / Google, but two points:
1. There may be issues with the GCP integrations & UX/DX, but GCP integration is good for many customers and necessary for the future of the business.
One of the common failure modes for the 2011-2014 crop of Backend-as-a-Service offerings was their inability to technically support large customers. The economics of developer tooling follow a super-power-law. So, if you hope to pay your employees, you'll need to grow with your biggest customers.
Eventually, as they become TheNextBigThing, your biggest customers end up wanting the bells and whistles that only a Big Cloud Platform can provide.
This was a part of the reason we chose to join Google, and why the Firebase team really really really pushed hard to integrate with GCP at a project, billing, and product level (philosophy: Firebase exposed the product from the client, GCP from the server) despite all the organizational/political overhead.
> For example, if a user calculates "89 x 15 = 1335" on one calculator and taps the arrow key, the result "1335" will be displayed on the other calculator, allowing the user to continue a problem while the previous equations are still shown on the screen. This makes it easy to notice errors.
While the UI is very different, the key benefit described here reminds me a lot of Soulver: https://soulver.app/
I love Soulver for how quickly it lets you throw together quick guesstimates and sanity checks. In my experience, the ability to incorporate previous results by reference, and have them update on the fly, greatly improves clarity and my confidence in the results.
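To give a flavor of the "previous results by reference" idea, here's a toy sketch (my own illustration, not how Soulver actually works; the ans1-style names are made up): each line can reference any earlier answer, and editing an earlier line just means re-evaluating the list so everything downstream updates.

```python
# Toy "answer reference" evaluator: line N can use earlier results as
# ans1, ans2, ...; re-running the list propagates edits downstream.
def evaluate(lines):
    results = []
    for expr in lines:
        env = {f"ans{i + 1}": r for i, r in enumerate(results)}
        results.append(eval(expr, {"__builtins__": {}}, env))
    return results

print(evaluate(["89 * 15", "ans1 + 120", "ans2 / 3"]))  # [1335, 1455, 485.0]
```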
"I feel there is a lot of room to develop better tools that are not so horrid"
Probably not. SAP is an everything-to-everyone product. To use Fred Brooks's distinction between essential and accidental complexity: the problem with everything boxes is that even just the essential complexity of such a product is so insanely large that it alone is sufficient to produce a "horrid" product. Of course, they don't stop at the essential complexity; they add a healthy dose of accidental complexity on top. But so would any putative "less horrid" replacement by the time it successfully solved the same essential problems as SAP.
Closer to home, "bug tracking" is an example of an everything-to-everyone product. I can't even count the number of times I've seen a "simpler, less horrid" bug tracker get started because of the perceived horribleness of the current bug tracker, only for the new bug tracker to become the old one once it tried to grow into the same niche. This may sound strange to 2022 ears, but I remember when JIRA was being pitched as the simpler solution and developers were pretty keen on it. There will never be a mature bug tracker that is any fun to use, because by the time it's everything to everyone it'll just be the same morass of being everything to everyone again. It turns out that the "less horrid" bug tracker wasn't actually intrinsically less horrid... it just wasn't everything to everyone yet. Instead it was a bug tracker for developers, so developers loved it. But then, if developers are going to be allowed to use it everywhere, it has to satisfy all the other stakeholders too, and it inevitably mutates into an everything-for-everyone product.
(See also: Salesforce. Letting users configure custom DB schemas is a key indicator of an everything-to-everyone product. To some extent, programming languages too: a lot of people pine for something simpler (the ones who think it would all be fine if we just went to visual languages, or jammed everything into "no code") and don't understand that by the time you have a language that is truly general purpose, you've got a large amount of irreducible essential complexity whether you like it or not.)
This is very inaccurate in many respects; I'm not sure there's actually anything accurate in it except the fact that Airbnb did use, and eventually dumped, RN (source: I worked at Airbnb 2015-2019 on the platform team).
As others have linked, Airbnb actually did write a summary about this.
Basically, what went down: for years Airbnb had pretty strong native mobile engineering on both iOS and Android (some of the best engineers I've worked with), so there was no issue investing more into it. They had the resources, and the platforms were working well.
The main challenge, however, was that anything the company needed to launch had to be built on 3 platforms at the same time (4 if you count tablet). Timelines were often tight, which made the overall coordination between the platforms tough.
One of the main examples of this was in 2015-2016, when the team I was part of basically redesigned and rebuilt the mobile apps, and created a design system/component/layout library at the same time for each platform. It was a huge effort.
At the same time, there was one guy basically hacking on React Native, and he ended up building the whole component system by himself in a few weeks, compared to the months it took on the native platforms. This was obviously impressive, and the company decided to experiment with whether we could use RN more broadly to speed up development. I think the idea was never to fully replace the native platforms or even the core flows, but there were a lot of random flows and screens in the guest and host apps that only a minority of users ever see.
So from 2016 to 2019 there was a core team working on RN adoption, and different people and teams used it in different projects. There were some technical issues at times with performance and navigation, but those eventually got solved.
Where it all fell down, however, was that there wasn't that much adoption, and there was no clear group who would actually use RN. Mobile engineers didn't want to write RN code, and web engineers didn't want to build for mobile. RN also requires an understanding of the native platforms, and if the point is to build for both iOS and Android, then you need to understand both platforms, which is a rare skill set.
So in the end it was just more complicated than it was worth.
I think a new company/product that builds its team and culture around RN will have better success.
I'd still think hard about whether it's the right choice given what you are building. No matter how good you are, RN/Flutter apps can be OK, but it's really hard to make them feel fully native.
The article describes a predictive project-management approach, and it's the same misguided focus on deadlines you hear from people with a predictive mindset. But an adaptive approach is so much more effective.
The adaptive approach: products should ship as soon as they're ready for market, not according to some deadline. To ship sooner, they should be sliced into pieces that have compelling individual value. Each piece is shipped when ready. And those pieces ("valuable increments") should themselves be sliced into even smaller pieces that could ship sooner in case new business opportunities come along. That way, when a new business opportunity does come along, you're not left with a choice between abandoning work and ignoring the opportunity in favor of finishing what you started.
There are times when deadlines matter, such as regulatory requirements, tax season, the Christmas shopping season... but people who are "setting deadlines" aren't talking about real deadlines; they're talking about artificial deadlines because they think it's the right way to manage software teams. (They don't understand that managing by deadline causes teams to compromise internal quality, and that hurts productivity, which creates stress, which forms a vicious cycle that further harms quality and productivity.)
If you do have a real deadline, the correct way to manage the risk of missing that deadline is to, once again, use an adaptive approach: identify the smallest thing that will mitigate the risk, build that first, and then incrementally expand from there in as small pieces as possible. Keep the software shippable at every step of the way and ship on the deadline with whatever you finished.
The most important message here is "take care of yourself" near the bottom. I want to share a story about how I did a moonlight effort the wrong way.
I treated it like a second job. When my day job was finished, I started my second job working on a game with a team of coworkers and friends. We were working crazy hours, but we were crushing it. Before long we got the attention of Amazon who wanted to acquire our game as part of their Fire phone launch. Our game focused heavily on motion controls which was perfect for the direction they were taking the phone (they were really pushing the envelope with motion + cameras + other sensor fusion things). We worked even harder. Before long we had a meeting with Jeff Blackburn. He showed it off to Bezos and got signoff to acquire us. We worked even harder. Contracts were signed and due diligence started.
Then our lead dev died.
Amazon backed out as we had no way of completing the game on time. We had poured everything into this game with the intention that the payoff would be worth it. We never prioritized enjoyment of what we were doing or our own health. Our mistake was we hadn't left room for failure.
Whatever you do, ensure you have gas left in the tank for when things go wrong. Things will always go wrong in ways you'll never be able to plan for. If you stretch yourself to the limit, then when a bump in the road hits, you'll break, and everything/everyone around you will suffer.
I now have a far, far healthier approach to moonlighting. I try and work a little bit every day on something. It doesn't need to be 5 hours of work - 20 minutes is enough. I've been working on something for the last 3 years or so and while it doesn't have the velocity that that game did, it makes me happy while I work on it. If it fails, it's OK because I find joy in doing it. Success isn't a requirement.
My biggest mistake, that I made again and again, was not leaving a job when it was time. I thought I had something to prove, but there was never any point to it. You don't owe anything to an employer. You can't prove anything to an employer. They have absolutely no loyalty to you, and care less than nothing about what is right or wrong, wise or foolish.
So: If you ever think things might not turn out as well as you hoped, move on. There is so much else going on in the world that is at least as interesting as what you are doing, where you have a much better chance of making a difference, that spending time on things that you might not end up proud of is a terrible waste of your short time on Earth.
Finally, after about 100 meters of semi-infinite scrolling, he says something important.
"Why Does DARPA Work? " covers DARPA program managers extensively, but the key points are worth reiterating both as a refresher and to emphasize that empowered PMs are the core thing one should not mess with. "What about managing programs through a committee?" NO. "What if performers just submitted grants and coordinated among themselves?" NO. "What about a more rigorous approval process to make sure money isn't wasted?" NO. "What if people could be career program managers?" NO. You get the point."
The point being that DARPA program managers are people who've done something good but are not primarily managers. Their role is somewhat like VCs, without the greed. The problems DARPA works on tend to be rather specific. Many reflect specific military goals. Here's the current project list.[1] Examples: Automated air combat. A transportable linear accelerator. "Persistent, wide-area surveillance of all UAS operating below 1,000 feet in a large city." "Gun-hard, high-bandwidth, high-dynamic-range, GPS-free navigation." Note how different this is from a list of startup companies. These are technical goals, not business goals. Most are hard engineering problems, not pure science. They reflect specific military problems.
The author's article doesn't reflect that tight focus.
DARPA has a customer - the US Department of Defense. The purpose of DARPA is to solve hard problems for DoD, sometimes before DoD even knows it has them, and sometimes because DoD has a big problem and needs it dealt with. A "private ARPA" as proposed by the author has no customer. That defocuses the organization. It's not clear what a "private ARPA" is for. Even after reading all that verbiage.
Then there's the problem of who does the work. DARPA funding generally goes to small parts of companies that do major work for DoD, or that have expertise in making some specific thing. Not post-doc researchers. Not startups. DARPA does not create organizations to work for it. It uses little pieces of existing organizations.
The paper looks almost entirely at US institutions. This needs more reach. China has been opening lots of research organizations. Some are boondoggles, some produce good results. Take a hard look at that. Look at what Korea is doing. Figure out why Japan's R&D stagnated.
There are two good books about DARPA - "The Pentagon's Brain", and "The Imagineers of War". I've read the first, but not the second. The first gives a good sense of how the organization works.
(It's been a long time, but I've worked on a DARPA program.)
> We pay for this, even if we are the ones making it, to test our Stripe Integration.
This is so important! This can go wrong even at large companies. At Netflix we once had a billing issue, and it took a while to even notice, because no one in the company was paying for the service (it was just free for all employees); the problem looked like general subscriber attrition, not people slowly hitting failed payments.
After that incident, the company gave everyone a $16/mo raise (which is what it cost to have streaming and DVDs at the time) and then asked us all to set up our own payment. The goal was to have everyone paying and hopefully using different payment methods, so that if something went wrong at least a few employees would be aware of it.
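For shops that can't dogfood with real subscriptions the way we did, the baseline version of this idea is at least exercising the real charge path against Stripe's test mode on a schedule. A minimal sketch (assumes the official stripe Python library and a test-mode key; "pm_card_visa" is one of Stripe's built-in test payment methods):

```python
import stripe

stripe.api_key = "sk_test_..."  # test-mode secret key (placeholder)

# Drives the same charge path as a real card, but with Stripe's test
# payment method, so a breakage shows up here before customers see it.
intent = stripe.PaymentIntent.create(
    amount=1600,  # $16.00, in cents
    currency="usd",
    payment_method="pm_card_visa",
    payment_method_types=["card"],
    confirm=True,
)
assert intent.status == "succeeded", intent.status
```

But as the story shows, test mode only proves the code path works; real money moving through many different real payment methods is what catches the failures that matter.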
When I was at EFF, I helped run the Cooperative Computing Awards project (https://www.eff.org/awards/coop/), which was funded decades ago by an individual donor and provides cash rewards for finding prime numbers of certain sizes.
Because of EFF's advocacy for cryptography, people always thought that there was a more direct connection between the Mersenne prime search and cryptographic techniques, but the connection is at best distant and tenuous. The record primes that GIMPS finds and that can receive EFF's awards are
* much (much much) bigger than the primes that we use for cryptography
* too big to do almost any kind of practical computation with
* not secret, so couldn't be used as part of a secret key
The distribution or explicit values of Mersenne primes (and, correspondingly, perfect numbers) also don't really help in our assessment of the difficulty of the RSA problem or the security of public-key cryptography, because they aren't closely related to any of the fastest factorization algorithms that apply to arbitrary numbers.
(Mersenne primes are much easier to deterministically test for primality than numbers in general, which is why the record primes that GIMPS finds are Mersenne primes. The runtime complexity of primality-testing algorithms for special-form numbers is sometimes much less than algorithms for an arbitrary number, and this is a great example.)
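For the curious, the special-form test in question is the Lucas-Lehmer test, and its core is short enough to sketch here (a minimal version assuming p is an odd prime; real GIMPS code makes the repeated squaring of multi-million-digit numbers feasible with FFT-based multiplication):

```python
def mersenne_is_prime(p):
    """Lucas-Lehmer test: for an odd prime p, M_p = 2**p - 1 is prime
    iff s == 0 after p - 2 iterations of s -> s**2 - 2 (mod M_p), s0 = 4."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13) if mersenne_is_prime(p)])  # [3, 5, 7, 13]
```

No known test for an arbitrary number of comparable size comes anywhere close to this efficiency, which is exactly the point above.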
I think the main contribution of projects like GIMPS to cryptography and computer security lies in getting people more interested in number theory and mathematics in general. Hopefully that will lead to faster improvements in humanity's knowledge of mathematics in ways that lead to more secure public-key algorithms or to better public assessments of the algorithms we have now. (Although a lot of the important current research on public-key cryptography is not about factorization but rather about elliptic-curve systems and post-quantum systems.)
I always felt I had a hard time explaining to people that the awards existed in order to publicize the idea of cooperative distributed computing projects, which were quite novel when EFF's awards were first created, and which were a demonstration of the power of the Internet to help people who've never even met each other work together. (The awards, contrary to the hopes and dreams of math enthusiasts and cranks around the world, didn't anticipate progress through new research or ideas so much as through more volunteer computer time.)
Probably today we have ample other demonstrations of that power, and that concept is no longer in much doubt, and the GIMPS project probably no longer even registers for most people as a particularly impressive example. But it is impressive in continuing to set multiple world records on one of the simplest, purest, most basic mathematical problems.
Peter Higgs, of the boson and Nobel: “It's difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964. […] Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough.” — https://www.theguardian.com/science/2013/dec/06/peter-higgs-...
I will just offer some unsolicited advice about PhDs, as it came up a few weeks ago also. And the author touches on this. Of course this mostly applies to empirical sciences, but maybe some theoretical ones too.
An effective PhD advisor/thesis isn't about wandering the woods hoping to find something. It's a guided coaching exercise, with an outcome in mind. Test whether the PhD advisor you select knows this and has a history of taking this approach with former students.
An advisor (and the PhD research he/she guides you on) should be targeting a very conscious choice of incremental versus breakthrough research results. And for most students, you're not going to have a whole lot of mindblowingly field-changing results -- as much as you may think you're a rockstar. So it's just sensible, as an insurance policy, to make sure you've got solid, steady work that you're progressing on.
Learn to set up your research so it shows visible incremental gains on a known path every day, rather than hoping for some breakthrough at the end. Amazing breakthroughs are high risk, and that makes it highly likely you'll have a crisis when the breakthrough doesn't happen.
Concretely, even if you don't know what the answer is going to be at the end of your research, you must think about, or have an idea about, the format of what that amazing answer is going to be. Write the outline of your thesis and "ghost out" what the major charts will be. Write the intro sentences of each chapter -- what are they? (and I don't just mean the boring review of the field part, but your findings part)
You should know what major type of finding, plot, or table your research is going to output. What are the columns and rows of that table, or axes of that plot? How many data points are required? How many of them can already be guessed? Where is the surprise going to be? What is the conclusion going to be?
Draw out the answer you're aiming for, now. If you can't even articulate what the answer will look like, you may be in for a bad time, so work on fixing that. It will also push you and your advisor to be specific about what the output of your thesis will be -- and set you up for a much better PhD experience.
- You have little experience and you are using this to break into the industry, and get experience on many different technologies ("wear many hats").
- They are working on a very specific problem or using a specific technology that you strongly desire to work on and it's difficult to do it anywhere else.
- You want to work a certain way (remote, on the beach, whatever) and they are willing to go this route.
The thing I like most about this vaccine is that it's a T-cell vaccine.
Very broadly speaking, the adaptive immune system has two arms.
One arm, humoral immunity, is about detecting foreign substances in the spaces outside cells: B-cells make antibodies, antibodies inspect molecules floating around, a match triggers the B-cells to proliferate and make more antibodies, and then antibodies tell macrophages and various other brutal cells to destroy and eat everything in sight, resulting in inflammation, but hopefully killing the invader.
The other arm, cellular immunity, is about detecting foreign proteins inside cells: T-cells make T-cell receptors, T-cell receptors inspect peptides presented on the surface of cells through some amazing machinery culminating in the MHC I protein, a match triggers the T-cells to proliferate, and the T-cells themselves go round triggering self-destruction of cells presenting the foreign peptides.
(I'm leaving out MHC II and helper T cells here, let alone T-regulatory cells, gamma delta T cells, and all sorts of other things I don't know about.)
Both arms are useful in response to a viral infection, but ultimately, cellular immunity is key. Viruses commandeer cells and replicate inside them, so to stop an infection, you need to find and kill those cells. Cellular immunity does that.
Vaccines based on injecting proteins or dead viruses can only develop humoral immunity. Vaccines based on modified viruses, like this one, can also develop cellular immunity.
Thanks for the interest in our shared video gaming past. I had a lot of fun making that video. The PS1 was a fun machine: capable, complex enough that you felt it had secrets, but not so bizarre or byzantine that you felt learning them was a waste of time. And you were pretty much the only one in there, as the libraries were just libraries, not really an OS. That was still true of the PS2, although that was a complex beast, but by the PS3 there was more of a real OS presence. If you want some more, slightly different, slightly overlapping info on the PS1 or making Crash, I have a mess of articles on my blog on the topic: https://all-things-andy-gavin.com/video-games/making-crash/
The cynic in me just thinks that it saves the game developers from the need to develop good AIs.
Perhaps it's just my style of video gaming... I play games to relax, definitely not to compete. People sometimes get so wound up in these games, it's supposed to be fun!
So, yes, if a game is multiplayer only, it's a reason for me not to play.