I would even argue that a lot of AI services in the future will be close to free for the public. That is because in a lot of cases, the data received from user interactions is more valuable than the data generated by the AI service.
Bug resolution time depends on how familiar a developer is with the system, how complex the issue is and how impactful the bug is. Not everything can be solved in 24 hours. Not everything has to be solved in 24 hours.
Saying that your developers will solve every problem in 24 hours seems like a toxic PR move.
Yes, that was my reaction as well. I'm still traumatized by an insidious bug in a distributed system that took me around 3 months of nearly exclusive work to diagnose and fix (one-line fix, of course). ENG-168. Never forget.
At a previous company, we were in the early stages of building a massively distributed simulation platform that would power MMOs and government/military simulations. The platform was written in Scala and used Akka extensively (because of reasons). We had a test environment that spun up a decently big game world, and had a bunch of bots run around and do things. It would run overnight.
At some point it was discovered that every once in a while, bots that were supposed to just go back and forth across the entire game world forever would get stuck. It was immediately obvious that they were getting stuck at machine boundaries (the big game world was split into a grid, and different machines would run the simulation for different parts of the grid). This suggested the bug was in the very non-trivial code that handled entity migration between machines.
This was a nightmare to debug. Distributed logging isn't fun. Bugs in distributed systems have a tendency to be heisenbugs. We could reproduce the bug more or less reliably, but sometimes it took hours of running the simulation until it manifested; worse, not manifesting for a few hours wasn't a clear signal that the bug had been fixed.
My investigations were broad and deep. I looked at the Kryo serialization protocols at the byte level. I scrutinized the Akka code we were using for messaging. I rewrote bits and pieces of the migration code in the hope it would fix the bug. Many other engineers also looked at all this and found nothing. A Principal Engineer became convinced this had to be a bug in Scala's implementation of Map. I was very close to giving up multiple times.
At some point there was a breakthrough -- another engineer discovered a workaround. A violent but effective one: flushing every cache and other bits of internal state except the ground truth would get the entities unstuck. We added a button to the debug world viewer appropriately labelled YOLO RESYNC. We were so desperate about this bug, we seriously discussed triggering a YOLO RESYNC periodically.
But if YOLO RESYNC fixed the issue, it meant that there was some sort of problem with the state of the system. I spent some more days and weeks diffing the state before and after YOLO RESYNC (more difficult than it sounds in a not-entirely-deterministic distributed simulation) and narrowed it down more and more until I finally found a very subtle bug in our pubsub implementation. I don't remember exactly what the issue was, but there was some sort of optimization to prevent a message from being sent to a recipient under certain conditions that would "guarantee" the recipient would have gotten the message in some other way -- and the condition was very subtly buggy. Fixing it was a one- or two-line change.
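To make the shape of that kind of bug concrete, here is a minimal, entirely hypothetical sketch in Python (the real system was Scala/Akka, and I don't remember the actual condition, so the names and the flaw below are invented): a pubsub broker that skips a send because it believes the subscriber already has the update.

    # Hypothetical illustration only -- not the ENG-168 code.
    class Broker:
        def __init__(self):
            self.subscribers = {}  # topic -> set of subscriber ids
            self.last_seen = {}    # (subscriber, topic) -> last delivered version

        def subscribe(self, sub, topic):
            self.subscribers.setdefault(topic, set()).add(sub)

        def publish(self, topic, version, payload, deliver):
            for sub in self.subscribers.get(topic, ()):
                # "Optimization": skip the send if we believe the subscriber
                # already has this version, e.g. because state was forwarded
                # to it during an entity migration.
                # The subtle flaw in this sketch: if a migration resets the
                # version counter, last_seen stays ahead forever and the
                # subscriber silently stops receiving updates -- stuck until
                # something flushes the cached state.
                if self.last_seen.get((sub, topic), -1) >= version:
                    continue
                deliver(sub, topic, payload)
                self.last_seen[(sub, topic)] = version

In a sketch like this, wiping last_seen is the moral equivalent of the YOLO RESYNC: it papers over the stale state without explaining it.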
I still remember the JIRA ticket: ENG-168. It tested my sanity and my resilience for longer and harder than anything else before or after.
[EDIT] I saved this ticket as a PDF as a traumatic memory. It was in January 2015, so I got some details wrong, the main one being that it only took about two weeks from the bug report (Jan 28) to the fix (Feb 10). I swear it felt like 3 months.
Well, you could at least start by solving everything in 24 hours that can be solved in 24 hours. More often than not such bugs take days, weeks, or months not because of time in the IDE, but because of backlogs, prioritization, time in test, and longer release cycles. Streamlining that sounds mostly like a win to me.
Note that a bug fixed in 24 hours is also a bug that doesn't have to be fixed later. I mean, the development work has to be done at some point anyway, and this may even save some of the time spent discussing and bouncing the issue around.
Speaking as one of those developers, we suggested the topic. We are proud of this!
I've worked at several companies you know (https://www.linkedin.com/in/macneale) - and this is the least toxic company I have ever worked at. Hands down. We take pride in running a tight ship.
I only tend to see inheritance in engines and libraries, where it makes sense to create more generic, reusable and composable code, since most of the functionality in these is defined by technical people.
It makes no sense to use inheritance in the business layer, because a single feature request can make a lot of the carefully crafted abstractions obsolete.
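To make that concrete with a deliberately made-up example (the classes and the feature request below are hypothetical, not from any real codebase): a business-layer hierarchy that one feature request invalidates, next to a composition-style alternative that absorbs it.

    # The carefully crafted abstraction: "a discount is a percentage off the total."
    class Invoice:
        def __init__(self, base):
            self.base = base

        def total(self):
            return self.base

    class DiscountedInvoice(Invoice):
        def __init__(self, base, discount_pct):
            super().__init__(base)
            self.discount_pct = discount_pct

        def total(self):
            return self.base * (1 - self.discount_pct)

    # Feature request: "discounts can also be a fixed amount, never below a
    # per-customer floor, and they stack with seasonal promotions." None of
    # that fits the percentage-off subclass. Composing pricing rules instead
    # absorbs the change without rewriting a hierarchy:
    class PricedInvoice:
        def __init__(self, base, rules=()):
            self.base = base
            self.rules = rules  # each rule: callable(amount) -> amount

        def total(self):
            amount = self.base
            for rule in self.rules:
                amount = rule(amount)
            return amount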
I've never seen it put quite like this, but it feels right and is refreshingly concrete. Trust the abstractions you can actually design/control for; treat all the other ones as suspect. One still needs the wisdom to tell the difference, but at least focusing on "feature request" focuses the mind. This is at least simple even if it is not "easy".
An argument against OOP where you first need to define/explain the differences between composition and inheritance, Liskov substitution, compare and contrast with traits, etc. is not really that effective when trying to mentor junior folks. If they understood or were interested in such things, then they probably wouldn't need such guidance.
The problem is that there are no qualities that distinguish a simulation from reality. For all we know, we might as well be living in the worst simulation, with unrealistic physics and poor graphics, made by a regular student on a weekend who received a C- for effort.
If the simulation is already simulating all the particles in the universe, then it doesn't matter what humanity does with all those particles. With access to all the particles in the universe, humanity could easily simulate every single particle in a smaller universe at a high tick rate. A simulated humanity could even easily simulate every particle in a bigger universe if we remove the requirement to render the world at a high tick rate.
"If the simulation is ,already simulating all the particles in the universe, then it doesn't matter what humanity does with all those particles"
That would be very unlikely; again, mathematics would likely be the same, and this limits options. The guys simulating us would be bound by mathematics and limits (e.g. computational crunching power).
In "grand theft auto" you also just simulated/render, what is necessary at a given moment. By the way, the rendering would be a nice interpretation for the "observer" of the Copenhagen interpretation of quantum mechanics :-)
You assume that time has to pass at the same speed in the simulation as in the simulating world. Even with our current computing power we could simulate very complex scenarios if we spent a year of continuous computing on a nanosecond of said scenario.
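Just to put a number on that hypothetical trade-off:

    # One wall-clock year of computing per simulated nanosecond (the comment's
    # hypothetical numbers, nothing more).
    seconds_per_year = 365.25 * 24 * 3600       # ~3.16e7 s
    simulated_seconds = 1e-9                    # one nanosecond
    slowdown = seconds_per_year / simulated_seconds
    print(f"slowdown factor: {slowdown:.2e}")   # ~3.16e16x slower than real time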
You assume that the simulating world is bound by the same restrictions as the simulation. Maybe the difference between our simulated world and the real world is like the difference between Minecraft and our world.
We have trouble predicting what the world will look like in a hundred years, and that's with thousands of years of data on both humanity and our world. What hope do we have of stating even one true fact about a world that is simulating ours?
Nick Bostrom’s simulation argument makes those assumptions and others in order to prove that the likelihood we are being simulated approaches unity. A core component of the argument is that they’re enough like us that we are a simulated version of them in a simulated version of their universe.
Without those assumptions there isn’t any basis for a claim at all. “We are in a simulation” isn’t any more coherent than saying “a dog is dreaming us.” It could be, but there’s no reason to believe it is at all.
So, I don’t think you can just wave away constraints and say “Well maybe those are just local” because it raises questions about utility. Why would beings in a world without constraints simulate us in a world with constraints? It wouldn’t be necessary to constrain us. Likewise, an ancestor simulation that runs at a rate less than real time seems to have very little utility.
Unfortunately, once you introduce constraints you suddenly have some minimum condition for which you can’t actually simulate and actually have to just do. And you not only have to do those things for your simulation, but for their simulations, and their simulations… (And all these simulations are required to make the claim that we are almost certainly in a simulation because they’re part of the math.)
I am not saying that the simulating world has no constraints, only that there is no reason for the constraints present in the simulation to perfectly mimic the real world.
I would argue that a perfect simulation is most of the time less efficient than a simulation with very specific parameters. For example, humans studying game theory with artificial agents create very specific environments.
There are also plenty of reasons to run a simulation at a slower tick rate than the real world. For science, we have simulated a black hole, which required hundreds of hours of computation for a single frame. For entertainment, we make movies for which it is not uncommon to need a thousand hours of CPU time just to render a single minute.
The stacking problem can be easily solved by applying the concept of entropy to it. You can't expect to get back the same amount of energy you put into a system. Therefore a simulation can't perfectly simulate the world running the simulation, which means that at the end of every simulation chain there exists a simulation not yet capable of generating a simulation of its own. But this statement gives us no more information about the relationship between the simulator and the simulation.
We can introduce whatever constraints or assumptions we want; it makes no difference. My argument was against the statement that one can say something is more likely or more reasonable when debating whether we are living in a simulation. One can't.
Isn't the point that if pure Python were faster they wouldn't need to be written in other [compiled] languages? Having dealt with Cython, it's not bad, but if I could write more of my code in native Python my development experience would be a lot simpler.
Granted we're still very far from that and probably won't ever reach it, but there definitely seems to be a lot of progress.
Since Nim compiles to C, a middle step worth being aware of is Nim + nimporter, which isn't anywhere near "just python" but is (maybe?) closer than "compile a C binary and call it from python" (rough sketch below).
Or maybe it's just syntactic sugar around that. But sugar can be nice.
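For anyone curious what the Nim + nimporter path roughly looks like, here is a sketch, assuming the Nim compiler plus the nimpy and nimporter packages are installed (the module and function names are made up; check the nimporter docs for the real workflow):

    # Rough sketch, not a drop-in recipe. Requires: Nim, nimpy (Nim side),
    # nimporter (Python side). "adder" and "add_ints" are made-up names.
    nim_source = """import nimpy

    proc add_ints(a: int, b: int): int {.exportpy.} =
      a + b
    """

    # Write the Nim module next to this script just to keep the example
    # self-contained; normally adder.nim would simply live in your project.
    with open("adder.nim", "w") as f:
        f.write(nim_source)

    import nimporter              # installs an import hook that builds .nim modules
    from adder import add_ints    # assumes nimporter finds adder.nim in the cwd

    print(add_ints(2, 3))         # -> 5

The nice part is that the Python side is just an import; the less nice part is that you still need the Nim toolchain on every machine that does the first compile.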