EDIT: End to end, we don't pass enough information from the application to the OS to the hardware for ideal optimization. We don't start with enough information and we lose too much down the stack, especially around concurrency. I think that at the end of the day there is some atomic information the hardware and OS need in order to make optimizations across all code running on the system.
I see the approach taken here as an effort to retain needed information by integrating compilation and OS-level functionality. I think this is correct, but we will see a shift to OSes having their own bytecode as a translation layer beneath higher-level languages. I also think high-level languages lack some of the needed information in the first place...
I can't recall the details or find them searching online, but I recall reading back in around 2000 about an emulator that was actually faster than bare metal when emulating its own hardware. I think maybe it was a Sun Microsystems project, but I'm not sure. Does anyone else recall this?
It was almost definitely HP Dynamo. (Edit: if you combine ideas from HP Dynamo, SafeTSA JIT-optimized bytecode, and IBM's AS/400's TIMI/Technology Independent Machine Interface, you get a better version of the current Android Run Time for bytecode-distributed apps that compile ahead of time to native code and self-optimize at runtime based on low-overhead profiling.)
The really nice thing about Dynamo was that it was a relatively simple trace-based JIT compiler from native code to native code (plus a native-code interpreter for non-hotspots). This meant that it would automatically inline hotspots across DLLs and through C++ virtual method dispatches (with appropriate guards to jump back to interpreter mode if the virtual method implementation didn't match or the PLT entry got modified). They didn't have to do any special-casing of the interpreter to handle virtual method calls or cross-DLL calls; it's just a natural consequence of a trace-based JIT from native code to native code.
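To make the guard mechanism concrete, here's a minimal C sketch (not Dynamo's actual code; all names are invented): the trace is specialized for one concrete virtual-call target, checks that the vtable slot still holds that target, and side-exits back to the interpreter otherwise.

    #include <stdio.h>

    typedef void (*method_fn)(void);

    static void expected_impl(void) { puts("fast path: inlined method body"); }
    static void other_impl(void)    { puts("some other implementation"); }

    static method_fn vtable[1] = { expected_impl };

    static void interpreter_side_exit(void) {
        puts("guard failed: bail to interpreter");
        vtable[0]();                       /* dispatch the slow, general way */
    }

    static void optimized_trace(void) {
        if (vtable[0] != expected_impl) {  /* the guard */
            interpreter_side_exit();
            return;
        }
        expected_impl();                   /* would be statically inlined in a real trace */
    }

    int main(void) {
        optimized_trace();                 /* guard passes, fast path taken */
        vtable[0] = other_impl;            /* the virtual method gets rebound */
        optimized_trace();                 /* guard fails, side exit */
        return 0;
    }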
The only downsides of something like Dynamo are (1) a bit of complexity and space usage, (2) some startup overhead due to starting in interpretive mode, and (3) if your program is abnormal in not having a roughly Zipf distribution of CPU usage, the overhead is going to be higher.
Ever since I read about Michael Franz et al.'s SafeTSA, an SSA-based alternative to JVM bytecode that more quickly generated higher-performing native code, I've had a long-term back-burner idea to write a C compiler that generates native code in a particular way (functions are all compiled to arrays of pointers to straight-line extended basic blocks) that makes tracing easier, and that also stores a SafeTSA-like SSA bytecode along with the native code. That way, a Dynamo-like runtime wouldn't need an interpreter, and when it came time to generate an optimized trace, it could skip the first step of decompiling native code to an SSA form. (Also, the SSA would be a bit cleaner as input for an optimizer, as the compilation-decompilation round trip tends to make the SSA a bit harder to optimize, as shown by Franz's modification of Pizza/JikesRVM to run both SafeTSA and JVM bytecode.) Once you have your trace, you don't need on-stack replacement to get code in a tight loop to go into the optimized trace; you just swap one pointer to native code in the function's array of basic blocks. (All basic blocks are straight-line code, so the only way to loop is to jump back to the start of the same basic block via the array of basic block pointers.)
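For concreteness, here's a minimal C sketch of that compilation scheme (all names invented, and the "optimized trace" is trivial): every block is straight-line code that returns the index of the next block, all control flow goes back through the function's block table, and installing an optimized trace is a single pointer store, with no on-stack replacement needed.

    #include <stdio.h>

    typedef int (*block_fn)(int *state);   /* returns next block index, -1 = return */

    static int bb_entry(int *state) { *state = 0; return 1; }
    static int bb_loop(int *state)  { (*state)++; return *state < 10 ? 1 : 2; }
    static int bb_exit(int *state)  { (void)state; return -1; }

    static block_fn blocks[3] = { bb_entry, bb_loop, bb_exit };

    /* Every transfer goes through the table, so swapping blocks[i]
       redirects even a loop that is already running. */
    static void run(block_fn *table, int *state) {
        int next = 0;
        while (next >= 0)
            next = table[next](state);
    }

    /* Stand-in for a runtime-generated optimized trace of the hot loop. */
    static int bb_loop_optimized(int *state) { *state = 10; return 2; }

    int main(void) {
        int state;
        run(blocks, &state);
        printf("before trace install: %d\n", state);
        blocks[1] = bb_loop_optimized;     /* install the trace: one pointer store */
        run(blocks, &state);
        printf("after trace install:  %d\n", state);
        return 0;
    }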
The background for HP Dynamo is that during the Unix wars, there were a bunch of RISC system vendors vying for both the high-end workstation and server markets. Sun had SPARC, SGI had MIPS, DEC had Alpha AXP (and earlier, some MIPS DECStations) and HP had PA-RISC. The HP Dynamo research project wanted to show that emulation via dynamic recompilation could be fast, so to get an apples-to-apples comparison for emulation overhead, they wrote a PA-RISC emulator for PA-RISC.
This project grew into an insanely powerful tool. It's called DynamoRIO and is still under active development and use today. It's one of the coolest technologies I've ever worked with.
It's used by the winafl fuzzer to provide basic block coverage for black box binaries.
Yes, I have poked around DynamoRIO a few times. It's now geared toward dynamically modifying binaries for various purposes from code coverage to fuzzing to performance and memory profiling.
There doesn't appear to currently be a turn-key solution similar to the original Dynamo. DynamoRIO could be used to put a small conditional tracing stub at the start of every basic block at application startup time, and then do some binary rewriting, similar to the original Dynamo, but it doesn't seem there are downloadable binaries that currently do this.
This dynamic optimization would be much easier and lower overhead (but less general) with cooperation from the compiler.
Could such a compiler include the runtime for this in the binary as an option? That might make it a lot more likely to be used by people, because it is all nice and stand-alone.
Who would benefit from this most? Is the benefit so diffuse it would almost have to be an open-source project without funding? Or could there be parties that see enough of an advantage to fund this?
I guess you could try to get a certain instruction set vendor (probably RISC-V, maybe ARM or x86 based) to have this as a boost for their chips. I guess the "functions are pointers to blocks" compilation could benefit from hardware acceleration.
You could presumably statically link in the runtime. Also, without the dynamically-optimizing runtime, it would run just fine, just a bit slower than normal native code due to the extra indirection. Lots of indirect calls also increase the chances of address mispredictions due to tag collisions in the BTB (Branch Target Buffer).
Function calls/jumps through arrays of pointers are how virtual method calls/optimized virtual method tail calls are executed. Though, in this case, the table offsets would be held in a register instead of as immediate values embedded within the instruction. I'm not aware of any instruction set where they've decided it's worthwhile making instructions specifically to speed up C++ virtual member function dispatch, so I doubt they'd find optimizing this worthwhile.
Also, if things go according to plan, your hot path is a long straight run of code, with only occasional jumps through the table.
The GP only asked about CPU instructions for faster indirect jumps, but I should add that there are at least 4 things that would help a system designed for pervasive dynamic re-optimization of native code:
1. Two special registers (trace_position pointer and trace_limit pointer) for compact tracing of native code. If the position is less than the limit, for all backward branches, indirect jumps, and indirect function calls, the branch target is stored at the position pointer, and the position pointer is incremented. Both trace_position and trace_limit are initialized to zero at thread start, disabling tracing. When the profiling timer handler (presumably the SIGVTALRM handler on Linux) executes, it would use some heuristic to determine if tracing should start. If so, it would store the resumption instruction pointer to the start of a thread_local trace buffer, set trace_position to point to the second entry in the trace buffer, and set trace_limit to one after the end of the trace buffer. There is no need to implement a separate interrupt for when the trace buffer fills up; it just turns off tracing. Instead, re-optimizing the trace can be delayed until the next time the profiling timer handler is invoked.
2. A lighter-weight mechanism for profiling timers that can both be set up and handled without switching from user space to kernel space. Presumably it looks like a cycle counter register and a function pointer register that gets called when the counter hits zero. Either the size of the ABI's stack red zone would be hard-coded, or there would need to be another register for how much to decrement the stack pointer to jump over the red zone when going into the signal handler.
3. Hardware support for either unbiased reservoir sampling or a streaming N-most-frequent algorithm[0] to keep track of the instruction pointers of the instructions causing pipeline stalls (a software model of the sampling trick is sketched after this list). This helps static instruction scheduling in those spots where the CPU's re-order buffer isn't large enough to prevent stalls. (Lower-power processors/VLIWs typically don't execute out of order, so this would be especially useful there.) Reservoir sampling can be efficiently approximated using a linear feedback shift register PRNG logical-anded against a mask based on the most significant set bit in a counter. I'm not aware of efficient hardware approximations of a streaming N-most-frequent algorithm. One of the big problems with Itanium is that it relies on very good static instruction scheduling by the compiler, but that involves being good at guessing which memory reads are going to be cache misses. On most RISC processors, the number of source operands is less than the number of bytes per instruction, so you could actually encode which operand wasn't available in cases where, for instance, you're adding two registers that were both recently destinations of load instructions.
4. A probabilistic function call instruction. For RISC processors, the target address would be IP-relative, with the offset stored as an immediate value in the instruction. The probability that the call is taken would be encoded in the space usually used to indicate which registers are involved. This allows lightweight profiling by calling into a sampling stub that looks back at the function return address. Presumably some cost-estimation heuristic would be used to determine the probability embedded in the instruction, to make the sampling roughly weighted by cost.
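To make point 3 concrete, here is a small software model of that LFSR-plus-mask trick (all names are invented; real hardware would derive the mask with a priority encoder rather than a loop): the n-th stall event replaces the stored sample with probability about 2^-floor(log2(n)), a cheap stand-in for the exact 1/n acceptance rate of single-slot reservoir sampling.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t lfsr = 0xACE1u;   /* PRNG state; any nonzero seed works */
    static uint64_t events;           /* stall events seen so far */
    static uintptr_t sampled_ip;      /* the one-slot "reservoir" */

    static uint32_t lfsr_next(void) {
        /* 32-bit Galois LFSR, taps 32,22,2,1 (a maximal-length polynomial). */
        lfsr = (lfsr >> 1) ^ (-(lfsr & 1u) & 0x80200003u);
        return lfsr;
    }

    /* Conceptually invoked by hardware on every pipeline-stall event. */
    static void on_stall(uintptr_t ip) {
        events++;
        uint64_t n = events;
        uint32_t mask = 0;
        while (n >>= 1)               /* mask = 2^floor(log2(events)) - 1 */
            mask = (mask << 1) | 1u;
        if ((lfsr_next() & mask) == 0)
            sampled_ip = ip;          /* accept: replace the current sample */
    }

    int main(void) {
        for (uintptr_t ip = 0x1000; ip < 0x2000; ip += 4)
            on_stall(ip);             /* pretend each instruction stalled once */
        printf("sampled ip %#lx after %llu events\n",
               (unsigned long)sampled_ip, (unsigned long long)events);
        return 0;
    }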
Unfortunately, when these things actually succeed on the market, we only get IBM and Unisys mainframes, Android, watchOS and .NET Native/WinRT, which are always a kind of compromise of the whole idea.
I’m being somewhat facetious or jocular when I say this—and somewhat serious—but…I wonder:
Is that a reflection of the quality of the software implementation? Or is it a reflection of the hardware it’s trying to implement? Or perhaps it’s related to the hardware the emulator is running on?
Or did the emulator emulate the hardware while running on /that/ hardware? Did it pull efficiency gains out of seemingly thin air?
Maybe VMware? There was a paper about how, in some cases, emulation was faster than hardware virtualization support. This was way back when Intel's hardware virtualization support was new and VMware had already spent years optimizing software virtualization.
The name alludes to the Ship of Theseus, a thought experiment where, over the course of many repairs, every piece of a ship is eventually replaced. Given that, I found it interesting that this is applied to a novel greenfield OS effort, instead of a project to rewrite an existing OS in Rust piece-by-piece.
The goal appears to be to design an architecture that supports in-place updates of most components due to them having tightly constrained state management and inter-component dependencies.
Yes, after watching the presentation linked in another comment, it appears the building blocks of Theseus are small code units called "cells", which correspond 1:1 with Rust crates. These can be swapped out because the dynamic loader ensures there are no memory boundary overlaps.
The simple answer is that “identity” is pseudoscience and does not exist; it's a man-made delusion created by the human mind to simplify reality and deal with it more easily.
“identity” does not even apply to truly atomic particles, because there is no way to say that they aren't “different” in the next instant of time.
There are far too many “thought experiments” and “quæstions” that simply arise from trying to treat man-made delusions and distinctions such as this one as rigorous. “Is the glass half full or half empty?” — that is a non-distinction created by men, not by nature.
By that logic you would have to call half of all the useful phenomena that most people's lives consist of and depend on "man-made delusions". We still need to interact with our lives, and many of the abstractions help us do that.
BTW if you just replace the word "delusion" with "abstraction" you'll probably see how it makes a lot of sense. Every abstraction is a delusion by definition, is it not? Because the abstracted thing doesn't really exist in the same way as the source observation?
> By that logic you would have to call half of all the useful phenomena that most people's lives consist of and depend on "man-made delusions". We still need to interact with our lives, and many of the abstractions help us do that.
And they are, and they might help you, but trying to ask scientific or metaphysical quæstions about it is an exercise in futility.
> BTW if you just replace the word "delusion" with "abstraction" you'll probably see how it makes a lot of sense. Every abstraction is a delusion by definition, is it not? Because the abstracted thing doesn't really exist in the same way as the source observation?
The difference is of course that some abstractions have rigorous definitions rather than purely based on human intuition, and then attempting to reason about them rigorously, quickly leading to reductions to the absurd such as here.
The quæstion raised in The Ship of Theseus assumes that there is even a meaningful, rigorous distinction between “same identity” and “different identity” — I reject that, and attempting to reason about this with actual logic and empiricism is ridiculous.
It is as foolish as trying to have a scientific investigation about who is beautiful and who isn't.
You seem to assume that any process of reasoning or thinking has to be scientific to be useful. A very dubious proposition.
Given your example, thinking about who is beautiful and who isn't (not a "scientific study", just thinking) would be pointless as well, and in reality, in the practical, objective reality of a lot of people, it isn't.
> quickly leading to reductions to the absurd such as here.
That is the whole point of the original thought experiment: it shows exactly that.
But it leads there not by showing that the question itself is wrong, but by showing that if you try to apply scientific thinking to everything, you end up with absurdity. There are many areas of life where intuition is a much more suitable, practical, and result-rich way of thinking.
> You seem to assume that any process of reasoning or thinking has to be scientific to be useful. A very dubious proposition.
No, in fact, I clearly said that they were useful but not scientific.
My problem with The Ship of Theseus is that it prætends to be scientific, whereas it is merely a futile quibble of semantics.
> Given your example, thinking about who is beautiful and who isn't (not a "scientific study", just thinking) would be pointless as well, and in reality, in the practical, objective reality of a lot of people, it isn't.
Indeed it isn't. Now imagine the existence of some thought experiment by a philosopher who tries to use deductive logic to decide what is and isn't beautiful absent any rigorous definition of beauty and thus indeed ends up stuck.
I would indeed call that a very futile exercise, and that is what I called The Ship of Theseus.
> My problem with The Ship of Theseus is that it prætends to be scientific, whereas it is merely a futile quibble of semantics.
It doesn't pretend to be scientific (it does not purport to offer or relate to testable empirical hypotheses); it is a philosophical thought exercise illustrating that the concept of identity of a composite of mutable composition (pretty much every concrete thing in the real world) is arbitrary.
The Ship of Theseus is quite an old thought exercise. At that time, science and philosophy were not as separated as they are today. Actually, after reading the nice comic book Logicomix [0], I learned that the philosophical thought exercises of Wittgenstein, Russell and others, in trying to rationalize the world at the beginning of the 20th century, are actually what led to a fundamental axiomatic redefinition of mathematics itself. So it seems a far stretch to call philosophical thought experiments not scientific.
I agree that it is not scientific in the modern sense, after Karl Popper introduced the concept of falsifiability in 1935 [1], shortly after Hilbert advocated for rigorous proofs in mathematics in 1917 [2]. Although at that point it is mostly a matter of vocabulary; thought experiments seem necessary for the advance of science.
> [My problem with] The Ship of Theseus is that it prætends to be scientific
How? It doesn't seem to me that most people think of it that way, but maybe I am not aware of some things. How can it even pretend to be scientific when even the basic category in which it's placed is "a thought experiment"?
> And they are, and they might help you, but trying to ask scientific or metaphysical quæstions about it is an exercise in futility.
What non-futile metaphysical questions are left, then? Is metaphysics completely futile? If so, is philosophy of science itself pseudoscience?
As far as I understand, most logical positivists would indeed reject philosophy of science. On the other hand, philosophy of science mostly put an end to logical positivism — and showed that we do need metaphysics after all. This paradigm shift also brought useful concepts such as falsifiability.
Of course, we are all free to choose whatever philosophy we want. I just personally find logical positivism and its rejection of metaphysics unconvincing.
You guys realize you're simply arguing about preferred definitions of the word, right?
Here, let me join the party. I disagree with both of you, because mathematically it should be the case that applying "identity" to x should yield back x for all x.
I would say it is worth having a conversation about definitions of words. Having your definitions straight is a huge chunk of being able to think properly, and thus to act in the world in the way that we truly want.
I agree. But this isn't what this is. This is a conversation AROUND definitions, not ABOUT them.
In fact, it is surprising how many arguments (online or otherwise) degrade into "according to my preferred definition of X" once you nitpick them, revealing that there is no actual argument behind the conversation beyond the "I prefer this definition" one.
> It is as foolish as trying to have a scientific investigation about who is beautiful and who isn't.
We actually do have these investigations, and there is a scientific quantification of what is and isn't beautiful. Symmetry is a significant factor, but there are others - many others. Nancy Etcoff's research delves into this - she even wrote a book about it - Survival of the Prettiest: The Science of Beauty.
Being able to refer to my car, or ship, or other vessel, as an identified single object, regardless of what parts have been changed - yes, I would say it is a very useful thing for living with it and operating it.
> easy to copy and to remember
That's a very important, useful and valuable property of an abstraction, yes. Though certainly not the only important property.
Or do you want one that takes you hours to understand and that you have to remind yourself of it and re-learn every week?
This is also an argument that is worth considering.
Many such distinctions that cannot be made rigorous stick in various cultures but never existed in others; they exist simply because a man copies his fellow man, not because they are useful in achieving any goals.
Though, as I said, they exist to simplify reality. I assume the human brain works the way it does simply to conserve energy and accept an inaccurate though usually useful-enough solution. The problem is when such fuzzy logic is mixed with actual rigorous logic and attempts are made to analyse the former with the latter — that I find a futile waste of time.
There is a very good reason that no clear answer has been found to the quæstion raised in The Ship of Theseus even after millennia of contemplation — there is none, and the quæstion is bereft of sense.
A fun thing to do is to compare different cultures and see how very popular concepts that supposedly usefully describe common reality in one culture are completely missing from some others with no ill effect.
> The simple answer is that “identity” is pseudoscience and does not exist; it's a man-made delusion created by the human mind to simplify reality and deal with it more easily.
I agree, identity doesn't exist really, not in the physical, natural sense of existence...
Which is what the "silly thought experiment" is pointing at. He wouldn't have taken it that far, but the philosopher was trying to make a point in that direction.
Just because you reached such a level of enlightenment without silly thought experiments is no reason to be dismissive of the ladder that takes others to enlightenment. /s
That's a fair point, but I have rarely seen it interpreted as such.
I have usually seen extensive arguments and debates over whether the ship retains the same identity or not, and rarely do I see the opinion “The quæstion does not make sense, and the concept of ‘identity’ is one of men, not nature.”
You can see in this comment tree already that many are defending the concept rather than admitting its incorrect nature.
It's funny that your comment is fading out due to downvotes, as if it's disappearing from existence.
It reminded me of Douglas Adams' quote from Hitchhiker's Guide to the Galaxy: "There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened."
One thing I have found interesting about this concept: let's say I take all the parts and put them back together. Now I have 2 things. Which one is the 'thing'? Clearly both are, and are not.
It is a weird quirk of some logic items: they can be both true and false at the same time. 'This statement is false', which is both true and false at the same time, is one such example. However, sometimes it is clear our test is lacking. With the 'ship' you could say that as soon as you switched out anything, it was no longer the original thing. What happened to the original thing? Well, it went to the same place as when I created it. Which creates its own set of questions.
I made no claim as to the entirety of “philosophy”.
I made a claim as to dealing in identities and the assumption that they can actually be reasoned about.
They have no definition, and it's merely fuzzy human gut feeling as to what is and isn't a different identity; different men can't always see eye to eye on it either.
One thing I remember well is that as a child I watched a television series that had a magical object that turned everything it touched into gold. It eventually landed on a ship, and the ship turned to gold, and sank.
It defied my intuition: from where I sat, only the plank on which it fell should have turned to gold, not the entire ship, for if the entire ship did, apparently its power could transfer simply to objects pressed close enough to that which it touched, and that was inconsistent with its past portrayal.
Clearly, however, the ship as an entire identity satisfied the intuition of the script writers, but my intuition felt that the plank was a single identity, not the ship.
There are no definitions and no rigor here — it is simply every man's individual intuition.
You are using identity yourself all the time. You just mentioned philosophy, this TV series, gold, that ship and that plank, and on close inspection there would probably be a thousand more cases in your last post where you relied on identity.
Identity is one of the most fundamental concepts, without it you would have a hard time splitting the universe into more than one distinct piece in order to reference and talk about them. You can only refer to this TV series because it is an entity with identity, equal to itself and not equal to any other thing in the universe.
And yes, it is a very hard problem; as Theseus illustrates, one cannot simply define identity in the general case using the atoms or some other kind of parts something is made of. But that just shows the difficulty of the problem, not that identity is a useless idea.
I am not a philosopher but it seems to me that without a notion of identity or equality you can not have more than one distinct thing. Well, probably someone has spent his entire life thinking about that, so maybe there are alternatives to identity and equality?
Quine: "no entity without identity" & "to be is to be the value of a bound variable". IMHO the neatest summation of a rigorous approach to the interface between mathmatical logic and ontology. That's why "Quine is our hero", as one of my lecturers back in the 80s, Jeremy Butterfield, used to say.
> You are using identity yourself all the time. You just mentioned philosophy, this TV series, gold, that ship and that plank, and on close inspection there would probably be a thousand more cases in your last post where you relied on identity.
And none of those are scientific inquiries where I proffer rigorous methodology.
> Identity is one of the most fundamental concepts, without it you would have a hard time splitting the universe into more than one distinct piece in order to reference and talk about them. You can only refer to this TV series because it is an entity with identity, equal to itself and not equal to any other thing in the universe.
Yet various exact scientific inquiries work very well without assuming its existence or that there is a difference between same and different identity.
> And yes, it is a very hard problem; as Theseus illustrates, one cannot simply define identity in the general case using the atoms or some other kind of parts something is made of. But that just shows the difficulty of the problem, not that identity is a useless idea.
It is a very simple problem. The answer is: identity is a pseudoscientific concept that is “not even wrong”.
It is so vague, so bereft of any definition, that the claim that two references share the same identity is “not even wrong”.
> I am not a philosopher but it seems to me that without a notion of identity or equality you can not have more than one distinct thing. Well, probably someone has spent his entire life thinking about that, so maybe there are alternatives to identity and equality?
And that is why the various models of the exact sciences typically do not require that such arbitrary lines be drawn around what is and isn't a different “thing”.
> And none of those are scientific inquiries where I proffer rigorous methodology.
But it obviously works: I did not confuse the ship and the plank, or you and the TV series, which tells us that identity is a useful concept we can use to navigate the world. The fact that it may be hard to formalize the concept and deal with all the edge cases does not change that.
> Yet various exact scientific inquiries work very well without assuming its existence or that there is a difference between same and different identity.
Can you provide any example?
> It is a very simple problem. The answer is: identity is a pseudoscientific concept that is “not even wrong”.
> It is so vague, so bereft of any definition, that the claim that two references share the same identity is “not even wrong”.
I am pretty sure people doing homotopy type theory [1] would disagree. And I am just picking this because it has gotten quite a bit of attention in recent years but people have put a lot of work into thinking about identity and equality over the centuries.
> And that is why the various models of the exact sciences typically do not require that such arbitrary lines be drawn around what is and isn't a different “thing”.
Being man-made doesn't make identity any less real. As a particular example, there's a big semantic difference in Python between `foo is bar` and `foo == bar`, with the former based on the interpreter's memory allocation and the latter allowing a custom class to decide its own answer. And the former is actually really implementation-dependent, with the following being True in CPython for any integer in the range [-5, 256]:
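    # illustrative reconstruction: CPython interns small integers, so
    # independently computed values in [-5, 256] are the very same object
    n = 256                  # try any integer in [-5, 256]
    a = int(str(n))          # built at runtime to defeat constant folding
    b = int(str(n))
    print(a is b)            # True in CPython

    m = 257                  # outside the small-int cache...
    print(int(str(m)) is int(str(m)))   # ...typically False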
I did a ctrl+F for the word "redox" here, since that's already a modern experimental OS written in Rust that's existed for some time, and I was surprised not to see it mentioned at all; I figured there would be a section for modern-OS neophytes like me comparing and contrasting the two. Anyone have any insight into how this project differs from Redox?
Redox is a unix style operating system with a unix style design and ABI.
This operating system shares none of those goals; it seems to focus on a particular challenge related to the abstraction between subsystems of the operating system, around a problem they call "state spill". The goal appears to be to design an architecture that supports in-place updates of most components due to them having tightly constrained state management and inter-component dependencies.
> The goal appears to be to design an architecture that supports in-place updates of most components due to them having tightly constrained state management and inter-component dependencies.
Strongly reminds me of Erlang-style hot reload.
Have any OS projects based on that architecture ever gotten off the ground?
Yeah, a totally valid approach, of which there have been a great many implementation attempts, far too many to list here.
Precisely none of them in 3-5 decades has managed to deliver on the promise of microkernels, prompting the question:
"Were huge microkernel advantages actually greatly oversold, is posix a massively debilitating factor that prevents microkernel based systems from delivering on their raison d'etre or was every single one of those implementations a poor one? [1]"
Eh, stuff like older vxworks plays fast and loose with the term microkernel in a way that wouldn't fly today. I don't think they even tried to claim that it is one anymore, until their most recent release, where they actually added (optional) process isolation.
A microkernel as a CPU architecture abstraction layer: such systems are only accidentally microkernels and not OSes at all, as I understand them. Which of all those wonderful lists of microkernel benefits do they deliver on?
The POSIX layer of QNX is almost exclusively a (collection of) optional services or wrappers in userspace. In what way is it a failure or a poor implementation of a microkernel?
It's an inferior implementation of POSIX that does not display all those wonderful microkernel OS benefits. If it did, we'd hardly be using competing OSes that are insecure, unstable, bug-ridden etc. etc. I've never heard of a webserver running on QNX with an enhanced security benefit as a result; have you?
I work at QNX. Of course I've heard of webservers running on QNX with enhanced security as a result. I suspect the lack of widespread use of QNX is because it's costly, commercial, and not marketed toward personal or cloud computing, not due to technical issues. Marketing trumps technical merit when it comes to purchasing decisions.
There are plenty of cases in the world where security trumps cost by a massive, massive amount. Not sure why you're talking about personal or cloud computing; it's probably not interesting there.
How many licenses have you sold for this? How secure is it? Got any kind of metrics to share there?
In the past, every single time I've probed claims about QNX (claims not made by QNX themselves at all, I hasten to add) as the example of delivering on the promise of microkernel OSes, they have turned out to be unsubstantiated.
Needless to say, if you can substantiate them that would be utterly fantastic! Also a massive, massive marketing opportunity, I would have thought. Link us up!
Providing _a posix filesystem_ as a kernel abstraction is what I said. Which is not very standard for microkernels, and makes it (redox) pretty unix like at the kernel level.
The overall point being that both things are true: microkernels can be unix-likes, and Redox in particular, at the kernel level, leans into being a unix-like.
> So if Microsoft bothers to certify WSL2 then Windows is now UNIX?!?
Of course, the WSL2 layer would be UNIX. There is more to Windows than that, so calling Windows exclusively unix would be wrong (in the same way that running Windows under a VM, or Wine, on a unix system doesn't make that system Windows).
Redox is cited in their OSDI paper. Basically they have a bunch of new ideas for writing an OS that are implemented here; Redox is only similar because they're both in Rust.
I've searched for redox on Google and this is the first time I've seen this layout (https://imgur.com/a/tVoKxBE) in Google search pages. Anyone know if they're planning a redesign?
One of the most interesting projects I've seen on here in a while. Thinking of the massive efforts of the thousands of developers who got Linux to where it is today, it's a daunting undertaking, but definitely one that could be worthwhile, though I think it will depend on rallying lots and lots of skilled engineers.
In the early days of Linux, getting something like your wifi to work could be a hassle. I wonder if enough has changed in the driver space that we can quickly power through some of those challenges today.
I do wonder about the single address space; I've seen a few experimental systems follow this strategy lately, and it feels reliant on hardware less defective than what we presently have in commodity consumer GUI systems.
IBM i (OS/400) has had a single address space, tagged pointers, and a number of other "novel" things in the loop for eons. Many people interact with it daily, indirectly, in most of your big-box retail stores (think stores like Costco, Lowes, etc.).
As far as ARM and SPARC platforms are concerned, memory tagging is happening and being adopted by Apple's and Google's platforms; as for Intel, it was yet again another of their screwups.
I love the idea of using Rust for this kind of thing.
Is there any chance that specific drivers or modules for the Linux kernel could be written in Rust? Has there been any interesting research or discussion on the topic?
I finally finished reading the three relevant papers. This is absolutely brilliant. This isn't my area of expertise, but to me this looks like the future of OS design. I would love to use this as my base OS, if/when it can be brought to the point of being able to run virtualized linux applications.
The concept of state spill also explains why OOP code bases are often a nightmare of complexity, despite pervasive modularization and encapsulation.
I was wondering how the project relates to the name - i.e. if there's some single holdover remnant or concept about it or if it's a hodgepodge of parts or something. Seems like it might be due to the modular nature of the kernel being spread across several Rust Crates. Clever reference!
From the "High Level Overview" doc (not the readme) :
> The Theseus kernel is composed of many small entities, each contained within a single Rust crate, and built all together as a cargo virtual workspace.
I think it's because they want to enable hot-swapping elements of the kernel, so you could have replaced every part of the kernel without ever having rebooted.
I wonder if writing things in Rust just to put that in the headline is becoming a viable strategy for software adoption. There is often very little reason why programming languages matters to end users, but it strongly correlates with visibility in some communities.
That, and I think it's also fair to acknowledge that language choice can be a serious barrier to entry for contributing to a project or using it in tandem with other technologies. Also, if I'm looking to learn from real-world projects I would, personally, prefer to know ahead of time.
"Hi, my project does X, I wrote it in Y," seems a reasonable way to introduce something.
Looking at the maintainer’s comments on the open pull requests, I’m not surprised there hasn’t been a lot of progress on this project. Major attitude detected.
That's the original and etymological definition of the word. We are used to progress, so we see modern things as better, but modern things are just new or trendy, not necessarily better. Programming everything in JavaScript is quite modern, but is it progress? We will see in the long run.
imo, "modern" would mean it removes or replaces features and characteristics which were exclusive to or popular in a previous time period but are not essential or can be replaced with "modern" alternatives.
https://www.youtube.com/watch?v=j4ZPZoPNjkw