1) How does a unikernel differ from a library OS? Is a unikernel a kind of library OS?
2) Is the specialization the important aspect? I see OSv referred to as a unikernel but it does no specialization, it supports almost the entire Linux ABI. I'd argue it is a library OS, though.
1) According to the source article, a unikernel is a kind of library OS, and I think you hit the right spot with
2) specialization. AFAIK the concept of a library OS is from a time when single-purpose computing was not on the agenda. We could argue a lot about the different ways of multiplexing and isolating workloads, and about whether privilege levels were meant to solve the same problem as hardware virtualization, but the spirit of unikernels (and library OSes) is to use language technology and program against an API that specializes to whatever ABI your target happens to be.
The name unikernel is used more in the context of specialized applications, in the sense of what a UNIX process is supposed to be (do one thing), but I'd say the important aspect is still the use of language technology, which then enables specialized code.
I'm not too familiar with OSv, but I think the goal is to provide a runtime for off-the-shelf Java applications. It should be possible to fuse an application with just the needed parts of libc and drivers, though.
Yes, I think 1. is true: it is a library OS plus your application, but very much in the library OS model.
OSv is a bit unusual architecturally: it is almost a very small OS that runs applications in a single address space. It is perhaps a hybrid architecture.
I've described it by saying unikernels are constructed from library operating systems, i.e., the unikernel is the artefact, but the libOS is what you (as a dev) actually work with.
The biggest value is that you don't have a run-time linking dependency, only a compile-time one. This means that you don't have to concern yourself with a lot of the versioning problems that plague C++ application deployment, simply because your dependency is compiled into the binary.
Just to add to this point, the difference is that Rust can only guarantee this for the standard library. I can similarly write a library with safe interfaces that can be (ab)used to cause UB and there's little that the Rust team can do. This is different from other "safe" languages.
This is why it's so important to establish what responsibilities and expectations library developers have to uphold the safety guarantees that everyone else relies on. It only takes one bad library to destroy the safety guarantees that everyone who transitively uses that library relies on.
This is not all that different from Java (or Python, etc.), where it is quite easy to hide a call to a native function behind a seemingly-safe interface. The real difference is that native methods in Java must be written in a different language (C), while Rust supports both modes in the same language. (Edit: Or, if you prefer, two different but very closely related languages.)
I would argue, at any rate, that this sort of safe/unsafe boundary is still useful for the purpose of auditing code. Conceptually, memory bugs are interactions between two points in the program: e.g. one location deallocates a pointer, then another tries to dereference it. With Rust's implementation of unsafe, you are guaranteed that any bad interactions must have at least one endpoint in an unsafe block. You still can't completely ignore the safe code, because unsafe code can reach arbitrarily far out of its box (so to speak), but in general this constraint does help significantly in limiting the amount of code that needs to be audited.
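As a contrived illustration of both points (this is not code from any real crate): a function can expose a perfectly "safe" signature whose unsafe body breaks its own precondition, and the resulting UB then has at least one endpoint inside the unsafe block, which is where an audit would start.

    // A hypothetical "safe" API whose internal contract is broken.
    // Nothing at the call site is marked unsafe, yet calling it is UB.
    fn first_byte(v: &Vec<u8>) -> u8 {
        unsafe {
            // BUG: never checks that the vector is non-empty, so this
            // dereferences a dangling pointer when `v` is empty.
            *v.as_ptr()
        }
    }

    fn main() {
        let empty: Vec<u8> = Vec::new();
        // Looks like ordinary safe code, but triggers undefined behaviour.
        // The other "endpoint" of the bug is the unsafe block above,
        // which is the code an auditor would inspect first.
        println!("{}", first_byte(&empty));
    }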
Agreed. We had a segfault in Servo due to upgrading the compiler (and some internal representations changing). I wasn't able to track it down myself (unfamiliarity with the code), but someone else was able to find its origin and fix it without much trouble because of `unsafe`. (That aside, we very rarely have segfaults in Servo, and Servo's huge.)
Here's[0] an interesting idea for forcing crates using unsafe code to be handled specially, while allowing some "blessed" crates through without the special handling.
That seems like an absolutely great idea. I want it already. I wonder why it wasn't taken further, given that it's already six months old.
It is a different notion of "blessed" than the original proposal and, IMHO, a much better one. Steve's proposal is quite dubious, I guess. The problem is the low standard for the word "blessed" there: in his proposal the badge has no technical meaning, but a big social impact. A crate doesn't have to be superior in a technical sense to get this badge, it just has to be "famous", and stuff gets "famous" for various reasons. That's really bad and will get worse when Rust/Cargo become more popular.
But that "safe/unsafe" isn't a matter of opinion anymore. If a library is known to be unsafe through the "safe" interface: it's a bug, and a crate shouldn't have this badge from a moment the bug has been discovered and until the bug is fixed again (or even longer, if there have been 5 such bugs over the last 2 weeks, even though now they seem to be fixed). It somewhat serves the original purpose (I assume), because it still means that a crate that isn't used heavily enough ("was downloaded N times over the last month") cannot be "blessed" — we don't have all necessary information to mark it as "blessed" yet, so it will help to set "junk crates" aside.
The badge is worth nothing if a library is made by GitHub but is known to be buggy and to leak memory. But it is worth something if a "github API" crate made by John Doe is used by hundreds of people and hasn't had a single memory leak for quite a while.
Oh, I didn't mean any offense. I'm just commenting on the idea: the original wording of the proposal seems dangerous, but both reem's and yazaddaruvala's ideas definitely have potential.
So you did a good job by starting that discussion. I hope it will have results.
I wonder if this shouldn't be on the "rust" side, rather than on the "crates.io"-side. Considering[1], I think I'd prefer either/or:
extern unsafe crate phrases; // It's all UNSAFE!
Or:
extern crate phrases; // It's mostly safe
use unsafe phrases::english; // But not English
The idea being that either phrases wouldn't be imported at all (or would give an error), or everything in phrases except "english" would be imported, and english would only be imported if qualified with "unsafe".
Either way... I can see this going the way of try/catch/throws in Java, where the usefulness diminishes as lazy programmers (we're all lazy) end up polluting everything with unsafe (just like "throws Exception").
It can be used in a freestanding (no_std) environment such as a kernel. It uses unsafe code but provides a safe interface. The primary technique is to embed the type that is iterated over in a larger struct which contains the links. This way I can give out references to the inner type without fear of invalidating the iterator.
How is it possible to write a no-allocation intrusive linked list without move constructors? When you move one of the nodes (or the sentinel, if you use one), it would invalidate the links in the other nodes.
The list takes an OwningPointer to the object to be inserted. One of the requirements of this trait (which is unsafe to implement) is that "the object cannot be moved while in the `LinkedList`". So a Box would work, or a mutable reference would as well (the library implements the trait for these types already). The list itself doesn't do the allocation, though.
> The primary technique is to embed the type that is iterated over in a larger struct which contains the links. This way I can give out references to the inner type without fear of invalidating the iterator.
I see. I was too fixated on embedding a small struct with the links within the larger struct (in the style of the Linux kernel's "struct list_head"), and didn't think of doing the opposite.
It probably makes being in several intrusive structs at the same time much more complicated, but I think it's possible to do without breaking anything.
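For anyone curious what that layout looks like, here is a minimal sketch (made-up names, not the actual library, and with removal/Drop omitted): the links wrap the value, the opposite of embedding a `list_head` inside it, and iteration only ever hands out references to the inner value, so the links stay untouched.

    use std::ptr;

    // The node owns the links *and* the value, rather than a link struct
    // being embedded inside the user's type.
    struct Node<T> {
        next: *mut Node<T>,
        prev: *mut Node<T>,
        value: T,
    }

    struct List<T> {
        head: *mut Node<T>,
    }

    impl<T> List<T> {
        fn new() -> Self {
            List { head: ptr::null_mut() }
        }

        // Takes a boxed node so it has a stable address, standing in for
        // the "cannot be moved while in the list" requirement above.
        fn push_front(&mut self, mut node: Box<Node<T>>) {
            node.prev = ptr::null_mut();
            node.next = self.head;
            let raw = Box::into_raw(node);
            if !self.head.is_null() {
                unsafe { (*self.head).prev = raw };
            }
            self.head = raw;
        }

        // Callers only ever see &T, so they cannot disturb the links
        // while iterating.
        fn for_each(&self, mut f: impl FnMut(&T)) {
            let mut cur = self.head;
            while !cur.is_null() {
                unsafe {
                    f(&(*cur).value);
                    cur = (*cur).next;
                }
            }
        }
    }

    fn main() {
        let mut list: List<i32> = List::new();
        list.push_front(Box::new(Node { next: ptr::null_mut(), prev: ptr::null_mut(), value: 1 }));
        list.push_front(Box::new(Node { next: ptr::null_mut(), prev: ptr::null_mut(), value: 2 }));
        list.for_each(|v| println!("{}", v)); // prints 2, then 1
    }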
I think that the requirement is that anyone who releases a program that was linked with my library must also release it in a form that allows it to be relinked against a modified version of the library. I'm not sure how that applies to Rust's linkage model.
You can look at OSkit as one attempt to do this. I think the overwhelming reason it's not common is that no one wants to build an OS; if they do, it's for the educational experience, not practical use. This means hobbyists aren't interested in just using a framework.
I am not sure what exactly is meant by "Ruby on Rails of kernel". Probably "an opinionated framework to build your own kernel". In this case OSkit is probably the best answer to the question. Maybe look at micro- and exokernels.
I think another problem is that an OS is not analogous to a website; it's more analogous to a web framework. And it's not like there are any framework-building frameworks floating around.
The SYN packet can contain data, but the spec requires that it not be passed down to the application until the three-way handshake is complete (so a SYN-with-data from a spoofed source address won't elicit a response).
The TCP Fast Open proposal gets around this by using a cookie, so that the first connection requires a normal three-way-handshake, but subsequent connections between the same client and host can use an expedited handshake that eliminates a round trip.
Yeah but in practice no browser does this. There is no system call on Linux or Windows to push data as part of the SYN packet. You would have to craft TCP/IP packets and their headers with a raw socket...
Linux does support this for "TCP Fast Open" - the system call used is sendto() or sendmsg() with the MSG_FASTOPEN flag set, in place of the usual connect().
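Roughly what the client side looks like via the libc crate (Linux only; the 192.0.2.1:80 address and the payload are placeholders, error handling is omitted, and the net.ipv4.tcp_fastopen sysctl must allow client TFO):

    // Cargo.toml: libc = "0.2"
    use std::mem;

    fn main() {
        unsafe {
            // Ordinary TCP socket; note there is no connect() call.
            let fd = libc::socket(libc::AF_INET, libc::SOCK_STREAM, 0);

            let addr = libc::sockaddr_in {
                sin_family: libc::AF_INET as libc::sa_family_t,
                sin_port: 80u16.to_be(),
                sin_addr: libc::in_addr {
                    // 192.0.2.1 (a documentation address), network byte order.
                    s_addr: u32::from_be_bytes([192, 0, 2, 1]).to_be(),
                },
                sin_zero: [0; 8],
            };

            let payload = b"GET / HTTP/1.0\r\n\r\n";

            // MSG_FASTOPEN makes sendto() perform the connect; if a TFO
            // cookie for this peer is already cached, the payload rides in
            // the SYN, otherwise the kernel falls back to a normal handshake.
            let n = libc::sendto(
                fd,
                payload.as_ptr() as *const libc::c_void,
                payload.len(),
                libc::MSG_FASTOPEN,
                &addr as *const libc::sockaddr_in as *const libc::sockaddr,
                mem::size_of::<libc::sockaddr_in>() as libc::socklen_t,
            );
            println!("sendto returned {}", n);
            libc::close(fd);
        }
    }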
A lot of the issues surrounding M:N threads are a result of poor operating system support. The central problem is that a syscall blocks a kernel thread even when there are more user level threads to run.
If operating systems supported something like scheduler activations (a 20-year-old technique), then this becomes less of a problem. The gist of it is that every time the kernel thread would block, or gets scheduled, instead of returning to where it was executing, the kernel upcalls into a user-level scheduler, which can then choose which user threads to schedule. It's a shame that this technique isn't more common.
Because if I use M:N threading then logically one of my user threads blocked and a different one should run during my timeslice. The kernel however is unaware of how I use the kernel thread and will block, believing that I cannot proceed.
1:1 threading lacks this problem, but operations such as creating or destroying threads require syscalls and are therefore relatively expensive.
Heads-up hold'em has a Nash equilibrium. Therefore there is at least one mixed strategy ("I do X with probability P in situation Y") which cannot have negative expected value against any other strategy. In this sense, there is an optimal strategy. It doesn't mean that it achieves the maximum expected value against a particular opponent, but no opponent can win against it (which is largely the goal of a casino).
Opponent modeling is purely advantageous, but not necessary
I found this article by Bryce Paradis that elaborates on using a Nash equilibrium for optimal play. He is known for bringing advanced mathematics to the game of limit poker and winning a small fortune because of it. Here is his take:
*
Q: What’s a Nash Equilibrium or “game theory optimal” strategy?
– Failed Math, Port Perry, Ontario
A: An equilibrium strategy is one that wins the most money possible against a perfect opponent (this does not mean an opponent who can see your cards, but one who always knows your range whenever you take an action and makes the best choice against that range). In the game “rock, paper, scissors,” the equilibrium strategy is to randomly choose between the three options, choosing each one a third of the time in the long run. Finding equilibriums in poker is much more complicated, but the concept can be useful when you’re playing lots of hands against tough opponents. For example, if your opponent bets half the pot on the river after a particular series of actions, the pot is offering him 2-1 on his bluff. If he were a perfect player, the right thing to do would be to call his bet a third of the time, since if you called more he’d exploit you by never bluffing and if you called less he’d exploit you by always bluffing. In reality, of course, our opponents are never perfect, and so the idea of playing an equilibrium strategy at the table is usually pretty academic.
*
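To see the rock-paper-scissors claim concretely (a toy check of my own, not from the article): the uniform one-third mix has an expected value of exactly zero against any opponent distribution, so it can't be beaten, but it also doesn't exploit a biased opponent.

    fn main() {
        // Payoff for the row player; rows/cols are rock, paper, scissors.
        // 1.0 = win, 0.0 = tie, -1.0 = loss.
        let payoff = [
            [0.0, -1.0, 1.0],
            [1.0, 0.0, -1.0],
            [-1.0, 1.0, 0.0],
        ];

        // The equilibrium mix: each option a third of the time.
        let equilibrium = [1.0 / 3.0; 3];

        // A few arbitrary opponent strategies.
        let opponents = [
            [1.0, 0.0, 0.0], // always rock
            [0.5, 0.5, 0.0], // never scissors
            [0.2, 0.3, 0.5], // some skew
        ];

        for opp in &opponents {
            let mut ev = 0.0;
            for i in 0..3 {
                for j in 0..3 {
                    ev += equilibrium[i] * opp[j] * payoff[i][j];
                }
            }
            // Always prints (approximately) 0.
            println!("EV vs {:?} = {:.3}", opp, ev);
        }
    }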
You can certainly use Nash equilibrium when you have figured out the strategies your opponents are using. This is what Bryce Paradis is talking about. It can have practical value when playing Heads Up.
But if we are talking game theory and "solving poker", there is no single winning strategy that works against all other strategies, and you can't calculate a single Nash equilibrium that would be optimal in an actual game against specific strategies.
Your opponent has 1326 different hand combinations. Assume he is playing the Nash equilibrium strategy. Make the perfect plays based on this. If he plays worse than the Nash equilibrium strategy, you beat him. If he plays perfectly, you tie.
Assuming your opponent plays perfectly works in chess. Chess programs are stronger than the best humans now.
>Assume he is playing the Nash equilibrium strategy.
You can't make that assumption, because you don't know what the strategy is. You can calculate a Nash equilibrium only if you know the strategy your opponent is using. In full no-limit hold'em there is no single winning strategy, so you don't know the strategy your opponents are using.
No, you don't need to know what the opponent's strategy is. You calculate based on the worst case (i.e., the opponent playing perfectly), and the worst case is that you break even. There is no way to maximize profit, but you can play unexploitably, i.e., at a minimum not lose, and possibly win if the opponent doesn't play perfectly.
In poker, an optimal strategy is not a winning strategy.
An optimal strategy's goal is to lose the least against any arbitrary strategy. It is a strategy that is impossible to exploit in poker, because poker has antes.
Poker players must seek a maximal strategy. A maximal strategy's goal is to win as much as possible against a specific strategy.
Yes, I tend to agree here that an "optimal" strategy could be defined as making the fewest mistakes. While a poker player also needs to minimize mistakes, sometimes one can make a "mistake" and get more value from it, in order to increase expected value in future betting rounds or future hands.
That's just not accurate. An optimal strategy beats everything but itself, against which it ties. Are you talking about rake? Because antes are included in the poker strategy.
For example, you would raise more often when the antes are higher, regardless of the other player's strategy.
I think the point he is trying to make is that if your goal is to maximise your profits then it is not always optimal to play the Nash equilibrium strategy. That is true.
The optimal strategy beats everything but an equally good strategy, and ties against itself, but it doesn't necessarily maximise profits against other, bad strategies.
If you are able to identify flaws in your opponent's strategy then you can deviate from game-theory-optimal play to increase your profits against that perceived strategy. Doing so comes at the cost of you yourself no longer playing the best strategy, though.
There exist strategies that give higher yields against certain unbalanced strategies than the game-theory-optimal strategy (or strategies; for all we know there are several optimal strategies in limit hold'em).
For instance - in limit poker if your opponent will never raise, call every street, but not call the river with anything less than a pair, regardless of what you do, then bluffing every river is a more winning strategy than the game theory optimal strategy. The game theory optimal strategy would include times when you do not bet the river, for balance, but knowledge of your opponent's flawed strategy would tell you that betting 100% of the time has a higher yield.
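A toy version of that arithmetic, with made-up numbers (4 bets in the pot on the river, a 1-bet bluff, and an opponent who has a calling hand half the time and otherwise folds, as in the scenario above):

    fn main() {
        // Made-up toy numbers for the scenario described above.
        let pot = 4.0;        // bets already in the pot on the river
        let bet = 1.0;        // the size of our river bluff
        let p_has_pair = 0.5; // probability the opponent can call at all

        // We hold a hand with no showdown value, so checking always wins 0.
        let ev_check = 0.0;

        // Bluffing wins the pot when they cannot call and loses one bet
        // when they can (they call only with at least a pair, never raise).
        let ev_bluff = (1.0 - p_has_pair) * pot - p_has_pair * bet;

        println!("EV of checking the river:   {}", ev_check);
        println!("EV of bluffing every river: {}", ev_bluff); // 1.5 bets

        // Against this particular opponent, always bluffing beats any
        // balanced strategy that sometimes gives up on the river.
    }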
Yeah, in heads-up play, using a Nash equilibrium to "balance your range" and play a harder-to-beat strategy, while also using it to calculate the probable hands your opponent holds, might be helpful, but it's far from a solution to the problem of devising a winning strategy that always works. I think that's why he calls it academic: because it won't work in the real world.
Nash equilibrium certainly applies to No limit hold 'em. It's a zero-sum game with finite choices over finite time. Could you explain why you think otherwise? Are you just saying it's practically impossible to calculate?
You can calculate a Nash equilibrium only when you know the strategies of your opponents. There is no single winning strategy in full no-limit hold'em, so you don't know how your opponent is going to play.
It's theoretically possible to find a Nash equilibrium over all possible strategies, but that's not a winning strategy. You just lose as little as possible. You lose against most/all strategies.
No, you will tie or beat all strategies because your opponents will make huge mistakes like calling when you are almost never bluffing and folding when you are frequently bluffing.
Take an integer value x. Flip the sign bit. Do you now have the value -x? On a 2s complement architecture, you do not. In 2s complement, you have to flip all the bits and add one to get -x.
I think his point is that changing a sign bit doesn't affect the absolute value, just its sign. A sign bit has no value itself - it's just a flag. If the representation has a sign bit, you'd have a negative zero.
But the top bit in 2s complement has a value - it's just a negative one (that's large enough to make any value with it set negative). That's not a sign bit! If you change it, the absolute value most definitely changes, and quite substantially.
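A quick way to see both statements on an 8-bit value:

    fn main() {
        let x: i8 = 5; // bit pattern 0000_0101

        // "Flip the sign bit": toggle only the top bit. That bit has a
        // weight of -128, so this yields 5 - 128 = -123, not -5; the
        // absolute value changes substantially.
        let top_bit_flipped = x ^ i8::MIN;

        // Two's-complement negation: flip all the bits and add one.
        let negated = (!x).wrapping_add(1);

        println!("x               = {:4} ({:08b})", x, x as u8);
        println!("top bit flipped = {:4} ({:08b})", top_bit_flipped, top_bit_flipped as u8);
        println!("!x + 1          = {:4} ({:08b})", negated, negated as u8);
        assert_eq!(negated, -x);
    }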
Nice introduction. I think it is worth pointing out that much of what you discuss is implementation-dependent; the C standard doesn't require an implementation to lay out data in memory in any particular way. Instead, it requires that access semantics behave in a particular way. These semantics, in turn, align with easy, low-level implementations.