Suppose the price of Amazon stock is going to be 20% higher tomorrow than it is today. If everyone knew this, the price would already be 20% higher, because the existing owners wouldn't sell at the lower price. If some people know this but not everyone, they'll keep buying Amazon stock until the price increases by 20%, which again causes the price to immediately increase by 20% instead of waiting until tomorrow.
The arbitrage opportunity is available to anyone who knows the information, at the expense of anyone trading the stock who doesn't. If everybody knows then there is no arbitrage opportunity because the gap is already closed.
Arbitrage exists because of inefficiencies in price discovery, and reducing that to “someone has information but another person doesn't” trivializes what traders do and demonstrates narrow thinking about how markets work, and how business works in general.
Information isn’t the sole reason someone might be able to make money in a market; most of the time it’s the least important factor. Finance, like any other business, relies on execution, not knowledge.
For example, you have some information, but it’s worthless because you’re reading into it the wrong way. Or the information is material, but the market doesn’t believe it. Or macro conditions negate the information. Or you don’t have the ability to transact on the information. Or you’re too risk averse to act on the information. Or the classic “you’re right, but it’s the wrong time”, like many companies were in the dot-com era.
> For example, you have some information, but it’s worthless because you’re reading into it the wrong way. Or the information is material, but the market doesn’t believe it. Or macro conditions negate the information. ... Or the classic “you’re right, but it’s the wrong time”, like many companies were in the dot-com era.
These are all part of knowing what's going to happen. If you think you know something but you're wrong, you're wrong, and the person who does know (or makes a better guess) is the person who takes your money.
> Or you’re too risk averse to act on the information.
At which point you might as well tell other people or publish it and then someone else can.
> Or you don’t have the ability to transact on the information.
This is extremely unusual for publicly traded stocks. Random individuals off the street can open a brokerage account if they think they know something the market doesn't. Even people with no money could sell the information to someone else for whatever they could get, or just tell their friends to have someone richer than them owe them a favor, and then that person trades on it.
Probably the most common case where you can't use it is when it would be insider trading. But why would acting on some LLM output be insider trading?
It's crazy how many people don't understand this. I can't believe how many people think they can predict the market with candlestick charts or whatever. If a method for predicting the market were so readily available that someone is selling it to you, it wouldn't work!
Sure, and that can happen whether there is a government or not. In fact, in the US, courts up to and including the Supreme Court have ruled that the government has no affirmative duty to protect individuals from crime.
I don't think laws against crime were the sort of regulation of markets that the post I responded to was talking about. But in any case, if the existence of crime is sufficient to make markets not free, then free markets don't exist with government any more than they would without it.
You don't need to become Manhattan to have density and mixed-use? I used to work at a place with a restaurant next to it (allowed by a reduction of parking minimums). Guess where a lot of my fellow employees ate?
What? As an obvious counterexample, if this were correct (it in no way is), why couldn't Nvidia then buy Intel cards to retain the lead? Why couldn't Nvidia keep building its own AI?
See Google's gVisor as an attempt at reducing the attack surface of a container to make things more secure.
I think the general advice is that a single container can never be a robust security boundary, because the OS surface area involved is so large that the isolation layer is ripe for vulnerabilities. You also really have to avoid screwing up; there are a lot of fiddly little security mistakes you can make when attempting to use a container to run untrusted code.
Typically you might use something like gVisor, or a VM: systems where isolation is simpler to reason about and the attack surface is smaller.
In any case a single isolation boundary can have a vulnerability and my understanding is that more advanced systems typically involve multiple layers of isolation to sandbox untrusted code.
> Everywhere I read treats them as a security boundary for say, untrusted code.
Who's everybody? There are special kinds of VM hosts for that. Containers are like your kitchen jars: if someone is vomiting with Ebola in your kitchen, your jars will not help you.
I think that's a great analogy, because yes, if you have a live sample of Ebola in a sealed glass jar then that will very much help you. (I would not recommend leaving the lid open by giving it SYS_ADMIN, but that doesn't mean that glass isn't a fine material for containing pathogens.)
Thinking further on the analogy, yeah it's better than nothing but I would _not_ recommend people leave Ebola in a glass jar in their kitchen. What if someone accidentally knocks it over, cutting themselves on a shard while doing so? What if someone, looking for cookies, fumbles around inside and gets it on their hands? Sure these are not "best practices" but the point is that it should be difficult to do the dangerous things, not easy and certainly not recommended by tutorials everywhere.
Don't "level 4 biosafety facilities" use glass vials? I imagine the security isn't provided by the choice of container itself (plastic?), but rather by the entire lab design.
Yes, my point was that vials are not all they use. There are many layers of protection: biosuits, negative air pressure, decontamination procedures at the exits, etc.
Containers should never run untrusted code, at least not without other layers of protection added on top. Otherwise stuff like https://github.com/google/gvisor would not need to exist. And even then as long as processes are sharing a host kernel they are always vulnerable. A full VM is really the minimum acceptable boundary.
> "Everywhere I read treats them as a security boundary"
The people writing those articles are wrong. Containers are insufficient for untrusted code; containers should not be treated as a security boundary. A virtual machine or something similar (e.g., Firecracker) can be treated as a security boundary, but not a container.
People just keep repeating the same assertion, over and over, without elaborating on exactly why containers shouldn't be used to run untrusted code. I think that's what the GP is complaining about.
The distinction is getting more and more fuzzy, so this is almost a meaningless point (as is the GGP's). It's very vendor-specific, and I'm pretty sure Spectre attacks can work across VMs anyway.
I wouldn’t consider the distinction “fuzzy”. Assuming we’re talking Linux (I don’t know about the Mac and Windows world), containers are implemented using namespaces and cgroups, and always have been. Whether you are talking Docker, containerd, or some more minimalistic thing built on runc, it’s all Linux namespaces and cgroups. And those things were explicitly not designed to act as security boundaries when running untrusted code.
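You can see that "container" is just namespace membership from any Linux shell. As a minimal sketch (the `namespace_ids` helper is mine, not any container runtime's API; the `/proc/<pid>/ns/` links are standard Linux):

```python
# List this process's namespace memberships. Every process on a Linux
# host has these links; a "containerized" process simply points at
# different namespace objects than the host's PID 1 does.
import os

NS_TYPES = ["pid", "net", "mnt", "uts", "ipc", "user", "cgroup"]

def namespace_ids(pid="self"):
    ids = {}
    for ns in NS_TYPES:
        try:
            # Each link reads like "pid:[4026531836]" -- the number is the
            # inode identifying the namespace object inside the kernel.
            ids[ns] = os.readlink(f"/proc/{pid}/ns/{ns}")
        except (FileNotFoundError, NotADirectoryError, PermissionError):
            pass  # non-Linux systems or older kernels lack some of these
    return ids

if __name__ == "__main__":
    for ns, ident in namespace_ids().items():
        print(f"{ns:>6}: {ident}")
```

Compare the output inside a Docker container against the host and the inode numbers differ for every unshared namespace; that difference is the whole mechanism.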
Does anyone actually use Kata containers? I've tried recently to run them on a current Ubuntu platform and couldn't get it working at all after a few days of work.
If it's Ubuntu, is it possible you unintentionally had Docker installed via snap and ran into issues because of that? I had a bit of trouble getting it integrated with certain versions of Podman, but aside from that, setting up Kata was pretty straightforward.
I have so far only used it for hosting some game servers which I don't trust, i.e. some simple containers, but I really want to try it in a new k3s cluster once I get it set up and move some services there. I like the idea of putting internet-facing ones into it as an additional layer of separation, and could imagine it being useful in production.
The distinction is in fact pretty clear. Conventional container runtimes are shared-kernel isolation. Virtual machines aren't: every tenant has their own running kernel.
Sure, anything that is misconfigured could eliminate a security boundary. That doesn’t mean that containers are even in the same ballpark as VMs in terms of providing a security boundary.
Why? Both are supposed to keep whatever is inside trapped unless you poke holes in that protection (say, using virtio or even just 9P to hand it real storage)
I'm willing to believe that Linux containers were not initially designed to be a security boundary, but I struggle to see why that means they aren't now; it's been over a decade and they have an awful lot of security features for something that doesn't care about security.
EDIT: For that matter, they're clearly being used for security; the features in Linux that are used by runc et al. are the same features used by eg. Chrome to isolate components in order to contain vulnerabilities.
"On Linux, Docker manipulates iptables rules to provide network isolation. While this is an implementation detail and you should not modify the rules Docker inserts into your iptables policies, it does have some implications on what you need to do if you want to have your own policies in addition to those managed by Docker.
If you're running Docker on a host that is exposed to the Internet, you will probably want to have iptables policies in place that prevent unauthorized access to containers or other services running on your host. This page describes how to achieve that, and what caveats you need to be aware of."
I mean, there was this story[0] ("How a Docker footgun led to a vandal deleting NewsBlur's MongoDB database") about how the Docker rules allowed a hacker to delete someone's database.
> Turns out the ufw firewall I enabled and diligently kept on a strict allowlist with only my internal servers didn’t work on a new server because of Docker. When I containerized MongoDB, Docker helpfully inserted an allow rule into iptables, opening up MongoDB to the world. So while my firewall was “active”, doing a sudo iptables -L | grep 27017 showed that MongoDB was open to the world. This has been a Docker footgun since 2014.
Story was previously discussed on HN[1]. Sure, you could argue the author should have done more to secure the endpoint, but this was 100% a failure mode due to how Docker prioritizes convenience over security.
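The failure mode is mechanical enough to sketch. As an illustration only (the sample rule text is a plausible Docker-generated rule, and `is_port_exposed_by_docker` is a hypothetical helper, not a Docker or ufw API): ufw filters the INPUT chain, but Docker's forwarding rules live in its own DOCKER chain, so a port can be world-reachable while ufw still reports itself "active".

```python
# Sketch: scan iptables-save style output for DOCKER-chain rules that
# ACCEPT traffic to a given port. Docker inserts these rules itself,
# bypassing whatever policy ufw manages in the INPUT chain.
import re

# Hypothetical dump; the second line mimics what Docker adds when you
# publish a MongoDB container's port.
SAMPLE_RULES = """
-A INPUT -j ufw-before-input
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 27017 -j ACCEPT
"""

def is_port_exposed_by_docker(rules: str, port: int) -> bool:
    # A DOCKER-chain ACCEPT on this dport means the port is reachable
    # regardless of what the ufw allowlist says.
    pattern = rf"-A DOCKER .*--dport {port} .*-j ACCEPT"
    return re.search(pattern, rules) is not None

if __name__ == "__main__":
    print(is_port_exposed_by_docker(SAMPLE_RULES, 27017))
```

Auditing `iptables-save` output directly (rather than trusting `ufw status`) is the only reliable way to see what Docker has actually opened.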
Most containers do not switch the CPU VM context when the CPU switches between containers. VMs do. This is necessary to prevent attacks which can leak data through the CPU cache.
I agree that they have a bunch of security features, because they've been playing security whack-a-mole for a decade. Retrofitting a security boundary onto an existing system is very difficult. Personally, I don't think Linux containers are yet at a level where I'd trust them for something that's meant to be a hard security boundary rather than a damage-mitigation exercise, though obviously that's a subjective judgement.
Linux was designed as a multi-user operating system from the get go. By definition, programs from different users are supposed to run without access to data that they shouldn’t have access to.
I wouldn't say "designed from the start", I would just say virtual machines are naturally more isolated than containers. Since virtual machines are simulating hardware, compared to containers which are isolated user space instances.
But I guess that doesn't mean virtual machines aren't easily escapable without extra work, same as containers.
This doesn’t answer the question. How were VMs designed with security in mind and not with host emulation? Untrusted code? I’m confused, people are talking about vulnerabilities.
VMs don’t share the kernel with the host. Any host escape would need to happen through a device driver exposed to the guest (virtio, etc.) Containers use the same kernel, obviously a much larger set of code. More code means a greater chance of vulnerabilities.
How do you think the VM itself is spawned? Something in the host instantiates it.
But even if we ignore that, I would argue that concentrating on only one location (e.g., some exposed driver) also makes escape easier, since an attacker only needs to spend time finding one vulnerability rather than several.
I think the hardware can help bridge the gap between containers and VMs by enabling userspace processes to behave as VMs, which is more or less what QEMU+KVM try to do, except that it still comes with some overhead and less flexibility.
I have to disagree. With a VM, there is less code shared with the host to review and audit for vulnerabilities. The developers can go over those device drivers, system calls for VM management, etc. with a fine-tooth comb. With containers, you have essentially the entire kernel: much more surface area to potentially exploit.
Maybe I am wrong. We can wait for a security professional to comment.
When you switch between VMs and the host, the CPU executes an instruction that isolates the data of each VM entirely (flushing caches). This doesn't happen with containers.
Although this feature does improve security, it comes from the CPU and the VM, and the main motivation was to reduce performance overhead. I don't see adding layers and layers of abstraction as equivalent to more security. In fact, Meltdown and Spectre serve as counterexamples.
Also, one can achieve similar effects with containers as well; just think AppArmor, capabilities, permissions, etc.: all layers of administrative privilege between some untrusted code and the host.
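One of those layers is easy to demonstrate. A minimal sketch, Linux-only, using raw `prctl` via ctypes (the constants come from the kernel headers; `lock_down_privileges` and `no_new_privs_set` are names I made up for this example): the `no_new_privs` flag, which container runtimes set so that sandboxed code can never gain privileges through setuid binaries.

```python
# Demonstrate the no_new_privs hardening flag. Once set via prctl it
# cannot be unset, and it is inherited by all child processes.
import ctypes

PR_SET_NO_NEW_PRIVS = 38  # from linux/prctl.h
PR_GET_NO_NEW_PRIVS = 39

libc = ctypes.CDLL(None, use_errno=True)

def lock_down_privileges() -> None:
    # After this call, execve of a setuid binary runs it without the
    # privilege escalation it would normally grant.
    if libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_NO_NEW_PRIVS) failed")

def no_new_privs_set() -> bool:
    return libc.prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0) == 1

if __name__ == "__main__":
    lock_down_privileges()
    print(no_new_privs_set())
```

It's one small knob among many (seccomp filters, capability bounding sets, AppArmor profiles), which is exactly the point: container security is an accumulation of such layers rather than a single boundary.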
There is nothing you can do in the OS that replaces the VMENTER instruction. You need this precisely because it mitigates Spectre and Meltdown, assuming your microcode is up to date.
That video really isn't giving me the vibe that this is any more real than the Mill, just better funded. I think they failed to mention how their new CPU will also fix world hunger.