That's mostly accurate. Lots of people (at Google, IBM, SWSoft and elsewhere) had been working on approaches to get resource isolation into the Linux kernel since around 2000, but none had achieved general support. The main debate was around the abstractions to be used for defining/controlling the sets of processes being isolated and the isolation parameters, rather than the actual mechanisms used for isolation.
Around the same time (~2005?) SGI got cpusets merged into the kernel; this was initially intended just for pinning groups of processes onto specific NUMA nodes on big-iron systems. At the suggestion of akpm we started using it internally at Google to do coarse-grained CPU and memory isolation, by making use of the fake-NUMA emulation support to split the memory on our servers into chunks of ~128MB each and pinning each job to some number of fake nodes. This worked surprisingly well, but required painfully complex userspace support to keep track of each job's memory usage and juggle memory node assignments (particularly since we wanted to be able to overcommit machines, so we had to dynamically shift nodes from low-priority jobs to high-priority jobs in response to demand).
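For anyone who hasn't poked at it: the cpuset interface is just a small filesystem of control files, and per-job setup amounted to roughly the sketch below. This is only an illustration of the interface, not our actual tooling (the mount point, node/CPU numbers and job name are made-up assumptions; it also assumes root and a kernel with cpuset support, with the filesystem already mounted via something like mount -t cpuset cpuset /dev/cpuset):

    import os

    CPUSET_ROOT = "/dev/cpuset"                 # conventional standalone mount point (assumption)
    job = os.path.join(CPUSET_ROOT, "job42")    # hypothetical job name
    os.makedirs(job, exist_ok=True)

    # Give the job two fake NUMA nodes' worth of memory (~128MB each in the
    # scheme described above) and two CPUs. Both must be set before attaching tasks.
    with open(os.path.join(job, "mems"), "w") as f:
        f.write("3-4")
    with open(os.path.join(job, "cpus"), "w") as f:
        f.write("0-1")

    # Move the current process into the cpuset; its children inherit membership.
    with open(os.path.join(job, "tasks"), "w") as f:
        f.write(str(os.getpid()))

The per-job setup itself wasn't the hard part; the pain was the global accounting needed to decide, at any given moment, which fake nodes each job should own.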
The cpuset API and abstractions turned out to fit the resource control problem pretty well, and they had already been merged into the kernel, which gave that API a kind of pre-approval compared to the other generic resource control approaches. So we worked on separating out the core process/group management code from cpusets, and adapting it to support multiple different subsystems, and multiple parallel hierarchies of groups. The original cpusets became just one subsystem that could be attached to cgroups (others included memory, CPU cycles, disk I/O slots, available TCP ports, etc). It turned out that this was an approach that everyone (different groups of resource-control enthusiasts, as well as Linux core maintainers) could get behind, and as a result Linux acquired a general-purpose resource control abstraction, and other folks (including some at Google) went to town on providing mechanisms for controlling specific resources.
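The result is the interface anyone who has used cgroup v1 will recognize: each subsystem is attached to a hierarchy of directories full of control files, and you move tasks between groups by writing PIDs. A minimal sketch, assuming a v1 memory controller mounted at the usual /sys/fs/cgroup/memory (the group names are made up, and again this is just illustrating the kernel interface, not any particular tooling):

    import os

    MEM_ROOT = "/sys/fs/cgroup/memory"              # v1 memory controller mount point (assumption)
    grp = os.path.join(MEM_ROOT, "batch", "job42")  # hypothetical hierarchy/group names
    os.makedirs(grp, exist_ok=True)

    # Cap the group at 512MB; memory is just one subsystem that can be attached
    # to a hierarchy (cpuset, cpu, blkio, etc. are others).
    with open(os.path.join(grp, "memory.limit_in_bytes"), "w") as f:
        f.write(str(512 * 1024 * 1024))

    # Move the current process (and hence its future children) into the group.
    with open(os.path.join(grp, "cgroup.procs"), "w") as f:
        f.write(str(os.getpid()))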
The namespace work was going on pretty much in parallel with this - it wasn't something that we were interested in since it was just added overhead from our point of view. The jobs we were running were fully aware that they were running in a shared environment (and mostly included a lot of Google core libraries that made dealing with the shared environment pretty straightforward) so we didn't need to give the impression that the job had a machine to itself. IP isolation would have been somewhat useful (and I think was later added in Kubernetes) but wasn't very practical to provide efficiently given Google's networking infrastructure at the time.
We weren't really interested in LXC since we had our own userspace components that had developed organically with our container support (and which as others have commented were so entwined with other bits of Google infrastructure that open-sourcing them wouldn't have been practical or very useful).