That reminds me of Singularity OS, an experimental OS from Microsoft Research, which has no distinction between kernel and user space because everything is written in a verifiable variant of .NET bytecode.
There have been Java OSes too, built on the idea that verifiable bytecode does away with the need for "expensive" hardware memory protection. Running everything in one memory space is error-prone, but it's the default in a monolithic kernel. Delegating services to processes with separate memory spaces is architecturally more robust: a single bug in an obscure driver in a less actively maintained corner of the kernel shouldn't become a system-wide vulnerability. The approach I mention preserves the ability to keep things separate (with appropriate controls and verification at the boundary); the Singularity / Java OS approach seems rather different.
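To make the verification idea concrete, here is a toy sketch in Java (made-up opcodes and names, nothing like Singularity's or the JVM's actual verifiers): a checker that rejects any program whose memory accesses aren't provably inside its declared region, so that accepted programs can share one address space without hardware protection.

    // Toy verifier for a made-up two-instruction bytecode (LOAD addr /
    // STORE addr). Every operand must fall inside the memory region the
    // module declared, so accepted programs need no hardware protection.
    final class ToyVerifier {
        static final int LOAD = 0, STORE = 1;

        static boolean verify(int[] code, int memSize) {
            for (int pc = 0; pc < code.length; pc += 2) {
                if (pc + 1 >= code.length) return false;       // truncated instruction
                int op = code[pc], addr = code[pc + 1];
                if (op != LOAD && op != STORE) return false;   // unknown opcode
                if (addr < 0 || addr >= memSize) return false; // escapes its region
            }
            return true; // safe to run with no runtime checks at all
        }

        public static void main(String[] args) {
            System.out.println(verify(new int[]{LOAD, 3, STORE, 7}, 8)); // true
            System.out.println(verify(new int[]{STORE, 99}, 8));         // false
        }
    }

Real verifiers prove much richer properties (type safety, control-flow integrity), but the shape is the same: the checks move from every memory access at run time to a one-time proof at load time.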
Yet another variant on the idea was the AS/400. It had no memory protection and didn't need it, since there was no way to address anything you weren't supposed to.
Given the impressive stability and security record of the AS/400, they had a point.
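Java references are a rough analogy for that addressing model (an analogy only, not AS/400 code): a reference is an unforgeable capability, and since there is no cast from an integer to a pointer, code can only reach objects it was explicitly handed.

    // A reference as an unforgeable capability: peek() can touch the
    // Record only because the caller chose to hand it over.
    final class Capabilities {
        static final class Record { int balance = 100; }

        static int peek(Record r) { return r.balance; }

        public static void main(String[] args) {
            Record mine = new Record();
            System.out.println(peek(mine)); // fine: we hold the reference
            // Record stolen = (Record) 0xDEADBEEF; // won't compile: there is
            // no way to fabricate an address, which is the AS/400 point
        }
    }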
I'm pretty sure Lisp OSes (you know, on the bytecoded Lisp Machines from companies such as Symbolics and TI) worked like that, and I know the CISC variants of the AS/400 architecture worked like that as well. (I don't know the RISC AS/400 or iSeries that well.)
An interesting aspect of that is what it does to your security model: in the AS/400 world, where applications are compiled to bytecode and then to machine code, the software that compiles to machine code is trusted never to emit dangerous machine code, as there are no further checks on application behavior. In the Lisp Machine world, anyone who can load microcode is God. In Singularity OS, the .NET runtime is effectively the microcode, and the same remarks about Lisp Machines apply.
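A hypothetical sketch of where that leaves the trust boundary (the names are invented, not any real system's API): all checking happens once, inside the trusted translator, and the code it emits, modeled here as a plain closure over a shared flat array, runs with no further checks.

    // Bounds are checked HERE, at translation time; the emitted "native
    // code" (a closure over one flat, unprotected array) contains none.
    final class TrustedTranslator {
        static final int[] SHARED_MEMORY = new int[8]; // one space, no MMU

        static Runnable translate(int addr, int k) {
            if (addr < 0 || addr >= SHARED_MEMORY.length)
                throw new IllegalArgumentException("rejected at translation time");
            return () -> SHARED_MEMORY[addr] += k; // trusted output, runs bare
        }

        public static void main(String[] args) {
            Runnable prog = translate(3, 5); // vetted once
            prog.run();                      // thereafter: no checks at all
            System.out.println(SHARED_MEMORY[3]); // prints 5
        }
    }

If translate() is buggy or subverted, nothing downstream catches its output; that is the "anyone who can load microcode is God" situation in miniature.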
http://en.wikipedia.org/wiki/Singularity_(operating_system)