SQLite is so hardcore about being totally unbreakable that their documentation has a whole section dedicated to all the horrible things you or the OS have to do just to make it fail: http://www.sqlite.org/lockingv3.html#how_to_corrupt
1000 writes in 5 seconds for mongo? Running 1.4, I get ~2500/sec on an old P4 desktop I've been testing on. I've only recently started working with mongo and have no experience with 1.3, the unstable development branch he lost his data on, so I'm having a hard time buying his comparison.
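For reference, here's roughly the kind of throughput test I mean: a minimal pymongo sketch, assuming a local mongod. The database and collection names (benchdb.writes) are made up, and the write-acknowledgement setting will swing the number considerably.

    import time
    from pymongo import MongoClient

    client = MongoClient("localhost", 27017)
    coll = client.benchdb.writes  # hypothetical database/collection

    N = 10000
    start = time.perf_counter()
    for i in range(N):
        # insert_one waits for the server's acknowledgement; older
        # drivers defaulted to fire-and-forget writes, which inflates
        # throughput numbers in exactly this kind of benchmark
        coll.insert_one({"i": i, "payload": "x" * 64})
    elapsed = time.perf_counter() - start
    print("%.0f inserts/sec" % (N / elapsed))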
I used SQLite until I found a query that locked it up seemingly forever. It had 2 inner joins. It worked with 1 inner join - NOT 2. The author of SQLite said "Not my problem buddy, it's yours". So I'm switching to PostgreSQL (which CAN handle those 2 inner joins) or Microsoft SQL Server.
That's a query optimizer complexity trade-off. SQLite is targeted at embedded usage, and a smarter query optimizer would make it much larger, cutting into one of its main strengths. You're better off using SQLite as a scaled-down RDBMS and moving to Postgres when you outgrow it.
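If you want to see what the planner decided before a query eats your afternoon, SQLite will tell you. A minimal sketch with Python's stdlib sqlite3 module; the schema is invented for illustration, and the point is that unindexed join columns push the simple planner into nested full scans:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders    (customer_name TEXT, product_name TEXT);
        CREATE TABLE customers (name TEXT, city TEXT);
        CREATE TABLE products  (name TEXT, price REAL);
    """)

    # Ask the planner how it would execute a two-inner-join query.
    # With no indexes on the join columns it has little choice but
    # nested full scans, the kind of plan that never finishes on big tables.
    plan = conn.execute("""
        EXPLAIN QUERY PLAN
        SELECT o.product_name, p.price
        FROM orders o
        JOIN customers c ON c.name = o.customer_name
        JOIN products  p ON p.name = o.product_name
    """)
    for row in plan:
        print(row)

    # Indexing the join columns usually rescues the plan:
    conn.executescript("""
        CREATE INDEX idx_customers_name ON customers(name);
        CREATE INDEX idx_products_name  ON products(name);
    """)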
Lua's design makes similar trade-offs, and they comment on this in http://www.lua.org/doc/sblp2005.pdf: "For instance, just the control-flow analysis of Bigloo, an optimizer Scheme compiler, is more than ten times larger than the whole Lua implementation." It's the same reason the core language doesn't have regular expressions. (Well, that, and LPEG.)
You can use :memory: as the filename for SQLite and that's just what it'll do. Of course, if the file is sitting in your OS's filesystem cache, it's effectively in memory even when it's on disk.
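Concretely, from Python's stdlib bindings (note both colons in the magic filename):

    import sqlite3

    # ":memory:" opens a private database that lives entirely in RAM
    # and disappears when the connection is closed
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1)")
    print(conn.execute("SELECT x FROM t").fetchone())  # (1,)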
Or you can roll your own and write your own query language too...
Reading about their testing process is also informative: http://www.sqlite.org/testing.html