This is a really cool technique and warrants some investigation, but I can't let this go unaddressed:
> and the upper bound on the time taken to join all three tables will be the square of that
These kinds of first-principles assertions about what Postgres's (or other DBs') performance will be like sound helpful but usually aren't. The kinds of queries you issue can change everything. Indexing can change everything. Postgres's configuration can change everything. The actual size of the table can change everything. For example, if the table is small, Postgres will keep it in memory and your plans will have scary-looking but actually innocent sequential scans, which I think is actually what happened in his join table example.
Anyway, it's good to have a lot of tools in your toolbox, and this is an interesting tool with interesting uses. I just think it would be a grave error to take the performance ratios here as fixed.
This is why DBAs are a thing. Just sit back and enjoy the rediscovery of relational datastores in the blogosphere. The "thought leaders" are going to slowly figure out things like your post that were worked out properly in the 80s.
The lack of an index on the junction table definitely did have a major effect. By just doing the following:
CREATE INDEX ON movie_person (movie_id);
CREATE INDEX ON movie_person (person_id);
The junction query speeds up to around 2ms, comparable to or faster than the bloom filter query. But the trade-off shows up when you look at the total size of the movie_person table once the indexes are included.
Whereas, by my calculations, the total size added by the bloom filters and hashes on movie and person is just 2094kB in total.
I plan on adding an erratum to my article explaining my error, the time/memory trade-off, and ideas for further improvement or exploration, potentially including bloom-based GiST indexes and the opportunities for parallelization.
Also look at GIN. Although the documentation claims it is typically larger than GiST, in certain cases (a small lexeme set, which is not the general full-text case) it can turn out smaller and faster. Mostly this holds over constrained symbolic domains (which describes a lot of computing, too).
We've recently replaced a GiST index with a GIN one and got a lot more than the 'rule of thumb' speedup. More like an order of magnitude. Or two. So much that Tom Lane suggested we might have stumbled onto a bug in GiST, so we'll see.
Now that's the kind of content that I'd love to see more of here. It covers everything from basic tools like sed and awk to nice concepts I didn't know about, like the BIT field in Postgres.
Any recommended book or set of articles for starting with Postgres?
I like the exploration of this method, but I would have liked to see an actual accounting of the false positives. Bad data can be acceptable in statistical analysis, but if you were showing someone a list of their ratings, or the actors who were in the latest Kevin Bacon movie, false positives have a much stronger impact.
Is there any chance that the bloom could be used as a short-circuit filter, but still follow up with the m2m join to filter out the false positives? If the query optimizer can take advantage of that, then you could likely balance the size and cost of the bloom field.
This is exactly right. Bloom filters used this way should generate candidates, not results, to reduce the total amount of work you do when it is expensive to confirm membership in the set. The best use cases check the opposite direction, e.g. Kevin Bacon was definitely not in this movie, since there are no false negatives.
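A minimal sketch of that pattern, with table and column names assumed from the article (and assuming person_filter and hash are bit strings of the same length, since Postgres's & requires that):
-- The bitwise test cheaply narrows movies to a candidate set, which may
-- contain false positives; the EXISTS against the junction table then
-- confirms membership exactly, so only true matches survive.
SELECT m.*
FROM movie m, person p
WHERE p.id = 160
  AND m.person_filter & p.hash = p.hash
  AND EXISTS (
    SELECT 1 FROM movie_person mp
    WHERE mp.movie_id = m.id AND mp.person_id = p.id
  );
Whether the planner actually evaluates the cheap bitwise test before the EXISTS is up to its cost estimates, but in principle the exact recheck only runs on the candidate rows.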
Using a custom index implementation that uses bloom filters internally is probably going to work out better in the long run. It should be way more efficient than storing the data in a bit field, using app-layer code to generate the bloom filter values, then doing bitwise comparisons on-the-fly at query time.
The Postgres query planner can also recheck constraints automatically to recover from bloom filter false positive matches at query time.
FYI -- bloom filters are already used internally within the PostgreSQL intarray contrib module and the full-text search functionality.
I saw that too... I'm suspicious of the `SET person_filter = (SELECT bit_or(person.hash) ...);` line, personally. If the hash function is supposed to produce roughly random bits (I'm not familiar with murmur hash, though, so this may not be true), and you're simply OR-ing them together, you lose zero bits pretty quickly.
But I need to brush up on murmur hash, and bloom filters in general, so I may be entirely wrong. Maybe it's valid, and maybe that does represent a 0.5% error.
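For reference, the standard bloom filter numbers: if each element sets k bits (one per hash) in an m-bit filter and you OR n elements together, the probability that any given bit is still zero is (1 - 1/m)^(k*n) ≈ e^(-kn/m), and the false positive rate is roughly (1 - e^(-kn/m))^k. So OR-ing hashes together is exactly how a bloom filter is built; the zero bits do disappear, but at a predictable rate, and a figure like 0.5% comes from choosing k and m for the expected n. I don't know the article's k, so I can't check the 0.5% claim directly.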
Regarding the space consumption, it seems like this is a tradeoff of storing two integers (plus the row overhead) per rating versus storing 335 bytes per movie/user.
For the join table, that's 2 integers * 575,281 ratings * 4 bytes = 4,602,248 bytes used in the join table.
With the filter, in each movie row you need to store 1632 bits for the person_filter and 1048 bits for the hash, so 3,883 movies * (1632 bits + 1048 bits) = 10,406,440 bits = 1,300,805 bytes.
In each user row you need to store the same number of bits for the filter and hash, so 6,040 users * (1632 bits + 1048 bits) = 16,187,200 bits = 2,023,400 bytes.
Is my math here wrong? With this approach you save about 1.22MB, or about 27% over the join table approach (ignoring how much overhead there is for each row of the table and each page to store the table in).
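If you want to check estimates like this against what's actually on disk (column names assumed from the article), Postgres will tell you directly:
-- The junction table including all of its indexes:
SELECT pg_size_pretty(pg_total_relation_size('movie_person'));
-- Actual bytes consumed by the filter and hash columns across all movies:
SELECT pg_size_pretty(sum(pg_column_size(person_filter) + pg_column_size(hash)))
FROM movie;
Note that pg_column_size includes the varlena header, so expect a few bytes of per-value overhead on top of the raw bit counts.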
Depending on the dataset it doesn't seem like the space savings would be worth the sacrifice in accuracy.
Or you could add an index to the join table and see a similar speedup without the accuracy tradeoff. I'm not saying that the approach doesn't have applications, but it would be much more useful if the post discussed a case that couldn't be addressed easily by standard means.
Note the sequential scan on movie_person that accounts for 2.5 to 60 seconds. If there were an index on movie_person(person_id), this could be an index scan.
(EDITED TO ADD: I totally misread the timings in milliseconds for timings in seconds, so most of what I originally wrote is off by a factor of 1000. I'm leaving it here for entertainment value and because you might want to play with the data set in SQLite. But my point is still valid: a vanilla join is comparable in performance to the OP's bloom-filter method.)
I'm having a hard time believing that the straightforward join on a data set as small as the OP's sample is really going to take 65 seconds on PostgreSQL. Maybe that's what EXPLAIN predicts (with spotty stats, I'd wager), but EXPLAIN is not a reliable way to measure performance. For this data, I'd expect real queries to perform much better.
EDITED TO ADD: The OP's article shows the results for EXPLAIN ANALYZE, which ought to have performed the queries. So I'm not sure why the results are so slow.
Heck, even SQLite, when processing a superset of the data set on my 4-year-old computer, can do the OP's final query (and return additional ratings data) almost instantly:
$ time sqlite3 ratings.db '
select *
from
users
natural join ratings
natural join movies
where user_id = 160
' > /dev/null
real 0m0.006s
user 0m0.002s
sys 0m0.004s
If you want to try some queries yourself, here you go:
$ wget http://www.grouplens.org/system/files/ml-1m.zip
$ unzip ml-1m.zip
$ cd ml-1m
$ sqlite3 ratings.db <<EOF
CREATE TABLE movies (
movie_id INTEGER PRIMARY KEY NOT NULL
, title TEXT NOT NULL
, genres TEXT NOT NULL
);
CREATE TABLE users (
user_id INTEGER PRIMARY KEY NOT NULL
, gender TEXT NOT NULL
, age TEXT NOT NULL
, occupation TEXT NOT NULL
, zipcode TEXT NOT NULL
);
CREATE TABLE ratings (
user_id INTEGER REFERENCES users(user_id)
, movie_id INTEGER REFERENCES movies(movie_id)
, rating INTEGER NOT NULL
, timestamp INTEGER NOT NULL
, PRIMARY KEY (user_id, movie_id)
);
.separator ::
.import movies.dat movies
.import users.dat users
.import ratings.dat ratings
EOF
Fully enumerating the joined data takes under a second:
$ time sqlite3 ratings.db '
select count(*)
from
users
natural join ratings
natural join movies
'
1000209
real 0m0.953s
user 0m0.925s
sys 0m0.021s
And even going so far as to print the joined data takes under six seconds:
$ time sqlite3 ratings.db '
select *
from
users
natural join ratings
natural join movies
' > /dev/null
real 0m5.586s
user 0m5.497s
sys 0m0.059s
Sometimes, the old stuff works better than we give it credit for.
If he had an index on movie_person, that would be a ~ 1ms index lookup, not a table scan. You can't do that sort of database layout without putting indexes on the M2M mapping table.
(edit, oh, yeah, it's right there in the parent post....)
And you probably want to include these indices in any case, to enforce that a given (user, movie) pair in the join table is unique (unless you want to track if a rating was changed and when, etc.); one way to set that up is sketched below.
Further, you might decide to cluster on the index, or to partition.
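For example (with the obvious column names), a composite primary key enforces the uniqueness and doubles as the person-side index, while a second index covers lookups by movie:
ALTER TABLE movie_person ADD PRIMARY KEY (person_id, movie_id);
CREATE INDEX ON movie_person (movie_id);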
Thanks for pointing that out. I totally misread the bottom line. Once I had it in my head that those were seconds, no amount of looking at that little "ms" behind the numbers was going to show me anything different.
Skimmed through it, but it seems that there is still an improvement with the bloom filter, by a factor of 2~3. Is that correct? (An estimated 6(?)ms once corrected with indices, vs 2ms with the bloom.)
This sample set is way too small to be arguing about a millisecond here or there. Variation from what's cached, or from something else happening on the machine, is going to swamp the benchmarks. For a 'real' benchmark, I'd like to see a few gigs of data, or something that's not going to fit in cache. The real gain from the bloom filter is going to be a 4-10x reduction in the working set for the same amount of data. But when the working set is trivial, the fluctuating and fixed overheads are going to dominate.
I'd also like to see how good the bloom filter's indexing is at scale, or if it winds up being some sort of table scan.
For a "real" benchmark, I'd like to see the query repeated until the response-time distribution stabilizes. Then take the response time at the 90th, 95th, and 99th percentiles, and also the mean. Actually, I'd like to see the CDF plot for the whole distribution.
My hunch is that, in the original environment, PostgreSQL with the right indexes would be as fast as the bloom filter. I emailed the OP about adding an index on movie_person(person_id), and he (besides being very cool and gracious about everything) said that he was going to add it and re-run his timings. I suspect we'll see new results soon.
This is what I was wondering. What's the performance difference compared to using a junction table with indexes, and what are the other trade-offs of indexes vs bloom filters? I'm guessing there are cases where you wouldn't want to maintain the indexes; I'm just not sure what they would be.
And it happens to use bloom filters under the hood to do it.
From the docs:
"Two GiST index operator classes are provided: gist__int_ops (used by default) is suitable for small- to medium-size data sets, while gist__intbig_ops uses a larger signature and is more suitable for indexing large data sets (i.e., columns containing a large number of distinct array values). The implementation uses an RD-tree data structure with built-in lossy compression."
I saw that, but it doesn't seem to implement any of the indexes you'd need in order to run a bloom query efficiently.
That being said, if you stored your bitvector as an array of powers of two, then it would work. But that would be horribly inefficient in terms of space usage.
Which is why PostgreSQL is scriptable: the various contrib modules are often better looked at as examples of how to build your own indexes using GIN/GiST than "this is what we provide".
In your case, though, a strict immutable function mapping the bitvector to an int array as part of a functional index should be sufficient to use the existing contrib module: you don't need to store the things you index in the table.
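Something like this might do it (an untested sketch; assumes the filter column is bit varying, intarray is installed, and a hypothetical helper named set_bit_positions):
-- Strict immutable helper: positions of the set bits, as an int array.
CREATE FUNCTION set_bit_positions(v bit varying) RETURNS int[]
LANGUAGE sql IMMUTABLE STRICT AS $$
  SELECT coalesce(array_agg(i), '{}')
  FROM generate_series(0, length($1) - 1) AS i
  WHERE get_bit($1, i) = 1
$$;
-- Expression index using intarray's large-signature opclass.
CREATE INDEX movie_filter_idx
ON movie USING gist (set_bit_positions(person_filter) gist__intbig_ops);
The bloom containment test then becomes an indexable array containment: set_bit_positions(person_filter) @> set_bit_positions(hash).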
I don't see how you could theoretically index this in a way that supports the bloom filter query. Indexes rely on the data being ordered and on b-tree (similar to binary tree) lookups. A where clause like "where col & val = col" can never be supported by a b-tree style index... right? Am I missing something?
No: "integer" on PostgreSQL is, in fact, a signed 32-bit number, but so is "serial". The difference between serial and integer is only that usage of the serial type as a column implicitly creates a sequence (which itself is a 64-bit counter, but is unrelated to the storage in the column), sets its owner to the column, and makes the column's value default to the next value of the serial. PostgreSQL also has the type "bigint", which is a signed (not unsigned) 64-bit number, and an equivalent "bigserial" type which you would need to use if you want a 64-bit version of serial.
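Concretely, this is the expansion straight from the Postgres docs; it's all serial does:
CREATE TABLE tablename (colname serial);
-- is equivalent to:
CREATE SEQUENCE tablename_colname_seq;
CREATE TABLE tablename (
  colname integer NOT NULL DEFAULT nextval('tablename_colname_seq')
);
ALTER SEQUENCE tablename_colname_seq OWNED BY tablename.colname;
Swap integer for bigint and the sequence stays the same; that's bigserial.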
Yes, and if I were to make one addition to the article it would be to add a section talking about the implications of that 0.5% error rate with some examples of uses where this would be appropriate and where it wouldn't be.