Haven't, but it's worth noting that their edging out MapD is probably attributable to hardware: they're on a 5-node Minsky cluster featuring NVLink, hence, as arnon said, they're benefiting from 9.5x faster transfer from disk than PCIe 3.0. That blog hasn't yet tested MapD's IBM Power version; it would be interesting to see how it compares on that cluster.
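If anyone wants to sanity-check the interconnect claim on their own hardware, timing a pinned host-to-device copy with CUDA events is enough to see the PCIe-vs-NVLink gap. A minimal microbenchmark sketch (buffer size and rep count are arbitrary choices, nothing to do with the blog's methodology):

    // bw_check.cu: rough host->device bandwidth check, in the spirit of
    // NVIDIA's bandwidthTest sample. Build with: nvcc -O2 bw_check.cu
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    #define CHECK(call) do { \
        cudaError_t err = (call); \
        if (err != cudaSuccess) { \
            fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err)); \
            exit(EXIT_FAILURE); \
        } \
    } while (0)

    int main() {
        const size_t bytes = 1ull << 30;  // 1 GiB per copy
        const int reps = 10;

        // Pinned host memory so the copy is pure DMA; pageable memory
        // would understate what the interconnect can actually do.
        void *h_buf, *d_buf;
        CHECK(cudaMallocHost(&h_buf, bytes));
        CHECK(cudaMalloc(&d_buf, bytes));

        cudaEvent_t start, stop;
        CHECK(cudaEventCreate(&start));
        CHECK(cudaEventCreate(&stop));

        // Warm-up so lazy context init doesn't pollute the timing.
        CHECK(cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice));

        CHECK(cudaEventRecord(start));
        for (int i = 0; i < reps; ++i)
            CHECK(cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice));
        CHECK(cudaEventRecord(stop));
        CHECK(cudaEventSynchronize(stop));

        float ms = 0.f;
        CHECK(cudaEventElapsedTime(&ms, start, stop));
        printf("host->device: %.1f GB/s\n",
               (double)bytes * reps / 1e9 / (ms / 1e3));

        CHECK(cudaFreeHost(h_buf));
        CHECK(cudaFree(d_buf));
        return 0;
    }

On a PCIe 3.0 x16 box that should land somewhere under ~16 GB/s; on NVLink-attached P100s it should come out several times higher.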
- They've built their database on Postgres for query planning, but for any query that doesn't match what they've accelerated on the GPU, they have no ability to fall back to running Postgres on the CPU.
https://youtu.be/oL0IIMQjFrs?t=3260
- Data is brought into GPU memory at table CREATE time, so the cost of transferring data from disk->host RAM->GPU RAM is not reflected in the results. That probably wouldn't work if you want to shuffle data in/out of GPU RAM across changing query workloads (there's a sketch of that streaming cost below). https://youtu.be/oL0IIMQjFrs?t=1310
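To make that shuffle cost concrete, here's a hypothetical sketch (made-up table size, chunk size, and a trivial filter kernel; not Brytlyt's or MapD's actual code, and error checks omitted) of what a scan looks like when the column doesn't already live in GPU RAM, so every query re-pays the host->GPU copy that CREATE-time loading hides:

    // stream_scan.cu: stream a column through a small GPU buffer,
    // chunk by chunk, and count rows matching a predicate.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void count_over(const float *col, size_t n, float t,
                               unsigned long long *hits) {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < n && col[i] > t) atomicAdd(hits, 1ull);
    }

    int main() {
        const size_t total = 1ull << 26;  // 64M floats, a ~256MB "column"
        const size_t chunk = 1ull << 22;  // 4M floats per transfer

        float *h_col;                     // stand-in for host-resident data
        cudaMallocHost(&h_col, total * sizeof(float));
        for (size_t i = 0; i < total; ++i) h_col[i] = (float)(i % 1000);

        float *d_chunk;
        unsigned long long *d_hits;
        cudaMalloc(&d_chunk, chunk * sizeof(float));
        cudaMalloc(&d_hits, sizeof(*d_hits));
        cudaMemset(d_hits, 0, sizeof(*d_hits));

        cudaStream_t s;
        cudaStreamCreate(&s);

        // Every query over a non-resident column re-pays this loop:
        // copy a chunk across the interconnect, scan it, repeat.
        // In-stream ordering lets us safely reuse the one device buffer.
        for (size_t off = 0; off < total; off += chunk) {
            size_t n = (off + chunk <= total) ? chunk : total - off;
            cudaMemcpyAsync(d_chunk, h_col + off, n * sizeof(float),
                            cudaMemcpyHostToDevice, s);
            int threads = 256;
            int blocks = (int)((n + threads - 1) / threads);
            count_over<<<blocks, threads, 0, s>>>(d_chunk, n, 500.f, d_hits);
        }
        cudaStreamSynchronize(s);

        unsigned long long hits = 0;
        cudaMemcpy(&hits, d_hits, sizeof(hits), cudaMemcpyDeviceToHost);
        printf("rows over threshold: %llu\n", hits);

        cudaFree(d_chunk); cudaFree(d_hits);
        cudaFreeHost(h_col); cudaStreamDestroy(s);
        return 0;
    }

The point of a resident-in-GPU-RAM design (data loaded at CREATE) is that the memcpy disappears from the per-query path, which is what the benchmark numbers reflect; any workload too big or too varied for GPU RAM gets the loop back.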
Note that it's at the top of the list probably because it's running on a cluster. It would be awesome to see such a comparison on some standard hardware, like a large AWS GPU instance (e.g., p2.16xlarge).
Also note that the dataset is 600GB, so it won't fit on a single GPU, not even close.
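For rough scale, assuming the 16GB P100s a Minsky node ships with: 600GB / 16GB per GPU ≈ 38 GPUs' worth of memory just to hold the raw data, before any working space for the queries themselves.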
https://www.brytlyt.com