You could also look at the W3C Payment Request API [1], which lets modern browsers securely provide stored card details.
Stripe has a Payment Request Button [2] that enables one-click buying with Apple Pay, Google Pay, or the Payment Request API. The button will choose whichever method is best and available for your specific browser.
Elements lets you define validation behavior and styles by listening for events that surface validation errors [0], and you can specify custom web fonts with the `fonts` option [1].
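A minimal sketch of that validation pattern, assuming an event shaped like Stripe's `change` payload (the element wiring in the comment is illustrative, not a full integration):

```javascript
// Stripe Elements emits "change" events; when validation fails, the event
// carries an `error` object with a human-readable `message`. Keeping the
// display logic in a pure helper makes it testable; in a real page you
// would wire it up roughly like this (sketch, not a full integration):
//   const elements = stripe.elements({ fonts: [{ cssSrc: FONT_CSS_URL }] });
//   const card = elements.create('card');
//   card.on('change', e => { errorEl.textContent = displayError(e); });
function displayError(event) {
  return event.error ? event.error.message : '';
}

// Events shaped like Stripe's change payloads (simplified, assumed shapes):
console.log(displayError({ complete: false, error: { message: 'Your card number is invalid.' } }));
console.log(displayError({ complete: true }));
```

The helper returns an empty string for valid input so the error element simply clears itself on each keystroke.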
I appreciate the detailed analysis. A few comments:
> This post is probably going to make some people involved with RethinkDB very angry at me.
Actually, our community has always felt the opposite. Performance and scalability issues are considered bugs worth solving. That may have been the reaction of one or two community members, but that doesn't represent our values at all.
> A RethinkDB employee told me he thought I was their biggest user in terms of how hard I was pushing RethinkDB.
This may have been true (at the time) in terms of how SMC was using changefeeds, but RethinkDB is used in far more aggressive contexts. Here's a talk from Fidelity about how they used RethinkDB (for 25M customers across 25 nodes): https://www.youtube.com/watch?v=rm2zerSz6aE
SMC did uncover a number of surprising bugs along the way: I would describe it as one of the more forward-thinking use cases, one that pushed the envelope of RethinkDB's newest features. That definitely came with plenty of performance issues to solve. I appreciate William's tenacity and patience in helping us track down and fix them.
> In particular, he pointed out this 2015 blog post, in which RethinkDB is consistently 5x-10x slower than MongoDB.
It’s worth pointing out that the methodology of that particular blog post raised serious questions, and recent versions of RethinkDB have included very significant performance improvements: https://github.com/rethinkdb/rethinkdb/issues/4282
> Even then, the proxy nodes would often run at relatively high cpu usage. I never understood why.
I'd have to double-check with those who are far more familiar with RethinkDB's proxy mode, but my understanding is that proxy nodes still parse and evaluate queries, which can be CPU-intensive. They don't store any data, but complex ReQL queries (especially paired with changefeeds) will drive up CPU usage. We generally recommend running nodes with many cores to take advantage of RethinkDB's parallelized architecture, which can get expensive if you aren't running dedicated hardware.
> The total disk space usage was an order of magnitude less (800GB versus 80GB).
> I imagine databases are similar. Using 10x more disk space means 10x more reading and writing to disk, and disk is (way more than) 10x slower than RAM…
This isn't necessarily true, especially with SSDs. RethinkDB's storage engine neatly divides its storage into extents that can be logically accessed in an efficient fashion. This is particularly valuable when running on SSDs, which are fundamentally parallelized devices. RethinkDB also caches data in memory as much as possible to avoid going to disk, but using more disk space doesn't immediately translate to lower performance.
One other interesting detail: since RethinkDB doesn’t have schemas, it stores the field names of each document individually. This is one of the trade-offs of not having a schema: even with compression, RethinkDB would use more space than Postgres for this reason. (This also impacts performance, since schemaless data is more complicated to parse and process.)
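The field-name overhead is easy to see with a back-of-the-envelope measurement. This toy comparison (illustrative only; real storage engines lay data out very differently) serializes the same records with per-document keys versus a schema-style header:

```javascript
// 1,000 documents with three fields each. Schemaless storage repeats the
// field names inside every document; a schema stores them once.
const docs = [];
for (let i = 0; i < 1000; i++) {
  docs.push({ customer_name: 'Ada', customer_email: 'ada@example.com', signup_index: i });
}

// Schemaless layout: every serialized document carries its own keys.
const schemaless = docs.map(d => JSON.stringify(d)).join('\n').length;

// Schema-style layout: field names once in a header, then values only.
const header = JSON.stringify(Object.keys(docs[0]));
const rows = docs.map(d => JSON.stringify(Object.values(d))).join('\n');
const withSchema = header.length + rows.length;

console.log({ schemaless, withSchema, ratio: (schemaless / withSchema).toFixed(2) });
```

Even before compression, the repeated keys account for a large share of the bytes; compression narrows the gap but rarely closes it.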
> Not listening to users is perhaps not the best approach to building quality software. [referring to microbenchmarks]
I think William may have misinterpreted the quote he describes from Slava’s post-mortem. Slava was referring to optimizations that make micro-benchmarks look better but don’t improve the core performance of the database or the production quality of the system: https://rethinkdb.com/blog/the-benchmark-youre-reading-is-pr...
We have always had an open development process on GitHub to collaboratively decide what features to build, and what their implementation should look like. I’m not certain what design choices William is suggesting we rejected. One only has to look at the proposal for dates and times in RethinkDB to see how this open, collaborative conversation unfolds with our users: https://github.com/rethinkdb/rethinkdb/issues/977
> Really, what I love is the problems that RethinkDB solved, and where I believed RethinkDB could be 2-3 years from now if brilliant engineers like Daniel Mewes continued to work fulltime on the project.
Despite the company’s shutdown, RethinkDB development is proceeding under The Linux Foundation. We believe that with a few years of work, RethinkDB will continue to mature as a database and reach Postgres’ level of stability and performance. We’re exploring options for funding dedicated developers long-term as an open-source project.
My thoughts: whatever technology you end up picking is going to have tradeoffs depending on your use case (and the maturity of the technology) and it's going to come with baggage. That's true of Postgres, MongoDB, RethinkDB, any programming language you choose, any tools you pick. If you're willing to carry that baggage it can be worth it: especially if it gives you developer velocity or if the problem you're solving is particularly well-suited to the tool.
Pick the technology that will have the least baggage for your problem. I often recommend Postgres to people, despite being one of the RethinkDB founders. Pragmatism wins over idealism, every time.
>It’s worth pointing out that the methodology of that particular blog post raised serious questions, and recent versions of RethinkDB have included very significant performance improvements: https://github.com/rethinkdb/rethinkdb/issues/4282
I wouldn't seriously consider that point: the article didn't even mention what version of MongoDB was being used. Safe-mode writes could have been off, in which case he may have just been measuring the latency between the client and the database nodes. It's a pretty poor benchmark.
It's also worth noting that changefeeds are highly scalable: you can run tens of thousands of them on a single node and scale out linearly from there (even when they're scoped to specific queries).
Obviously the performance characteristics will depend on the volume of changes arriving at the database, but the architecture supporting this is highly parallelized (all the way down to the cores on the CPU).
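To make the "scoped to specific queries" point concrete, here is a toy fan-out model (nothing like RethinkDB's actual implementation, which evaluates ReQL predicates server-side): each feed is a predicate plus a callback, and a change is delivered only to the feeds whose predicate matches.

```javascript
// Toy model of query-scoped changefeeds: a registry of (predicate, callback)
// pairs; publishing a change notifies only the matching subscribers.
class FeedRegistry {
  constructor() { this.feeds = []; }
  subscribe(predicate, callback) { this.feeds.push({ predicate, callback }); }
  publish(change) {
    for (const feed of this.feeds) {
      if (feed.predicate(change.new_val)) feed.callback(change);
    }
  }
}

const registry = new FeedRegistry();
let chessNotifications = 0;

// Per the claim above, tens of thousands of feeds on one node is realistic;
// here 10,000 feeds watch chess games and one watches go.
for (let i = 0; i < 10000; i++) {
  registry.subscribe(doc => doc.game === 'chess', () => chessNotifications++);
}
registry.subscribe(doc => doc.game === 'go', () => { throw new Error('should not fire'); });

registry.publish({ new_val: { game: 'chess', score: 42 } });
console.log(chessNotifications); // 10000 — only the chess-scoped feeds fired
```

The real system distributes this matching work across shards and CPU cores, which is what makes the linear scaling claim plausible.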
Thank you... iirc, the recommendation was 12-15 nodes at the top end. Though I haven't investigated deeply for a while now, since for the past two years I haven't had the option of choosing what I use.