You opted to use features of std::vector that are documented to be unsafe (notably ::data()). This is the actual C++ translation of the opening code in TFA:
#include <vector>
#include <iostream>

int main() {
    std::vector<int> vec = {1, 2, 3};
    for (auto const& i : vec) {
        std::cout << i << std::endl;
    }
}
It is possible to use C++ to write unsafe code! Amazing! Some people want a language where this is not possible! Great!
> This is straightforward: we create a vector containing the values [1, 2], then iterate over it and print each element, and then finally print out the length of the vector. This is the kind of code people write every day.
The C++ code I provided does essentially this (I omitted printing the length, since it is so trivial), and is "the kind of code people write every day".
The fact that Rust requires you to consider move semantics for such simple code is precisely one of the central points of the article.
The C++ code implements the intended goal, not the problem TFA is trying to illustrate.
Changing between:
for (auto i : vec)
and
for (auto & i : vec)
has essentially no bearing on what the author is trying to show: for an int element, the copy and the reference behave identically in a read-only loop (see the sketch below). If the author were focused on move semantics always being important, they would not have used an integer type.
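A quick illustration (my own sketch, not code from TFA):

#include <vector>
#include <iostream>

int main() {
    std::vector<int> vec = {1, 2, 3};

    // Copies each element into i; for a trivially copyable int this costs
    // the same as the reference form below.
    for (auto i : vec) {
        std::cout << i << std::endl;
    }

    // Binds a reference to each element; the output is identical. The
    // distinction only matters if the loop mutates elements or the element
    // type is expensive to copy.
    for (auto& i : vec) {
        std::cout << i << std::endl;
    }
}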
You are not fighting the C++ compiler or showing why the C++ compiler might be annoying. You are introducing a bug by misusing a library (which has nothing to do with writing and compiling C++). The ergonomics, I believe, are fine?
I'm struggling to understand what, if anything, your comment has to do with GP's comment. Perhaps you meant that the Rust compiler might have stopped you from producing a buggy program, but again, that has nothing to do with GP's comment.
I think 0xdeafbeef is roughly recreating the first code snippet from the article (which is one of the things diath is complaining about) in C++ to show that the compiler should produce an error or else undefined behavior could occur on resize.
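For concreteness, a minimal sketch of that failure mode (my own example, not code from the article or from 0xdeafbeef): it compiles without a peep, but if the push_back reallocates, the iterators driving the range-for dangle and the next increment is undefined behavior. The equivalent Rust program is rejected by the borrow checker.

#include <vector>
#include <iostream>

int main() {
    std::vector<int> vec = {1, 2, 3};
    for (auto const& i : vec) {
        std::cout << i << std::endl;
        // If this grows capacity, the vector reallocates its storage and
        // the range-for's internal iterators are invalidated; the next
        // increment/dereference is undefined behavior. No compiler error.
        vec.push_back(i);
    }
}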
TiDB: 200k inserts per second, ~200 bytes per row on average. Bursty insert pattern, e.g. 200k inserts/s for an hour, then almost zero for days. 8k reads per second on average, mostly reads by primary key. 20 hosts, each 16 threads × 128 GB RAM with 8 TB NVMe in RAID 10; 60 TiB of useful storage at a replication factor of 3. Keyset pagination is the key (see the sketch below). Also using RocksDB for insert batching. Costs around 20k on OVH.
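Since keyset pagination came up: a minimal sketch of the idea (my own illustration; the table and column names are hypothetical, and real code should use bound parameters rather than string concatenation). Instead of OFFSET, each page seeks past the last key already returned, so every page is an index seek on the primary key rather than a scan:

#include <cstdint>
#include <iostream>
#include <string>

// Hypothetical table/columns. The "WHERE id > ..." predicate replaces
// OFFSET, so the database resumes from the primary-key index instead of
// skipping rows, which is what keeps deep pages cheap.
std::string next_page_query(std::int64_t last_seen_id, int page_size) {
    return "SELECT id, payload FROM events WHERE id > " +
           std::to_string(last_seen_id) +
           " ORDER BY id LIMIT " + std::to_string(page_size);
}

int main() {
    // The first page starts after id 0; each later call passes the largest
    // id from the previous result set.
    std::cout << next_page_query(0, 1000) << std::endl;
    std::cout << next_page_query(123456, 1000) << std::endl;
}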
If the concern is "my apartment burned down and I need a backup from this past month," it should be OK (given the duplicate copies and other redundancy).
If the idea is that they feel safe deleting things from main storage because it's backed up several times, your concern is probably right. I'm not sure tape is really justified for their use case, though. (What else has comparable longevity?)
And of course you could back up somewhere else. But Mongo, for example, doesn't let you delete from their cold storage, IIRC (I can't validate this claim, so consider it hearsay).
Transitioning into the kernel flushes branch-predictor state and the TLB, so it's not free at all.
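A rough way to see the cost (my own sketch; Linux-only, using glibc's syscall() wrapper, and the numbers vary a lot with CPU and mitigation settings): time a tight loop of a trivially cheap syscall, which mostly measures the user/kernel round trip itself.

#include <chrono>
#include <iostream>
#include <sys/syscall.h>
#include <unistd.h>

int main() {
    constexpr int kIters = 1'000'000;

    // syscall(SYS_getpid) bypasses any libc caching, so every iteration
    // really crosses into the kernel and back.
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {
        syscall(SYS_getpid);
    }
    auto elapsed = std::chrono::steady_clock::now() - start;

    std::cout << "avg ns per syscall: "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed)
                     .count() / kIters
              << std::endl;
}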