Hacker News | 0xdeafbeef's comments

Page cache reclamation is mostly single-threaded. It's much simpler than what you can build in user space; it has no notion of per-page weights, etc.

Transitioning into the kernel flushes the branch predictor caches and the TLB, so it's not free at all.


There is already TiDB :)


Same for anthropic


You only need to cache the asm output. It will take a few KB and waste no CPU cycles.


  #include <vector>
  #include <iostream>

  int main() {
      std::vector<int> vec = {1, 2, 3};
      vec.reserve(3);

      std::cout << "Initial capacity: " << vec.capacity() << std::endl;
      std::cout << "Initial data address: " << (void*)vec.data() << std::endl;

      int* ptr = vec.data();

      std::cout << "Pointer before push_back: " << (void*)ptr << std::endl;
      std::cout << "Value via pointer before push_back: " << *ptr << std::endl;

      std::cout << "\nPushing back 4...\n" << std::endl;
      vec.push_back(4);

      std::cout << "New capacity: " << vec.capacity() << std::endl;
      std::cout << "New data address: " << (void*)vec.data() << std::endl;

      std::cout << "\nAttempting to access data via the old pointer..." << std::endl;
      std::cout << "Old pointer value: " << (void*)ptr << std::endl;
      int value = *ptr;
      std::cout << "Read from dangling pointer (UB): " << value << std::endl;

      return 0;
  }
  $ ./a.out
  Initial capacity: 3
  Initial data address: 0x517d2b0
  Pointer before push_back: 0x517d2b0
  Value via pointer before push_back: 1

  Pushing back 4...

  New capacity: 6
  New data address: 0x517d6e0

  Attempting to access data via the old pointer...
  Old pointer value: 0x517d2b0
  Read from dangling pointer (UB): 20861


You opted to use features of std::vector that are documented to be unsafe (notably ::data()). This is the actual C++ translation of the opening code in TFA:

  #include <vector> 
  #include <iostream>

  int main() {
      std::vector<int> vec = {1, 2, 3};
      
      for (auto const & i : vec) {
          std::cout << i << std::endl;
      }
  }
It is possible to use C++ to write unsafe code! Amazing! Some people want a language where this is not possible! Great!


> This is the actual C++ translation of the opening code in TFA:

No, it isn't: this is iterating over references, not moving. This is equivalent to

  fn main() {
      let x = vec![1, 2];

      for y in &x {
          println!("{}", y);
      }
      println!("{}", x.len());
  }
in Rust. Note the &, just like in your C++.


The purpose of the first code example in TFA:

> This is straightforward: we create a vector containing the values [1, 2], then iterate over it and print each element, and then finally print out the length of the vector. This is the kind of code people write every day.

The C++ code I provided does essentially this (I omitted printing the length, since it is so trivial), and is "the kind of code people write every day".

The fact that Rust requires you to consider move semantics for such simple code is precisely one of the central points of the article.


"C++ code that implements the problem, but in a different way" is not "the actual C++ translation of the opening code in TFA."


The C++ code implements the intended goal, not the problem TFA is trying to illustrate.

Changing between:

    for (auto i : vec)
and

    for (auto & i : vec)
has essentially no bearing on what the author is trying to show. If they were focused on how move semantics are always important, they would not use an integer type.


You are not fighting the C++ compiler or showing why the C++ compiler might be annoying. You are introducing a bug by poorly using a library (which has nothing to do with writing and compiling C++). The ergonomics, I believe, are fine?

I'm struggling to understand what, if anything, your comment has to do with GP's comment. Perhaps you wanted to say that the Rust compiler would have stopped you from producing a buggy program, but again, that has nothing to do with GP's comment.


I think 0xdeafbeef is roughly recreating the first code snippet from the article (which is one of the things diath is complaining about) in C++ to show that the compiler should produce an error or else undefined behavior could occur on resize.



1. Can it detect duplicates that have different resolution and compression?

2. Does it work in linear time, or square?


TiDB, 200k inserts per second; ~200 B per row on average. Bursty insert pattern: e.g., 200k inserts/s for an hour, then almost zero for days. 8k reads per second on average, mostly by primary key. 20 hosts, each 16 threads x 128 GB RAM, 8 TB NVMe RAID 10. 60 TiB of usable storage with a replication factor of 3. Keyset pagination is the key. Also using RocksDB for insert batching. Costs around $20k on OVH.


thanks for the info and context


See https://lwn.net/Articles/922405/ for a description of what it does and https://lwn.net/Articles/972710/ for the controversy it caused that is the reason why it took so long to land in mainline.


Blu-rays degrade over time, so they're not the best backup strategy


Depends on timeline

If the concern is: my apartment burned down and I need a backup from this past month, it should be ok (given the double copies & other redundancy)

If the idea is that they feel safe deleting things from main storage because it's backed up several times, your concern is probably right. I'm not sure tape is really justified for their use though. (What else has comparable longevity?)

And of course you could back up somewhere else. But eg mongo doesn't let you delete from their cold storage iirc (I can't validate this claim! So consider it hearsay)


Tape storage is way too expensive. It needs to be disrupted.


You can't pass values in registers using this model

