I am forced to conclude, from your complaints about the C++ code that you are exposed to, that it is very, very far from modern, for what I have to assume to be corporate reasons.
I don't know why you carry on about smart pointers. I find I use unique_ptr here and there (shared_ptr, practically never), and never, ever in any interface. Instead, modern C++ code traffics in value types, and relies on destructors, almost always from a library or compiler-generated, for cleanup. Use-after-free never happens because anything freed was only ever referenced from within a value-typed object that no longer exists.
The reason that, in modern C++, memory safety issues fade into insignificance is that we have access to libraries that are fully as fast as we could ever code in-line, optimized as only code that is used in many places can afford to be, that encode logic correctly, once, that can be used everywhere without compromise.
Rust does not yet have the facilities to capture as much semantics in libraries. Anything Rust can't capture in a library has to be coded in place, with all the opportunities for mistakes that brings.
So, Rust is good at making safe the low-level memory fiddling that we just don't do much of anymore in C++. Meanwhile, we avoid the myriad errors that come from relying on necessarily leaky abstractions.
I was also using unique_ptr at Google (shared_ptr is discouraged there), but the problem with unique_ptr is that it doesn't solve the more complex data-ownership cases, and it's of no help at all with multi-threaded data races.
Rust's Rayon is a huge step beyond anything available in C++, and the gap can't be closed without adding a borrow checker to the C++ language (which wouldn't be a bad thing to do, in my opinion; it would help in modernizing old code bases).
You can't use unique_ptr for everything if you want performant code: it requires a heap allocation, which slows the code down compared to stack-allocated variables. With stack allocation, tracking ownership is much more complex, but it's worth it, especially if you have millions of users.
Someone still has to build those libraries full of value types, and build the necessary abstractions on top of low-level, pointer-laden OS APIs. For a game-engine programmer, that means dealing with new console hardware, new DMA controllers, and new graphics APIs, in concurrent environments where performance is critical.
If anything, low-level memory fiddling is becoming more common and more widespread. Vulkan and D3D12 are lower-level than ever in what they expose. WebAssembly brings pointers back to web pages like it's the 1990s and we're rocking ActiveX. I fear I'm soon going to have to start running AddressSanitizer on webpages.
People tell me modern C++ solves this problem, but the supposed examples of it invariably underwhelm me. I suspect people claiming such just have lower standards. MISRA C can help. NASA-level piles of documentation and rules and restraint can help. Enough unit tests that you start to catch more compiler bugs than library bugs can help. Drowning the end user in the false positives produced by modern C++ static analysis tools can help. Thorough fuzz testing can help. But these are ancient techniques, not modern ones. Combine all that and more and, yeah, you can actually get kinda close to practicing safe C++. But this level of care is rarely practiced.
No, the real problem, I think, is that very few people have the skill, patience, and fortitude to write strictly correct C++, and they rarely have the time to. And absolutely none of them were born with that power. Fresh blood will learn from its mistakes, and we will pay for those mistakes in sweat, tears, missed milestones, and CVEs.
I'd love to be proven wrong. C++ pays my bills and has a great ecosystem in a lot of ways - I'd love it if it were salvageable. Meanwhile, the last C++17 upgrade attempt I saw failed when we decided we were unwilling to maintain a fork of the standard library to fix compilation issues in the vendor's implementation for one of our platforms.
std::vector and std::string were designed in the '90s and shipped with C++98. They have been modernized somewhat, with move constructors and the like, but in answer to your implied question: no, they are not modern designs, and they still have sharp edges. That is the burden of backward compatibility.
But you knew they came from C++98. Maybe you meant to ask if I still use them? I do. But I don't store references to their elements.