Let's also admit that every language before Rust has failed to outright replace the vast majority of C/C++, so that history becomes its own form of conservatism/bias. This post is coming from one of the few companies that actually does complete large-scale rewrites of things. I'm curious/excited to see how it goes for them.
And I have that C/C++ bias: I don't enjoy programming in Rust as much as I do C, but then, I don't like doing lots of new things.
There is hardly that much C and C++ left in GUI frameworks and distributed computing, other than existing infrastructure from projects that are decades old.
That's only late if there are other big changes going in at the same time. The vast majority of operational/ticketing issues have few code changes.
I'm glad I had the experience of working on a literal waterfall software project in my life (e.g. plan out the next 2 years first, then we "execute" according to a very detailed plan that entire time). Huge patches were common in this workflow, and only caused chaos when many people were working in the same directory/area. Otherwise it was usually easier on testing/integration - only 1 patch to test.
This is a great question, and a state of the art kind of thing.
HDDs are sold with a lifetime read/write amount and a power-cycle warranty, along with (usually) some environmental operating envelope. The read/write limit relates to the quality/space of the platter; the power-cycle limit is mostly about the actuator and read/write head being reseated/wearing out. Environment is the same as for all other devices in a DC.
Most folks replace drives when they die (reads/writes stall or return garbage) or when the warranty runs out. Some will pay for a warranty exception, and some will just use the drive outside of warranty. How you use the drive, what environment it's in, etc., changes how much you can push things.
I'd say anywhere from 4-8 years, depending on how it's used. In many cases it can be cheaper to run a worse environment for your fleet (thus spending less power on HVAC) and replace devices more frequently.
I tried for 6 weeks. Eventually, it just stops functioning. The same program with the same arguments spits out "segmentation fault" 33% of the time I run it, with the other 67% working perfectly. The only way I could explain it was that the code was in a function outside main, because when I put the exact same code in main, compiled, and ran it, it worked.
I have no other explanation. At some point, having too many nested loops and variables caused segmentation faults, whereas less complex code ran without error. I needed certain things done, and they only worked inside main.
Why would you try to do this in C, of all languages? It's one of the worst choices, especially for a self-learner and a beginner like you. Consider: choosing another language could, on its own, 100% eliminate any possibility of getting a segfault! With just that, you'd be spared from having to produce an abomination of many thousands of LOC inside a single function, which is never (unless you're Donald Knuth) good programming practice.
Python is slower but easier, and far less likely to segfault out of the blue! You don't even have to have a main() function. If you just have an idea worth demoing quickly, I'd recommend switching to Python 3.
There's also the fact that hard drive capacities keep increasing significantly faster than the power required, so sooner or later, for very-long-term storage, it becomes cheaper to migrate all your data from those 5-year-old 4TB drives to more modern 16TB ones. That's assuming you want hot access to the data and don't plan on spinning the drives down as soon as you've written to them, like you would for a cold backup of the whole IA.
I remember for a long time (I'm talking 20-ish years back here), every hard drive I bought had double or more the capacity of every drive I'd ever bought previously combined. My first ever 40MB (yes, megabyte) drive got upgraded to an 80MB one, that got updated to a 250MB one, then a 750MB, and then a whopping 2GB drive (how would I _ever_ fill that up???) - and so on. That's slowed down some, but I'm currently starting to think about upgrading my 8TB drives (Raid1 pair) with 20TB drives when the prices start to drop a bit more.
Drives do 140-220MB/s depending on the LBA position of the read head, and that's not really changing. 160MB/s is very common.
So for your 8TB drives, assuming 1MiB writes with 20ms latency at 160MB/s, you can rewrite the drive ~155 times/year. At 20TB this drops to ~62 times/yr.
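A quick sketch of that arithmetic, using the same assumptions (each 1 MiB write pays 20 ms of latency plus transfer time at 160 MB/s):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def rewrites_per_year(capacity_tb, mb_per_s=160.0, latency_s=0.020):
    chunk_mb = 1.048576                          # 1 MiB expressed in MB
    per_chunk = latency_s + chunk_mb / mb_per_s  # seconds per 1 MiB write
    throughput = chunk_mb / per_chunk            # effective MB/s (~39.5)
    seconds_per_rewrite = capacity_tb * 1e6 / throughput  # TB -> MB
    return SECONDS_PER_YEAR / seconds_per_rewrite

print(round(rewrites_per_year(8)))   # ~156
print(round(rewrites_per_year(20)))  # ~62
```

The 20ms per-write latency dominates: it cuts effective throughput from 160MB/s to about 40MB/s.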
Do people really replace their drives when the warranty runs out? Hard drive manufacturers won't provide data recovery on drives that fail under warranty[1]. It makes more economical sense to just run a drive until it dies. You'll end up paying the price for a new drive either way, but less often if you ignore the warranty expiring.
1: I discovered this myself when a Seagate drive containing some important data failed under warranty. If you're foolish enough to send them a failed drive with data you need recovered (like I was), all they'll do is throw it in the bin and send you a replacement drive.
Nah, you're pretty far off in terms of population numbers at FAANG. Amazon's levels are largely L-1 most places in terms of comp (i.e. AWS L7 gets paid like Google L6), and L7+ at Meta is ~1% of the company's employees, and even less of its SWEs.
Amazon and Microsoft also have less "alignment" at various levels compared to Silicon Valley, for literal geographic and historical reasons. A Principal SWE at AWS is probably ~L6 at Google in my experience. Obviously there are always outliers in all directions.
L3 is early career, L4 is mid-career, L5 is senior. You can hit L5 on the strength of pure technical contributions regardless of business/org needs, usually.
L6+ is staff, and tends to involve a very different skillset. (If you're not looking to lead a team, you're probably not going to have the kind of impact that gets you to L6, let alone L7 or higher at Google.)
This is all to say that ICs in the L3/L4/L5 bucket generally show a clear progression in technical skills but beyond that it's fuzzier.
My definitions are basically: Junior developers need supervision because left to their own devices they'll screw things up horribly; normal developers can produce good code independently; and Senior developers are able to catch the mistakes the Junior developers are making and set them on the right path.
I see; that's the L3, L4, and L5 progression in a nutshell at Google - although leaving L3s alone didn't _guarantee_ something would go wrong; it was more that there was no way for them to figure out optimal solutions without help, thanks to the sheer complexity of Google.
I'd say the same held true at Amazon but I was in groups which were, at the time, at the periphery of the company's engineering efforts - we didn't have any associated principals to talk to, and maybe one SDE3/L6 to 10 SDE2/L5s mixed with SDE1/L4s.
I would say* that under these definitions L3 is junior; what the industry calls senior is somewhere between L4 and L5. L5s at Google are expected to mentor L3s and L4s, but also to design systems, break them down into tasks, and coordinate those tasks between teams and engineers.
If you were a senior engineer at a 50-person startup you would commonly get hired at L4.
* I left Google 18 months ago; also, Google is a large company, and while they strive for uniformity across teams, the levels aren't really quite the same company-wide.
Markdown has lots of syntax from IRC/email that folks had been using for decades. Its popularity is partly due to this: it was a refinement as much as an invention.
This is also why the parsing ambiguity/backtracking ends up occurring - humans can read plain text and pattern-match with context/attention, whereas parsing algorithms have a hard time.
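To illustrate (the naive matcher here is my own toy, nothing like a real CommonMark parser): a human reads the asterisks below as multiplication from context, while a simple pattern match happily treats them as emphasis delimiters.

```python
import re

# Toy emphasis pass: anything between two asterisks becomes <em>.
naive_em = re.compile(r'\*(.+?)\*')

text = "multiply 2 * 3 * 4 to get 24"
print(naive_em.sub(r'<em>\1</em>', text))
# The regex emits: multiply 2 <em> 3 </em> 4 to get 24
```

Real Markdown parsers pile on flanking rules and delimiter-run bookkeeping to approximate the human reading, which is where the complexity and backtracking come from.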
You can price this relatively quickly, and see that it's probably not worth it.
~1 week outage once every ~5 years; let's say you lose 100% of revenue that week - that's 0.38% of revenue. Now, most likely they're not losing 100% of revenue, demand for cars is not very elastic... it starts to get pointless all around.
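The back-of-envelope above, spelled out:

```python
# One week of lost revenue out of every five years of weeks.
weeks_per_period = 5 * 52
lost_fraction = 1 / weeks_per_period
print(f"{lost_fraction:.2%}")  # 0.38%
```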
Which is frustrating for consumers, but the profit for perfection isn't usually there.
I think you'd have to compare that 0.38% of revenue to the cost of developing the paper process, as well as comparing what revenue you'd lose with the paper process anyway. It doesn't seem crazy to me, unless your paper process is hard to develop.
There's also an argument that developing a backup process is a great exercise to help people understand all the parts of your system and how they interact, as a first step to making them more efficient/redundant/secure/delightful.
You are missing that Spain's economy is not growing like the USA's.
Spanish wages grew only 2.7%/yr over the last 10 years [1], and its nominal GDP per capita looks completely flat [2].
This explains why the city is affordable for international tourists but not locals. Regardless, a high "tourist tax" would probably be better for their economy than an outright ban.
Against that 2.7% wage growth, that's 2.7% and 0.7% faster than wages, respectively.
Let's look at the Bay Area's wage growth[0]: 11% (or ~1%/yr) from 2010-2020 - but that's with CPI-U inflation removed[1], so nominal growth is somewhat higher (annual CPI was ~1-3% in that period). That puts Bay Area housing at 5%+ faster growth than wages, 2x to 7x worse than Spain.
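Sketching that comparison (all inputs are the rough figures above, not sourced data):

```python
# Bay Area real wage growth: ~11% over the decade 2010-2020.
real_growth_2010_2020 = 1.11
annual_real = real_growth_2010_2020 ** 0.1 - 1
print(f"real wages: {annual_real:.1%}/yr")  # ~1.0%/yr
# Add back ~1-3%/yr CPI-U and nominal wage growth is roughly 2-4%/yr,
# against Bay Area housing price growth of 5%+/yr.
```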
So, once again - Spain is doing well when it comes to housing prices. Tourism frustrates locals because they think it has increased their housing costs wildly - but in fact their economy is switching to a tourist economy, unless they find an industry to grow.
I have used this obsessively for many years, and removing the web version (and requiring device-to-device transfers) makes my phone a SPOF for data I truly want to keep.
Right.. I always assumed that one day I could dump my location history into a geospatial database and query it. Now I'm hearing that you can't even get historical data through takeout anymore?
Take your myopia into your own hands is my best suggestion. Communities/movements like endmyopia (and many other forums) help you understand what you can do.
Unfortunately I haven't found an easy way to keep up with the exercises that really improve things. If anyone has a way to make them easier to do accidentally, I'd love to know.
Safeeyes is a timer for Linux that tells you every 15 minutes to perform a simple eye exercise.
(I'm using 10-minute intervals and slightly longer exercise times, and my eyes are better the more time I spend in front of the PC working - the program doesn't interrupt movies and games.)
Doesn't help with astigmatism tho, bugger :\
There is a mac alternative called Eye Leo
If anyone has a Windows version, pls post it. I'd love to get my mom on that.
No, the endmyopia stuff specifically rejects the Bates method (which includes staring at the sun?? "palming"??).
Mostly it focuses on learning to focus at the edge of what you currently can at a distance, and then, as you improve, getting lighter and lighter prescriptions. I got as far as -1.5 better, but now sit at around -1 better. It also means ensuring lots of time in super-bright outdoor environments and focusing at a distance regularly.
Whether it's a cure or studied well, I dunno. I just encourage folks to try it out if it helps them. I have an astigmatism as well, and I haven't seen any improvement there. So definitely not a panacea.
How does that work? Ophthalmologists would say it's impossible, but it feels like most physicians are probably 10-20 years behind the bleeding edge...
This is specifically covered in cases like Midler v. Ford, and legally it matters what the use is for. If it's for parody/etc it's completely different from attempting to impersonate and convince customers it is someone else.
Midler v. Ford is a bit different from the ChatGPT case in that it was about a direct copy of a Midler song for a car ad, not just a similar-sounding voice saying different things.