Amazon is truly awful at hiring. Their engineers often give strange programming puzzles to solve, then add arbitrary constraints like, "this needs to operate in 16MB of RAM".
If Amazon can't spring for a $15 stick of memory it's probably not going to be a great work experience.
I also wouldn't want to work with an engineer who asks BS questions like that. Waste of time.
But this is a legit question. Believe it or not, even at Google/Facebook/Amazon a) memory is neither infinite nor free and b) tasks tend to run across thousands of instances.
Being able to save 10MB per instance means much more than $15 for a single die.
This may come as a great big surprise to you, but memory capacity is legitimately a problem, most especially at scale, and even where you can just "spin up a new instance".
Taking your "$15 stick of memory" comment: that's a cost of $1.5M when you start talking about a hundred thousand servers. That much money would pay the salaries of several SDEs for the lifetime of that generation of servers.
Working within memory constraints can lead to lower latency and faster applications, and can make the difference between a service being profitable or not.
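To make that concrete, here's a minimal sketch in Python of the difference between streaming a large file and slurping it into memory (the log-scanning task, the path argument, and the "ERROR" marker are all hypothetical, just something to hang the comparison on). The streaming version uses constant memory no matter how big the file gets, and in practice it's often faster too, because it never makes one enormous allocation:

    def count_errors_streaming(path):
        # Process the file one line at a time; memory use stays flat
        # no matter how large the file grows.
        errors = 0
        with open(path) as f:
            for line in f:          # the file object yields lines lazily
                if "ERROR" in line:
                    errors += 1
        return errors

    def count_errors_naive(path):
        # Reads the whole file into memory first. Simple, but resident
        # memory scales with file size and can blow an instance's budget.
        with open(path) as f:
            lines = f.read().splitlines()
        return sum(1 for line in lines if "ERROR" in line)

Multiply the difference across thousands of instances and it stops being a micro-optimization.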
Without going into details that would put my NDA at risk: when I worked on an AWS team, a change went through that improved the efficiency of the service by about a third. The original code was well written, extremely logical, and exactly what almost anyone would write when presented with the problem. You could have shown it to developers anywhere and they wouldn't have found fault with it.
Stopping and thinking very carefully about largely fictitious constraints revealed another potential approach, one that was somewhat left-field but that both worked and came without any risks.
When your service is operating on many tens or hundreds of thousands of servers worldwide, or more, being able to retire up to a third of your servers is far from negligible in annual costs.
Obviously, though, the idea is that you would be writing code at scale, processing a volume of data so large that a $15 stick of memory is not going to solve your problems. Also, it's common to introduce those constraints to force you toward the 'clever' solution rather than the brute-force method.
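For what it's worth, here's the sort of 'clever' answer a "16MB of RAM" constraint usually fishes for, sketched in Python (my own toy framing, not any actual interview question): deduplicating and sorting non-negative integers with a fixed 16MB bit array, in the spirit of the classic Programming Pearls bitmap sort.

    MAX_VALUE = 16 * 1024 * 1024 * 8   # 16MB of bits covers values in [0, 134_217_728)

    def sorted_unique(values):
        # Fixed 16MB bit array, no matter how many values stream past.
        bits = bytearray(16 * 1024 * 1024)
        for v in values:
            if 0 <= v < MAX_VALUE:
                bits[v >> 3] |= 1 << (v & 7)   # set bit number v
        # Scan the bits in order; output comes out sorted and deduplicated.
        for byte_index, byte in enumerate(bits):
            if byte:                           # skip empty bytes quickly
                base = byte_index << 3
                for bit in range(8):
                    if byte & (1 << bit):
                        yield base + bit

The brute-force version is sorted(set(values)), which is what most people reach for first, and whose memory grows with the input. The constraint exists precisely to see whether the candidate notices the fixed-size alternative.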
It is true that their hiring process may be far from optimal. But IMO it's also fair to ask algo-related questions, throw in constraints, and see how the candidate reacts.
Actually, posing a simple problem and then varying the constraints is one of my favorite approaches to interviewing. It tests a candidate's ability to reason under unfamiliar and changing conditions (like the real world) rather than regurgitate textbook answers.