Because it's significantly harder to isolate problems, and you'll end up in this loop:
* Deploy everything
* It explodes
* Rollback everything
* Spend two weeks finding the problem in one system, then fix it
* Deploy everything
* It explodes
* Rollback everything
* Spend two weeks finding a new problem that was created while you were fixing the last problem
* Repeat ad nauseam
Migrating iteratively gives you a foundation to build upon with each component
Of course, you need some way of producing test loads similar to those found in production. One way would be to take a snapshot of production, tap incoming requests for a few weeks, log everything, then replay it at "as fast as we can" speed for testing; another way would be to just mirror production live, running the same operations in test as run in production.
Alternatively, you could take the "chaos monkey" approach (https://www.folklore.org/Monkey_Lives.html), do away with all notions of realism, and just fuzz the heck out of your test system. I'd go with that first, because it's easy and tends to catch the more obvious bugs.
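A minimal in-process sketch of that idea, fuzzing a JSON parser rather than a deployed system (the seed payload and iteration count are arbitrary, purely for illustration): throw random and lightly-mutated inputs at the target and flag anything that isn't a clean success or a clean, expected rejection.

```python
import json
import random
import string

random.seed(0)  # reproducible runs

def random_payload(max_len=64):
    """Half the time, mutate one character of a valid JSON seed;
    otherwise emit a fully random printable string."""
    if random.random() < 0.5:
        seed = list('{"user": "alice", "qty": 3}')
        i = random.randrange(len(seed))
        seed[i] = random.choice(string.printable)
        return "".join(seed)
    return "".join(random.choice(string.printable)
                   for _ in range(random.randrange(max_len)))

def fuzz(target, iterations=5_000):
    """Feed random inputs to `target`; collect anything that isn't
    a clean parse or a clean, expected rejection."""
    findings = []
    for _ in range(iterations):
        payload = random_payload()
        try:
            target(payload)
        except json.JSONDecodeError:
            pass                      # expected rejection: fine
        except Exception as exc:      # anything else is a finding
            findings.append((payload, exc))
    return findings

print(len(fuzz(json.loads)))
```

Against a real system the target would be an HTTP endpoint instead of a parser, but the loop is the same: no realism, just volume, and the obvious crashes surface quickly.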
So just double your cloud bill for several weeks, costing a site like GitHub millions of dollars?
How do you handle duplicate requests to external services? Are you going to run credit cards twice? Send emails twice? If not, how do you know it's working with fidelity?
At least in the short term, I think the economics will prevent stores from using this for marketing data. Operating these drones has to be very, very expensive, and I don't think knowing about your driving habits for a few miles is worth the cost.
The only reason these things could be economical in the short term is because theft costs retail companies an insanely high amount of money.
However, this might change if these drones become cheaper to operate and purchase.
I would think there's some law that would prevent people from using these to such extremes. I'm almost certain it's illegal to put an AirTag on someone to track their whereabouts, and I would think those laws would apply here.
When the shopping cart was first introduced to grocery stores, nobody wanted to use it. People preferred to continue lugging around heavy baskets rather than push a cart. Actors had to be hired to walk around the stores pushing carts to convince people it was normal and valuable to use them.
Sometimes people are resistant to using things that improve their lives and have to be convinced to act in their own self-interest.
There are places around the world where shopping carts were introduced successfully without the accompanying actors to convince customers to use them. The actual criterion must be whether the new addition boosts or hampers the customers' productivity, at least in the long run.
When I first heard about git, I knew that it would be very useful in the future, even if I had to spend some time and effort in mastering it. Same with CI, project planners, release engineering, etc. Nobody had to convince me to use them. But AI just doesn't belong to that category, at least in my experience. It misses results that a simple web/site search reveals. And it makes mistakes or outright hallucinates in ways even junior developers don't. It's in an uncanny valley between the classic non-AI services and plain old manual effort with disadvantages of both and advantages of neither. Again, others may not agree with this experience. But it's definitely not unique to me. The net gain/loss that AI brings to this field is not clear. At least not yet.
Only looking at home prices compared to salary is very misleading because it doesn't account for changes in interest rates. Mortgages were almost 20% interest in the 80s. Cheaper doesn't mean much if you still can't afford the monthly payment.
Also looking at average price doesn't account for the rising quality of housing. In the 1980s the average home was around 1,700 square feet. Today, it is nearly 2,700.
If you look at the pricing trend of a single house, it tells quite a story
In my city, a house that would have been $80k in the 80s is listed between $500k and $600k today, depending on the neighborhood and how updated it is
In the 80s you could get a 15-20 year mortgage at 20%
Now you get a 30 year mortgage at 5%
If your monthly payment today is less than it would be at 20%, it is only because you are expected to be paying for it at least an extra 10 years compared to the past
There is absolutely no question that houses are less affordable today than they used to be
And that's before even thinking about how salaries haven't grown anywhere near as quickly as real estate prices
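To put rough numbers on the thread's own example (the prices and rates here are the illustrative ones mentioned above, not market data), the standard amortization formula makes the comparison concrete:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortization formula: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of payments
    return principal * r / (1 - (1 + r) ** -n)

# 1980s scenario from the thread: $80k house, 15-year mortgage at 20%
then = monthly_payment(80_000, 0.20, 15)

# Today's scenario: $550k for the same house, 30-year mortgage at 5%
now = monthly_payment(550_000, 0.05, 30)

print(f"1980s: ${then:,.0f}/mo for 15 years (total ${then * 180:,.0f})")
print(f"Today: ${now:,.0f}/mo for 30 years (total ${now * 360:,.0f})")
```

Under these assumed numbers the monthly payment roughly doubles in nominal terms despite the far lower rate, and the total paid over the life of the loan is several times higher, before even accounting for the extra 15 years of payments.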
An 80k house in the 80’s means inflation alone accounts for ~$300k of the current sale price of the house. If the general area has built up at all in the last 40 years, that could account for a bunch more of that cost. Absolutely some areas and places are climbing way faster than their market wages are keeping up, but I also think a lot of housing discussion compares a house 1 hour outside of the nearest big city with that same house now in the middle of that expanded big city. Location matters a lot, and what is a great location now might well have been out in the sticks 40 years ago.
> In the 80s you could get a 15-20 year mortgage at 20%
20% was the rate for a 30-year mortgage in the 1980s. My source is specifically for 30-year mortgages.
> If your monthly payment today is less than it would be at 20%, it is only because you are expected to be paying for it at least an extra 10 years compared to the past
That's a gross overgeneralization. Interest rates are lower across the board today.
> There is absolutely no question that houses are less affordable today than they used to be
I never said they weren't, but you also haven't provided any evidence that they aren't.
> And that's before even thinking about how salaries haven't grown anywhere near as quickly as real estate prices
You're literally just repeating your original claim with no new evidence.
It really isn't relevant in the way that you think.
The fiscal cycle is a ponzi cycle following a ponzi curve; this mirrors many areas, including business-growth S-curves.
In general that characteristically means benefits start front-loaded, they have a period of diminishment, and then at a point outflows exceed inflows, where you have to pay back and keep paying more than you spent. The overall plan being that by the third stage you are dead and don't have to pay it back, or have been bought out and it's someone else's problem.
Even with normalization of the price level, it doesn't reflect purchasing power well, so you cannot really measure opportunity cost or make an accurate objective comparison.
This is the nature of fiat money distortions and why Mises was so against socialism. In his works he defines the Economic Calculation Problem, or the Socialist Calculation Problem, whichever you prefer. Sustained chaotic distortions are the ECP/SCP in action.
The nature of fiat/ponzi is that money printing in an economy debases exchange, extracting cost and forcing failures broadly to non-market socialism when that third stage happens.
This is reflected in a lack of employment, and few or no new businesses entering the market/industry. It acts like a sieve, with the money printer continuing even after no profit can be made (the point where regular business exits the market). It is a parasite that kills its host every time, but that is a problem for next quarter.
Distortions take many forms; they are chaotic, and by that nature unknowable in detail and unpredictable. Artificial supply constraint to raise the price level is one such form.
Often there are whipsaw dynamics between opposing constraints, with diminishing returns required to remain stable. A cliffside with drop-offs on either side as you approach, and you can only march forward into the ocean. At first there is a minor safe path, but eventually it converges. Hysteresis can be an impossible to solve problem without being able to change the underlying system.
Price levels being suppressed (such as Gold/Silver/Food). Price discovery being manipulated (dark pool transactions exceeding exchange volume). These are signs of chaotic distortions caused by money printing.
Economic calculation requires price discovery, which requires adversarial decision-making. Cooperative behaviors naturally occur when there are few participants. The distortions injected through the money supply have knowable (in general) dynamics and outcomes.
It's important to remember that Price != Purchasing Power. Wage suppression is also a real thing, and it's fueled by the same.
Honestly, this is a completely bogus interview that is at best, misleading, and at worst, outright lies.
> Well, all that is absolutely true, but sadly it’s even worse than that. What it doesn’t account for is people who have a piece of a job — they work an hour or two here and there, but they want a full-time job. It doesn’t account for that.
The BLS does track that as part of the U-6 unemployment rate which is near a 20 year low. The U-6 unemployment rate counts people that work less than 35 hours per week, but want to work more hours, as unemployed.
Correct, but they're saying the headlines are wrong because the statistic they use doesn't account for people who are reluctantly part-time, while not mentioning that there is a statistic that does track that, and that statistic also reports that unemployment is low.
It is a lie by omission.
This interview was about how the data and our feelings about the economy don't match. The crux of their argument is that we're looking at the wrong data and the right data shows the true state of the economy, but the true data exists and doesn't align with our feelings either.
What use is the value of reading a journal if I cannot be certain that the material is reliably peer reviewed?
I'm not sure why the podcast author is being held to a standard that should be applied to other subject-matter experts, who come way before he ever reaches out for an interview.
It is factual that Langer performed a study in which X was done, Y was measured and Z was concluded.
What is less clear is whether X was good experimental design, whether the measurements of Y were appropriate, relevant and correct, and thus whether or not Z can be concluded.
"Worldwide, at least 2.8 million people die each year as a result of being overweight or obese, and an estimated 35.8 million (2.3%) of global DALYs are caused by overweight or obesity."
It sounds like they're taking the door stop because they want the security features of the building to actually function, and the other tenants are actively compromising them
I think Google Search and Apple's ecosystem are extremely different. Google Search is trivial to leave; anyone can switch to Bing by just typing a different address in the URL bar. Switching off of Apple products is painful and difficult, and that's by design. My wife and I switched from iPhone to Android over a year ago, and we're still fighting with Apple to stop routing some text messages to iMessage when they should be going to our phones over SMS.
I don't think that's a fair comparison because they're fulfilling substantially different niches. Gemini is a conversational model that can generate images, but is mainly designed for text. Stable Diffusion is only for images. If you compare a model that can do many things and a model that can only do images by how well they generate images, of course the image generation model looks better.
Stability does have an LLM, but it's not provided in a unified framework like Gemini is.