I think this misses the mark somewhat. Yes, software can lead to unbounded complexity, unlike many physics-based engineering disciplines.
However, at the end of the day there is an input, an output, and the compute and memory needed to run the thing. If we look at it that way, we realize we never actually left the bounded physical realm, and we can still engineer software systems against real-world constraints. We can judge their efficiency and breaking points.
What's very different is the cost of changing the system to do something new, and that's where this unbounded complexity blows up in our faces.
>However, at the end of the day there is an input, an output, and the compute and memory needed to run the thing. If we look at it that way, we realize we never actually left the bounded physical realm, and we can still engineer software systems against real-world constraints. We can judge their efficiency and breaking points.
This is a common sense view of computation that's unfortunately wrong.
The simplest counterexample is the busy beaver program: with as few as 12 states we have saturated the computational capabilities of the universe, yet the machine looks completely safe and sane for the first few states you would be testing against.
You may call it pathological, and you'd be right, but the point is that you never know under which rug a function that takes more computation than the universe can supply is hiding.
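That hidden blowup is visible even at two states. Below is a minimal Turing machine simulator (an illustrative sketch, not anyone's production code); the rule table is the well-known 2-state, 2-symbol busy beaver champion, which halts after 6 steps leaving 4 ones on the tape, while superficially looking no different from any other four-line rule table.

```python
# Minimal Turing machine simulator (illustrative sketch).
# Rules map (state, symbol) -> (write, move, next_state); state "H" halts.
def run(rules, max_steps):
    tape, head, state, steps = {}, 0, "A", 0
    while state != "H" and steps < max_steps:
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return state == "H", steps, sum(tape.values())

# 2-state busy beaver champion: halts after 6 steps with 4 ones written.
# Nothing about this table hints at its behavior in advance.
BB2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
```

With more states, the step counts grow faster than any computable function, which is the point: you cannot bound the runtime just by eyeballing the rule table.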
By comparison power electronics engineers don't have to formally prove that they didn't accidentally include a nuclear power plant in their e-scooter design.
I think you just made my point. If designing an e-scooter, you'd look at the power needed across the problem space. Even more so, you might put in safety features like a temperature monitor so electronic components don't fail because someone decided to go up a steep 12-mile mountain path and overheat the battery.
If I were designing a software system, I could introduce a time constraint. An imagined conversation:
"How long will it take to get an answer?
Between half a second and the heat death of the universe.
OK. Can we just issue a timeout error after 1 second?"
This is putting controls in place so the system doesn't exceed its constraints. Although the hypothetical system might be able to do the job for any input, in practice it can't, because we haven't found a more efficient solution for certain known and unknown scenarios.
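The imagined conversation above amounts to a cooperative timeout: check the clock as you go and give up once the budget is spent, rather than running until the heat death of the universe. A minimal sketch (the names `first_match` and `DeadlineExceeded` are illustrative, not a real API):

```python
import time

class DeadlineExceeded(Exception):
    """Raised when a computation overruns its time budget."""

def first_match(predicate, candidates, budget_s):
    # Cooperative timeout: check the monotonic clock between candidates
    # so an unbounded search fails fast instead of running forever.
    start = time.monotonic()
    for x in candidates:
        if time.monotonic() - start > budget_s:
            raise DeadlineExceeded(f"no answer within {budget_s}s")
        if predicate(x):
            return x
    return None
```

For example, `first_match(lambda n: n % 7 == 0, range(1, 100), 1.0)` returns quickly, while the same call over `itertools.count()` with a predicate that never matches raises `DeadlineExceeded` once the budget runs out.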
And then you quickly find out that the Turing machine on your lap doesn't actually have infinite tape. Do you honestly believe there's no other human endeavor where you can DoS yourself?
If on the other hand you're speaking of the theoretical computational needs of the program you just wrote, then your earlier dismissal of mathematics and its "even worse track record" is all the sillier.
Mewing is something intended to address this, but the evidence isn't there. Everyone wants a non-invasive solution rather than jaw expanders, braces, retainers, etc., so depending on where your bias lies, you might be against "Big Ortho" and try this, or you could invest in proven orthodontics.
Dr. Mew doesn't claim that orthodontics don't work; he points out that they are expensive and lucrative, and he claims that if we maintain a "jaw healthy" diet from childhood, orthodontic problems will be much less prevalent in the population (a related but independent claim from the "mewing" regimen). He says the evidence is found by comparing modern jaws/bites with historical skulls, which show a dramatic "20th century" emergence of orthodontic problems, indicating a developmental issue rather than a genetic one.
I don't know if he is correct or not, but it's a claim that can be independently measured/verified. Instead of using and publishing such sound science, the orthodontia community is using "cancellation" against him, which certainly matches the lucrative aspect, though it doesn't provide direct evidence.
you sound angry, science is best conducted from a neutral POV
I've listened to his evidence, repeated it clearly here for you, and am aware of no counterevidence.
There is nothing wrong with calling his license revocation over this precise topic "cancellation"; cancellation is a more precise term than "full of shit," which could refer to constipation.
You don't seem curious to learn, the hallmark of HN's ethos.
There's definitely an existential question around whether fusion will ever be able to beat renewables plus batteries, but who knows: with our energy demands ever increasing, at some point renewables may hit a breaking point in land cost.
I'm generally pro-publicly funded research. There's no direct ROI on, say, the LHC, but it does fund advanced manufacturing and engineering work that might enable other, more practical industrial applications. The ROI might be a century away.
Another way of looking at it: CDK's main product is an ERP tailored to the automotive industry. These aren't systems with short setup times, and there is major risk in trying to replatform your accounting system. How do you pull off an ERP migration while your existing one is down? Further, almost everything the dealer does, from inventory management to service, is integrated with this system of record, either directly or via data integrations.
Some of the smaller mom-and-pop stores just use small-business accounting systems like QuickBooks, but those get pretty tedious to maintain with any sizable number of sales or employees per month.
Lately I've wondered... what if this flow of time wasn't true? At least not at universe scale? Messier than just gravitational effects, but actual weird topography in time?
There is nothing scientific about this conjecture; it's simply a thought I haven't had time to fully contemplate. What if there were loops and turns such that light and energy from distant galaxies would loop back around, not just in space but in time, creating weird feedback loops?
Watches are common, indicators are common, warnings tend to be very last minute.
I consider myself a very weather-aware person living near the edge of tornado alley in Dallas. I get all the alerts and generally keep a strong watch on radar development and storm arrival times (hail is just as much a concern as tornadoes).
In general, if there is a detection of rotation or a strong hail core on radar, emergency sirens will go off near the affected area. Sometimes it just happens too fast, so if there is another method, like the article describes, to detect a strong potential that a tornado is forming, it will absolutely reduce casualties.
As an example I lived through in October 2019: there was one hour between a Tornado Watch being issued and when the EF2/EF3 hit the ground. Watches generally last a long time and cover a large area, so they aren't particularly helpful to me other than as a signal to check the radar regularly.
Because I was already glued to my phone, I saw the warning right away and was able to text friends who lived a few minutes from the tornado touchdown point that there was a tornado right next to them. Their sirens hadn't gone off yet; by the time they had taken shelter, they heard the sirens and the wind kicking up right after. They got off light on damage compared to the rest of their neighborhood, but I can't imagine someone out walking their dog or running an errand and then having only 1 or 2 minutes to find shelter. I'm still amazed this thing didn't cause more injuries, particularly in the early minutes when the news crews and meteorologists were playing catch-up.
https://en.wikipedia.org/wiki/Tornado_outbreak_of_October_20...
Maybe this tech would have helped give a clearer indicator versus the usual approach of waiting to see something on radar or manually spotting it. Or maybe some storms will just form too fast to have any useful indicators.
>I consider myself a very weather aware person living near the edge of tornado alley in Dallas,
Howdy neighbor! Do you find it mildly calming when looking at the weather warning tweets from Delkus while his avatar is cheersing you with a cocktail?
This tech might provide another layer of confidence for tornadoes, but as you mentioned, hail is another story in and of itself. I have mixed feelings about the "tornado sirens" being used for hail/severe weather without a tornado, though I lean towards it being a good idea. I'm actually in Dallas, not one of the suburbs, so we tend to get protected from tornadoes by the infamous heat dome. Earlier this year, the sirens went off for hail, and they were talking about softball-size hail and larger to the west in Arlington. So, yeah, people definitely need to know about that. It does remind me of the Hawaii debate about using the tsunami sirens to warn of the fire, but it's totally different in that the hail/tornado siren means "seek shelter," not a confusing "run towards the danger."
Tweets from Delkus and the Fort Worth National Weather Service are the main way I pay attention to new developments. Typically stuff will get posted there before live TV coverage starts. There's almost always a graphic posted of what the window is they expect for storms to form which helps me understand what I need to pay attention to.
I have a relative in another state who's a meteorologist, and he doesn't have nearly as much fun as Delkus's team does online.
As far as over-indexing on preparedness goes, there have been at least two times our kids' school dismissed early in light of a severe weather forecast. They do this several hours ahead of time (parents can't react that fast anyway), only for the storms to be more normal-strength thunderstorms by the time the car line starts. I can certainly appreciate the intent, but it's almost never that certain what's going to happen unless the storms are already popping up.
I like Delkus' whole team. There's the younger guy that clearly loves the tech, and he's always making adjustments to show things that the old guys probably wouldn't think about. The Delkus and Finfrock types come across more as the guys that would just plug something in and use the defaults for everything. The other guy says "hold my beer" while he customizes everything.
I lived down the pike from the place where the largest tornado in history touched down, and you are spot on about warnings and knowing beforehand. If you are letting a tornado sneak up on you, you are doing it very wrong.
Hell, you know bad things are coming when it is 75F in the morning in December. The last bit of tornado weather telegraphed itself in the morning and didn't touch down until mid afternoon, yet I heard people on motorcycles when the storms hit.
The person you are replying to is saying the opposite. Watches are common (and the sky everywhere will warn you). But a tornado WARNING comes without enough time to broadcast and react.
No one is surprised by all the conditions that point toward a tornado. The problem is that in certain areas, the tornado watches are occurring ALL the time and you can't stop every time there is one.
So, yeah, in hindsight everyone saw it coming. But no one thinks THIS time will actually be the time, so it does sneak up on people. My father recently went to bed during a tornado warning, and one touched down less than a mile away. He got lucky. More reliable indicators of actual tornadoes will be helpful.
Ya, because people have to go to work and such. I've been in countless tornado watches on days that could have spawned a tornado. Some years this covers a significant amount of spring. It's difficult to always keep an eye on that.
And I've been involved in tornadoes on days where there was zero risk of the event. Surprise, you're getting a tornado out of a totally random storm that was not predicted.
So much this. Where I grew up in Kansas it can be bluebird skies in every direction with a clear weather forecast at 3pm and by 5pm it's nothing but darkness and hail.
Tornado alley is where warm air from the gulf collides into cold air coming from the NW. The line where these collide shifts around. Storms can form very unpredictably and rapidly.
On the one hand it's awe inspiring to see a massive thunderhead materialize out of nowhere in just a couple hours. On the other it means what the other commenter is saying is very misleading. Texas, where the other commenter is apparently from, is at the southern end of tornado alley and I believe not quite as dynamic, which may explain their attitude somewhat.
I definitely think housing is more easily manipulated, since there's little standardization of the condition of a house or of the materials and features that would drive pricing (sq. footage, # of bedrooms/bathrooms, pool, garage size, year built). Everything else is neighborhood comparative sales.
The auto world works a little differently, since there is an entire wholesale operation behind the scenes that also drives pricing. There are several alternatives to KBB, although less targeted at consumers. It's also way more standardized in condition reporting... but I know too much about the industry.
I'm not so certain. There are only about a dozen carmakers. In any meaningful city in America there are going to be several hundred, if not thousands of, landlords. It seems hard to manipulate in that sense. Despite conspiracy theories, landlords are highly incentivized not to let apartments sit empty without rent coming in.
I do think that the introduction of software has probably made price discovery more efficient and gives landlords more confidence that if they raise rents, they will easily have offers even if the current tenants leave.
Yes, there are thousands of what technically qualifies as a 'landlord' in every city, but the distribution of landlords by property type skews strongly. Buildings with >10 units are owned by a relatively small number of large landlords with many properties. Properties with 1 or 2 units are often owned by individuals with one or maybe a couple of properties. So depending on the neighborhood, properties might be owned by lots of different small-time landlords, or by just a couple of large corporate landlords.
For some of the more desirable and built-up urban neighborhoods, small individually owned properties are all but bought up and almost all of the units are in large complexes owned by large corporate landlords.
In addition, very often the small landlords are hands-off, delegating most practical decisions on renting to a handful of large management companies, and if they became unsatisfied with that arrangement they would exit the landlord role rather than manage directly. So, in market effect, they function more like a smaller number of large landlords who happen to pay out profit shares to a larger pool of people than like a large number of landlords directly participating in the rental market.
I would just wager the number of people locked into a super-specific neighborhood is not that large. Even so, I don't think the power-law distribution is that extreme here compared to other parts of the economy.
I live in NYC. The housing market sucks here, but I don't think it's because of collusion in setting prices. The supply just isn't as high as demand (and millions of voters who already own property seem fine with that).
It probably is more so in DC than in NYC. Most of the young professionals I know there only consider living in northwest, south of U street, which is only a few neighborhoods.
But if we consider that people would consider living in the larger commute area, the article says:
> Across a wider Washington-Arlington-Alexandria area, more than 90 percent of units in large buildings are subject to RealPage pricing
And these large buildings are the bulk of the housing stock in these areas; they're not competing with single-family homes or duplexes there.
Right - but what percent of those large buildings are owned by the same people?
I'm not suggesting an app couldn't help coordinate prices in this instance, but I am suggesting that as long as the buildings are owned by other people they still have strong incentive to compete with each other.
If I own a building and the app suggests I raise prices, but then the units don't fill, I'm just going to lower prices.
Whatever the number is, 90% of them are owned by companies that were colluding on pricing.
> If I own a building and the app suggests I raise prices, but then the units don't fill, I'm just going to lower prices.
Yes, but if everyone raises prices together, people won't just choose to be homeless. Collusion is anti-competitive because it ensures that the participants won't compete with each other on price. They all raise rates together, so people are stuck: their rent went up, but so did rent everywhere.
In a competitive market, some will raise rates, some will keep them the same, and some may drop them. Those differences in strategy are what create competition. When everyone cooperates to do the same thing, it eliminates competitive pressure.
I see what you are saying. My comment was more geared towards how a Zillow number or KBB number could move the market and I didn't consider that in the context of the article about rent.
It would be interesting to see how rent markets change if more data becomes available.
In the auto-industry, a car dealer can offload the unit to the wholesale market to minimize loss, whereas a landlord is less liquid.
The inventory pressure is there in both situations. An unsold car after 30-60 days is a problem both because of typical retail business dynamics and additional factors with general vehicle depreciation and high maintenance overhead of a vehicle compared to other capital goods, but the exit strategy is there and risk exposure can be limited.
Even a hybrid mission with humans in orbit doesn't make sense. True, you'd bring less fuel since you skip landing/take-off from Mars, but for that same cost you could send way more robotic workers and just deal with speed-of-light delays (3-20 minutes).
If there were significant uncertainty about which resources needed to be deployed where, I could see a benefit to having an onboard team of humans who could assemble workers or payloads on the fly from orbit. However, this would be a big shift from the current mindset of designing robots for exact problems/solutions with precise payloads to instead having an excess of resources on board.
If the perspective shifted to "we're colonizing Mars so every ounce of metal in orbit will get used at some point" this is less of a concern.
Harder problems:

* Getting real deep into detecting OS/instruction set and edge cases
* Constantly validating permissions on every directory and file
* Constantly verifying checksums on everything put in place
* Concurrency controls to make sure the user didn't launch the installer twice, or that the system wasn't live and running when being reinstalled
* Dependency verification was its own rat's nest of problems
* Uninstalling
Easier problems:
* Logging
* Status tracking (except for really large files things get weird...)
* Aborting/Cancelling install
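For the checksum item in the harder list, a minimal verification sketch might look like the following (the helper name and chunk size are illustrative; streaming in chunks avoids loading large installer payloads into memory):

```python
import hashlib

def verify_checksum(path, expected_sha256):
    # Stream the file in chunks and compare the SHA-256 digest against
    # the expected hex string. Returns True only on an exact match.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

In practice an installer would call this for every file it puts in place and abort (or retry the copy) on a mismatch.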
Logging is easier - but deceptively so! Most problems have hidden depth, and logging is certainly a great example.
So - do you want to log variables for debugging purposes once installers get to a certain level of complexity? Great. Now you're logging people's usernames and passwords, and you have to add some functionality to not do that.
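One common hedge for that problem is a redaction filter in the logging pipeline, so sensitive values get scrubbed before any record reaches a handler. A minimal sketch (the class name and regex patterns are invented for the example, not a standard API):

```python
import logging
import re

class RedactFilter(logging.Filter):
    """Scrub password/username values out of log messages before emit."""
    PATTERNS = [
        re.compile(r"(password=)\S+", re.IGNORECASE),
        re.compile(r"(user(?:name)?=)\S+", re.IGNORECASE),
    ]

    def filter(self, record):
        msg = record.getMessage()  # merge any % args first
        for pattern in self.PATTERNS:
            msg = pattern.sub(r"\1[REDACTED]", msg)
        record.msg = msg
        record.args = ()  # args already merged into msg above
        return True       # keep the (now redacted) record
```

Attaching the filter to the installer's logger (`logger.addFilter(RedactFilter())`) redacts every message logged through it, regardless of which handlers are configured.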