Wait, but the point of the piece is that although college has always been transactional, behavior has changed.
If so, why would transactionalism be the cause?
Read on:
> The average student has seen college as basically transactional for as long as I’ve been doing this. They go through the motions and maybe learn something along the way, but it is all in service to the only conception of the good life they can imagine: a job with middle-class wages. I’ve mostly made my peace with that, do my best to give them a taste of the life of the mind, and celebrate the successes.
And then, crucially:
> Things have changed. Ted Gioia describes modern students as checked-out, phone-addicted zombies.
"Things changed" is the part I disagree with. The students just have better tools to respond to the same incentives. My cohort ~15 years ago would have used just as much chatgpt if it had been available, and our spelling would have been just as bad if AIM had autocorrect when we were kids.
When better technology and lower standards allow disengaged students to pass, what you get is more disengaged students.
I don't think the author of the piece is saying there has been a cultural change among students, emanating from within. Rather the thesis is that smartphones are the culprit. "Things changed" can encompass the proliferation of smartphones.
Sure, but my argument is still that the smartphones aren't the root cause. It's the transactional nature of the thing. You can't fail students because the money would go down, so you keep passing them as they get better equipped to ignore you and face lower requirements for a passing grade.
The thing that’s changed is how much the transactional nature favors the lazy students, not the smartphones specifically.
The reason the "it's the smartphones" argument is so bad is that it implies an easy solution external to the academic system, when the root cause is internal to the system.
Why would the transactional nature favor the lazy students more now, though? What mechanism internal to the system would cause that?
In other words it sounds like you’re arguing that the root cause is “the transactional nature” but that’s the one thing that hasn’t changed. So why is it worse now?
What is it that makes students “better equipped to ignore you”?
Because the universities themselves have been constantly lowering standards. It was always a transaction but there was a price. That price is locked in a race to the bottom because administrations and professors don't care about standards.
As a counterpoint, Postel's Law as implemented in other domains has been spectacularly successful.
One classic example is in transistor networks: each node in a network (think interconnected logic gates, but at the analog level) accepts a wider range of voltages as "high" and "low" (i.e., 1- and 0-valued) than it is specified to output. In 5V logic, for example, transistors might output 5V and 0V to within 5%, but accept anything above 1.2V as "high" and anything below that as "low". (This is sometimes called the "static discipline" and is used as an example of the "robustness principle", the other name for Postel's Law.)
This is critical in these networks, but not because transistor manufacturers don't read or fully implement the spec: it's because there is invariably unavoidable noise introduced into the system, and one way to handle that is for every node to "clean up" its input to the degree that it can.
It's one thing to rely on this type of clean-up to make your systems work in the face of external noise. But when you start rearchitecting your systems to operate close to this boundary (that is, you're no longer trying to meet spec, because you know some other node will clean up your mess for you), you're cooked: the unavoidable noise will now push you outside the range that the spec's liberal input regime can tolerate, and you'll get errors.
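To make that concrete, here's a tiny simulation using the illustrative numbers above (outputs near the 5V/0V rails, a 1.2V input threshold); the noise level and chain length are my own made-up values, not from any spec. A chain of spec-compliant gates carries a bit indefinitely, while a chain that hugs the threshold and leans on downstream cleanup scrambles it:

```python
# Toy model of the static discipline, with illustrative (not real-spec) numbers.
import random

V_OUT_HIGH = 4.75   # a compliant node drives "1" at least this high (5V - 5%)
V_OUT_LOW  = 0.25   # ...and "0" at most this low
V_IN_SPLIT = 1.2    # liberal input: anything above this reads as "1"

def read_bit(v):
    return 1 if v > V_IN_SPLIT else 0

def chain(drive, stages=50, noise=0.5):
    """Push a '1' through a chain of gates; each wire adds random noise and
    each gate re-drives its output according to the given drive() function."""
    v = drive(1)
    for _ in range(stages):
        v = drive(read_bit(v + random.uniform(-noise, noise)))
    return read_bit(v)

compliant = lambda bit: V_OUT_HIGH if bit else V_OUT_LOW  # meets spec
sloppy    = lambda bit: 1.4 if bit else 1.0               # hugs the 1.2V threshold

trials = 1000
print(sum(chain(compliant) for _ in range(trials)))  # 1000: the bit always survives
print(sum(chain(sloppy) for _ in range(trials)))     # ~500: the bit gets scrambled
```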
The problem isn't Postel's law. It's adverse selection / moral hazard / whatever you want to call the incentive to exploit a system's tolerance for error to improve short-term outcomes for the exploiter but at long-term cost to system stability.
Complex systems are by definition high-dimensional. We often build them with fault tolerance and soft "failure modes" to prevent catastrophic events. But the dimensionality and complexity mean that almost every sufficiently complex system is ALWAYS running in a "degraded" mode. Once this is normalized, the failure, when it does occur, is usually catastrophic, and determining the proximate cause (work often grouped under "root cause"), much less fixing it, becomes even more difficult.
The rapid exponential growth in complexity seen in semiconductors over many decades has created guardrails in modeling and verification (e.g., the prospect of horrific yield/field failures) that prevent a lot of problems. I do worry that as Moore's Law slows (multi-chip modules are not Dennard scaling), we will lose some of this anti-fragility.
Of course, the other side of this is Muntzing: removing any part whose absence doesn't cause immediate failure.
Why doesn't it feed through to the price of food? Of vehicles? Of energy?
It only feeds through to the price of certain classes of goods: housing, healthcare, education.
Those are also "markets" that are artificially supply-constrained, through zoning, the AMA, and accreditation.
To be clear, I'm not saying that we should get rid of zoning, the AMA, and accreditation, but we should be much more careful to avoid using those tools to curb supply.
Kidding aside, most people looking for housing aren't buying land, they're buying housing, which absolutely does not have elastic supply (by policy, not by natural law).
Well, they are buying land, because housing exists on land. As discussed elsewhere in the thread, if you make the supply of housing on that land more elastic (e.g., by loosening zoning restrictions), that elasticity pretty much immediately gets baked into the price of the land itself.
This is so significant an effect that there is a highly lucrative business in simply buying low-density zoned land and going through the entitlements to turn it into a high-density zone. This does not just generate a free lunch for a developer to build more units on the same plot of land at the same price, it makes the land instantly more expensive.
> there is a highly lucrative business in simply buying low-density zoned land and going through the entitlements to turn it into a high-density zone
Sure, but this is only a lucrative business because despite the land getting more expensive, the housing units are less expensive—otherwise who in their right mind would pay as much for one unit in a duplex/triplex/etc. as they'd have paid for a single-family home in the same location?
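A toy residual-land-value calculation (entirely made-up numbers, just to illustrate how both claims in this subthread can hold at once):

```python
# Made-up numbers: upzoning a lot can raise the land value while lowering
# the price of each housing unit built on it.
construction_cost_per_unit = 350_000

# Before: single-family zoning.
sfh_sale_price = 1_000_000
sfh_land_value = sfh_sale_price - construction_cost_per_unit  # $650k residual

# After: the same lot is entitled for a triplex.
unit_sale_price = 600_000                      # each unit sells for less than the SFH
triplex_land_value = 3 * unit_sale_price - 3 * construction_cost_per_unit  # $750k

print(triplex_land_value > sfh_land_value)  # True: the land got more expensive
print(unit_sale_price < sfh_sale_price)     # True: each unit got cheaper
```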
The $/sqft of housing tends to go up as density increases... for the same reason as the article is suggesting: incomes are higher, so people can eat higher prices.
> The $/sqft of housing tends to go up as density increases
This is only true generally, not within a specific neighborhood, and it's because of correlations between demand and density.
If you look at a neighborhood with mixed SFH and condos, the condo $/sqft is lower than the SFH $/sqft. (To be clear: that's $/sqft of housing space not of land).
Having a diversity of density enables home pricing at different points. Looking only at SFH (as this article does) is missing the forest for the trees, IMO.
> Gentrification actually only affects a very small percentage of people who end up refusing to sell and holding out until they cannot afford anymore.
Not quite just those who refuse to sell: because housing costs feed into the cost of every other local service, maintenance in a gentrified area often becomes unaffordable for those who hold out. Roof replacement is the classic example. Another example (though not as relevant to the 90-year-old on Social Security) is childcare costs.
Take a look at the EPA "exception" that California has needed in order to impose more stringent fuel efficiency standards for automobiles.
Many forms of commerce or communication that are relevant across state lines (net neutrality rules, etc.) are considered a federal prerogative and states have limited ability to control these.
Yes, states could do more to fund research (and hopefully they will), but no state taxes at anywhere near the federal level, and while the NSF budget is "noise" in the federal budget ($10B out of $1.7T discretionary), it would be quite a big outlay for most states; even for California, it would represent 3%+ of the total state budget to reproduce.
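Back-of-the-envelope, using the figures above plus one assumption of mine (a total California state budget of roughly $300B in recent years):

```python
# Rough arithmetic behind the claim above; the CA budget figure is an assumption.
nsf_budget            = 10e9
federal_discretionary = 1.7e12
ca_total_budget       = 300e9

print(nsf_budget / federal_discretionary)  # ~0.006: well under 1% of discretionary spending
print(nsf_budget / ca_total_budget)        # ~0.033: a bit over 3% of CA's total budget
```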
Though, now that I look at that number, maybe it's actually an opportunity for CA...
> Take a look at the EPA "exception" that California has needed in order to impose more stringent fuel efficiency standards for automobiles.
Yet, WA now has a carbon tax for companies operating within its borders. And it was found constitutional by the SCOTUS.
> Many forms of commerce or communication that are relevant across state lines (net neutrality rules, etc.) are considered a federal prerogative and states have limited ability to control these.
The interstate agreements are allowed as long as they don't infringe on the sovereign Federal power.
And there are plenty of workarounds. For example, CA has these ridiculous agricultural inspection stations on freeways. They are legal because they don't technically deny you the freedom of movement; declining to submit to an inspection simply revokes your driving privilege in CA.
"Securing the funding" is much closer to the work than "providing roads and sewage".
In most sciences, to actually secure the funding, you need to argue for why the problem is important, why the team has a shot at solving it, and what possible approaches look promising. Then you need to actually advise the team and support the work.
With apologies to Bill Buxton: "Every interface is best at something and worst at something else."
Chat is a great UI pattern for ephemeral conversation. It's why we get on the phone or on DM to talk with people while collaborating on documents, and don't just sit there making isolated edits to some Google Doc.
It's great because it can go all over the place and the humans get to decide which part of that conversation is meaningful and which isn't, and then put that in the document.
It's also obviously not enough: you still need documents!
But this isn't an "either-or" case. It's a "both" case.
Yeah, and in fact this is about the best-case scenario in many ways: "good defaults" that get you approximately where you want to be, with a way to update when those defaults aren't what you want.
Right now we have a ton of AI/ML/LLM folks working on the first, clearer challenge: better models that generate better defaults. That's great, but it will never solve the problem 100%, which is the second, less clear challenge: there will always be times you don't want the defaults, especially as your requests become more and more high-level. It's the MS Word challenge reconstituted in the age of LLMs: everyone wants 20% of what's in Word, but it's not the same 20%. The good defaults are good except for the 20% you want to be non-default.
So there need to be ways to say "I want <this non-default thing>". Sometimes chat is enough for that, like when you can ask for a different background color. But sometimes it's really not! This is especially true when the things you want are not always obvious from limited observations of the program's behavior—where even just finding out that the "good default" isn't what you want can be hard.
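As a rough illustration of the shape I mean (a hypothetical sketch; none of these names correspond to any real product's API): the defaults come back as an inspectable, structured object, and overrides are explicit rather than buried in chat:

```python
# Hypothetical sketch of "good defaults + explicit overrides"; the names are
# invented for illustration only.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SlideStyle:
    background: str = "white"   # good defaults...
    font: str = "Inter"
    accent: str = "#4477aa"

def generate_style(prompt: str) -> SlideStyle:
    """Stand-in for the model: returns its best-guess defaults for a request."""
    return SlideStyle()

style = generate_style("quarterly review deck")
print(style)  # you can *see* which defaults you actually got

# The easy case: the override is obvious, so chat (or one line) is enough.
dark = replace(style, background="#111111")

# The hard case: you don't know which default is wrong until you can inspect
# and compare them, which is exactly where chat alone stops being enough.
```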
Too few people are working on this latter challenge, IMO. (Full disclosure: I am one of them.)