I see you speak from experience. I feel like I'm watching the same cycle play out over and over again, which is that a new, transformative technology lands, people with a vested interest spend a lot of time denouncing it (your examples mostly land for me), the new technology gets over-hyped and fails to meet some bar and the haters all start crowing about how it's just B.S. and won't ever be useful, etc. etc.
Meanwhile, people are quietly poking around figuring out the boundaries of what the technology really can do and pushing it a little further along.
With the A.I. hype I've been keeping my message pretty consistent for all of the people who work for me: "There's a lot of promise, and there are likely a lot of changes that could come if things keep going the way they are with A.I., but even if the technology hits a wall right now that stops it from advancing things have already changed and it's important to embrace where we are and adapt".
Such a sane, nuanced take on new technologies. I wish more people were outspoken about holding these types of opinions.
It feels like the AI discourse is often dominated by irrationally exuberant AI boosters and people with an overwhelming, knee-jerk hatred of the technology, and I often feel like reading tech news is like watching two people who are both wrong argue with one another.
Moderates typically have a lot less to say than extremists and don't feel a need to have their passion heard by the world. The discussion ends up being controlled by the haters and hypers.
New technologies in companies commonly have the same pitfalls that burn out users. Companies have very little ability, at the purchasing level, to tell whether a technology is good or bad. The C-levels who approve the invoices are commonly swayed not by the merits of the technology but by the persuasion of the salespeople or the fears of others in the same industry. This leads to a lot of technology that could/should be good being just absolute crap for the end user.
Quite often the 'best' or at least most useful technology shows up via shadow IT.
And a subgroup (or cousin?) of the exuberant AI boosters are the people absolutely convinced that LLM research leads to the singularity in the next 18-24 months.
I really do wish we could get to a place where the general consensus was something similar to what Anil wrote - the greatest gains and biggest pitfalls are realized by people who aren't experienced in whatever domain they're using it for.
The more experience you have in a given domain, the more narrow your use-cases for AI will be (because you can do a lot of things on your own faster than the time spent coming up with the right prompts and context mods), but paradoxically the better you will be at using the tools because of your increased ability to spot errors.
*Note: by "narrow" I don't mean useless, I just mean benefits typically accrue as speed gains rather than knowledge + speed gains.
Is AI like other technologies though? Most technologies require a learning curve that usually increases as the technology develops and adds features. They become "skills" in themselves. They are tools to be used; not the users of the tools themselves.
AI seems like the opposite to me. It is the technology that is "the learning curve" in the long term. Its whole point long term is to emulate learning/intelligence - it is trying to be the worker, not the worker's tool (whether it succeeds or not is another story). The industry seems to treat it as another tech/tool/etc which you need experience/training in, and I wonder whether that is the right approach long term.
Many people will be wondering (incl myself) whether learning to use "AI" is really just an accessibility/interface problem. My time is valuable; it's only worth bothering if the productivity gains (which may only last a year or so before it changes again) outweigh the learning time/cost of developing tools/wrappers/etc. Everyone will have a different answer to this question based on their current tradeoffs.
I ask the question: If I don't need it right now (e.g. code is only 10-20% of my job for example), why bother learning it when the future AI will require even less intelligence/learning to use?
Unfortunately, thoughtful, nuanced takes don't make headlines, don't get into Harvard Business Review, and don't end up as memos on the CEO's desk. Breathless advocacy and knee-jerk dismissals get the clicks and those are the takes that end up bubbling to the top and influencing the decision makers.
What do you do if it ruins your life on the way to getting your day in court? If you get fired, your employer won't be forced to rehire you, and they are likely protected from any retaliation against you because they were acting in good faith. You aren't going to sue in civil court and get financial restitution from an underage kid or someone with no assets worth seizing. You still lose in either case.
If a deepfake on the internet can get you fired, you need a better employer, or a better contract with your employer, because the actions of third parties outside of your control should not affect your employment relationship. More importantly, your employer should recognize and understand that fact.
That's a consequence of living in a free society. The law does not get involved until after the fact anyway, so what protections would you be seeking? Laws don't stop people from taking action, they only provide consequences for those actions.
Most informed analysts say Russia has the opposite problem. They don't have any more meat for the grinder without tapping the middle and upper class of Russian citizens, which will have repercussions, potentially serious ones, for Putin.
Well, North Korea benefits from getting experience and field-testing radios and winter underwear. The drone environment is very good advertising for their goal of becoming a major arms dealer.
Yes, North Korea gains immediate benefit (money or material aid) and a theoretical delayed benefit (demonstration of mercenary abilities, and real world experience for their troops if they survive). Russia gains bodies to throw against bullets. If every North Korean soldier died but took several bullets for Russian soldiers, it's a win for Russia. They do not care about the North Korean soldiers or North Korea.
This ludditism shit is the death drive externalized.
You'd forsake an amazing future based on copes like the precautionary principle or worse yet, a belief that work is good and people must be forced into it.
The tears of butthurt scientists, or artists who are automated out of existence because they refused to leverage or use AI systems to enhance themselves will be delicious.
The only reason that these companies aren't infinitely better than what Aaron Swartz tried to do was that they haven't open accessed everything. Deepseek is pretty close (sans the exact dataset), and so is Mistral and apparently Meta?
Y'all talked real big about loving "actual" communism until it came for your intellectual property, now you all act like copyright trolls. Fuck that!
You sure are argumentative for someone who believes they are so correct.
In any case, I don’t think I’m a Luddite. I use many AI tools in my research, including for idea generation. So far I have not found it to be very useful. Moreover, the things it could be useful for, such as automated data pipeline generation, it doesn’t do. I could imagine a series of agents where one designs pipelines and one fills in the code per node in the pipeline, but so far I haven't seen anything like that. If you have some kind of constructive recommendations in that direction, I’m happy to hear them.
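The two-agent idea above can be sketched roughly like this. This is a minimal toy in Python: the agent functions are stubs standing in for what would really be LLM calls, and every name here (the node list, the function names) is hypothetical, not a reference to any existing tool.

```python
# Toy sketch of a designer-agent + coder-agent pipeline generator.
# In a real system, design_pipeline() and write_node_code() would each
# wrap an LLM call; here they are deterministic stubs.

def design_pipeline(task: str) -> list[str]:
    # "Designer" agent stub: propose an ordered list of pipeline nodes.
    return ["load", "clean", "transform", "summarize"]

def write_node_code(node: str, task: str) -> str:
    # "Coder" agent stub: fill in the code for a single node.
    return (
        f"def {node}(data):\n"
        f"    # TODO: {node} step for {task!r}\n"
        f"    return data\n"
    )

def build_pipeline(task: str) -> str:
    # Orchestrator: ask the designer for nodes, then the coder per node.
    nodes = design_pipeline(task)
    return "\n".join(write_node_code(n, task) for n in nodes)

print(build_pipeline("sales data analysis"))
```

The point of the split is that the designer only reasons about structure while the coder only sees one node at a time, which keeps each prompt small and checkable.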
Honestly, I donate enough money to politicians to make them stand up and take notice when I email or call them and share my thoughts, which leads me to the conclusion that people in the middle and lower class are going to need to find ways to pool money in such a way that they can change their party politics. It's not that all politicians are completely motivated by money, but IMO you unfortunately have to aim at the lowest common denominator.
Legally, you can only donate $3500 to any politician (more, if you do something illegal and are not caught...). There are complex limits on how much notice that buys you when you say something: for a small city that limit will make them listen, but not at the national level or even in a large city.
What you can do is get out votes. People knocking on doors is still one of the largest drivers of votes so if you organize those systems they will listen to you.
I donate to the party, and I donate at the individual limit. At that level they still care because people who donate at that level are connected with other people who donate at that level, and those people tend to reach out and coordinate. Periodically I get emails from other donors who ask me to reach out to such and such a person, a candidate or a party rep, and encourage that they take a look at X issue through a particular perspective.
I think more people would benefit from forming Super PACs and using that as leverage in pushing political change with parties.
I am not at all familiar with the US system. How come there is a $3500 donation limit to politicians, but the tech billionaires have donated hundreds of millions to the inauguration fund?
My problem with this is that people like Larry Ellison are more likely to want to use this against other people but would excuse themselves from any consequences.
This is fascinating to me, having been at a few companies with market caps north of $50B in the tech space. The senior exec's office space aren't fundamentally different from anyone else's. I wonder if it's an old-money vs. new-money industry thing.
This was an older company. It was one of those "IBM satellite companies" (that's what I called them before they used that term for something else). It did business with a lot of IBM customers, providing services and hardware that IBM chose not to provide itself, but with IBM's blessing and cooperation.
That's a very valid argument. Both SpaceX and Tesla are quite capital efficient. Maybe another angle to consider is what's being optimized for. What outcomes would be considered successful for these federal agencies? That's probably going to tell us more about whether the austerity measures that seem likely will result in more efficient use of resources to create successful outcomes.
One thing that seems worth thinking through more is whether the stated outcomes of those agencies are what's actually being optimized for, or whether those are suborned for personal gain by a few parties.
This is not correct. SpaceX is covered by ITAR and therefore cannot hire foreigners.
Of the approximately 70,000 Tesla employees in the US, fewer than 2,000 are H-1B workers. The rest are US citizens or permanent residents. Tesla's manufacturing is much more vertically integrated than other auto manufacturers, so they rely almost entirely on their US factories to produce the cars they sell in the US. Other auto makers tend to do more manufacturing overseas to save on labor/safety/environmental costs, then do final assembly in the US to avoid tariffs.
As someone whose home insurer pulled out of California, forcing me to scramble to find another carrier, I looked at the FAIR plan and it is completely untenable for most people. My insurance was already high, ~$2000/year for coverage that would rebuild our house, and under FAIR it would have gone up to $12000/year.
I mostly agree with the article that insurance is grounded in statistical measures of risk and there's no point railing against it. Norms are going to have to adapt to increased risk and how we build homes and infrastructure needs to shift away from short-term, low-cost thinking to longer-term solutions with a higher-upfront cost and lower TCO given the new constraints. Things like burying power lines, aggressively managing fire danger, and homes that are built to be more sound to natural disasters have to become the status quo.
Most of these things are already possible today. In my neighborhood, PG&E did an assessment and it would cost every homeowner on the street ~$25,000 to have the power lines buried. I would have opened my wallet immediately to reduce the fire risk, but it got caught up in politics and policy. When we had some renovation done on our house, my wife and I insisted on some of the work being done in ways that would make the house safer and easier to maintain over the long term. The contractor balked at first, saying it would cost us an extra couple of thousand dollars. I had to point out that an extra $3000 to make sure things lasted an extra 5 - 10 years and were easier to maintain and upgrade meant nothing. But people have to insist on doing better, because right now the norm is to cut corners on everything to save what is in many cases a negligible amount of money over the life of the work, or against the cost if there is a disaster.
The building codes will need to reflect the new normal. Defensible perimeters, metal roofs and masonry or cementitious exteriors are a must for many areas going forward. Log cabins amongst the pines just aren't tenable in the West any more.
You say that... but a well built log cabin, with a Class A fire resistant roof, is rather likely to survive a wildfire unbothered if the ground a couple feet around it is kept cleared.
They're simple (not a lot of corners for burning things to wedge into), they tend to be very well sealed with smaller windows (so less chance of a window breaking and allowing embers in), and the amount of thermal energy it takes to light a full log on fire is quite high. Radiant heat from a forest fire isn't going to bother a log cabin. It might darken the wood somewhat, but it won't light smooth logs on fire. Even random firebrands and such lack the energy to bother the wood.
The only concern would be a shake roof - that would catch fire easily and burn the place down. But a well built and "tight" roof (no massive eaves with vents into an attic, just minimal overhangs) of Class A fire resistance would work just fine.
Metal roofing is not inherently fire resistant, either - it depends on the materials, and what's below it. Some metal roofing can transfer enough heat to the wood below to light that on fire, even without direct flame spread. And, non-intuitively, a lot of asphalt shingles are Class A fire resistant when properly installed.
What doesn't work well, obviously, are the sort of expensive homes with "all the architectural features," lots of inside corners that trap debris, and an incredibly complex roofline.
People forget that you don't have to modify a McMansion to whatever requirements you're adding - you can build something entirely different.
"Earthships" or other hobbit-hole like houses are almost completely fireproof as long as the entries are handled correctly - anything that can start a fire through three feet of earth is probably a volcano anyway.
Don't most of those suffer from serious ongoing humidity problems? I've looked into that style of housing in the past, and it seems like it's always having issues with mold, mildew, and other "issues of running 90-100% interior humidity for long periods of time" sort of problems. I think they're okay in drier climates - IIRC they were developed in New Mexico, which is "bone dry nine months of the year, and somewhat drier the other three."
They do - and there are ways to counteract it (the usual problem is similar to damp basements compounded by the lack of air movement and humidity control).
It’s a matter of cost (it’s almost never worth it) and tradeoffs.
But if fire survivability is paramount, it is an option.
A "log cabin amongst the pines" with a decent sized "yard" clearance area, a good roof, and where the sides of the house are kept reasonably moist is pretty much fireproof.
The advantage of a metal roof as opposed to most others is the reduction of nooks and crannies where embers can get trapped and light the roof on fire. Metal roofs are also more slick, and dangerous to work on, than any gritty material. A hipped standing seam metal roof with a moderate pitch is going to shed embers pretty handily on both the windward and leeward sides.