Tech Progress Is Slowing Down (wsj.com)
58 points by lxm on Feb 18, 2023 | hide | past | favorite | 105 comments



Tech progress is slowing down because all of the entrenched players are protecting their cash cows rather than innovating. I also think that the ability of these companies to capture practically all talented tech employees with inflated salaries has harmed the industry and diminished creativity. The fracturing of these massive conglomerates would hopefully bring a Cambrian explosion of new tech.


Did you read the article? Tech progress in the areas the article talked about slowed down like 70 years ago.


I read up until the paywall


<https://news.ycombinator.com/item?id=34849488>

Archive.Is (<https://archive.today>) defeats most paywalls these days. There are other options such as <https://12ft.io>.

It's generally presumed that HN readers will be aware of this.


Many people, myself included, would prefer paywalled content didn’t get submitted to HN at all.

The quality of paywalled articles is low compared to the other articles submitted and the discussions that follow are significantly worse on average.


Understood, and some sympathies. Take it up with dang, or submit an Ask HN piece.

That said: you have the option of flagging such submissions, provided you're over the fairly modest karma limit, which it appears you are.


Same. I don’t know why people keep engaging with wsj content.


So much this. It's time to go Ma Bell on Google et al.


How would you split up Google?


The US Govt is currently suing Google to make them divest their ad exchanges. That would be a good start.


As far as I'm concerned, the author of this article has not made a strong enough argument to draw such a bold conclusion.

The author points to patent research, showing that innovation in areas outside of computers and electronics has slowed down, then admits "of course" computers and electronics aren't included in this. But computers and electronics dominate our world. If you exclude them and say "tech progress is slowing down!", you're saying almost nothing at all, because you've excluded the thing that accounts for our currently-alleged rapid rate of technological progress.

He does attempt to cover this point by asserting that increases in processor performance are slowing down. This is enough for him to draw the conclusion of: "for computers [...] the period of rapid exponential growth will soon become history." This is a massive logical leap. Processor performance isn't the only component of technological progress in the wide field of computers and electronics, and that's especially obvious in the current time where rapid progress is occurring in ML algorithms.

I think this subject is a worthy point of inquiry, but it seems to me that the article is simply taking facts and drawing unwarranted conclusions from them.


"The author of this article", Vaclav Smil, has made the argument at rather greater length than this one brief excerpt:

<https://vaclavsmil.com/category/books/>


I wonder if there's some other measure that can be made to mark the shift in technology.

For example, the processor transistor count.

Maybe (desktop) systems should be judged by total system transistor count, as GPUs have replaced processors with respect to transistors.

Or possibly, there should be a measure of transistors per person - many many people are carrying around phones with literally billions of dollars of development in them. Phones might arguably be the single most potent expression of technology on the planet. (I was going to say they serve one person, but arguably they might equally serve Apple and advertising)


> For example, the processor transistor count.

AFAIK no processor is hand-drawn anymore, not even the individual logic cells. Everything is synthesized. That makes the transistor count useless. It's like counting lines of code to assess productivity.


A favorite of mine from an earlier era:

"That the automobile has practically reached the limit of its development is suggested by the fact that during the past year no improvements of a radical nature have been introduced."

- Scientific American, Jan 2, 1909


Until the recent fundamental change of making cars electric instead of running them on gas, have there really been many other "improvements of a radical nature" since 1909? If you go back and look at cars from that period, they not only look like modern cars, they are made by the same companies. They definitely are less powerful, and yet they have similar miles-per-gallon efficiencies as many larger cars from recent years.


Why is "of a radical nature" the important part here, and who gets to define it? Is fuel injection radical? Seatbelts? Crumple zones? GPS? In 1909 a car had 35HP and could peak at 53MPH, and was an ultimate death trap in an accident. Hundreds of improvements, some more radical than others, led to the cars of today being better in virtually every way, including subtle ways nobody knew mattered at the time (like not flinging people out the windshield in an accident).


That phrase is important because otherwise it wouldn't be there: it is load-bearing in that quotation.

All of the things you mentioned seem like the kind of incremental improvements you get over an additional hundred years of iteration and improvement... but I think it is nonsensical to try to sell that refinement as being as impressive as the burst of improvement and innovation you saw as cars were first being defined.

In practice, I think a lot of people want every individual thing we do to follow some kind of exponential or even linear growth curve, but it seems much more likely that everything follows a sigmoid curve: an S-shaped trajectory wherein, after a period of slow improvements, the actual meat of a particular innovation is experienced during a much faster, almost explosive growth phase, followed by a return to slow incremental improvements that wring out the last benefits (but never become fully flat).

The reason why, on the whole, we see such great improvements in our lives is then the combination of numerous S-curves from new paradigms that overtake the old and provide an illusion of smooth and continual progress.

Like, I do think the premise of "tech progress" slowing down is strange: in the past few years alone we've seen disruptive "radical" paradigm shifts occurring that have altered how people live their lives to a pretty radical extent--though if you wanted to discount anything that was catalyzed by political and medical crises, I might be forced to cede my stance? Looking back in 40 years, this might all look incremental, as I guess a lot of it is still speculative--but for banal things like word processors or even laptops, we're clearly pretty far past the growth phase of the S-curve, and so all the things we already have aren't really improving much anymore and likely never will.


> In practice, I think a lot of people want every individual thing we do to follow some kind of exponential or even linear growth curve,

Greed.

The stakeholders just want more.


Any given change to a car might by itself fail to seem radical - I agree, yet in the decades since 1909, how cars (and trucks) are used in our world has changed radically, and radically changed our world. Paved roads and freeways, gasoline production, the repair industry, and taking for granted that a person of ordinary means (in first world countries) has the ability to travel dozens of miles to work daily. Foodstuffs can get transported hundreds or thousands of miles and be edible. My parents didn't eat a green leafy vegetable from October to May as children, because such veggies didn't grow locally in Winnipeg.


Yes, there have been massive improvements, so much so that many people take these as a given. Here are a few off the top of my head:

- power steering (most people would struggle to drive a car without it)

- anti-lock brakes

- suspension systems

- fuel injection

- multi-gear transmissions

- airbags

Now that I think about it I’m not sure I’d put the change from gas to electric as more revolutionary than a number of these seemingly mundane improvements.


Perhaps unlike some other commenters, I think that a bit of balance to the hype around the whole 'exponential' view that we're in a period of unprecedented change is reasonable.

I do find the quoted link between number of transistors and performance (Single threaded, Multi threaded? Who knows?) to be highly inappropriate without some qualification.


When you zoom in, growth is not linear – it never was, and it never is going to be.

What is more telling than the ups and downs in growth is how much stuff is obviously unsolved that will obviously be solved, but where it's not obvious yet how it's going to be solved. One such example is human (and then superhuman) natural language recognition and processing.

Right now it's worse than humans, who are not even particularly good at it, given language barriers and hearing impediments. It fails all the time. It is slow. The input needs to be clear and slow.

But why would any of that be? Why would a computer not eventually be better at recognising signal through horrible noise, better than the best human ever could be? There will be instruments in every consumer electronic device that can beat every human at audio input (that might already be true). Then, clearly, a connected device will be able to understand/translate/process the input better than any single human. And lastly, that device will then be able to offer more context and action for that information, and quicker, than any human.

This is an example of something already obvious. I don't need to know how it's going to happen. It will happen 100%, because it's obviously useful and there is no hard technical or physical limitation to any of this. At some point all the required tech will have progressed enough that you'll be able to mumble an arbitrary request like "book me x at y and also inform x that I'm running late", have an AI make it happen, and think absolutely nothing of it.

As long as we are not there, as long as the obvious and obviously doable stuff is not done, tech progress is not slowing down. It's just not linear.


One thesis is that it takes exponentially more resources to maintain linear growth in tech over time. When there aren't enough resources to push progress, growth slows down.

Look at particle accelerators. They cost a tremendous amount of money to build, leading to the discovery of all the exotic particles. Each next version costs many times more than the past ones. It just takes a lot of resources to move forward.
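A toy model of that thesis (my own made-up numbers, not from the article or the parent): if each successive unit of progress costs a fixed multiple more than the last, multiplying the budget tenfold buys only a handful of extra steps.

```python
# Toy model (illustrative assumptions only): step n of progress costs
# base_cost * growth**n, so a 10x larger budget buys few extra steps.

def progress_bought(budget: float, base_cost: float = 1.0, growth: float = 1.5) -> int:
    """Count how many unit steps of progress a budget affords."""
    steps, cost = 0, base_cost
    while budget >= cost:
        budget -= cost
        cost *= growth
        steps += 1
    return steps

for budget in (10, 100, 1_000, 10_000):
    print(budget, "->", progress_bought(budget), "steps")
# Prints 4, 9, 15, 21: each tenfold budget increase adds only ~6 steps.
```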


True, but that is all because of endless management, specifically middle management, which is pretty much killing any initiative or innovation you can have in a company.

Change the way you do management, which is a non-trivial and really difficult thing, and you will get a ton of innovation and progress and productivity.

AI allegedly can replace middle management :)


> Exponential growth has not taken place in the fundamental economic activities on which modern civilization depends for its survival — agriculture, energy production, transportation and large engineering projects.

This is inevitable in a society where progress is predicated on return on investment.


No it isn’t. “Tech” is only slowing down for a very narrow definition of tech. Perhaps we forget that rather recently the world created a vaccine for a novel virus using cutting edge techniques, that genome sequencing costs continue to fall, that genome construction costs continue to fall, that synthetic meats are on the horizon, that infrastructure for electric vehicles is now present in much of the world. Tech innovation continues to flourish; that growth just isn’t focused in computer hardware any more.


> genome construction costs continue to fall

On the other hand, construction construction costs continue to rise. Housing is of greater practical importance. It's a large chunk of every household's budget.

1. "Does construction ever get cheaper?", https://constructionphysics.substack.com/p/does-construction...


That’s anecdotal though. For an opposite anecdote, airplanes have been pretty much the same since the 70s.


Airplanes are a relatively poor anecdote. They have advanced in small but steady changes over time. The prevalence of the single aisle, 6 abreast seating of the 737 and A320 came about because airlines could reliably fill short to medium haul flights and they were cost effective in other ways. We have more efficient engines, systems have gotten safer, etc. There is a lot of work behind the scenes with aircraft.


The Wright brothers' first flight happened in 1903. Within 15 years there were dogfights in WW1. In 31 years the first commercial service over the Atlantic started (cargo), and 5 years later (1939) the first commercial passenger transatlantic line was opened. Three decades later, man got to the Moon. A few years later Concorde started regular supersonic commercial transatlantic flights.

After that we got what? Less noise, improved fuel efficiency, increased range? I started flying about 25 years ago. I don't remember that flights were noisier. As for efficiency and range, I don't give a hoot. I care about ticket prices and flights happening on schedule. Prices have gone up, innovative fees piled up on top of them, delays are oh-so-common. As for safety, 25 years ago planes were crashing about as frequently as now. For regular people like me, the advances in safety were imperceptible.


A lot more passenger miles are flown than 25 years ago: there are more flights and they go further. Fatalities per passenger mile are something on the order of 5-10 times less than in the 90s.

You also hear about every single one of them now, partly as they're rare and exciting to the media and partly because the Internet makes everything "nearby", whereas in the 90s you'd have probably only gotten a quick radio segment and a mid-paper article on a remote air crash.

https://commons.wikimedia.org/wiki/File:Fatalities_per_reven...


I can’t tell if you’re serious or not, but airline flights seem incredibly cheap to me and substantially safer than 40 years ago. It’s true that most of the safety in US scheduled aviation had been achieved by 2000.

In the 60s, 70s and 80s, flights were expensive and rarely taken by regular people. Now, they’re readily accessible to most everyone. (I can recall calling to book a “bereavement fare” to attend a family funeral and having to be prepared to send proof of death because a regular fare from BOS to PIT would otherwise be prohibitively expensive. I don’t know if those even exist anymore because the economy fares are so cheap.)


To be fair, every single one of your complaints can be attributed to innovations of capitalism. You’re experiencing fewer innovations not because they’re not happening, but because they’re serving an insatiable demand for profits for centers of capital.

That’s not to say it isn’t disappointing! Just that your marginally-the-same flight experience is that way because improvements in flight are designed to enhance the lives of executives and investors rather than passengers. If anything, they’ve made the flight experience worse for you and me as much as we'll bear, while improving flight objectively at the same time… because profits demand ever more growth.


Looking at the DoT’s stats on financials of 24 US airlines, I dunno if this critique is rooted in reality. I think domestic airlines aren’t doing a great job for their investors, either. Can’t comment on executives.


Being bad at capitalism doesn’t mean capitalism is not their MO. I don’t mean this argumentatively, just so that point is clear: I’d apply the same logic to the various banking and investment shenanigans which led to the housing market crash in the aughts, despite the fact that those services to capital ultimately proved highly counterproductive in many cases.


We've improved fuel efficiency by a factor of 8 since 1970: https://commons.m.wikimedia.org/wiki/File:Aviation_Efficienc...


> Jet airliners have become 70% more fuel efficient between 1967 and 2007.

https://en.wikipedia.org/wiki/Fuel_economy_in_aircraft#Histo...

I'd agree that aviation improvements have slowed way down, but they've certainly gotten dramatically better in that time frame.

And, while passenger airplanes do look similar to how they used to, we now have all sorts of drones flying around, some of which would have awed people in the 70s.


They may look the same, but they've changed a lot when it comes to safety: https://en.wikipedia.org/wiki/Aviation_safety#/media/File:Fa...


Except they aren't. Not when it comes to noise, fuel consumption or safety. Or weight, or range, or reliability.


> airplanes have been pretty much the same since the 70s.

You need to back up your claims.


Since the Concorde was grounded, arguably we've regressed.


Consumer tech progress is slowing down.

There are still advances in tech currently happening, but people’s daily lives have gotten as good as they’re gonna get. Most progress now is just slightly faster processing on devices, better battery life, maybe new materials here and there, medical advances if we’re lucky. But for the most part there is no big thing waiting for the consumer, like the invention of plastics or a smartphone.

Many of the advances we are seeing in tech are at the producer level. It’s getting easier to make better stuff faster, and this will just translate to more products, more content, and more processing of data.

But all this stuff will be pretty much invisible or unknown to everyday people. All they will see will be the end products, which will be greater in quantity but not much different in quality. Life will be the same for a long time now until the next big tech catalyst emerges.


I don't think people's daily lives are "as good as they’re gonna get" - there are other advances in the horizon (I think AI is gonna be the big one), and it's impossible to predict the potential impact they'll have on people's lives.


In developed countries yes, but as more products, more content, and processing of data is available, it will make it to less developed countries and cater to other cultures, resulting in similar advances there.


That’s just moving goalposts. Developed countries represent the current standard of tech, everything else is just playing catch up, it’s not “progress”.


Scale is important. If a medical breakthrough can be delivered to 1% of the population vs 50% of the population, that is progress that meaningfully improves the human condition and affects how risk averse we are to different diseases. If we deploy electric vehicles to 5% of drivers vs 100% of drivers, that is going to have a tremendous environmental impact that can benefit everyone.


Karl Marx disagrees.

Seriously, if you're rich enough maybe what you say is true, but I don't think you could say "people’s daily lives have gotten as good as they’re gonna get" until poor people in "developed countries" don't have to worry about basic necessities to stay alive.

Perhaps one could strike a couple countries off the "developed countries" list and call it a day, but that list would be much smaller.


> But for the most part there is no big thing waiting for the consumer, like the invention of plastics or a smartphone.

Household robots and augmented reality are two big obvious ones.


Household robots wouldn’t really do anything we haven’t seen before or can’t be done today with hired servants or slaves. Robots wouldn’t make much of a difference, it’s just no human required.

Augmented reality might still have a chance but who knows. That’s a lot of progress that has to be made, and it won’t be quick. Look how long VR has tried to catch on.


> Household robots wouldn’t really do anything we haven’t seen before or can’t be done today with hired servants or slaves

We're talking about consumer households

> Augmented reality might still have a chance but who knows. That’s a lot of progress that has to be made, and it won’t be quick. Look how long VR has tried to catch on.

We've just seen the start of things editing reality instead of just AR overlays with papers like Dream Mix. Yeah it is a long way off from realtime. Hardware wise we already are close to being there on displays with Varjo XR-3 for passthrough AR.

We have a path to full generative passthrough AR that really modifies the world instead of overlays, and we have the tech to keep making headsets more compact with microdisplays and pancake optics, with good enough brightness and HDR once we move to micro LED.


That's a pretty big change for the majority of the population that can't afford hired servants, or don't want to pay someone for that sort of work.

At the moment, that sort of work requires you to have a decently upper middle class income/lifestyle, and it might not anymore.


1. Access to space is becoming cheaper.

2. News coming from fusion recently, it could be more than hype.

3. High temp (ie cooled with liquid nitrogen) superconductors at industrial scale

4. mRNA for cancer and vaccines

5. ...


I was expecting something about EVs. It really seems that they are replacing ICEs at a fast pace. Not only in the stats but in the streets as well.


In studying the potential and limits of technology, there's stunningly little discussion of what technology is and how it achieves its ends.

John Stuart Mill distinguishes science from the "arts" (the term for "technology" in the 19th century) as "science most conveniently follows the classification of causes, while arts must necessarily be classified according to the classification of the effects", from Essays on Some Unsettled Questions of Political Economy.[1]

There's some discussion of the nature of technology in W. Brian Arthur's The Nature of Technology and Kevin Kelly's What Technology Wants, as well as several titles by Steven Johnson. The contribution of philosophy to the question is generally unsatisfying, though there are Jacques Ellul, Lewis Mumford, Michel Foucault, and Martin Heidegger.[2] None of these are themselves technologists, which, whilst providing some stand-off distance, also manifests much ignorance.

I've found it useful to consider what the specific mechanisms of technology are, and have come up with a nine-part breakdown, which I refer to as the ontology of technological mechanisms:

- Fuels & energy sources: Primary means for effecting change in a system. Biomass, fossil fuels, nuclear energy, environmental fluxes (solar, wind, hydro, geothermal, etc.)

- Materials: Stuff we build and process with, both structural and feedstocks. Stone, wood, fibre, vitrified materials, metals, chemicals, fluids, etc.

- Power transmission and transformation: Conversions between types or forms of power, from simple mechanisms to electronics and quantum effects.

- Process knowledge: Specific "how to" knowledge, "technology" in the vernacular.

- Causal knowledge: Understanding of properties and mechanisms, "science" in the vernacular.

- Networks: Links and nodes, physical or virtual. Transportation, communications, knowledge itself.

- Systems: Process with feedback.

- Information: Sensing, parsing, storage & retrieval, processing, and transmission.

- Hygiene effects: Dealing with unintended or undesired consequences.

The classification has seemed reasonably stable and useful to me for some years now.

From this a few aspects become clearer:

1. Each modality has its capabilities and limitations. E.g., materials vary in properties and abundance.

2. Some modalities scale linearly (e.g., the effects of additional energy are generally directly proportional to inputs), some exponentially (networks and systems), others seem to be emergent and impose non-evident but long-term costs (hygiene).

3. Virtually all exponential change seems to involve or rely highly upon network effects. These are only a limited set of modalities. (A toy comparison is sketched after this list.)

4. Tremendous advances in raw capabilities in specific areas (e.g., information) seem to provide at best limited real-world outcomes. E.g., multi-millionfold increases in computational capabilities have resulted in extension of useful weather forecasting only by a factor of days. Efficiencies of automobile and aircraft transport improve with increased informational capacities, but only to inherent limitations defined by physics (drag coefficients, Carnot / Rankine efficiency).
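To illustrate point 3 above (my own toy comparison, using the common n(n-1)/2 link-count rule of thumb for network value, not anything from the comment itself):

```python
# Toy comparison (illustrative only): a linear modality like energy
# scales in proportion to its input, while a network's potential
# connections grow roughly quadratically with its nodes.

def linear_value(n: int) -> int:
    return n                 # e.g. output proportional to energy input

def network_value(n: int) -> int:
    return n * (n - 1) // 2  # possible links among n nodes

for n in (10, 20, 40):
    print(n, linear_value(n), network_value(n))
# 10 -> 10 vs 45; 20 -> 20 vs 190; 40 -> 40 vs 780
```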

We can also look at specific areas of technological progress ... or stagnation ... and see where these fall within the structure. Keep in mind that a given real-world technology, say, computer chips, typically covers a set of these factors, say, networks, systems, materials, and process knowledge, in the case of semiconductors.

What all of this suggests to me is that even with considerable future potential in certain areas, we're likely to see limitations imposed by other elements of the ontology.

________________________________

Notes:

1. Previously discussed with a longer quotation here: <https://news.ycombinator.com/item?id=23000911>. Source: <http://www.gutenberg.org/files/12004/12004-h/12004-h.htm#FNa...>

2. See generally: <https://plato.stanford.edu/entries/technology/>


Uncle Pete vindicated again. Hopefully this thesis will be something more than a few people will agree with him on.


The golden era of tech in the recent past was due to the MASSIVE PUBLIC INVESTMENTS mostly pushed by the "cold war". That's it. Privately led research has a target: company profits. As capitalism tends to converge toward larger and larger dominant entities, the substantial innovation driven by competition disappears and all try to fake evolution to keep their dominant position. Nothing more.

We need PUBLIC research for society, well funded, and well SEPARATED from the private sector, which can grab ANY idea but can't influence the public research through lobbying, revolving doors, etc.

A small example: these days enough tech guys understand that we damn need integrated desktops, like the classic Xerox PARC ones, LispM ones, etc.: NOT countless separate apps, like containers on a ship-OS, but a single system where anything is a function usable anywhere. Now see the current trend of "apps" and services that try to do more and more things, because integration matters and it's not possible with systems designed to be just ships loaded with sealed containers. Do you see the comparison? Not yet? Ok, let's observe the actual EV and PV system status:

- essentially ALL current EVs now are NMC/LFP 400V batteries.

- essentially ALL current domestic PV storage are NMC/LFP 400V batteries

- NO DAMN SYSTEM exists (except two experimental and only partial products) to integrate them so one can damn charge its EV from the solar PV with direct surplus from the inverter MPPTs. No useless and wasteful double DC-AC-DC conversion, no fixed power charge and so on.
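A back-of-the-envelope sketch of why that double conversion matters (the per-stage efficiency figures below are my own illustrative assumptions, not measured data):

```python
# Illustrative per-stage efficiencies (assumed numbers, not measurements)
INVERTER_DC_TO_AC = 0.96   # PV/battery DC -> household AC
CHARGER_AC_TO_DC  = 0.94   # onboard charger, AC -> EV battery DC
DIRECT_DC_DC      = 0.97   # hypothetical single direct DC-DC stage

double_conversion = INVERTER_DC_TO_AC * CHARGER_AC_TO_DC
print(f"DC-AC-DC path: {double_conversion:.1%}")  # ~90.2%
print(f"direct DC-DC:  {DIRECT_DC_DC:.1%}")       # ~97.0%
# Several percent of every solar kWh lost to the missing integration.
```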

Why? Because there is no damn integration: those who produce PV stuff only do that, cars are made by someone else, then some car OEMs try to offer complete systems (Toyota, and Tesla to a limited extent) that BOTH happen to be crappy and limited, because they were developed by people who do not use them and don't know the whole system well enough.

With public research, researchers who design a New Deal know ALL the parts well and have no competing interests, so they'll likely end up with a fully integrated system, ONE standard, not a handful, and so on. The private sector of course is free to change it, but having such good research for free, they'll not change that much, simply because it's too costly for them.

Now I think anyone can understand the comparison...


Seems like a weirdly timed article given the new S curve is about to start with ML finding numerous real world applications.


It’s a promotional piece because Smil has (yet another) new book coming out.

I’ve seen it (thanks Z-Library) and am distinctly underwhelmed. It’s about tech hype and failure but says nothing new and even the rehashes are poorly done, judging from the areas I personally know best.

Vaclav Smil started off as a knowledgeable commentator on sci-tech (his fertilizer book is especially nice) but has devolved to basically a brand, with the work (such as it is) done by an army of grad assistants. Very surface stuff beloved by the likes of Bill Gates who confuse surface with depth.


It’s funny you say “about to start” - I feel like I’m in a minority here but I’m kind of suspicious that we’re at the end of the S-curve of ML, and the real-world applications are well-known:

- A/B testing for UI design

- ML modeling for ad targeting and optimization

- fraud detection

- image classification, object recognition and segmentation (lots of neat Snap filters, many niche CV apps with drones, industrial sensors, medical diagnostic tools, self-driving cars???)

- text classification (sentiment analysis)

- text generation (translation, document summaries, writing assistants)

- niche academic/scientific applications (simulation acceleration)

Empirical modeling has been around for more than a century, there’s just been an explosion of image and text data that have demanded a new generation of empirical models (ML) - it’s resulted in a vast array of decision-acceleration applications, but aside from ChatGPT it feels like the boundaries are starting to be felt more concretely these days.


It remains to be seen how far GPT-like models can go. If we can teach them to be consistently better than humans at doing tasks that require intelligence, and if the costs of running the AI models are smaller than hiring people to do those things, things could get out of hand pretty quickly.

One could doubt whether GPTs really can outperform humans consistently in areas that matter, but we haven't even tried yet.


Between this and the "Save money by skipping breakfast" piece, the WSJ is quickly affirming all my suspicions that its editorial section can be safely ignored.


Well, it’s also owned by Rupert Murdoch.


If anything, I take articles like this in mainstream media as a canary in the coalmine that something big is about to happen.


Some kind of technological shift that will make today's programming seem like chiseling furniture with an axe. And we're all just going to watch dumbfounded as things get too complicated for humans to understand.


Love this post for some similar thinking:

https://www.cold-takes.com/this-cant-go-on/

The world is a crazy place, could get crazier. Or not. I don't know.


Can you elaborate what you mean?


Not the GP, but this is how one can view it:

When the leaps in progress are so large that the general public does not understand that they have happened, or see them somehow as lateral moves rather than forward moves, a lot has happened in a short span of time.


If progress slows or plateaus, the inevitable breakthrough will really shine.


something something lagging indicator


What are the real world applications you’re referring to?


Not the gp, but…

- medical diagnosis

- assistant-systems for everything

- noise reduction in industrial settings (via mechanical design)

- basically anything language related

- interpretation of image data

- essentially anything involving a pair of interacting sequences (thanks, transformers)

Contrary to many cynical takes, deep learning, CNNs, and transformers are massive. It's hard to grasp the scope of these developments zoomed in to one's warped perception of time and progress. One needs to zoom out a bit.

We could probably stop advancing the field and reshape most of the “real world” we’re used to, just with things you can import with two lines in python. But those changes come incrementally, and most of the bright minds around are dedicated to advancing our conceptual realm instead of the meatspace-one.
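For instance, here is one concrete reading of the "two lines" point, assuming the Hugging Face transformers library and the default pretrained model it downloads for the task:

```python
# Two lines buys a working pretrained sentiment classifier
# (assumes `pip install transformers`; downloads a default model).
from transformers import pipeline
classifier = pipeline("sentiment-analysis")

print(classifier("Tech progress is slowing down"))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99...}]
```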


I agree with this perspective. There are certainly more step changes to come in the theoretical domain (arbitrary tensor 2 tensor of any dimensionality / variable input output size?), but the application space seems largely known, and now just requires lots and lots of implementation in the real world - which isn’t necessarily going to be from wildly successful software platforms but more from applied AI practitioners building niche tools everywhere.


It can be anything really. Better medication design, discovery and modelling of new compounds, disease discovery, better understanding of our environment, etc.


So why aren't we already aware of what these applications are since ML has already been well-funded and hyped for about a decade? Basically what is the inflection point?

Obviously this is subjective, but underwhelmed would be a compliment for how I feel about ChatGPT. I've been hearing about these near-term breakthroughs for nearly a decade, even working in the industry. And yet, the real-world progress is nothing compared to the hype (or funding).

So why now? What's changed?


I’m not the OP, and I’m a radiographer. Big things are happening with image processing in radiology. ML/Deep Learning or whatever the marketing people call it is what is being done behind the scenes in Siemens ‘Deep Resolve’ for MR systems.

It’s utterly transformed what we do. It adds signal in an initial processing stage in the k-space domain (I’d estimate 25%-30%) then reconstructs the image. Then it doubles the resolution (or does it quadruple?) with double the pixels in the x direction, double in the y.

It does this based on a training dataset of paired images, one high resolution, one low resolution.
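For the curious, a generic sketch of that paired-image training setup in plain PyTorch. This is purely illustrative: it is not Siemens' actual Deep Resolve pipeline, and the tensors here are random stand-ins for matched low-res/high-res slices.

```python
# Generic paired super-resolution sketch (illustrative only, NOT Deep
# Resolve): train a small CNN to map low-res inputs (already upsampled
# to the target grid) toward their matched high-res images.
import torch
import torch.nn as nn

model = nn.Sequential(                      # toy refinement network
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Stand-in data: batch of paired (upsampled low-res, high-res) slices.
low  = torch.rand(8, 1, 128, 128)
high = torch.rand(8, 1, 128, 128)

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(low), high)        # pull prediction toward target
    loss.backward()
    opt.step()
```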

Images are now obtained quicker and are higher resolution than ever. Imaging protocols have extra sequences as time constraints are reduced. Patients can stay still for the short scans we are doing. Not every scan benefits the same way, and some sequences don’t have the tech yet.

The images are fantastic. It’s a larger change than the move to high field magnets (1.5T to 3T) and the hype about it isn’t anywhere near enough.

I’m lucky to be able to compare imaging with and without the technology applied, to be able to mess about with it and find the rough edges (they are unexpected and a little counter intuitive) but I’d be trying to avoid systems without it (or an equivalent) as a system user or a patient. The future is very very bright.


Thank you for responding.

Do you know if 'Deep Resolve' potentially introduces artifacts? I believe you that it's better. I'm trying to figure out if it's either

a) we developed a superior technology that is almost always better, but will very occasionally introduce noise that can be screened by a trained technician (or whatever)

b) we developed a superior technology that is literally better in every way when comparing final images

I'm not trying to discount a) here because if it's progress for the industry then that's still a win.


It’s still earlyish days and it’s fascinating.

As a rule, in MRI you have three things, pick 2: resolution, signal and time to acquire.
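The textbook scaling behind that "pick 2" (a simplification of my own, ignoring sequence-specific factors): SNR goes roughly as voxel volume times the square root of acquisition time, so finer voxels must be paid for with much longer scans.

```python
import math

# Simplified textbook scaling (ignores sequence-specific factors):
# SNR is proportional to voxel volume * sqrt(acquisition time).
def relative_snr(voxel_mm3: float, scan_time_s: float) -> float:
    return voxel_mm3 * math.sqrt(scan_time_s)

base  = relative_snr(1.00, 240)  # 1 mm^3 voxels, 4-minute scan
finer = relative_snr(0.25, 240)  # halve the voxel in x and y: 4x less volume
print(finer / base)              # 0.25 -> ~16x the scan time to win it back
```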

The nature of the Deep Resolve training dataset means that matching the training input parameters makes images look good. Counterintuitively, accelerating the scan more sometimes improves images (better matching the training dataset). The differences are not subtle. This sort of breaks the res/signal/time things.

Yes, it can produce artefacts on images that are low signal. It’s a grain type effect in the phase direction.

Every new acceleration technique has its artefacts and issues (fast spin echo, single shot, parallel imaging, simultaneous multi slice, etc). The beauty of Deep Resolve is that you can reconstruct the image again without DR applied and compare the result.

One minor proviso though, DR loves signal, so a scan that is to be run with DR needs more signal than a non deep resolve sequence. This is more than made up for by the resolution doubling after DR is applied.

Other accelerations have their own quirks to be handled (eg, parallel imaging needs a lot of oversampling, simultaneous multi slice needs a lot of extra elements turned on).

I’d say it’s your option b, but with the caveat that every single MR sequences (DR or not) needs someone to check the image is real and not showing stuff that isn’t there. Artefacts handled on a normal day will include machine faults, technician introduced artefact, patient issues (movement!), sequences issues, vendor specific problems and some weird as things that never get explained. These things keep me employed.

It looked like we were entering a Tesla arms race - more field strength being better. The likes of DR are doing the opposite, with great images at lower field strength. Lower field strength magnets have better t1 contrast, are easier to make, are easier to install, they use less (or zero) helium, they are easier to maintain, they are safer, they are more readily available, they are lighter, they are cheaper etc etc.


> Images are now obtained quicker and are higher resolution than ever.

> I’m lucky to be able to compare imaging with and without the technology applied

Interesting, can you share some images, or a paper on this?

Btw I've tried some online image enhancers on a random painted image and it was not amazing, it did enhance some parts but not really by much. I'm sure specialized systems can do much better when the input image is made by the same source as the training set.


I have got a few images that I used for a talk. These were an early iteration of the tech (Deep Resolve Sharp) and not what we use now (Deep Resolve Sharp, but also Deep Resolve Boost, which also adds signal in the k-space domain).

Siemens have a mass of material on it, but I don’t believe the marketing really conveys how good it is. They also talk up the speed side of things, while I have gone for a bit of speed but mainly higher resolution. I’m not even sure how to talk about resolution anymore. Is it the voxel size acquired, or what the end result produces? It’s smoke and mirrors, but whatever you call it, the end result is great. We are also accelerating scans more than the Siemens examples (they seem to use 2 or 3x a lot. We go minimum 4, and as much as 8). We use higher resolution than most of their examples too.

https://marketing.webassets.siemens-healthineers.com/2f18155...

https://www.siemens-healthineers.com/magnetic-resonance-imag...


* better pictures from your phone's cameras

* auto-captions for youtube videos, you can now search through them. when they started they were quite good and have only improved since.

* ChatGPT we haven't seen widely deployed yet, but already a lot of people use products built on GPT's technology, so an even better version will benefit all of them and widen the circle of users.


I think this is a critical question. Most businesses, imo, would not benefit from the added uninterpretability and computational overhead of using deep learning. For those industries modeling stochastic or evolving processes, however, the impact is monumental.


True. The article is about hardware, though. Luckily, we're getting a lot more use out of GPUs and SSDs at a time when improvement in CPUs and RAM has drastically slowed.


ML is capitalizing on the remaining facets of tech that aren't slowing down (parallel computing as in GPUs and memory capacity). Core speeds? RIP. :'(


The presence of currently unfolding areas of innovation doesn't discredit the fact that the exponential progress we've enjoyed since the Industrial Revolution has slowed suddenly over the past few decades.


ChatGPT enters the chat


paywalled please delete



Are paywalls ok?

It's ok to post stories from sites with paywalls that have workarounds.

In comments, it's ok to ask how to read an article and to help other users do so. But please don't post complaints about paywalls. Those are off topic. More here.

<https://news.ycombinator.com/newsfaq.html>


It’s almost as though advances in technology will not solve all of humanity’s ills, which surely cannot be true. /s


No it's not.


Forced diversity. Offshore developers. Agile bastardization.


No one has noticed yet that it is written by Vaclav Smil.


... and?

(I'm a fan of his, for what it's worth. Your meaning / intent however is unclear.)


[flagged]


Can’t tell if this is a troll or not. Jeff Bezos owns the Washington Post not the Wall Street journal.


No, he owns the WaPo. Murdoch's News Corp owns the WSJ.


He owns the Washington Post, not the WSJ?


Other things that are good to note:

- Jeff Bezos got most of his money from an emerald mine in Zambia

- Jeff Bezos owns 97% of the Hawaiian island of Lana'i

- Jeff Bezos moved his hedge fund from the North East to Florida


Source?


We’re in the Singularity even right now and most of the world doesn’t know it.


We don’t have millions of flying cars yet!

Most of what we have today was predicted in science fiction novels and video many decades ago.

Even the causes of a singularity were predicted (since they are what created the prediction of a singularity). I.e. AI, quantum computing, etc.

The “real” singularity happens at the point in the future where we have no conception what is going to happen next.

So the singularity stays in front of us, but gets closer and closer.

Then, at some point, change happens faster than humans can track or understand.

We are not there yet.


It’s an S curve of which we are in the early ramping phase. ML is going to change everything


I can agree with the singularity being a continuous transition in time and across people.

The future is becoming less predictable for all of us.

And the fraction of us who study, drive or otherwise understand the biggest changes at any given date is getting smaller.

But it is happening as an exponential toward a critical value. Not an S-shaped curve as there won't be any slowing down.


I see the top asymptotic part of the curve being an Iain Banksian “Culture”. Very very far off, but at a certain point, maybe Kardashev II-III, there will be a gradual plateau.



