
There will also be Jevons Paradox happening: LLMs reduce the “cost” of writing code, so more code gets written.

I already find myself building prototypes that I never would have attempted previously; it’s just too easy to have the LLM whip one up. LLMs also make it easier to do the “grunt work” of writing validation checks and unit tests for non-critical code, where previously I wouldn’t have bothered.


The author left out my favorite example: that you are mathematically more likely to be in the slow line at the grocery store (or the slow lane on the highway). It’s not just bad luck!

> you actually are, on the average, in the slower lane, because slow lanes are (on average) the ones that have more vehicles in. So you are more likely to be in these lanes than in the faster moving ones which fewer vehicles are in.

https://leightonvw.com/2019/04/04/the-slower-lane-paradox-in...


> In particular, cars travelling at greater speeds are normally more spread out than slower cars, so that over a given stretch of road there are likely to be more cars in the slower lane

How many cars pass a sign over an hour-long period?

It should be equal to speed (miles/hour) × density (cars/mile), assuming speed and density are constant over the hour.

So over the hour, more cars pass the sign in the fast lane than in the slow lane exactly when speed(fast) / speed(slow) > density(slow) / density(fast).
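To make the inequality concrete, here’s a tiny sketch with made-up numbers (purely for illustration):

  #include <stdio.h>

  int main(void) {
      /* Made-up numbers: cars passing the sign per hour = speed (miles/hour) * density (cars/mile) */
      double speed_fast = 70.0, density_fast = 20.0;  /* fast lane: cars spread out */
      double speed_slow = 50.0, density_slow = 40.0;  /* slow lane: cars packed together */

      double flow_fast = speed_fast * density_fast;   /* 1400 cars/hour */
      double flow_slow = speed_slow * density_slow;   /* 2000 cars/hour */

      /* More cars pass in the fast lane only if speed(fast)/speed(slow) > density(slow)/density(fast);
         here 70/50 = 1.4 < 40/20 = 2.0, so the slow lane carries more cars past the sign. */
      printf("fast lane: %.0f cars/hour, slow lane: %.0f cars/hour\n", flow_fast, flow_slow);
      return 0;
  }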


Wow. This is such a fascinating insight. Obvious in hindsight but never occurred to me.


I hadn’t realized it, but you are right.

- Orbit height: 20,200 km

- Earth's diameter: 12,760 km

https://www.gps.gov/systems/gps/space/

https://science.nasa.gov/earth/facts/


Anecdotally, almost every day I’ll overhear conversations at my local coffee shop of non-developers gushing about how much ChatGPT has revolutionized their work: church workers for writing bulletins and sermons, small business owners for writing loan applications or questions about taxes, writers using it for proofreading, etc. And this is small town Colorado.

Not since the advent of Google have I heard people rave so much about the usefulness of a new technology.


These are not the sort of uses that will make this thing valuable. To be worthwhile, it needs to add value to existing products. Can it do that meaningfully well? If not, it's nothing more than a curiosity.


Worthwhile is a hard measure.

To make money, though, it just needs a large or important audience and a means of convincing people to think, want, or do things that people with money will pay to make them think, want, or do.

Ads, in other words.


Can you get enough revenue from ads to pay the cost of serving LLM queries? Has anyone demonstrated this is a viable business yet?

A related question: has anyone figured out how to monetize LLM input? When a user issues a Google search query they're donating extremely valuable data to Google that can be used to target relevant ads to that user. Is anyone doing this successfully with LLM prompt text?


I bet Google is utilizing the value of LLM input prompts with close to the same efficiency as it monetizes search. In that case, there are two questions: 1) will LLMs overtake search? and 2) can anyone beat Google at monetizing these inputs? I think the answer to both is no. Google already has a wide experience lead in monetizing queries. And personally, I'd rather have a search engine that does a better job of excluding spam without having to worry whether or not it's making stuff up. Kagi has better search than any of the LLMs (except for local results like restaurants/maps).


Years ago I built a service that would print family & friend Instagram and Facebook posts and automatically mail them to an incarcerated loved one. I joked that it would be technically possible to convert the videos to flip books, and here you have gone and done it!


Just spitballing, but in C it was always the convention to use zero-based indexing. Probably because you were often adding these indexes to pointers (literal memory addresses) to index into an array, so you needed a zero to index into the first slot of the array.
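A minimal illustration of that (plain C, not tied to any particular codebase): arr[i] is defined as *(arr + i), so offset 0 has to mean the first element.

  #include <stdio.h>

  int main(void) {
      int arr[3] = {10, 20, 30};
      /* arr[i] is just *(arr + i): an offset from the array's base address */
      printf("%d %d\n", arr[0], *(arr + 0));  /* both print 10: offset zero is the first slot */
      printf("%d %d\n", arr[2], *(arr + 2));  /* both print 30 */
      return 0;
  }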


What I don't get is why the day of the month doesn't also start at 0?


As a kid hacking away on an Apple II this was apparent; all the good Basic games were written in Woz’s Integer Basic.


And nearly two and a half thousand years ago, Socrates said the same thing about writing:

> For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.

https://www.historyofinformation.com/detail.php?entryid=3894

I love to mention this whenever someone takes the “this new tech is bad” angle. We have to be careful not to just adopt the bias of “whatever existed when I was born is obviously fine, anything new that I don’t understand is bad.”


I understand the truth that writing was very much a huge net positive for humanity, but that doesn't mean Socrates was in any way wrong.

To my mind, though, LLMs are just plain inaccurate and I wouldn't want them anywhere near my codebase. I think you gotta keep sharp, and a person who runs well does not walk around with a crutch.


In general conversation, “intelligence”, “knowledge”, “smartness”, “expertise”, etc are used mostly interchangeably.

If we want to get pedantic, I would point out that “knowledge” is formally defined as “justified true belief”, and I doubt we want to get into the quagmire of whether LLM’s actually have beliefs.

I took OP’s point in the casual meaning, i.e. that LLMs are like what I would call an “intelligent coworker”, or how one might call a Jeopardy game show contestant intelligent.


> Sumerian is 5,000 years old. We understand Sumerian. We are not going to forget Sumerian. A warning written in English is not going to be unreadable in 10,000 years.

It’s worth noting, however, that Sumerian was forgotten for nearly 2000 years: from ~200CE until the 1900s.

I agree it seems unlikely for a language to be completely forgotten again, but we can’t be sure.

