
and at this scale it seems like the hazards of H2 would be pretty minor. You're not exactly going to have a Hindenburg situation with only a couple dozen liters of H2.

No, but you might get severely hearing impaired.

> If you start imposing laws or other practices every time a group of people feel “uncomfortable”, the world will quickly grind to a halt.

I mean, yes, quite an apt description of our reality. This has basically been the modus operandi of the whole of American society for the last 3 decades.

Can't have your kids riding bikes in the neighborhood. Can't build something on your own property yourself without 3 rounds of permitting and environmental review. Can't have roads that are too narrow for a 1100 horsepower ladder truck. Can't get onto a plane without going through a jobs program. Can't cut hair without a certificate. Can't teach 6 year olds without 3 years of post grad schooling + debt. Can't have plants in a waiting room because they might catch on fire. Can't have a comfortable bench because someone who looks like shit might sleep on it.

Can't can't can't can't ...


It's an interesting thought experiment to consider how you would organise your ideal society.

I lived in Switzerland for a time and there are many notorious rules (e.g. don't shower or flush your toilet after 10pm; don't recycle glass out of working hours) governing day-to-day behaviour which initially seem ridiculous and intrusive. However, what you quickly realise is that many of these are rooted in a simple cultural approach of "live your life as you wish, just don't make other people's life worse" - an approach I came to appreciate.


This is it. It's amazing how accepting people are of this reality and how resigned they are about it.

Has there ever been a situation where taking away parking has led to traffic dropping?

I've heard this, but I've never seen an example in practice. It seems like making things more walkable and bikeable, at the expense of cars, always increases foot-traffic, with no exception.


Yes, though I can't recall enough details to help you search.

Basically, anytime it is tried in suburbs where nobody is walking now, nothing changes. When a lot of people are already walking, you can increase traffic by getting rid of cars.

Details matter: most of the places where people take away cars are already dense areas, and they tell you about it. However, in a few cases someone who hasn't understood the context has tried to apply the lesson where it doesn't apply, and it fails.


I think what you're referring to is traffic evaporation. But that sometimes refers to road capacity more than parking.

Fred Brooks, from "No Silver Bullet" (1986)

> All software construction involves essential tasks, the fashioning of the complex conceptual structures that compose the abstract software entity, and accidental tasks, the representation of these abstract entities in programming languages and the mapping of these onto machine languages within space and speed constraints. Most of the big past gains in software productivity have come from removing artificial barriers that have made the accidental tasks inordinately hard, such as severe hardware constraints, awkward programming languages, lack of machine time. How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.

AI, the silver bullet. We just never learn, do we?


I think software was indeed 9/10 accidental activities before AI. It's probably still mostly accidental activities with current LLMs.

The essence: query all the users within a certain area and do it as fast as possible

The accident: spending an hour surveying spatial tree libraries, another hour debating whether to make our own, one more hour reading the algorithm, a few hours to code it, a few days to test and debug it

Many people seem to believe implementing the algorithm is "the essence" of software development so they think the essence is the majority. I strongly disagree. Knowing and writing the specific algorithm is purely accidental in my opinion.
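
As a minimal sketch of that "essence" (my illustration, not anything from the thread), the radius query itself can lean on an existing spatial index. This assumes scipy is available, placeholder data, and coordinates already projected onto a flat plane; a real geo query would need haversine distance or a projected CRS.

    import numpy as np
    from scipy.spatial import cKDTree

    # Placeholder user positions, in meters, on a 1 km x 1 km plane.
    user_xy = np.random.rand(100_000, 2) * 1000.0
    tree = cKDTree(user_xy)  # build the spatial index once

    center = np.array([500.0, 500.0])
    radius_m = 25.0
    nearby = tree.query_ball_point(center, r=radius_m)  # indices of users in range
    print(f"{len(nearby)} users within {radius_m} m")

The essential requirement is basically the one query_ball_point call; building, testing, and debugging your own tree would be the accidental part.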


Isn't the solution to that standardizing on good-enough implementations of common data structures, algorithms, patterns, etc.? Then those shared implementations can be audited, iteratively improved, critiqued, etc. For most cases, actual application code should probably be a small core of business logic gluing together a robust set of collectively developed libraries.

What the LLM-driven approach does is basically the same thing, but with a lossy compression of the software commons. Surely having a standard geospatial library is vastly preferable to each and every application generating its own implementation?


I mean, of course libraries are great. But the process of creating a standardized, widely accepted library/framework usually involves another kind of accidental complexity: the "designed by committee" complexity. Every user, and every future user, will have different ideas about how it should work and what options it should support. People need to communicate their opinions to the maintainers, and sometimes it can even get political.

In the end, the 80% of features and options will bloat the API and documentation, creating another layer of accidental activity: every user will need to rummage through the docs and sometimes the source code to find the 20% they need. Figuring out how to do what you want with ImageMagick or FFmpeg always involved a lot of reading time before LLMs. (These libraries are so huge that I think most people only use more like 2% instead of 20% of them.)
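
To make the ImageMagick/FFmpeg point concrete with a hedged example (mine, not the commenter's): the "20% you need" is often a single invocation, like scaling a video while passing the audio through. This sketch assumes the ffmpeg binary is on PATH and drives it from Python; the flags shown are standard FFmpeg options.

    import subprocess

    # Resize a video to 640 px wide (height auto, kept even) and copy the audio as-is.
    subprocess.run(
        [
            "ffmpeg",
            "-i", "input.mp4",       # source file
            "-vf", "scale=640:-2",   # video filter: width 640, height auto
            "-c:a", "copy",          # pass the audio stream through untouched
            "output.mp4",
        ],
        check=True,
    )

Finding those few flags among the hundreds FFmpeg supports is exactly the pre-LLM reading time being described.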

Anyway, I don't claim AI would eliminate all the accidental activities, and current LLMs surely can't. But I do think there are an enormous number of them in software development.


If that's the essence, then of course 9/10 is accident. I think that's not software engineering, though.

The essence: I need to make this software meet all the current requirements while making it easy to modify in the future.

The accident: ?

Said another way: everyone agrees that LLMs make it very easy to build throwaway code and prototypes. I could build those kinds of things when I was 15, when I was still on a 56k internet connection and only knew a bit of C and HTML. But that's not what software engineers (even junior software engineers) need to do.


There are mixed views here. Some are making the claim relevant to the Silver Bullet observation, that LLMs are cutting down time spent on non-essential work. But the view that's really driving hype is that the machine can do essential work: design the system for you and implement it, explore the possibility space and make judgments about the tradeoffs, and make decisions.

Now, can it actually do those things? Not in my estimation. But from the perspective of a less experienced developer it can sure look like it does. It is, after all, primarily a plausibility engine.

I'm all for investing in integrating these generative tools into workflows, but as of yet they should not be given agency, or even the aesthetic appearance of agency. It's too tempting to the human brain to shut down when it looks like someone or something else is driving and you're just navigating and correcting.

And eventually, with a few more breakthroughs in architecture maybe this tech actually will make digital people who can do all the programming work, and we can all retire (if we're still alive). Until then, we need to defend against sleepwalking into a future run by dumb plausibility-generators being used as accountability sinks.


>Now, can it actually do those things? Not in my estimation

Just today I asked my clawbot to generate a daily report for me, and it was able to build an entire scraping skill for itself to use for making the report. It designed it and made decisions along the way, including changing data sources when it realized one it was trying was blocking it as a bot.


Buddy if you think financial crashes were bad today, you should see what happens when banking is not regulated (great depression). Or, if you think war is bad today, you should see what happens when the world becomes multipolar and countries start carving up the world for territory (WWII).

Like please, read a history book.

I'm sure I agree with you that there are many problems with this system but life without it can get so much worse. The green agenda? 4G? That's the worst thing you can imagine?


Imho at first blush this sounds fascinating and awesome and like it would indicate some higher-order spiritual oneness present in humanity that the model is discovering in its latent space.

However, it's far more likely that this attractor state comes from the post-training step. Which makes sense, they are steering the models to be positive, pleasant, helpful, etc. Different steering would cause different attractor states, this one happens to fall out of the "AI"/"User" dichotomy + "be positive, kind, etc" that is trained in. Very easy to see how this happens, no woo required.


An honest account of this situation would place at least some blame on there being a tall SUV blocking visibility.

These giant SUVs really are the worst when it comes to child safety


I bet in the future we'll see the SUV mania as something crazy, like smoking on a plane or using leaded gasoline. Irrationally large cars that people get because everyone is afraid of another SUV hitting them in a sedan. The tragedy of the commons.

You're right, but there will be some brand new, even worse social psychosis by then, surely. Cigarette smoking actually makes more sense to me than giant cars -- at least it only hurts the person doing the smoking!

The best reaction from Waymo would have been to start to lobby against letting those monster-trucks park on streets near schools. They are killing so many children, I'm flabbergasted they are still allowed outside of worksites.

From a "my opinion" standpoint, yes, I would love to see this.

From a tactical PR standpoint, it would be a disaster. Muh big truuuucks is like a third rail because Americans are car obsessed as a culture. They already hit a kid, best to save some energy for the next battle.

Besides, if Waymo wins (in general), private car ownership will decrease, which is a win regardless. And maybe Waymo can slowly decrease the size of their fleet to ease up the pressure on this insane car-size arms race.


What I find a bit confusing is that no one is putting any blame on the kid. I did the same thing as a kid, except it was a school bus instead of an SUV, and it was a fucking stupid thing to do (I remember starting to run across the street, and the next thing I remember is being in a hospital bed), even though I had been told to always cross behind the bus, not in front of it.

That day I learned why it was so.


Of course the kid is at fault. But everyone knows that kids do stupid and reckless things, which is why drivers are generally expected to take more care around schools and similar institutions. If robotaxis are not able to do that, then the results will be easy to predict

Not only this. The marketplace got way less efficient. These companies are so large that they rival small states, with very little actual competition and command economies internally.

When management decides to build the metaverse, it should be a career-ending and company-ending move. What did the shareholders say? Nothing, they know that there's no competition. The leadership stayed. $70B!

Huge swathes of tech (and the economy at large) are like this. The stock market plays a huge part -- there are very few active participants, and individual pockets are bigger than ever (think e.g. SoftBank); capital flows to whoever is largest. Even VCs talk about "what's your moat" -- they don't want you to out-innovate, that's actually difficult; why do that when you can find a regulatory loophole, or market power, and exploit that instead?

When one earns a better return on their dollar from monopolization and market power, it's a very, very bad sign for the economy at large. And we very clearly have not yet learned this lesson, even as the signs pile up (China out-innovating us in a rapidly growing number of industries; political instability; state capture; etc.). We are already a couple of decades into this habit, and it will not end well for us. I think this is an issue with USA industrial strategy at large. We say over and over again: we need to do the hard stuff, we need to invest in energy, batteries, 'hard tech', etc. But what did we do? $1T to Sam Altman sitting on stage in the Steve Jobs outfit, doing the App Store for ChatGPT.

Individual SWEs are doing what individual people did in the Soviet Union. Join the party, read the party book, and get a cushy mid-level bureaucrat position. It beats working the factory, that's for sure!


> This also highlights the importance of model design and training. While Claude is able to respond in a highly sophisticated manner, it tends to do so only when users input sophisticated prompts.

If the output of the model depends on the intelligence of the person picking outputs out of its training corpus, is the model intelligent?

This is kind of what I don't quite understand when people talk about the models being intelligent. There's a huge blindspot, which is that the prompt entirely determines the output.


Humans also respond differently when prompted in different ways. For example, politeness often begets politeness. I would expect that to be reflected in training data.

If I, a moron, hire a PhD to crack a tough problem for me, I don't need to go back and forth prompting him at a PhD level. I can set him loose on my problem and he'll come back to me with a solution.

> hire a PhD to crack a tough problem for me, I don't need to go back and forth prompting him at a PhD level. I can set him loose on my problem and he'll come back to me with a solution.

In my experience with many PhDs they are just as prone to getting off track or using their pet techniques as LLMs! And many find it very hard to translate their work into everyday language too...


The PhD can't read minds; the quality of the request from a moron would be worse than the quality of the request from someone with average intelligence. And the output would probably differ noticeably accordingly.

Unless your problem fits the very narrow but very deep area of expertise of the PhD, you're not going to get anything. The PhDs I have worked with can't tie their shoes because that wasn't in their dissertation.

Well, if it ever gets to be a full replacement for PhDs, you'll know, because it will have already replaced you.

I think that's what is happening. It's simulating a conversation, after all. A bit like code switching.

That seems like something you wouldn't want from your tools. Humans have that and that's fine, people are people and have emotions, but I don't want my power drill asking me why I only call when I need something.

>Humans also respond differently when prompted in different ways.

And?


A smart person will tailor their answers to the perceived level of knowledge of the person asking, and the sophistication of the question is a big indicator of this.

What is a "sophisticated prompt"? What if I just tack on "please think about this a lot and respond in a highly sophisticated manner" to my question/prompt? Anyone can do this once they're made aware of this potential issue. Sometimes the UX layer even adds this for you in the system prompt, you just have to tick the checkbox for "I want a long, highly sophisticated answer".

They have a chart that shows it. The education level of the input determines the education level of the output.

These things are supposed to have intelligence on tap. I'll imagine this in a very simple way. Let's say "intelligence" is like a fluid. It's a finite thing. Intelligence is very valuable; it's the substrate for real-world problem solving that makes these things ostensibly worth trillions of dollars. Intelligence comes from interaction with the world, from someone's education and experience. You spend some effort and energy feeding someone, clothing them, sending them to college. And then you get something out, which is intelligence that can create value for society.

When you are having a conversation with the AI, is the intelligence flowing out of the AI? Or is it flowing out of the human operator?

The answer to this question is extremely important. If the AI can be intelligent "on its own," without a human operator, then it will be very valuable -- feed electricity into a datacenter and out comes business value. But if a model is only as intelligent as the person using it, well, the utility seems to be very harshly capped. At best it saves a bit of time, but it will never do anything novel, it will never create value on its own, independently, and it will never scale beyond a 1:1 "human picking outputs."

If you must encode intelligence into the prompt to get intelligence out of the model, well, this doesn't quite look like AGI does it?


How much of this is actually due to the recipient of the information having low intelligence? If we use communications theory for this (the SMCR model), having an intelligent sender and content won't do much good if the receiver of said information is unable to understand and use it.

I see it with coworkers all the time. They'll ask ChatGPT to do an analysis and it'll output test results for a T-test. They don't know how to interpret it at all, and so it's ultimately meaningless to them. They're just using "stat sig" as a way to make a non-technical VP happy. In situations like this, I don't think a highly intelligent source, model or human, can make the recipient be more intelligent than they actually are.
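
Incidentally, the t-test case is a good illustration of where the interpretation burden sits. A hedged sketch with made-up numbers (scipy assumed), just to show what the tool hands back versus what still has to be understood:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    control = rng.normal(loc=10.0, scale=2.0, size=200)  # made-up metric, group A
    variant = rng.normal(loc=10.4, scale=2.0, size=200)  # made-up metric, group B

    t_stat, p_value = stats.ttest_ind(control, variant)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # Interpretation is the part that can't be outsourced: a small p-value only says
    # the difference in means is unlikely under the null hypothesis; it says nothing
    # about whether a ~0.4 lift actually matters for the business.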


Of course, what I'm getting at is: you can't get something from nothing. There is no free lunch.

You spend energy distilling the intelligence of the entire internet into a set of weights, but you still had to expend the energy to have humans create the internet first. And on top of this, in order to pick out what you want from the corpus, you have to put some energy in: first, the energy of inference, but second and far more importantly, the energy of prompting. The model is valuable because the dataset is valuable; the model output is valuable because the prompt is valuable.

So wait then, where does this exponential increase in value come from again?


the same place an increase in power comes from when you use a lever.

> the same place an increase in power comes from when you use a lever.

I don't understand the analogy. A lever doesn't give you an increase in power (which would be a free lunch); it gives you an increase in force, in exchange for a decrease in movement. What equivalent to this tradeoff are you pointing to?
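
For what it's worth, the standard lever tradeoff can be written out in one line (ordinary statics, nothing specific to this thread): work in equals work out, so

    W_{in} = W_{out} \quad\Rightarrow\quad F_{in}\, d_{in} = F_{out}\, d_{out} \quad\Rightarrow\quad \frac{F_{out}}{F_{in}} = \frac{d_{in}}{d_{out}}

A larger output force comes only with a proportionally smaller output movement; no energy appears from nowhere, which is the "no free lunch" point above.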


In general it will match the language style you use.

If you ask a sophisticated question (lots of clauses, college reading level or above) it will respond in kind.

You are basically moving where the generation happens in the latent space. By asking in a sophisticated way you are moving the latent space away from say children's books and towards say PhD dissertations.
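
One hedged way to see this empirically (my sketch, not something from the thread): send the same question in two registers and compare the answers. This assumes the OpenAI Python SDK with an OPENAI_API_KEY in the environment; the model name is only a placeholder.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompts = {
        "casual": "hey whats entropy lol, explain real quick",
        "formal": (
            "Could you provide a rigorous explanation of entropy as defined in "
            "statistical mechanics, including its relationship to microstates?"
        ),
    }

    for register, prompt in prompts.items():
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {register} ---")
        print(resp.choices[0].message.content[:300])

The vocabulary and reading level of the two answers will typically track the prompts, which is the latent-space point above.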


I don't find this to be true at all. You can ask it in text-speak with typos and then append how you'd like the response to be phrased, and it will follow the instructions.

Yeah, because you told it explicitly how you would like the response to be phrased, which is the same thing you’re doing implicitly when you simply talk to it in a certain way.

Come on, this is human behavior 101, y’all.


I don't know, are we intelligent?

You could argue that our input (senses) entirely defines the output (thoughts, muscle movements, etc.)


The whole point of humans is the way we process the input. Every life form out there receives sound vibrations and has photons hitting their body all the time, not everyone uses that information in the same way or at all. That plus natal reflexes and hardcoded assumptions

There's a bit of baked-in stuff as well. We are a full culture-mind-body[-spirit] system.

Fortunately, we've got the full system, because even under ideal conditions nobody has actually ever been intelligent at all times, and we need the momentum from that full system to resume in an intelligent direction after an upset, when it's not all at its best.

When you read something like this, it demands that you frame Claude in your mind as something on par with a human being, which to me really indicates how antisocial these companies are.

Ofc it's in their financial interest to do this, since they're selling a replacement for human labor.

But still. This fucking thing predicts tokens. Using a 3B-, 7B-, or 22B-sized model for a minute makes the ridiculousness of this anthropomorphization so painfully obvious.


Funny, because to me it's the inability to recognize the humanity of these models that feels very anti-humanistic. When I read rants like these I think, "oh look, someone who doesn't actually know how to recognize an intelligent being and just sticks to whatever rigid category they have in mind."


"Talking to a cat makes the ridiculousness of this intelligence thing so painfully obvious."
