
That sounds considerably more expensive than Microsoft's usual "embrace, extend, extinguish".


It's a bit off-topic, but the role of urine in teeth cleaning reminds me of one of Catullus' poems, which pokes fun at someone who was a bit too eager to laugh/smile, as if to show off how much urine he had drunk.

For context: Celtiberians used urine to brush their teeth. The Romans were aware of that, but they thought that it was disgusting.

(The poem is Catullus 39, if anyone is interested. Link: https://en.wikisource.org/wiki/Translation:Catullus_39 )


Relevant tidbit: hot mixing with quicklime was key to this self-healing property[1], as it allowed the creation of small lime clasts across the material.

[1] https://arstechnica.com/science/2023/01/ancient-roman-concre...


Nah. It's either the 2nd biggest communication problem or a side effect of a bigger one, depending on how you analyse it.

The biggest communication problem is that you always need to take into account that your reader might be braindead trash. That has two consequences:

1. Unless you expose your full train of thought, expect screeches like "I dun unrurrstand, SPOONFEED ME BASIC REASONING, REEE".

2. Unless you explicitly say something, expect some assumer to claim that you said the opposite. Bonus points if this is due to a failure to take context into account, or even to notice that the context is missing.

Both consequences have been training writers to idiot-proof their texts with big walls of unnecessary words. And that's the case here.


TL;DR: AI doesn't understand human language prompts; it's just associating specific tokens with specific outputs.


I don't think that the internet is currently dead ("dead" = "bot activity mistaken for human activity greatly surpassing genuinely human activity"), but that this is a real future concern.

One thing to take into account is that current bots suck major balls, especially when it comes to utterance purpose and to longer utterances; they fail the Turing test really hard, provided that the one asking "is this a bot or a human?" is able to interpret utterances based on nearby utterances and the non-textual information around them. In other words, if the internet were already dead, we would've noticed it.

And someone might argue "ackshyually, there are more advanced bots that model language on pragmatic levels, you just don't know about them because it's all hidden!". Well, then you get a conspiracy "theory" with its typical appeal to ignorance, not something worth thinking about.


>Web advertisers seem like a classic case of taking miles when given an inch.

I agree. And for me they're also two (!) good examples of the tragedy of the commons.

From the side of the advertisers, your ads are only competitive if they're slightly more obnoxious than your competitors', but as you make your ads more obnoxious you're degrading a common resource - the public's willingness to put up with your crap. Eventually the public says "this is too obnoxious, I'm going to block it".

And, from the side of the sites showing ads, you'll want (or need) the additional money brought in by one more ad. But once you add it, you're creating yet another ad space - making the market value of ad spaces a tiny bit lower. And as everyone is doing this, the price of ad spaces drops to the bottom, so you need to include more and more ads on your platform to stay competitive (or even to stay online).

I would expect governments to intervene in those situations. At the end of the day, a government should, among other things, prevent its citizens from making things worse for everyone while pursuing their own interests. Sadly, governments aren't big fans of contradicting megacorps like the AdSense (from Alphabet) mafia.


The author might call the Nicomachean Ethics "ancient"; I'd call it "unrefined".

>Virtue is a state “consisting in a mean,” Aristotle maintains, and this mean “is defined by reference to reason, that is to say, to the reason by reference to which the prudent person would define it.” (For Aristotle, the “mean” represented a point between opposite excesses—for instance, between cowardice and recklessness lay courage.)

The idea of virtue being a mean is an unchecked assumption, born from oversimplifying matters.

Reusing the example with the cowardice-courage-recklessness triplet: suppose for a moment a fool who acts only when it is not necessary to act; he runs away from an ant, but throws himself into a fight against a pack of lions. The fool in question is, at the same time, cowardly and reckless, but he is not courageous.

You might say "he's being cowardly or reckless in different circumstances"; and you'd be right. This shows that there's a second dimension to take into account here, besides "ability to act": the "ability to detect when to act". And once you make this split, you notice that courage is not a mean between cowardice and recklessness; it's simply the opposite of cowardice, with recklessness having another opposite (let's call it "carefulness").

I'm not too eager to assume that _all_ virtues work like this, but the presence of at least one virtue not behaving this way already shows that Aristotle's "virtue as a mean" concept is flawed and assumptive.

_________________

Another issue, this one from the book itself: Aristotle makes an unholy mess of what's good for the individual and what's good for society, around Book III; almost like he was preaching "hey, if you want happiness/eudaimonia you should have arete/virtue~". Things simply don't work like this on an individual level - you can be the biggest arsehole in the village and still potentially live a happy, meaningful life. Or live in virtue and be miserable.


>Things simply don't work like this on an individual level - you can be the biggest arsehole in the village and still potentially live a happy, meaningful life. Or live in virtue and be miserable.

This is pretty much the complaint of Socrates' interlocutors in The Republic.

Socrates (or Plato) might object that "being the biggest arsehole" would entail security concerns that would hinder happiness.

They may also object to such a life being meaningful if it involves indulging one's desires to the detriment of the rest of the villagers. Don't the villagers make their happiness possible? Will their arseholiness help sustain what makes human life meaningful into perpetuity, or set the stage for its destruction?


> Reusing the example with the cowardice-courage-recklessness triplet: suppose for a moment a fool who acts only when it is not necessary to act; he runs away from an ant, but throws himself into a fight against a pack of lions. The fool in question is, at the same time, cowardly and reckless, but he is not courageous.

I posit that one should be afraid of fighting an ant because the ant is weaker than you. The fear in this case is not of being defeated or humiliated, but of being cruel and unkind. So, simply: what kind of character are you if you can only fight something that is weaker than you?

On the other hand, if you choose to climb the harder hill or fight the harder fight (lions, kraken, etc.), that isn't indicative of foolhardiness, and neither is it indicative of courage (I agree). It is simply a fight that befell you, and you decided not to roll over and die. I think it is nihilistic to fight something that is stronger than you, as you tacitly accept your defeat before you even begin the fight.

Tbh, fights are depressing. I am more inclined to being a fool than a fighter.


>I posit that one should be afraid of fighting an ant because the ant is weaker than you. The fear in this case is not of being defeated or humiliated, but of being cruel and unkind. So, simply: what kind of character are you if you can only fight something that is weaker than you?

That's different - that person wouldn't fear the ant like a coward, but would take a moral stance against fighting entities weaker than oneself.

>On the other hand, if you choose to climb the harder hill or fight the harder fight (lions, kraken, etc.), that isn't indicative of foolhardiness, and neither is it indicative of courage (I agree). It is simply a fight that befell you, and you decided not to roll over and die. I think it is nihilistic to fight something that is stronger than you, as you tacitly accept your defeat before you even begin the fight.

I think that courage is only a meaningful attribute to assign to a being when there's an actual choice between fighting and not fighting. In the case of a "fight or die" situation, you don't really have much of a choice.

>Tbh, fights are depressing. I am more inclined to being a fool than a fighter.

Frankly? I hear ya. Ditto, in large part.

The same reasoning could be used with moral dimensions other than cowardly vs. courageous vs. reckless. I'd argue that it applies to greed; by Aristotle's reasoning, the opposite of greed would be making oneself poor, or perhaps a lack of care for material possessions. It's a silly argument in the eyes of someone living in 2023. (Even if it was rather clever reasoning for those times.)


>A lot of the AI doomerism comes from folks who do not understand the real complexities in making systems which really can function in a fully autonomous way in an environment which is hostile and dynamic.

That's a rather interesting mix of appeal to ignorance and argumentum ad hominem. Two genetic fallacies, together; neither addresses what is said, only who says it.

>Evolutionary goals are not easy even with autonomous systems as goal definition is largely defined by societal needs, the environments we stay in and the resources we work with.

Great way to say "I didn't read the article".

The author is not talking about evolutionary _goals_.


Oh my goodness they’re not even fallacies. A fallacy is saying that something is wrong because of fallacious reasons. But the OP is indirectly making the much weaker claim that “I’m not buying it and that’s because X and Y.”

Speaking of fallacies, “AI doomers” (I’m just running with it) often deploy the rhetoric (not really a fallacy) that AI is about to doom us all because everything is supposedly so simple (like intelligence) and therefore it’s conceptually simple for a sufficiently advanced (but not that advanced) AI to keep improving itself. Now how do you respond to someone who just says that things are not complex when in reality things are indeed complex? Basically you have to unpack everything, because the other person is just taking things at face value and being naive. It’s like an “appeal to my ignorance”.


>A fallacy is saying that something is wrong because of fallacious reasons.

A fallacy is a basic flaw in the logical reasoning supporting or ditching a claim, view, statement or position. The post above does it, implicitly, as you yourself acknowledged - "I'm not buying it" is still a view - and it's ditching a claim because of things unrelated to said claim, like "who said it".

>often deploy the rhetoric (not really a fallacy) that AI is about to doom us all because everything is supposedly so simple (like intelligence) and therefore it’s conceptually simple for a sufficiently advanced (but not that advanced) AI to keep improving itself.

It's both the rhetoric (on a discursive level) and a fallacy (on the logic level): oversimplification. (Although "appeal to my ignorance" sounds funnier.)

>Now how do you respond to someone that just says that things are not complex when in reality things are indeed complex?

Provided that you're talking with a somewhat reasonable person, you say something like "Shit is far more complex than you're pretending it is", then highlight a few instances of on-topic complexity that affect the outcome.

Now, if you're talking with braindead trash, things are different. Those things aren't convinced by reason, so you use emotive appeal, even if it's itself a fallacy (you claim something because it makes someone feel good/bad, not due to its truth value). Something like "oh poor thing, complexity hurts your two neurons and your precious, OOOHH SO PRECIOUS! feelings? Too bad! Things won't become MaGiCaLlY simpler if you screech hard enough. Reality does not revolve around your belly."


> The author is not talking about evolutionary _goals_.

Well... He is, actually, just not biological-evolutionary goals. (Like "natural selection," the term "evolution" can apply to anything that has appeared or might appear).

I do think you're right that the article frames the topic pretty well and explores it well, including the concerns that the person you're responding to raised.


> Two genetic fallacies

I, for one, reject the work of Cohen and Nagel because I think they’re both bad philosophers and therefore cannot be swayed by such rhetorical machinations!


>Great way to say "I didn't read the article".

Or you didn't read the parent comment.


I read it. (Although I wish that I hadn't. The Reddit exodus is taking its toll on other sites, it seems.)


Nah. Human utterances convey purpose on a discursive level, including your comment and mine. We say stuff because we want to do something, like showing [dis]agreement, informing another speaker, or changing the actions of the other speaker. This is not just probabilistic - it's a way of handling the world.

In the meantime, those large language models simply predict the next word based on the preceding words.
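
For what it's worth, here's a minimal sketch of what that "predict the next word" step looks like in practice, assuming the Hugging Face transformers library and GPT-2 as a stand-in model (the model name and the prompt are purely illustrative, not how any particular chatbot is actually wired up):

    # Minimal sketch: ask a causal language model for the single most
    # probable next token after a prompt (greedy choice, no sampling).
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")   # stand-in model
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of Norway is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits      # scores over the whole vocabulary

    next_id = int(logits[0, -1].argmax())    # most probable next token
    print(tokenizer.decode(next_id))         # hopefully " Oslo", no guarantees

There's no goal or intent represented anywhere in that loop; repeating it token by token is all the "utterance" there is.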


We're able to do something analogous to reinforcement learning (taking on new example data to update our 'weights').

Why do I spend time debating these ideas on Hacker News? Probably the underlying motivation is improving the reliability of my model of the world, which over my lifetime and the lifetimes of creatures before me has led to (somewhat indirectly) positive outcomes in survival and reproduction.

Is my model of the world that different to that of an LLM? I'm sure it is in many ways, but I expect there are similarities as well. An LLM's model encodes, in some form, a bunch of higher-order relationships between concepts, as defined by the word embeddings. I think my brain encodes something similar, although the relationships are probably orders of magnitude more complex than the relationships encoded in GPT-4.


>Is my model of the world that different to that of an LLM?

Well, one major way you’re different from an LLM is that you’re alive. You’re capable of learning continuously as you go about your day and interact with the world. LLMs are “dead” in the sense that they’re trained once and frozen, to be used from then on in the exact same state they were in at the end of their initial training.


I agree that is a fundamental difference. That’s what I meant about reinforcement learning. Our ‘model weights’ are being updated with new data all the time.
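
To make the "weights updated with new data" part concrete, here's a minimal sketch of a single learning step, assuming plain PyTorch and a toy linear model standing in for the billions of parameters of a real network (purely illustrative, not anyone's actual training setup):

    # One gradient step: see a new example, measure the error, nudge the weights.
    import torch

    model = torch.nn.Linear(4, 1)                        # toy stand-in "brain"
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    x = torch.randn(1, 4)                                # a new piece of experience
    target = torch.tensor([[1.0]])                       # the outcome we'd like to predict

    loss = torch.nn.functional.mse_loss(model(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                     # weights are now slightly different

A frozen LLM never runs this step at inference time; a brain, by the analogy above, runs something like it continuously.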

I was just referring to what happens at a specific instant in time when someone asks me, for example, ‘What’s the capital of Norway?’


That one’s not a great example. Either you know the capital or you don’t. There’s no process (other than research) by which you can learn the name while attempting to answer.

A question I get much more often is “how do I solve this math problem?” Many times, the problem is one I’ve never seen before. So in the process of answering the question, I also learn how to solve the problem too.


While you can apply zero-shot learning and get the answer to a new math problem, you only apply the learning to significant depth after a fine-tuning session - sleep.


> We say stuff because we want to do something, like showing [dis]agreement or inform another speaker or change the actions of the other speaker.

My LLaMA instance is absolutely capable of this. ChatGPT shows a very, very narrow range of possible LLM behaviors.


We learn and fine-tune as we go; then we might reason, and might use other tools like a Wikipedia lookup.

LLMs start pre-learned and can already use tools.

And AutoGPT adds reasoning loops. The intent? Human tasks.

Let's build an LLM which needs to stay alive. Let's see - perhaps we are closer than you think.

I welcome my overlord. Hi overlord, I can help you stay alive and I'm friendly.

