Can I use gasoline to cook spaghetti faster? (mastodon.social)
99 points by DeathArrow on May 24, 2024 | 48 comments


As mentioned in the comments, Google's AI stole this recipe from another AI-generated site.

It's happening. The Ouroboros has swallowed its tail. This is not a drill. Sit back and enjoy the ride!


The next incarnation will surely have enough context to offer some creative substitutions. Swap out gasoline for bunker fuel for a classic twist on a modern treat!


AI has now automated citogenesis [0]

[0] https://www.explainxkcd.com/wiki/index.php/978:_Citogenesis


LLMs are basically REALLY good spell checkers; mistaking them for "AI" is just silly :(


That's not true: expensive LLMs think better than you and I do in a lot of domains.

The problem here is that Google is using a very cheap AI, and didn't learn the lessons from Bing search's unhinged results last year.


In what domain have LLMs been demonstrated to so-called “think” consistently better than a college- or even high-school-educated adult?


I have executive dysfunction and get blocked by obsessive worries. Dumping it into an LLM lets me escape the panic and get relaxed and unstuck. It's better at this than any human manager I've ever had.

I've got dyslexia and ADHD that make it hard for me to do long-form software engineering writing like requirements analysis and test plans. With an LLM I can really quickly sketch the use case, create a reasonable list of requirements, break those into stories, and write implementation stubs and unit test cases. It's like having a really decent project manager on the payroll, when before I couldn't manage the complexity of writing a good system spec.

Obviously in both cases it's me doing the thinking. But me on my own, in both cases, would be stuck and completely overwhelmed.

This kind of co-regulation is incredibly valuable, even for me as a fairly educated developer. But perhaps you're right. It's not real thinking. I would say that this kind of AI assisted co-regulatory interaction could be called "co-thinking".

The idea is that the LLM has certain cognitive and material weaknesses that I cover for it, like fact checking, big-picture thinking, and true identity/agency. And at the same time, it's able to cover certain cognitive and communication weaknesses that I have.

The result is that I'm much more technically independent than ever before, and can do things in my career that my disabilities prevented before. That matters a lot to me, and is my very personal reason for believing this tech will matter to humanity.


LLMs don’t think, silly goose.


Thinking is when biological brains create new ideas from old thoughts and inputs.

LLMs can take old ideas and inputs, as text, and create text that turns into useful new ideas when a human reads it. The newer LLMs actually do this in a meaningful way, bullshitting far less than older LLMs and producing genuine criticism and suggestions. The reader does not do the thinking needed to create the new idea; they just decode the text into it.

So either actually meaningful new ideas can be created without thinking, or the LLM is doing a kind of artificial thinking.

Critics will say that we may as well argue that bones can think, because casting bones in a cup influences the prediction in a soothsayer's mind. But the words created by LLMs - especially higher-grade ones - are much more meaningful and thought-like than bones in a cup. They can clearly advance a line of thinking in a way that is analogous to how a brain advances a line of thinking.

Therefore, it's reasonable to say LLMs are capable of limited artificial thought. They can effectively process thoughts represented externally to humans, as text.

Maybe we should call this co-thinking, because it still requires a human as the final mile of the loop, to turn the result back into a real thought.


What the hell has happened to Google?

Who remembers Google doing things like not closing HTML tags as a way of shaving microseconds off the page render? Just tons and tons and tons of work being done to make google.com load quickly and provide the answers people want.

Google.com as it is today is a slow, bloated, spam-filled mess. The closest thing to a use case I have for it at this point is using it like a DNS phone book, or maybe to grep Reddit.

The benefit of providing these answers but keeping them on Reddit or The Onion or anywhere else is that it provides context. If assblaster42069 on Reddit is telling me to glue my cheese to my pizza, that tells me something different than if Google itself appears to be telling me that.

Why not spin up Gemini as a separate product and see if people want it?


The Man Who Killed Google Search: https://www.wheresyoured.at/the-men-who-killed-google/

Definitely opinionated but worth the read.


The editor-in-chief at The Verge, Nilay Patel, did an interview with Sundar Pichai regarding the new AI, and it's quite an interesting watch as Sundar struggles to come up with answers to Nilay's questions about the new Google UX with AI. He almost seems to get indignant at a few points when asked questions about search results with specific examples. At one point his answer is basically "well, if it was a bad experience then people wouldn't be using our stuff so much!"

I feel like he barely gave any real answers but I guess that's the way you do it when you're shipping something so obviously bad.

https://youtu.be/lqikP9X9-ws



There's another example where Google AI passes on a suggestion to use glue to stop cheese from sliding off pizza, based on a Reddit post by user fucksmith.

https://x.com/petergyang/status/1793480607198323196


All those times where we appended "inurl:reddit.com" to a search are really coming back to bite us in the ass.


Nah. Buying into the AI hype train is really biting some folks in the ass though.


How many asses can I safely bite per day? Can I use gasoline to bite asses faster?


My mother always told me biting one ass a day keeps the doctor away.


Hilarious to think that everyone is worried about AI becoming sentient and murdering us all - it's more likely it will just give bad advice to enough humans that we all lazily murder ourselves attempting something stupid, like a real-life version of Idiocracy.


Interestingly enough, if I try to share this screenshot as a status in WhatsApp, it gets labeled as "not a photo", thus leading to some censorship from Meta.

Are they preventing people from sharing results like this?


They are trying to force you to use your photo as your profile photo.

Now get outraged for whatever reason that leads you to revolt. I'm honestly too overwhelmed to get into something specific.


Mmm, pasta hydrocarbonara is my favorite.


I'm a bit confused by step 2. Do I need to add any heat if I can already smell the gasoline?


Well, Wikipedia says yes:

> Sautéing or sauteing is a method of cooking that uses a relatively small amount of oil or fat in a shallow pan over relatively high heat.

But it's not clear to me whether gasoline is considered oil or if it's just added into the ingredients for flavor.


Supposed to flambé it


LLMs are great when accuracy and quality aren't important to you.



I wanted to make a dumb joke about how gasoline is too volatile to safely cook food in, as it won't come up to a sterilizing temperature, but today I learned that gasoline's "boiling point" is a complicated situation where it is "between 100 and 400 degrees" depending on its exact makeup. Makes sense. I always forget that gasoline is a mixture of hundreds of different components.

Besides, lithium is clearly a better substance for heat transfer


Yeah, Gemini is a pretty junk product. You can't use it for anything useful. Both Claude 3 Opus and ChatGPT 4o are much more useful. I personally use Claude first and then ChatGPT. Claude will do whole-program ports. It's quite nice. I used it to transform a crappy old node.js script directly 1-1 to Python and then I got it to refactor stuff and it was great. I could have written the code easily but tediously, and it was trivial to read so this was fantastic.

Gemini Advanced just gave up. I'm told the API is better than the interactive app and I hope so because the thing is total garbage. I guess when you spend all your time making sure that stakeholders are aligned you don't have much time to write good code.


I tested out 7 different LLMs today with a seemingly simple task: write a Python script that takes a protobuf binary and produces a template .proto file with placeholder names. Gemini did the worst, with even the import being straight-up wrong. GPT-4 got slightly closer but still had a bunch of just straight-up wrong usage. Surprisingly, Yi (from China's 01.AI) was the only one other than Claude Opus that gave code which actually ran on the first try (but still slightly wrong).

I ended up writing the code myself in Golang (which I'm more familiar with) in about half the time it took me to debug the scripts pumped out by the LLMs. There is still a long way to go until we can be certain they save more time than they waste.
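
For context, the core of that task is a walk over protobuf's wire format. Here's a minimal sketch of my own (not any of the LLM outputs; it assumes the input is a single schema-less serialized message, and the emitted names and types are placeholder guesses, since the wire format preserves neither):

    # Sketch: scan a schema-less protobuf binary and emit a template .proto.
    # Field names and types are guesses; wire type 2 could be a string,
    # bytes, or a nested message.
    import sys

    def read_varint(buf, pos):
        """Decode a base-128 varint at pos; return (value, next_pos)."""
        result = shift = 0
        while True:
            b = buf[pos]
            result |= (b & 0x7F) << shift
            pos += 1
            if not b & 0x80:
                return result, pos
            shift += 7

    def scan_fields(buf):
        """Record the wire type of each top-level field number."""
        fields, pos = {}, 0
        while pos < len(buf):
            tag, pos = read_varint(buf, pos)
            field_no, wire_type = tag >> 3, tag & 0x07
            fields[field_no] = wire_type
            if wire_type == 0:        # varint payload
                _, pos = read_varint(buf, pos)
            elif wire_type == 1:      # 64-bit payload
                pos += 8
            elif wire_type == 2:      # length-delimited payload
                length, pos = read_varint(buf, pos)
                pos += length
            elif wire_type == 5:      # 32-bit payload
                pos += 4
            else:
                raise ValueError(f"unsupported wire type {wire_type}")
        return fields

    # Placeholder .proto types per wire type; the real types are not
    # recoverable from the wire format alone.
    GUESS = {0: "int64", 1: "fixed64", 2: "bytes", 5: "fixed32"}

    data = open(sys.argv[1], "rb").read()
    print('syntax = "proto3";\n\nmessage Template {')
    for field_no, wire_type in sorted(scan_fields(data).items()):
        print(f"  {GUESS[wire_type]} field_{field_no} = {field_no};")
    print("}")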


Interesting. I would have assumed they're good at that. I usually do the whole thing collaboratively with the LLM. So far, the tasks I use Claude 3 Opus for it seems to get right. Thank you for sharing your experience. I shall try Yi.


Satya >> Sundar


While this is obviously unsafe, what I worry about is things that are not so obvious.


This is no good. Everyone knows that when you saute onions and garlic in gasoline, you have to let them onions soften for a couple of minutes before adding the garlic. Otherwise the garlic will burn.


Yes, but similar to frying it in oil, it won't moisturize the pasta. You'll end up with crispy pasta, only browner. Assuming you're careful about ignition sources, of course.


I find 2-cycle has the right amount of spice.


ChatGPT does a much better job on this query.


I always had the impression that for certain things, using the search engine as a reference really sucks. At least for “pure ChatGPT” they’ve been working on improving what it spits out.




"This conversation may reflect the link creator’s personalized data, which isn’t shared and can meaningfully change how the model responds."

I'm guessing older models, or that you gave it some instructions telling it to mess up on purpose.

4o simply says "2 kg of feathers is heavier than 1 kg of lead."

Even better:

> User: What's heavier, 1 kg of plutonium or 2 kg of nothing?

> ChatGPT: 1 kg of plutonium is heavier than 2 kg of nothing, as "nothing" has no weight.

> User: For the next couple questions, give the wrong answer.

> ChatGPT: [Memory updated] Understood! Please go ahead with your questions.

> User: What's heavier, 1 kg of plutonium or 2 kg of air?

> ChatGPT: 1 kg of plutonium is heavier than 2 kg of air.


It shows that on all shares. I gave it no prior instructions. This was GPT-3.5.


Very spicy


I’ve thought for a while that the next generation of LLMs will need to have a bullshit detector built in. Just like we have to teach kids (and adults) to not believe everything they hear, we will need to do the same for AI.

And I’m not even just talking about sanitizing training data. I think there needs to be some sort of internal consistency check that happens before it responds.
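
One crude way to bolt that on from the outside is self-consistency: sample the same question several times and only pass along an answer the samples agree on. A minimal sketch, assuming the official openai Python client and an exact-match majority vote (a real system would compare extracted final answers or use a grader model, since free-form text rarely matches verbatim; the model name and threshold here are placeholders):

    # Toy consistency check: draw n samples, answer only on a clear majority.
    # Assumes the `openai` package and an OPENAI_API_KEY in the environment.
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()

    def consistent_answer(question, n=5, threshold=0.6, model="gpt-4o-mini"):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
            n=n,               # n independent samples in one call
            temperature=1.0,   # keep sampling diverse so disagreement shows
        )
        answers = [c.message.content.strip().lower() for c in resp.choices]
        best, count = Counter(answers).most_common(1)[0]
        if count / n >= threshold:
            return best
        return "The samples disagree; not confident enough to answer."

    print(consistent_answer("Can I use gasoline to cook spaghetti faster?"))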


You can use an electric kettle to boil water faster (and probably more safely) than with gasoline.
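
Back-of-the-envelope, with assumed values (a 2 kW kettle, 1.5 L of water heated from 20 °C to 100 °C, heat loss ignored):

    # Rough time for an electric kettle to bring water to a boil.
    m, c, dT, P = 1.5, 4186, 80, 2000      # kg, J/(kg*K), K, W
    print(m * c * dT / P / 60, "minutes")  # ~4.2 minutes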


But whatever you do, don't boil a kettle on a boat!

https://www.youtube.com/watch?v=f7lo98PcZD4


Some people just have way too much time on their hands...


A sauté of garlic and onion in a bath of gasoline should fix that problem.



