
A $1,500 trip to the mechanic


That's why I clicked the title...thought for sure I was getting some engine knowledge


What tests? The term “therapist” is not protected in most jurisdictions. No regulation required. Almost anyone can call themselves a therapist.


In every state you have to have a license to practice.

The advice to not leave the noose out is likely enough for ChatGPT to lose its license to practice (if it had one).


Indeed. Take the New Zealand Department of Health as an example; it managed its entire NZD$28 billion budget (USD$16B) in a single Excel spreadsheet.

https://www.theregister.com/2025/03/10/nz_health_excel_sprea...

[edit: Added link]


Indeed. In Australia, a government was once dismissed after the Senate blocked its supply bills (supply bills allocate money to the government). The Governor-General resolved the deadlock by dismissing the Prime Minister, then dissolving Parliament and calling an election. The event is known as “The Dismissal”. It remains one of the key examples of the Governor-General’s reserve powers in action.


This was an example of foreign interference (from where exactly is likely to remain unknown[0]); not an apolitical governor general stabilising the political system.

[0] https://thediplomat.com/2020/07/new-light-shed-on-australias...


Isn't this "as intended" in the Westminster-style system? The govt is formed by MPs from the majority party (or alliance). By definition they MUST be able to pass ALL money bills, which require only a simple majority. Any failure to pass a money bill is equivalent to the govt no longer holding majority support in parliament. And that means either the king/president/govgen invites someone else from the current parliament who they have good reason to believe DOES (potentially) have the support of a majority of the parliament, or dissolves the parliament and calls fresh elections if there is no such majority.

I am not quite sure why an action with such clear, established precedent would be considered foreign interference. Or was it the case that there WAS a suitable candidate with a possible majority, but they were NOT invited by the govgen to try and win a trust vote in parliament?


It was very much an edge case: with one of Whitlam's senators on leave and recent changes to territory rules giving additional senators to the opposition party (as I recall ...), the ability to block supply appeared suddenly, out of the blue.

Whitlam did move to call an election (rather than be sacked), which likely would have removed the blocked-supply threat, as he was at the time an extremely popular PM in Australia (loved by the common masses, despised by many elites) .. and when attending the Queen's Representative (the Governor-General) to advise about calling an election .. he was removed by the G-G.

Strictly speaking, the "as intended" outcome should have been to resolve a looming (not yet realised) supply crisis by allowing the people of Australia to vote; instead, the government of the day (Whitlam's) was removed on a technical reading that ran against the spirit of the intended resolution.

There's a peer comment here that linked to a 2020 article on the finally released royal correspondence that's worth a read. The US influence angle has merit too; they had weight in the game for sure, though how much and whether it tipped the balance is debatable.

Literally reams of controversy here. The G-G acted autonomously, and likely to save his own neck, as Whitlam intended to replace him. Additionally, outside powers (the UK and the US) were whispering in the ears of those with levers to pull, seeking to dump Whitlam; he was returning real power to the people, providing socialised health and education to the masses, asking questions about the role of secret American bases on AU soil, etc.

This was, indeed, extremely serious stuff: https://www.youtube.com/watch?v=A4jfR2u_9Kk


The Dismissal is not an example of the Australian system working as intended.


Could it be either of these studies?

“Dietary Intervention to Reverse Carotid Atherosclerosis” (Circulation, 2010) — participants were randomized to low-fat, Mediterranean, or low-carbohydrate diets; carotid arteries were imaged with 3-D ultrasound cross-sections at baseline and follow-up. After 2 years there was a ~5% regression in carotid vessel-wall volume, with similar regression across all diets (i.e., including the low-carb arm). [1]

Volek et al., 2009 (Metabolism) — 12-week very-low-carb vs low-fat trial; ultrasound of the brachial artery showed improved post-prandial flow-mediated dilation (a marker of endothelial function/inflammation) in the low-carb group. Not carotid 3-D slices, but still vascular imaging with before/after comparisons. [2]

[1] https://www.ahajournals.org/doi/pdf/10.1161/CIRCULATIONAHA.1...

[2] https://lowcarbaction.org/wp-content/uploads/2019/12/Volek-e...


The World Trade Center is/was closer to UNHQ ;)

Edit: ASCII emoji fail


The claim that big US companies “cannot explain the upsides” of AI is misleading. Large firms are cautious in regulatory filings because they must disclose risks, not hype. SEC rules force them to emphasise legal and security issues, so those filings naturally look defensive. Earnings calls, on the other hand, are overwhelmingly positive about AI. The suggestion that companies only adopt AI out of fear of missing out ignores the concrete examples already in place. Huntington Ingalls is using AI in battlefield decision tools, Zoetis in veterinary diagnostics, Caterpillar in energy systems, and Freeport-McMoran in mineral extraction. These are significant operational changes.

It is also wrong to frame limited stock outperformance as proof that AI has no benefit. Stock prices reflect broader market conditions, not just adoption of a single technology. Early deployments rarely transform earnings instantly. The internet looked commercially underwhelming in the mid-1990s too, before business models matured.

The article confuses the immaturity of current generative AI pilots with the broader potential of applied AI. Failures of workplace pilots usually result from integration challenges, not because the technology lacks value. The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.


> The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.

There was a weird moment in the late noughties where seemingly every big consumer company was creating a presence in Second Life. There was clearly a lot of strategic interest...

Second Life usage peaked in 2009 and never recovered, though it remains somewhat popular amongst furries.

Bizarrely, this kind of happened _again_ with the very similar "metaverse" stuff a decade or so later, though it burned out somewhat quicker and never hit the same levels of farcical nonsense; I don't think any actual _countries_ opened embassies in "the metaverse", say (https://www.reuters.com/article/technology/sweden-first-to-o...).


The issue is that the examples you listed mostly rely on very specific machine learning tools (which are very much relevant and good use of this tech), while the term "AI" in layman terms is usually synonymous for LLMs.

Mentioning the mid-1990s internet boom is somewhat ironic imo, given what happened next. The question is whether "business models mature" with or without a market crash, given that the vast majority of ML money is going to LLM efforts.


(You're responding to an LLM-generated comment, btw.)


The comment was definitely not LLM generated. However, I certainly did use search for help in sourcing information for it. Some of those searches offered AI generated results, which I cross-referenced, before using to write the comment myself. That in no way is the same as “an LLM-generated comment”.


For the benefit of external observers, you can stick the comment into either https://gptzero.me/ or https://copyleaks.com/ai-content-detector - neither are perfectly reliable, but the comment stuck out to me as obviously LLM-generated (I see a lot of LLM-generated content in my day job), and false positives from these services are actually kinda rare (false negatives much more common).

But if you want to get a sense of how I noticed (before I confirmed my suspicion with machine assistance), here are some tells: "Large firms are cautious in regulatory filings because they must disclose risks, not hype." - "[x], not [y]"

"The suggestion that companies only adopt AI out of fear of missing out ignores the concrete examples already in place." - "concrete examples" as a phrase is (unfortunately) heavily over-represented in LLM-generated content.

"Stock prices reflect broader market conditions, not just adoption of a single technology." - "[x], not [y]" - again!

"Failures of workplace pilots usually result from integration challenges, not because the technology lacks value." - a third time.

"The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest." - not just the infamous emdash, but the phrasing is extremely typical of LLMs.
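(For the curious, that "[x], not [y]" tell is mechanical enough to count. A toy Python sketch of the idea, purely illustrative; the regex and the "density" metric are made up for this comment and are nothing like what real detectors do:)

```python
import re

# Toy heuristic: count ", not <word>" contrast constructions,
# a pattern over-represented in LLM output. Illustrative only.
CONTRAST = re.compile(r",\s+not\s+\w+")

def contrast_density(text: str) -> float:
    """Contrast constructions per sentence (a rough proxy, not a detector)."""
    sentences = [s for s in re.split(r"[.!?]", text) if s.strip()]
    if not sentences:
        return 0.0
    return len(CONTRAST.findall(text)) / len(sentences)

sample = ("Large firms are cautious in regulatory filings because they "
          "must disclose risks, not hype. Stock prices reflect broader "
          "market conditions, not just adoption of a single technology.")
print(contrast_density(sample))  # one contrast construction per sentence
```

Three occurrences in three consecutive paragraphs, as above, is well outside what most human commenters produce.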


The use of “ instead of ", two different types of hyphens/dashes, and the specific wording and sentence construction are clear signs that the whole comment was produced by ChatGPT. How much of it was actually yours (people sometimes just want an LLM to rewrite their thoughts) we will never know, but it's the output of an LLM.


Well, I use an iPhone and “ is default on my keyboard.

Tell me, why should I not use a hyphen for hyphenated words?

I was schooled in British English, where the spaced en dash - is preferred.

Shall I go on?


I'm using ChatGPT daily to correct wording and I work on LLMs; the construction and wording in your comment are straight from ChatGPT. I looked at your other comments, and a lot of them seem to be LLM output. This one is an obvious example: https://news.ycombinator.com/item?id=44404524

And anyone can go back to the pre-LLM era and see your comments on HN.

You need to understand that ChatGPT has a unique style of writing and overuses certain words and sentence constructions that are statistically different from normal human writing.

Rewriting things with an LLM is not a crime, so you don’t need to act like it is.


It's popular now to level these accusations at text that contains emdashes.


An LLM would “know” not to put spaces around an em dash. An en dash should have spaces.


I've actually seen LLMs put spaces around em dashes more often than not lately. I've made accusations of humanity only to find that the comment I was replying to was wholly generated. And I asked, there was no explicit instruction to misuse the em dashes to enhance apparent humanity.


And you're responding to a comment where the LLM has been instructed not to use emdashes.

And I'm responding to a comment that was generated by an LLM that was instructed to complain about LLM-generated content with a single sentence. At the end of the day, we're all stochastic parrots. How about you respond to the substance of the comment and not to whether or not there was an emdash. Unless you have no substance.


Posting (unmarked) LLM-generated content on public discussion forums is polluting the commons. If I want an LLM's opinion on a topic, I can go get one (or five) for free, instantly. The reason I read the writing of other people is the chance that there's something interesting there, some non-obvious perspective or personal experience that I can't just press a button to access. Acting as a pipeline between LLMs and the public sphere destroys that signal.


Have you ever listened to a bad interview? Like, really bad? Conversely, have you ever listened to a really good interview? Maybe even one of the same subject? The phrase "prompt engineering" is a bit much, but there's still some skill to it. We know this is true, because every thread there are people saying "it doesn't work for me!" while others are saying it's the second coming.

So maybe, while it makes you feel smart to be a stochastic parrot that can repeat "LLM generated!111" like you're a model with a million parameters every time you see an emdash, it's a lazy dismissal and it tramples curiosity.


I have no idea what you think you're responding to. I use LLMs frequently in both professional and personal contexts and find them extremely useful. I am making a different, more specific claim than the thing you think I am saying. I recommend reading my comment more carefully.


> Huntington Ingalls is using AI in battlefield decision tools, Zoetis in veterinary diagnostics, Caterpillar in energy systems, and Freeport-McMoran in mineral extraction.

But most AI push is for LLMs, and all the companies you talk about seem to be using other types of AI.

> Failures of workplace pilots usually result from integration challenges, not because the technology lacks value.

Bold claim. Toxic positivity seems to be all too common among AI evangelists.

> The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.

If the financial crisis taught me anything, it's that if one company jumps off a bridge the rest will follow. Assuming there must be some real value "because capitalism" misses the main proposition of capitalism: companies will make stupid decisions and pay the price for them.


> The main reason for the push from the top is probably because they believe companies that don't adopt AI strategies now and ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage. Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.

Yes, this is the correct answer.


Maybe they did and informed the authorities accordingly.


An oxymoron if I ever saw one.


Anarchist communism, or anarcho-communism, is a coherent political philosophy. You can read about it! https://en.wikipedia.org/wiki/Anarchist_communism

