Hacker News | hdseggbj's comments

As long as people keep paying scammers they'll keep scamming. I've lost a lot of money and opportunity refusing to submit to extortion.


That's like saying why not just live on the dole?

Answer: pride, curiosity, inspiration, creativity, a sense of ownership, integrity, control, vision

I can't believe these concepts are so foreign to so many.


Okay, well I'm asking the author why they developed their own analytics tool rather than using an industry standard hoping there's some reason beyond motivational phrases.

Like, asking if there's a real technical reason.

But, go off I guess.


The distinction is irrelevant. No one thought the nukes would blow themselves up, but here we are on the precipice of destruction. We aren't going to bury AI in 20-story bunkers to protect LLMs.

Everyone knows the real evil is the people building the AI. We aren't afraid of AGI; we are afraid of Boston Dynamics (aka Google). We are afraid of Bill Gates, Sam Altman, Jeff Bezos, Larry Ellison, and Elon Musk.

These are dangerous people. Their mindset. Their unstoppable lust for money and power. We are afraid of their ability to convince humans to do unspeakable things for this imaginary thing we call money.

With AGI they don't have to persuade anyone of anything. There's not even a human conscience to stop them. We know these folks have no conscience of their own. The world is still here because other people have one.

AGI won't have a conscience or the integrity to stop evil.

Human society is collapsing because good people are doing nothing.


P.S. Boston Dynamics is no longer owned by Google, it's part of Hyundai now.


Ah yes, rings a bell. Doesn't change the sentiment though. They're all crooks and cheaters who will sacrifice the greater good for their own self interest.

https://en.wikipedia.org/wiki/2025_Georgia_Hyundai_plant_imm...

It's the fear of death that's killing us all.


There are tons of examples of chaebols behaving evilly in Korea. You don't have to pull out a scenario where half of Americans and most non-Americans think Hyundai are the victims, not the bad guys.


You don't have to launder money when the leader of the country is involved.


Solar companies fund the initiative, so who funds it is irrelevant. Everyone involved is motivated by self-interest.


They also enrich the parasitic bureaucracy. Climate change is a scam. Not because it isn't changing, of course it's changing, but humans can't and won't change it back, nor should they bother trying.

What they should do, scientifically, is adapt, like all organisms.

The irony is those demanding we change our behavior to reverse climate change are the ones actually fighting to keep humans from changing by adapting to changing climatic conditions, and so they are the biggest threat to human survival as a species.

We're gonna burn every drop of oil. Petroleum use goes up every year, regardless.


So you pay more money and also give up your privacy for what you could pay cash for. I don't think you're the target market for this phone.


I pay less money for my burrito than I would with cash, but the reason I use my phone is convenience, not cost.

> I don't think you're the target market for this phone.

My comment is downstream of the entertaining of a possibility of:

> a significant user base that runs alternative operating systems

... which isn't going to happen if you ask your users to give up commonly used features. It will forever be a niche project, at best.


And there are still folks who don't use ad blockers.


Just give the AI-to-user relationship a protection like attorney-client privilege.

Edit: AI has already passed the bar exam.


It only "passes the bar exam" when AI, or some other flawed process, is the examiner. See e.g. https://doi.org/10.1007/s10506-024-09396-9 for a debunk.


That's not a debunk. "Calls into question" does not equal "in truth, it failed the exam."


No, it’s a debunk. ChatGPT-4 scored in the 48th percentile (15th percentile in essays) amongst individuals that passed the bar exam. That’s very poor performance.


Thus it scored higher than almost half the humans who passed the test. In other words it too passed the bar.


Attorney-client privilege has limits. For obvious reasons I haven’t read any affidavits associated with the warrant, but it sure sounds like this would fall outside the bounds of attorney-client privilege.


With an attorney you have a clear sense of when you pass outside of that privilege. With a friend or colleague you have a social sense of what's going to remain confidential, plus memories aren't perfect. "Preserving, recording and reporting every word" is not the same as any of these things. This cannot be the world we all have to live in going forward; it's not safe or healthy.


Seems natural to extend privilege here. People are using it as a therapist.


There are a lot of counterarguments I could bring up, but just off the top, plainly: just because people use LLMs as therapists, lawyers, doctors, or deities doesn't make LLMs such.

My personal beliefs (we should not rely on models for such things at this stage, let's not anthropomorphize, etc.) to one side, let me ask: do you think if I used my friend Steve, who is not a lawyer but sounds very convincingly like one, to advise me on a legal dispute, that should be covered by attorney-client privilege?

Cause, even given the scenario that LLMs suddenly become perfectly reliable enough to verifiably carry out legal/medical/etc. services to a point where they can actually be accepted into day-to-day practice by actual professionals and the companies are willing to take on the financial risks of any malpractice for using their models in such areas (as part of enterprise offerings for an extra fee of course), that still wouldn't and shouldn't mean that your run-of-the-mill private ChatGPT instance has the same privileges or protections that we afford to e.g. patient data when handled digitally as part of medical practice. At best (again, I dislike anthropomorphizing models, but it is easier to talk about such a scenario this way), a hypothetical ChatGPT that provides 100% accurate legal information would be akin to a private person who just happens to know a lot about the law, but never got accredited and does not have the same responsibilities.

Again though, we are far from that hypothetical anyway; "people" using LLMs that way does not change this fact. I know, unfortunately, there are people who are convinced that current-day LLMs have already attained Godhood and are merely biding their time, but that doesn't become real either, just because they act according to their assumptions.

I really struggle to understand, and I see no cogent arguments across this comment section for, why current-day LLMs in such a scenario should be treated differently from e.g. PKM software or a cloud-hosted diary and afforded greater legal protections (or fewer, depending on viewpoint, personal stance, and your local data privacy laws).


You'll find these laws privileging certain folks are contoured and controlled by the individuals who have already been granted such privilege to discourage and limit competition. Not because it's good in any way for the client.

Protectionism hurts all of society to benefit a few.


Perhaps this is a language barrier, but I genuinely do not understand what is meant by this. Like, what does this have to do with protectionism, who are the "folks" in this case, etc. Honestly asking.


Doctors control who can be a doctor, what is required to be a doctor, what doctors can and can't do, and that people are forced to go to them for healthcare ... all to protect their personal income. Not to better healthcare. Not to expand access to healthcare. But precisely to make it cost more. They are hurting society to benefit themselves.

Milton Friedman explains it to doctors here: https://m.youtube.com/watch?v=ss5PxPlnmFk


Yeah, politely, respectfully, no.

Don't know where to start, but I want to assure you: no matter where on this planet you live, medical doctors are generally not at fault for the high costs of care. Depending on which health care system we are talking about, the particulars may differ, but no, MDs are not interested in worsening patient care for their own benefit. Kinda difficult, considering the amount of uncompensated labor and stress compared to other higher-paying occupations. Ask a trainee/resident/equivalent in your local health care system if you want details.

And people are "forced" to go to an MD for medical treatment in the same way they are "forced" to go to any other domain-specific expert: that is where the experience and liability lie, because they have undertaken the time, training, and exams to ideally assure a specific level of care.

Incidentally, this has absolutely zero to do with LLMs and the fact that this is cloud-hosted software, not an entity, being, or anything of the sort, so it shouldn't receive any special considerations beyond what we afford to cloud-hosted content. I couldn't find anything on patient data processing in that Milton Friedman collection you linked, and as that was his area of work, it was purely US-centric. Medical care is, however, the purview of medical professionals outside the US as well, including in countries with far higher patient outcomes. If there is an applicable argument, just quote it directly instead of linking a collection of clips.

To bring this back to the topic at hand: LLMs can be and are being used in medical practice already. And neither did doctors prevent that, nor did it require a law change, because, as stated before, it is merely data being input and processed. There are EU MDR-certified apps for skin cancer, there are on-prem LLM solutions that adhere to existing patient privacy regulations, etc.

Basically, doctors do not stand in the way of LLM usage (neither could they, nor do they have the time), and even if they wanted to, LLM input and output is just data and gets treated accordingly.


I can represent myself in court, but I can't prescribe my own medication. If one does not go to the doctor to get those drugs, they'll die, so yes: forced.

All you assured me of is that you didn't watch the video.


There's a fine line between teasing, which can build a person up, and degradation designed to destroy self-esteem. It's also easy to dismiss that degradation as "I'm just teasing. You're too sensitive."


Buy puts


The financially illiterate don't understand that "going short" is not the simple reverse of "going long," and that there are extra difficulties involved in borrowing stock to short or in buying puts. Well, puts are at least easy to buy, but the manner in which they decay makes it hard to win with that strategy; harder, in fact, than buying calls to go long. But yes, you can technically buy puts. You can also play Powerball.
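The time-decay point can be made concrete with a toy Black-Scholes calculation (all parameters here are hypothetical, chosen only for illustration): even if the underlying stock never moves, an at-the-money put bleeds value as expiry approaches, which is the headwind a put buyer must overcome.

```python
# Minimal Black-Scholes sketch of put time decay (hypothetical numbers).
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """Black-Scholes price of a European put, no dividends."""
    d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

# At-the-money put: stock flat at 100, 20% vol, 5% risk-free rate.
early = bs_put(100, 100, 0.05, 0.20, 30 / 365)  # 30 days to expiry
late = bs_put(100, 100, 0.05, 0.20, 5 / 365)    # 5 days to expiry
# With the stock unchanged, the put has lost most of its value
# purely to the passage of time (theta decay).
```

A long call suffers decay too, but a long stock position does not, which is one sense in which a put is not the mirror image of simply owning shares.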

