
Thank you for the reply.

The power companies are going to buy access to this data. This government regulated utility is going to be allowed to charge its subscribers based on social metrics.

This sounds very far fetched to me.




I think you'd be surprised by how many companies buy access to that data.

Would it surprise you if your utility pulled your credit score? Because that already happens.

Why is it far-fetched that they would take another step or two along that path?

And let's say it isn't the power company. Let's say it's your employer. Feel good about that?


I'm not surprised about how data is shared. I know we freely share it with companies without concern. Much of this is within our own control.

Regulations would prevent this type of abuse against power consumers in the United States. The rest of this argument is whataboutism.


> Regulations would prevent this type of abuse against power consumers in the United States. The rest of this argument is whataboutism.

So... you agree with the original post now that there's a specific risk with the current potential uses of these models?

I'm not sure what you're questioning at this point.

("What if it was your employer" is also not whataboutism; it's another facet of the same concern: another party that pulls publicly available data today, and could potentially pull even more, forcing us as the public to decide whether we want to allow that. Should someone be made unemployable because an LLM decides they're too much of an asshole on Twitter? Let's figure it out.)


> So... you agree with the original post now that there's a specific risk with the current potential uses of these models?

No. I do not believe the US government would allow public utilities to set prices based on social metrics bought from third parties.

> "What if it was your employer" is also not whataboutism

I do not believe a public utility and my employer are comparable. Maybe if I were a government employee. Even then, I don't believe the government would have the right to use that data against me either.

I have a hard time believing these underhanded tactics are overlooked and allowed.


>I have a hard time believing these underhanded tactics are overlooked and allowed.

Really, we're looking in the wrong place talking about the power company... where you should be looking is rental properties.

https://www.propublica.org/article/yieldstar-rent-increase-r...


I don't understand your fixation on power companies.

Imagine you'd never read the original comment and instead you read a comment from me, saying, "I have a specific concern about how LLMs like ChatGPT will let companies do far more intrusive background checks against every applicant than they do today. I don't want a world where the standard process for getting a new job includes a background check that runs an LLM across everything you've posted on the internet. It wasn't practical for them to do this for every single applicant in the old world, they just did cursory background checks since doing more would cost too much, but as LLMs get cheaper it will be easy and cheap."

Do you agree that that's a concern introduced by the development of LLMs?


AI is not introducing any new concern. Companies already engage in this practice. Are you against the practice, or against AI?


My friend, do I have a rant lined up for you:

When I worked at a utility company, we had a purchased dataset on customers with inane, ridiculously specific features such as "average number of game consoles per household", at street level.

Now,

a. the data quality was absolute dog shit, and

b. the idiotic amount of features made it super easy to overfit, leak or otherwise do untoward shit when training a model, and

c. often the resulting models (churn damping, targeted marketing) didn't perform significantly better than random sampling...

But the business users/POs ate that shit up like fine-cuisine Italian sandwiches, because "we have comprehensive 360° multichannel whateverthefuck insight into our customer base and we make meaningful business decisions based on this."

And this is in GDPR-crazy Europe.
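Points (b) and (c) are easy to reproduce. With far more features than customers, a model can fit the training data perfectly while doing no better than chance on customers it hasn't seen. A minimal synthetic sketch (NumPy assumed; all numbers are made up, and the labels are pure noise on purpose):

```python
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test, n_features = 100, 1000, 2000
# Purely random features and labels: there is nothing real to learn.
X_train = rng.normal(size=(n_train, n_features))
y_train = rng.choice([-1.0, 1.0], size=n_train)
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.choice([-1.0, 1.0], size=n_test)

# Least-squares fit; with 20x more features than rows it interpolates the noise.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

train_acc = np.mean(np.sign(X_train @ w) == y_train)
test_acc = np.mean(np.sign(X_test @ w) == y_test)
print(f"train accuracy: {train_acc:.2f}")  # perfect fit to noise
print(f"test accuracy:  {test_acc:.2f}")   # no better than random sampling
```

The model looks brilliant on the data it was trained on and is useless on everything else, which is exactly how a "comprehensive 360° insight" dashboard can report great numbers while the campaigns it drives perform like coin flips.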


Yeah... I've seen similar in the US.

I see LLMs rapidly increasing the data quality of those sorts of datasets by enabling full-text crawling of all sorts of other publicly available or purchasable streams of text. You said on HN once that you had 5 consoles? Well, we matched your HN username to your username on this other site, and there was a breach that matched usernames to emails over there, and there was a different breach that let us match emails to full names, and bam, now we have an accurate number.
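The linking chain described above is, mechanically, just a series of joins on shared keys. A toy sketch; every dataset, username, and name here is made up for illustration:

```python
# Site A: public posts keyed by username (e.g. an HN-style forum).
posts = {"gamer_joe": "I own 5 consoles"}

# Hypothetical breach 1: maps that username to an email on another site.
breach_usernames = {"gamer_joe": "joe@example.com"}

# Hypothetical breach 2: maps emails to full names.
breach_emails = {"joe@example.com": "Joe Smith"}

def link(username):
    """Chain the datasets: username -> email -> full name, plus the claim."""
    email = breach_usernames.get(username)
    name = breach_emails.get(email)
    claim = posts.get(username)
    return name, claim

print(link("gamer_joe"))  # ('Joe Smith', 'I own 5 consoles')
```

Each hop is trivial on its own; the damage comes from composing them, and from an LLM doing the fuzzy matching at scale that a human analyst used to do by hand.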


People lie online. I have 5 Ferraris. I am 7 ft tall. I once scored 4 touchdowns in a single game. Now the power company AI can use those "facts" to target their marketing message to me, one of their wealthiest and most attractive customers.


>now we have an accurate number.

No, now we THINK we have an accurate number because everyone in the entire chain is bullshitting about how good machine learning and their data is.

That's so much worse.




