Hacker News

He links to Reddit, a site where most people are aggressively against AI. So, not necessarily a representative slice of reality.


He links to a post about a teacher’s expertise with students using AI. The fact that it’s on Reddit is irrelevant.


If you're going to champion something that comes from a place of extreme political bias, you could at least acknowledge it.


This is a baffling response. The politics are completely irrelevant to this topic. Pretty much every American distrusts big tech, and most are completely unaware of what the current administration has conceded to AI companies; with larger scandals taking the spotlight, neither party has had a chance to rally around AI as a talking point.

People don't like AI because it's filling the internet with garbage, not because of tribalism.


>This is a baffling response.

Likewise.

95+% of the time I see a response like this, it's from one particular side of the political aisle. You know the one. Politics has everything to do with this.

>what the current administration has conceded to AI companies

lol, I unironically think that they're not lax enough when it comes to AI.


Based on your response and logic - no dem should read stuff written by repub voters, or if they do read it, dismiss their account because it cannot be … what?

Not sure how we get to dismissing the teacher subreddit, to be honest.


I think the implication is that because the teacher posted on Reddit, they are some kind of socialist, and therefore shouldn't be listened to. I guess their story would be worth listening to if it was posted on Truth Social instead?


Ah! Nice point on Truth Social.


Nah, misses the entire point of what I was saying.

But thanks for recognizing that Truth Social has a noticeable political leaning. So close, yet so far.


>they are some kind of socialist

Yes, that is accurate.

>I guess their story would be worth listening to if it was posted on truth social instead?

No, I don't take anti-AI nonsense seriously in the first place. That aside, the main point here was that Reddit has a very strong political leaning. If anyone tried to insist that the politics of Truth Social is irrelevant, you'd immediately call it out.


It really doesn't lol.

I don't get the reactionary right's hysteria about Reddit. It's so clearly not true it's just silly.

It's like when my brother let my little cousin watch a scary movie and she had hysterics about scary things for days. Y'all tell each other ghost stories and convince yourselves it's real.


Yet another one! And literally all I have to do is point out that Reddit is a far-lefty website (it obviously is) and say that I won't play along (I won't).


So instead of addressing the actual substance, you dismiss it because of your assumption of their political leanings.

Good luck navigating the world, I guess.


They have the same political leanings as you. I notice these things.

>Good luck navigating the world, I guess.

Thanks. And a hearty "F you" to you too.


Look, another one! Twist it however you want, I'm not going to accept the idea that far-lefty Reddit is some impartial representation of what teaching is or what the average person thinks of AI.


> 95+% of the time I see a response like this, it's from one particular side of the political aisle. You know the one. Politics has everything to do with this

I really don't. Honestly, you're being so vague, and it's such a bipartisan issue, that I can't piece together who you're mad at. Godspeed.


Why? So you could discard it faster?

Read things from people that you disagree with.


Because I'm not going to play a game where the other side gets to ignore the rules.


I’d like to see a statistically sound source for that claim. Given how many non-nerds there are on Reddit these days, it’s unlikely that there’s any particularly strong bias in any direction compared to any similar demographic.


Given recent studies, that does seem to reflect reality. Trust in AI has been waning for 2 years now.


By what relevant metric?

The userbase has grown by an order of magnitude over the past few years. Models have gotten noticeably smarter and see more use across a variety of fields and contexts.


> Models have gotten noticeably smarter and see more use across a variety of fields and contexts.

Is that really true? The papers I've read seem to indicate the hallucination rate is getting higher.


Models from a few years ago are comparatively dumb. Basically useless when it comes to performing tasks you'd give to o3 or Gemini 2.5 Pro. Even smaller reasoning models can do things that would've been impossible in 2023.



