i_dont_know_'s comments

Assuming a model is person-like, it gets even harder when we ask "who" the model is.

Is it this particular model from today? If there's a minor version change, is it a new entity, or does it only become a new entity on major releases? What about a finetune of it? Or a version with a particular tool pipeline? Are they all the same being?

I think the analogy breaks down pretty fast. Again, not to say we shouldn't think about it, but clearly the way to think of it is not "exactly a person".


We're talking past each other, I think.

To be clear, I believe that models are machines. They're clever, useful machines. We get sucked in, but they're just machines, and thus property. If I delete a model, in an effective sense I've disposed of property. I have not destroyed anything I would consider a "who", i.e. a person; I've just turned off the computer.

But, as the original piece points out, there are folks out there with a pathological (yes!) concept of AI as sentient entities, persons; well, let's say person-adjacent, at least. They have "relationships". Will they feel absolutely evil when they stop paying the subscription and the company "terminates" the model? Maybe they will, but that's their scrambled thinking, not mine. If one believes an AI is a person, one *does* have an ethical dilemma when it's turned off. You'd have an ethical obligation to stop the slaughter, wouldn't you?

If I take my sick dog to the vet to be put down because she has a cancer that's making her life miserable, I'm emotional, but ethically I feel it's the right thing to do. It's also lawful. I don't think I'd feel as comfortable ethically taking my grandmother in for the big exit. Also, it's not lawful in most places, even with informed consent. The distinction is the difference.


I wanted to talk about Anthropic's "soul" document that they include in Claude's prompt, some of the issues it might be causing, and point out that what we're seeing now probably isn't artificial consciousness so much as prompt adherence.


I'm actually quite surprised.

From another article today, I discovered the IRS has a GitHub repo with (what seem to be) XML versions of tax questions... surely some combination of LLM and structured data querying could solve this? https://github.com/IRS-Public/direct-file/tree/main
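Something like this rough sketch, maybe. The file path and the "question" tag name are guesses on my part (I haven't dug into the repo's actual layout), and it assumes the standard OpenAI Python client:

    # Flatten the (hypothetical) question XML into plain text, then let an
    # LLM answer questions grounded in that structured data.
    import xml.etree.ElementTree as ET
    from openai import OpenAI

    def extract_facts(xml_path: str) -> str:
        # Collect the text of every <question> node (tag name is a guess).
        tree = ET.parse(xml_path)
        return "\n".join(
            " ".join(node.itertext()).strip()
            for node in tree.getroot().iter("question")
        )

    def ask(xml_path: str, user_question: str) -> str:
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Answer using only these IRS tax questions:\n"
                            + extract_facts(xml_path)},
                {"role": "user", "content": user_question},
            ],
        )
        return resp.choices[0].message.content

    print(ask("direct-file/questions.xml", "Do I qualify for the EITC?"))

For the full corpus you'd obviously want real retrieval rather than stuffing everything into the system prompt, but as a proof of concept it doesn't seem far off.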


I feel like these kinds of things push us as a society to decide what exactly the purpose of school should be.

Currently, it's a place for acquiring skills but also a sorting mechanism for us to know who the "best" are... I think we've put too much focus on the sorting aspect, enticing many to cheat without thinking about the fact that in doing so they shortchange themselves of actual skills.

Some of the language here ("securing assessments in response to AI") really makes it feel like they're worried more about the sorting than about the fact that the kids won't develop critical thinking skills if they skip that step.

Maybe we can have


The current system "sorts" the students who "developed critical thinking skills" out from the ones who didn't put in effort. If there's no expectation that they'll be sorted thus, then the vast majority won't (and right now don't) bother with developing or exercising those skills. Usually they'll just put all their effort into the one or two classes they have that actually make them demonstrate mastery of the material.


Sure, school is for acquiring skills, but it's also day care: a place to keep children during the day so their parents can work, especially in a society where, more and more, the expectation is that both parents work.


The article is about college-level education, which is primarily about ranking students in order of who should get the best entry-level jobs. If technology is disrupting the effectiveness of that ordering function, then something needs to change.


There is evidence that the ranking of students in order of who should get the best entry-level jobs is done mainly by the college admissions process, which bins students into more or less selective colleges.


Thanks! The goal wasn't to get a full overview of the commercial space but really to understand more of the fundamentals of building agents, and their limits and use-cases in general.

Specifically, I see a huge push to "build agents" and to "use agents" as building blocks for more complex interactions. I wanted to get a feel for why, and my conclusions are more around that than around "what's possible".


I wanted to dig into the hype around agents and figure out both the use-cases and quirks, so I wrote this little 'review'.

I think, like most of the 'AI' sphere right now, they're overhyped, but it's good to get a firsthand feel for when they're useful and when they're not.


I clicked - it's for a self-improvement week.

I read through it, and I think it's an interesting initiative (lots of good stuff around holding yourself and others accountable, positive reinforcement, etc.), but the title really makes it sound like you're recruiting for a toxic workaholic startup or something.


Really good point on open source and the nudges it provides, and definitely one that isn't made often enough!


True... I was trying to define them the way (I think) companies are defining them (i.e., what their alignment teams are looking at) and the way they're reported. In these specific contexts they're used with overlap, but yeah, I do bounce back and forth a bit here.


I keep seeing "AI ethics" being redefined to focus on fictional problems instead of real-world ones, so I wrote a little post on it.


Great little post. Congrats.

Also, there's the ethics of scraping the whole internet and claiming it's all fair use, because the other scenario is a little too inconvenient for all the companies involved.

P.S.: I expect a small thread telling me that it is indeed fair use, because models "learn and understand just like humans", "models are hugely transformative" (even though some licenses say "no derivatives whatsoever"), "they are doing something amazing so they need no permission", and I'm just being naive.


I'm a radicalized intellectual property abolitionist. The ethical issue with scraping is the DDoS-like effect it has on smaller sites, running up the bandwidth bill for medium-sized hosts. There's no individual company at fault for the flood; rather, it's an emergent result of each startup attempting to train on data that's ever so slightly more up-to-date or broad than its competitors'. If they shared a common corpus that updated once per month, scraping traffic would be buried in organic human visitors instead of the other way around. Let them compete on training methodology, not on a race to scrape.


Worrying about that stuff is just a waste of time. Not because of what you said, but because it's all ultimately pointless.

Unless you believe this will kill AI, all it does is create a bunch of data brokers.

Once fees are paid, data is exchanged, and models are trained, if AI was going to take your job in programming/drawing/music, it still will. We arrive at the same destination, only with more lawyers in the mix. You get to enjoy unemployment knowing only that the lawyers made sure they at least didn't touch your cat photos.


The thing is, if you can make sure that your images/music/code aren't used for AI training, then you can be sure that you can continue doing what you do, because your personal style is what enables the specialty you create.

Maybe you will lose some of your "territory" in the process, but what makes you, you, will be preserved. Nobody will be able to ask "draw me a comic with this dialogue in the style of $ARTIST$".


> The thing is, if you can make sure that your images/music/code aren't used for AI training, then you can be sure that you can continue doing what you do, because your personal style is what enables the specialty you create.

Personal styles are a dime a dozen and of far lesser importance than you think.

Professionals will draw in any style; that's how we make things like games and animated movies. Even assuming you had some unique and incredibly valuable style, all it'd take to copy it completely legally is finding somebody else willing to imitate your style to produce training material, and then training on that.


> Personal styles are a dime a dozen and of far lesser importance than you think.

Try imitating Mickey Mouse, Dilbert, Star Wars, Hello Kitty, XKCD, you name it.

Randall will possibly laugh at you, but a company with lawyers that happens to draw cartoons won't be amused, and will come after you in any way it can.

> Professionals will draw in any style...

Yep, after calling and getting permission and possibly paying some fees to you if you want. There's respect and dignity in this process.

Yet we reduce everything to money, treating machine code like humans and humans like coin-operated vending machines.

There's something wrong here.


> Try imitating Mickey Mouse, Dilbert, Star Wars, Hello Kitty, XKCD, you name it.

Those are not styles, they're characters for the most part.

You absolutely can draw heavy inspiration from existing properties, mostly so long as you avoid touching the actual characters. Like, D&D has a lot of Tolkien in it, and I believe the estate is quite litigious. You can't put Elrond in a D&D game, but you absolutely can have "Elf" as a species that looks nigh-identical to Tolkien's descriptions.

For style imitation, it's long been a thing to make more anime-ish animation in the west, and anime itself came from Disney.

> Yep, after calling and getting permission and possibly paying some fees to you if you want.

Not for art styles, they won't. Style is not copyrightable.


> Those are not styles, they're *characters* for the most part. (Emphasis mine)

While I know that styles are not copyrightable for good-faith reasons, massive abuse of that good faith is a loud siren for regulation in that area.

> You absolutely can draw heavy inspiration from existing properties, mostly so long as you avoid touching the actual characters.

From what I understand, it's mostly allowed for homage, and to avoid (un)intentionally narrowing the creative landscape. Not for ripping people off.

> For style imitation, it's long been a thing to make more anime-ish animation in the west, and anime itself came from Disney.

But all of that was done in a tradition of cross-pollination; there were no ill intentions, until now.

After OpenAI ripped off Studio Ghibli, things got blurred. It's not just my interpretation, either [0][1].

Then there's Universal and Disney's lawsuits against Midjourney. While these are framed as character copying, reading between the lines, style appropriation is also being strongly balked at [2].

So things are not as clear-cut as before, because a company stepped on the toes of another. Small fish might get some benefits as a side effect.

Addendum: Even OpenAI power-walked from mocking Studio Ghibli to "maybe we shouldn't do that" [3].

[0]: https://www.theatlantic.com/technology/archive/2025/05/opena...

[1]: https://futurism.com/lawyer-studio-ghibli-legal-action-opena...

[2]: https://variety.com/vip/how-the-midjourney-lawsuit-impacts-g...

[3]: https://www.eweek.com/news/openai-studio-ghibli-ai-art-copyr...


> While I know that styles are not copyrightable for good-faith reasons, massive abuse of that good faith is a loud siren for regulation in that area.

Nothing to do with "good faith"; it's that style isn't really definable. There are thousands of artists who produce very similar outputs.

Also, it'd be very stupid, because suddenly if there are two people who draw nearly identically, one could sue the other even if that happened by chance.

> After OpenAI ripped off Studio Ghibli, things got blurred.

Nothing blurry about it. OpenAI is within its full legal rights to do it. It's kinda in bad taste, that's about it. Anyone can do it. Disney could make a Ghibli-style movie if they ever wanted to.

I'm not sure why all the drama, because who even cares? The reason I watched Ghibli movies was never about the particular look.

> Then there's Universal and Disney's lawsuits against Midjourney. While these are framed as character copying, reading between the lines, style appropriation is also being strongly balked at

You'd better hope it stays at characters, or we're going to have a mess of lawsuits with people and organizations suing each other because they draw eyebrows a particular way. I fail to see why that is at all desirable.

And of course the big corporations will come out on top of that.


> Nothing to do with "good faith"; it's that style isn't really definable.

We have something called AI which knows everything; maybe they should ask it. It's very fashionable. Even if the definition is wrong, it's an AI, it can do no wrong. That's what I've heard.

> I'm not sure why all the drama, because who even cares? The reason I watched Ghibli movies was never about the particular look.

Because a man and a studio that draw their movies by hand [0], frame by frame, and spend literal years on a single movie deserve some respect, even if you don't care about the art style.

Even a top-notch studio like Pixar can only pump out a couple of minutes per week [1].

Doing this type of work takes immense dedication, energy, and time. If you think it's worth nothing, I can't say anything to that. I deeply respect these people for what they do, and I'm equally thankful.

> You'd better hope it stays at characters, or we're going to have a mess of lawsuits with people and organizations suing each other because they draw eyebrows a particular way. I fail to see why that is at all desirable.

Maybe they should drink their own poison to understand what kind of delicate balances they're poking and prodding. The desire for more monies in spite of everything should have some consequences.

[0]: https://www.reddit.com/r/nextfuckinglevel/comments/1egdzja/t...

[1]: https://www.reddit.com/r/todayilearned/comments/8p71cb/til_i...


Sometimes AI is "just like a human", other times AI is "just a machine".

It all depends on what is most convenient for avoiding any accountability.


IP is a pragmatic legal fiction, created to reward developers of creative and innovative thought, so we get more of it. It’s not a natural law.

As such, fair use is whatever the courts say it is.


Then let's abolish all of them. Patents, copyrights, everything. Let's mail Getty, Elsevier, car manufacturers, chemical plants, software development giants, and small startups to tell them that everything they have has no protection whatsoever...

Let us hear what they think...

I'm for the small fish here: people who put things out for pure enjoyment, wanting nothing but a little respect for the legal documents they attach to their meticulously made wares, wares that power most of the infrastructure that lets you read this very comment, for example.

The current model rips off the small fish and forcibly feeds the bigger ones, creating inequality. There are two ways to stop this: the bigger fish respect the smaller fish, because everybody is equal before the law (which will not happen), or we abolish all protections and make the bigger fish vulnerable to the small fish (again, which will not happen).

Incidentally, I'm here for the bigger fish too, the ones that put their wares under source-available, "look but don't use" licenses. They're hosed just as badly.

I see the first one as the more viable alternative, but alas...

P.S.: Your comment gets two points. One for deflection (the "it's not natural law" argument), and another for the "but it's fair use!" clause. If we argue that only natural laws are laws, we'll have some serious fun.


> Then let's abolish all of them. Patents, copyrights, everything.

This, but without the irony. Let us be like bacteria, freely swapping plasmids.


Thanks! Yeah, there's a lot of "well, it's 'standard practice' now so it can't be wrong" going on in so many different ways here too...


Yes, all this highly public hand-wringing about "alignment", framed in terms of "but if our AI becomes God, will it be nice to us", is annoying. It feels like it's mostly a combination of things. Firstly, by play-acting that your model could become God, you instill FOMO in investors who see themselves missing out on the hyper-lucrative "we literally own God and ascend to become its archangels" boat. Secondly, you look like you're taking ethics seriously, which deflects regulatory and media interest. And thirdly, it's a bit of fun sci-fi self-pleasure for the true believers.

What the deflection is away from is that the actual business plan here is the same one tech has been running for a decade: welding every flow and store of data in the world to their pipelines, mining every scrap of information that passes through, giving themselves the ability to shape the global information landscape, and then selling that ability to the highest bidders.

The difference with "AI" is that they finally have a way to convince people to hand over all the data.


It's interesting how our experiences seem to differ completely. For example, regarding people's concerns about AI ethics, you write:

> People are far more concerned with the real-world implications of ethics: governance structures, accountability, how their data is used, jobs being lost, etc. In other words, they’re not so worried about whether their models will swear or philosophically handle the trolley problem so much as, you know, reality. What happens with the humans running the models? Their influx of power and resources? How will they hurt or harm society?

This is just not my experience at all. People do worry about how models act, because they infer that models will eventually be used as a source of truth, and because they already get used as a source of action. People worry about racial makeup in certain historical contexts [1]; people worry when Grok starts spouting Nazi stuff (hopefully I don't need a citation for that one) because they take it as a sign of bias in a system with real-world impact: that if ChatGPT happens to doubt the Holocaust tomorrow, then when little Jimmy asks it for help with an essay, he will find a whole lot of white supremacist propaganda. I don't think any of this is fictional.

I find the same issue with the privacy section. Yes, concerns about privacy are primarily about sharing data, precisely because controlling how that data is shared is a first, necessary step towards being able to control what is done with it. In a world in which my data is taken and shared freely, I don't have any control over what is done with it, because I have no control over who has it in the first place.

[1] https://www.theguardian.com/technology/2024/mar/08/we-defini...


Thanks for the perspective. I think it's a matter of degree (I guess I was a bit "one or the other" when I wrote it).

These things are also concerns and definitely shouldn't be dismissed entirely (especially things like AI telling you when it's unsure, or the worst cases of propaganda), but I'm worried about the other stuff I mention being defined away entirely, the same way I think privacy has been. There's tons more to say on the difference between "how you use" vs. "how you share", but good perspective, and it's interesting that you see the emphasis differently in your experience.

