freetime2's comments

Sorry for piling on here. To be clear, I don't think you've done anything terrible that requires an apology, and I think it's admirable that you are here after the hoax was debunked and willing to admit you were wrong and discuss it openly. It's just interesting (and somewhat rare on HN) to be able to go back and pick apart your comment less than 24 hours later with perfect hindsight.

Your comment was:

> What's the alternative here? A rapper went to the effort to publish an MV, then figured out how to display a fake disabled message in the vehicle, then faked a C&D, knowing that these actions would give Tesla a very legitimate claim against them?

> Ockham's razor is not favorable to the alternative.

I think the issue is that you greatly underestimated how far people are willing to go for likes. There are billions of people online, and while most would not bother to do what you said, some of them are indeed willing to go to incredible lengths for views. The YouTuber who intentionally crashed his plane, for example [1]. This stunt with the Cybertruck feels relatively low-effort by comparison.

Or as my favorite response to your comment summed it up:

> "You really think someone would do that? Just go on the internet and tell lies?"

I don't typically like sarcasm in a thoughtful discussion, but in this case it felt warranted.

You also failed to apply Occam's razor to the other side, and consider the legal and reputational risks that Tesla would face by remotely disabling someone's car while they were driving on the expressway. Yes, Musk has done brash things before, which certainly increases the believability of this hoax. But this would be new ground even for Musk. And you have to weigh Musk's capacity for doing brash things against the entire internet's capacity for generating fake news and hoaxes.

You probably should have known better than to try to apply Occam's razor to determine the likelihood that an Instagram post is a hoax. There are just too many irrational people out there (and rational people acting in bad faith) for Occam's razor to be applicable. And the fact that you were able to overlook the overwhelming number of counterexamples to your application of Occam's razor suggests to me that there may have been some confirmation bias at play.

[1] https://www.bbc.com/news/world-us-canada-67622247


It really is concerning. I have a friend who went off the rails in the past couple years and is constantly sharing Twitter rage bait. When things are proven to be fake news, it doesn't even faze him. It's like reality doesn't even matter, and maximizing outrage is the end goal.


Presumably you are talking about this case, where Meta is accused of having torrented a bunch of copyrighted works. [1]

Of relevance here: 1) Meta denies having seeded the content, and there appears to be no hard evidence that they distributed it to other users; 2) the case is ongoing, so no decision has been reached yet about whether they broke any laws; and 3) the fact that Meta is being sued at all shows that even corporations worth trillions of dollars are not immune to the consequences of breaking the law.

[1] https://www.tomshardware.com/tech-industry/artificial-intell...


Of course they’re not immune to consequences. It’s just that the consequences are so relatively small that they don’t really care. Reminds me of the quote about how the law treats everyone equally: both rich and poor are forbidden to sleep under bridges, beg, and steal bread.

> I don’t understand why corporations can violate copyright laws at hyper scale

Can they, though? Isn't that why Perplexity is being sued?


So it sounds like they definitely scraped the content and used it for training, which is legal:

> Japan’s copyright law allows AI developers to train models on copyrighted material without permission. This leeway is a direct result of a 2018 amendment to Japan’s Copyright Act, meant to encourage AI development in the country’s tech sector. The law does not, however, allow for wholesale reproduction of those works, or for AI developers to distribute copies in a way that will “unreasonably prejudice the interests of the copyright owner.”

The article is almost completely lacking in details, though, about how the information was reproduced/distributed to the public. It could be a very cut-and-dry case where the model would serve up the entire article verbatim. Or it could be a much more nuanced case where the model will summarize portions of an article in its own words. I would need to read up on Japanese copyright law, as well as see specific examples of infringement, to be able to draw any sort of conclusion.

It seems like a lot of people are very quick to jump to conclusions in the absence of any details, though, which I find frustrating.


> So it sounds like they definitely scraped the content and used it for training, which is legal

It certainly seems legal to train. But the case is about scraping without permission. Does downloading an article from a website, probably violating some small print user agreement in the process, count as distribution or reproduction? I guess the court will decide.


According to the article, they are complaining that the downloaded content had "been used by Perplexity to reproduce the newspaper’s copyrighted articles in responses to user queries." Derived works.

Reproducing articles is not "deriving" anything. It's reproducing.

“Reproduce” in this context reads like “copy/republish”, which would not be a derivative work.

Yes, if it's an exact copy, but I don't know if their system is actually presenting entire articles, or just fragments (copyrightable, perhaps) and perhaps mixing them with other text.

Generally, court practice so far has been that if you don't register or log in, you never accept the user agreement. If the website is still willing to serve content to non-registered users, you're free to archive it. How you can use it afterwards is a separate question.

LLMs are able to reproduce the entire IP. Sometimes it requires more than one prompt. I've seen examples in the wild where a single prompt was sufficient:

https://jskfellows.stanford.edu/theft-is-not-fair-use-474e11...

Therefore, their output is a derivative work and violates copyright. The 2018 amendment is driven by big capital and should be reverted. Machines can plagiarize at huge scale and should have no human rights.


I'm aware of the fact that LLMs can reproduce IP used in training data, and consider the example NYT article in your link to be "a very cut-and-dry case" of copyright infringement. And commercial AI companies especially should be held liable for damages if they can't or won't implement effective guardrails to prevent this from happening.

I'm somewhat optimistic this problem can be solved, though, with filters and usage policies. YouTube, another platform with basically unlimited potential for copyright infringement, has managed to implement a system that is good enough at preventing infringement to keep lawsuits at bay.

It's also not clear if that's what Yomiuri Shimbun is alleging here. In their 2023 "Opinion on the Use of News Content by Generative AI" [1] they give this example:

> Newspaper companies have long provided databases containing past newspaper pages and articles for a fee, and in recent years, they have also sold article data for AI development. If AI imports large quantities of articles, photos, images, and other data from news organizations’ digital news sites without permission, commercial AI services for third parties developing it could conflict with the existing database sales market and “unreasonably prejudice the interests of the copyright owner” (Article 30-4 of the Act). Also, even if all or part of a particular article communicates nothing further than facts and hardly constitutes a copyright, many contents deserve legal protection because of the effort and cost invested by the newspaper companies. Even if an AI collects and uses only the factual part, it does not mean it will always be legal.

So basically they are arguing that the 2018 amendment, which allows the use of copyrighted works to train AI models without permission from the copyright holder, is not applicable because the use would "unreasonably prejudice the interests of the copyright owner in light of the nature or purpose of the work or the circumstances of its exploitation". [2]

... which I think is a much more nuanced argument. I don't think we can just lump all of these cases together and say "it's infringement" or "it's fair use" without actually considering the details in each case. Or the specific laws in each country.

[1] https://www.pressnet.or.jp/statement/20230517_en.pdf

[2] https://www.cric.or.jp/english/clj/cl2.html


It's so crazy and dangerous I still have trouble believing it's true. Looks like we absolutely need laws regarding when and where a car company can remotely disable a vehicle.

You shouldn't need any laws, because it can be argued that by paying for the vehicle the owner purchased actual functionality, not just bare parts and chips. Any post-sale agreement is therefore inherently suspect from the beginning. If this is actually real I can't wait to see how it develops legally.

I have a similar understanding that the TOS doesn’t matter in this case and should not be enforceable, therefore making the company liable for civil penalties. But I really think such a reckless action should be tried as a criminal case and individual people within the company need to be held accountable. Not sure if there are any laws or precedent for that though.

Would that logic also apply to subscriptions like Full Self Driving and supercharging? I think that even if Tesla can't disable the car over a terms of use issue, they could cancel full self driving and maybe ban you from supercharging.

If the self driving is fully paid for then no, they should not be able to disable it.

For denying the monthly subscription and supercharging, a case could be made that these were the features that influenced the decision of the purchase, and denying them has now made the purchase less valuable. A civil suit could be filed, but not sure how the courts would rule.


How about a law that a company can't disable your product remotely, full stop? Or at least make it a negative right, so that no company can disable your car/dishwasher/thermostat/video game unless there's a very good reason to (say, theft), with the burden of proof on the company rather than the consumer?

Why? Tesla has long been run by an extremely petty person.

Fake news is so ubiquitous these days that I think you need to apply a higher standard than "it feels likely to be true". Especially for things that trigger outrage.

I think the question you should be asking is "has it been verified by a trusted third party?" And if not, you should treat it as something with a significant probability of being untrue.


I think that's likely the reason this story is getting traction.

It’s all fun and games until your car is stolen.

I went and looked up my former CS professor who I knew had worked on Multics. And unfortunately learned that he had passed away a few years ago.


The Economist regularly goes on sale at discountmags.com. For years it was $1/week, but recently the price increased to $75/yr. It's not on sale at the moment, but for anyone interested, I recommend setting up a deal alert at Slickdeals [1] to get notified the next time it goes on sale - probably around Black Friday / Christmas.

I have been subscribing to the Economist through DiscountMags for over a decade now, and consider DiscountMags to be a totally legitimate business. DiscountMags does automatically enroll you for their "DiscountLock" auto-renewal when you place an order, but you can turn it off at any time through their website without talking to anyone (and I would recommend turning off DiscountLock as it no longer locks in the original price like it used to... so better to just re-up during a sale period).

[1] https://slickdeals.net/f/18290980-the-economist-magazine-1-y...


> We also excluded those listed as “athletes” in the database and those with TEEs greater than 25 MJ/d (all of these excluded individuals were from High HDI populations)

This study specifically excluded athletes, so its conclusions would not be applicable to someone running 26 km per day.

