
Oh I see, so training on copyrighted content is fine unless it's your AI model...

Don't worry, OpenAI definitely didn't improperly obtain any of their training data nudge nudge wink wink.

It's not copyright, but a TOS violation, that they're going for.

Let's just hope that TOS violation is considered a contract breach, if it even is a valid contract, and not anything crime related.

I don't want the state to expend any taxpayer dollars to fund civil offense prosecutions. The music industry has already managed to push their enforcement onto taxpayers via laws such as DMCA etc. Can't have this be done again, and with wider reach.


Legalities don't make them any less hypocritical.

Their strategy right now is to socialize the negative externalities (copyright does not protect anything from them) and capitalize the gains (ToS says you can't use their outputs).

PS: I do understand your point. Using taxpayer money would be even worse.


"We have no moat. But we are well-capitalized, so in classic early-mover monopolist fashion we'll try to pull the ladder up, then use funds to bri--er, lobby--the government into smacking down climbers."

If I understand their strategy, it's to limit the use of LLMs to American-based companies, by saying that it is dangerous (when used remotely) and illegal (when using remote models). So they strive to make it so that all American companies use American models, whereas government agencies use their dedicated gov infra.

Government agencies would use dedicated infra anyways. Don’t many European governments insist on EU segregated cloud providers or open source software to avoid having critical infrastructure be under US control?

And the US is an official ally, whereas China is not.


It seems to make sense, but when you look at the dynamics of it, if you legally lock organizations in this way, the Americans will be the only ones using expensive models while the rest of the world uses cheap Chinese models. This wouldn't be great for the USA in general.

I'm sure Microsoft's scrapers read the ToS of every website they scraped to make sure it wasn't a ToS violation.

Lots of websites have a TOS. Do you think OpenAI's scraper reads all of them and follows all of the terms?

> Lots of websites have a TOS.

OpenAI did not agree to the TOS of the other websites they scraped, so they're not bound by it. It's unclear if a TOS is automatically agreed to by virtue of merely accessing data that is already supplied in a webpage, without first actively agreeing by an action like clicking.

See https://en.wikipedia.org/wiki/Shrinkwrap_(contract_law)#Unit...

Therefore, OpenAI is only ever bound by copyright law, and not the TOS of the website. And according to some interpretations of copyright law, these AI models do not constitute distribution of the original training material.


DeepSeek may not have agreed to OpenAI's TOS either. They probably used a bunch of third party intermediates to gather o1 outputs from OpenAI.

Similarly, when you "buy" many ebooks, you're agreeing to a TOS, but then, in violation of the TOS, they were uploaded to Libgen, which OpenAI downloaded.


ChatGPT seems pretty knowledgeable about Spotify's TOS, for example... So no way for the bot to plead ignorance. :)

"Hey, ChatGPT, what are the terms of service for Spotify?"

"I can't provide the full, up-to-date terms of service for Spotify, but I can give you an overview of some key points typically included. For the most accurate and comprehensive information, I recommend visiting Spotify’s official website and reading their Terms of Service directly. That said, here are some common points that are usually part of streaming platforms' terms:

1. *Account Creation and Use*: Users need to create an account to access Spotify’s services, and must provide accurate, up-to-date information. They’re responsible for maintaining the security of their account.

2. *Subscription Plans*: Spotify offers different subscription tiers (Free, Premium, Family, etc.). Terms often specify the limitations of each tier, such as access to ads, offline listening, and music quality.

3. *Content Usage*: Spotify provides users with access to music, podcasts, and other content. However, the usage is typically for personal, non-commercial purposes only. Users are not allowed to download, share, or redistribute content unless Spotify allows it (like through the Share button or via a playlist).

4. *Licensing and Intellectual Property*: Spotify owns or has licenses for the content available on the platform. Users are granted a limited, non-exclusive license to access and use the service, but they don’t own the content they listen to.

5. *Privacy*: Spotify collects data, including usage patterns and preferences, which they may use for things like recommendations, advertisements, and analytics. Their privacy policy will detail what information is collected and how it’s used.

6. *Cancellation and Termination*: Either party (Spotify or the user) can terminate the service agreement. Users can cancel their subscription anytime, but Spotify may also suspend or terminate accounts for violations of their terms.

7. *Disclaimers and Limitations of Liability*: Spotify usually limits its liability in case of issues like service interruptions, data loss, or content inaccuracies.

8. *Modifications*: Spotify may update or change its Terms of Service at any time, and users are typically notified of these changes. Continued use of the service after changes indicates acceptance of the updated terms.

9. *Governing Law*: There’s often a clause specifying which country’s laws govern disputes related to the terms, and how disputes will be handled (for example, through arbitration).

If you're interested in the full details, you can check Spotify’s official site or app, where their Terms of Service are readily available for review."


Most ToS are fairly similar, so I can't tell if this is from it actually having read the Spotify ToS, or from guessing ("hallucinating") the ToS based on what Spotify does.

Everything is copyrighted in some countries. The Swiss Urheberrecht arises automatically upon creation of the work. You don't have to add © or register it. You cannot transfer the Urheberrecht itself, only usage rights to your work.

Before anything else happens, the world will fall off its axis from the irony of this.

These are the sort of litigious death throes a threatened orthodoxy resorts to when it smells its own blood in the water.

A trillion-dollar write-down will do that to you.


AI models and their outputs aren't copyrightable because they're not made with human creativity :)

But their data is.

And I'd just add the quote from the non-paywalled fragment is "Microsoft’s security researchers in the fall observed individuals they believe may be linked to DeepSeek exfiltrating a large amount of data using the OpenAI application programming interface, or API"

I.e., indeed, "improperly" here is exactly the same "improperly" that a site owner applies to any bulk downloader.

So... Scrape not lest ye be scraped, one law for me... etc


You can't be "exfiltrating data" from an api that you pay for access to. That's just called "using what you paid for". Exfiltration is when you extract internal data that's private to the company.

Indeed,

I think MS was trying to use the scariest term they could come up with. To belabor the point a bit, "we train, you exfiltrate" etc.


I'm not surprised. This is the expected behavior of a monopolist.

These guys need to pick a damn lane, is it ok to steal data to train your chatbot or not? You can't have it both ways.

AI model output is not copyrightable, though presumably that will change soon.

Yeah… Gonna file this under Who cares or Good for them….

This is probably compositor specific, so I wonder if this is a mutter issue or if it can be replicated in kwin.

Mutter has one of the laggiest Wayland implementations imho.

Is this the absolute death of the high school essay? Even if you didn't want to cheat by avoiding ChatGPT, AI is now right there, in your word processor, and you have no way of turning it off.

We will solve the cheating problem with more AI, all essays will need to be written in 3 hour time windows in web portals with key-logging + copilot off and children on webcam the entire time. An AI will assess all the data and tell you if the child cheated or not.

Of course no one will care if you're good at writing essays in the future, and having that skill just means you're working a low paying training data creation job, but we will carry on pretending otherwise for a few years.


No one has ever cared if you're good at writing essays; that was true 50 years ago too.

The point of writing an essay was to (imo) get good at writing (actually assembling words cogently), thinking about a cohesive viewpoint/argument, and understanding the source material (book, novel, historical event, political concept, whatever).

I'm


The human ran out of tokens writing this response

Please scan your can of participating Mountain Dew or Starbucks to unlock the next 150 key presses on your keyboard.

(Yes Thank You) (Remind Me Later)


I tend to think of any substantial writing we do at work as an essay. Proposal, summary, RFC, employee evaluation, whatever. You can tell who writes well, and who is copy/pasting plausibly relevant text into an unedited draft that they then pass off as the final result. Not AI, just sloppy writing. I don't have numbers to back it, but I think that the good writing gets more done in less time. So, people care if you're good at writing essays.

Then there's the whole "clear writing is clear thinking" angle, but I suspect that people who write poorly do so out of laziness rather than any deficiency.


I want to work with people who can communicate thoughts effectively and will write some damn documentation.

People who whine about writing being difficult or unimportant are going to be deficient in both.


That sounds dystopian…

Essay writing is not a task intended to make you specifically better at writing more essays. It’s supposed to train your ability to explain your point of view clearly and with sound reasoning.


I sort of had that in my HS English classes more than 30 years ago. Every paper we ever wrote for my final two years was in class for one hour. Occasionally we would get a second class period to work on our papers but the teacher held them until the next class period.

I got really good at getting my thoughts down quickly and efficiently. My freshman English course in college nearly broke me, however, since we were required to revise our papers at least twice after getting feedback, even if the feedback was overwhelmingly positive. It was a skill I had never developed.


Blue books are back baby!

Honestly the most impactful writings I ever had to write in my education were required to fit within a standard bluebook.

https://uvamagazine.org/articles/blue_books


Ours were a certain beige colour.

Just for their essays to be fed into an LLM at the end of the day, owned by a megacorp.

> with key-logging + copilot off

That assumes Microsoft allows you to do so.


I think it's the death of essays and also of reading. Why read a book when AI can read it for you? Teachers I know have already seen this happening.

> Why read a book when AI can read it for you?

When I was a kid I used short summaries and others' essays for composing my "own" essays on the books I did not want to read (for any reasons). I'm sure generations before me did the same thing, maybe just had it less accessible.

If you are interested you're gonna read that book, most likely no matter how many alternatives you may have. If you're not interested it's not like you're gonna do it anyway (if you're required to do something with it, you'll naturally read a short summary - that was a thing way before the "AI" hype).

Text transforming language models only make accessing short summaries easier to access (with a caveat of being potentially less reliable), but they don't change anything else.

If limited scale was only thing that was holding the whole system working - well, that wasn't reliable, fair or meaningful system in a first place.


> maybe just had it less accessible

Yes. That's the whole point. The old way of avoiding the work was:

- find someone else's essay, maybe buy a Cliff's Notes or search the internet

- read the summary

- write your paper

It would still take you hours.

Now, you can avoid the work by just typing a two-sentence prompt into ChatGPT. It's free and fast, and it does the actual writing exercise (or your homework questions) too.

You don't need to take my word for it that things have changed. There is a huge amount of empirical evidence that kids are doing less of their own reading and homework because of AI.

> If you are interested you're gonna read that book, most likely no matter how many alternatives you may have. If you're not interested it's not like you're gonna do it anyway

This is absolutely untrue and discounts the entire concept of education. There are lots of things that people end up being interested in, but someone has to force them to try it first.

You're basically suggesting that you can leave a kid in a library and they'll end up reading every book that appeals to them, and we know that isn't true.

> Text transforming language models only make accessing short summaries easier to access (with a caveat of being potentially less reliable), but they don't change anything else.

You're underselling how much easier the access is.

> If limited scale was only thing that was holding the whole system working - well, that wasn't reliable, fair or meaningful system in a first place.

Just because some new efficiency allows cheaters to break a system doesn't mean it was a bad system. This is just a nonsensical concept.

A perfect example is online gaming. Now there are incredibly sophisticated aimbots and other ways to cheat that are almost impossible to scrub out of the system.

Does that mean online gaming was never fun, valuable, or entertaining when it was just humans playing against each other? Of course not.


> Just because some new efficiency allows cheaters to break a system doesn't mean it was a bad system.

I must apologize, because my parent comment wasn't well thought out.

Rather than saying "wasn't reliable, fair or meaningful system in a first place" (which wasn't logical; sadly, I got carried away by emotion), I should've said that it has a limit to its usefulness. It probably was a meaningful system back in the day, just not future-proof. So it eroded over time and isn't a reliable or fair system anymore, with questionable meaningfulness when it comes to the new reality.

> Does that mean online gaming was never fun, valuable, or entertaining when it was just humans playing against each other? Of course not.

I'm sorry, but I'm going to disagree on this, hard. It happens that I'm a person who holds a very unpopular opinion on this matter.

Online games that competed on things like manual dexterity were and still are fun, valuable and entertaining. But it's also true that they're based on a fundamentally flawed premise (the existence of a "perfect shot", if we're talking about aimbots) that simply won't withstand progress.

As someone who believes in transhumanism, I perceive things like aimbots or wallhacks as aids, similar to glasses, exo suits or thermal goggles. And I believe that tools are always "good" (if that's an applicable term), as humanity itself is founded on tool invention and use. It's only the consequences of their application (which are heavily context-dependent) that have different moral values depending on the outcome.

The world is inherently unfair because everyone is born and raised in different conditions, gaining different bodily and mental capabilities. I despise the idea of leveling everyone down below some arbitrary "acceptable" ceiling of what they can do with their "bare hands, eyes and minds" - in my book, this goes against the idea of progress. I rather wish to see the very opposite. Put bluntly, I want every single gamer to have the best aimbot there is, and the game mechanics changed to keep things competitive, challenging, engaging and fun. Which means some games (or possibly even genres) are going to die, but - hey - we aren't playing 3x3 tic-tac-toe anymore either, even though we used to do so when we were little kids with limited brainpower.

Last, but not least, I strongly suspect that the aspect we all hate is not some "cheating" (I believe it's a misdirection) but rather griefing - such as playing against the opponents of drastically different capabilities, aka punching the babies.

Heck, I want to live in a world, where someone making a good bot is celebrated, not stigmatized. Playing against bots can be made fun, too. I loathe the current trends of the industry towards tightening things down, with all the rootkits, and people hating others for something like having a programmable mouse.

I recognize it could be a very naive worldview, but that's what I currently believe in.

And, uh, sorry for a probable off-topic. It just happened that your comment tickled one of my pet peeves :-)


Yeah, I think the existence of Reader's Digest makes your point for you. I remember the first time my dad explained that it was misnamed because it wasn't really for the readers. It was for the people who didn't want to read.

>Why read a book when AI can read it for you?

because learning to read and to write is learning to think, and if you're the only person with some autonomy while everyone else regurgitates the same AI slop that's going to give you a lot of opportunity.

Ever since the internet has been around it's been easy to outsource your work, it won't do anything for you in the long run.


I mean, summaries of books have existed for, pretty much, as long as books. People still read books.

I'm a bit of an AI doomer, but I don't think AI will have such a bad impact on school tests. They'll just have to move back to oral exams.

I remember reading that Socrates was against writing. His argument was that books don't hold real knowledge because you can't interrogate them. Real knowledge exists only in a man's mind, because he can make use of it and answer questions about it. I don't quite agree, but I do see some merit to the idea. For example, NASA couldn't fly to the moon again by building a Saturn V, because even though they still have all the documentation, most of the people who knew how to make use of that documentation are dead or retired.


Kids have to write essays with pencil and paper now. Of course they have to write in print because they stopped teaching cursive a decade ago.

No I'm not joking.


Hopefully it's just a return to the in-class bluebook essay. That is like a force multiplier in learning, imo, due to how much you need to prep to feel confident going into them.

The death of the high school essay will be when people realize best learning is done on the job and we get rid of highschools altogether.

We had child labor and it was horrible, if you only learn to work, and only learn through work then it leaves you vulnerable to exploitation by others. And an employer is highly motivated by capital to exploit you as much as possible.

Take 2 minutes to realize how having highschools solves none of the problems you describe.

They won't get that far because the kids with more education will get the job.

SteamOS is Arch, Bazzite is Fedora if you want a more Fedora experience.

I agree mostly because I find myself playing a lot of smaller games these days, and it's much easier for devs to release and patch their games on Steam than it is a Nintendo platform. They also have a much friendlier refund policy.

For the masses though, a Nintendo system just works. I can hand a Switch to my daughter and know she can play Nintendo games with little bullshit, it's easy to play couch co-op, the parental controls are very solid, etc.

In terms of hardware it's ARM and Nvidia, which is a solid foundation, and Nintendo titles look great without being technically demanding. I fully expect to see a 60 FPS Zelda game that uses DLSS upscaling to look great on my 4K TV. The Steam Deck is somewhat limited by FSR2.


> SteamOS is Arch, Bazzite is Fedora if you want a more Fedora experience.

Oops, edited, thank you!

> I agree mostly because I find myself playing a lot of smaller games these days

Same here, I play mostly indie <$20 games and have a blast doing it. These games would (almost) never launch on the Switch (or any console). Either that or I'm playing games that would never work well on the Switch (like Factorio, yes I know there is a port and I've also tried on my steam deck and it sucks, you need a mouse/keyboard IMHO).

> For the masses though, a Nintendo system just works. I can hand a Switch to my daughter and know she can play Nintendo games with little bullshit, it's easy to play couch co-op, the parental controls are very solid, etc.

Agreed, this is huge, I wouldn't recommend a steam deck to the average person, just tech people mostly.


> They also have a much friendlier refund policy.

I can see why steam has an easier refund policy. It’s easy to buy a game that doesn’t work well (or at all) on your hardware.

But the switch shouldn’t have this issue, and that’s basically only reason I would ever return a game.


Steam has a refund policy because consumer law requires them to have a refund policy.


I’m unaware of any law that would require steam to have a more customer friendly refund policy than Nintendo does, which was the point I was addressing.

The current Steam refund policy is a result of consumer law in Australia.

https://arstechnica.com/gaming/2016/03/valve-steam-refunds-v...


> It’s crazy to think that some software engineers might actually intentionally degrade user experience on non-Google browsers or for people using adblockers.

Why would I, as a developer whose income stream is based on advertising, intentionally cater to users who are costing me money? There is a web based on hobbyist platforms like PeerTube and Mastodon, and you can clearly see why they haven't captured the masses.


There is no reason I, a user, will intentionally use your product when you fill it with ads that are, at best annoying, and at worst malware vectors.

You have your right to develop things your way, I have a right to say no thank you. Google, though, is so big it is basically saying "you don't have a choice." That's the problem and one that Google spends billions to enforce. They use the weight of the uninformed to apply pressure to the rest of us.

It was no better when Microsoft did it with IE, nor is it any way proper, now.


Because it's not catering, it's actively making it worse for the rest of us? Because not everything is about money? Because of ethics?

Why would I, as a doctor whose income stream is based on people getting sick, intentionally support policies that make people healthier?


This is basically true. The ad supported web sucks, but the solution is to not use it.


The solution is how we solve it. There's a technical end to every demand, and at the end of the day regulation can only do so much for both sides. Reality sorts out everything else.

The solution you mentioned is valid too.

But you cannot ignore the fact that internet is for everyone and not for google. Google minus all the shit it does to the internet can definitely exist in some form. Claiming it’s either this or nothing is just defeatist.

If Google and YouTube disappeared tomorrow, I'd be the first among those guys who buy HDDs and torrent videos onto them. For no money, like I did with all the torrents in my life. There would be fewer professional videos obviously, but almost everyone agrees that's a good thing (cf. all the quit-social-media, anxiety, kids' social issues, etc. talk).


Sorry, I was ambiguous. I just meant not to use the ad-supported part of the web. Yes, the rest of it is fine.

Even a large chunk of the ad supported internet is happy to continue sending you bits if you don’t render their ads. This is fine, the convention has always been I’ll send whatever (non-malicious) bits I want, you send whatever you want, and we’ll render it however we want. YouTube specifically doesn’t send bits to people who don’t render their ads, on purpose, which is also fine, they just don’t want those of us who don’t render ads around.

Entitled ad guys don’t get to change the social convention to add some obligation to render their ads. If they don’t want to serve bits to users that block their ads, that’s fine, but if they send bits I’ll render them however I want on my system.


Exactly. It’s amazing that ads-ers send bits regardless and expect them to be consumed as is, as if it was some fundamental law of nature to do so. Simply don’t send, wink. That wink makes them feel uneasy because it breaks that wonderful narrative of theirs. “If you don’t want to watch ads, just don’t visit”. Yeah, just paywall us then, come on, we’re all yours, signed in, vendor locked. I’m so ready to leave and delete the bookmark, what are you waiting for?


People are playing this cat-and-mouse game with YouTube specifically, where they’ll circumvent the desire to block them for not rendering ads. I think this is a bad thing for users to do. But I mostly think it should make the ad providers uncomfortable. Because they know that most people won’t play cat and mouse for their content. In any case other than YouTube, people would just move on.

They know it. We know they know it, because if they really didn’t want to send bits to ad-blockers, they’d copy the first step of the back-and-forth that Google did with YouTube, and those users would no longer be a problem.


> Why would I, as a developer whose income stream is based on advertising, intentionally cater to users who are costing me money?

Thank goodness you're not a doctor.


You can tell who has never been in control of a budget and had to fire people because more than half their audience is using adblockers.


Why would you even start a business so risky and bad-mannered?


I get your points, but have you tried with less invasive advertising? Like, you know, static pictures downloaded from your domain with a HREF on them?


Because you cannot even control the ads you serve to users? None of you devs gets punished for tracking users' personal information or pushing scams, phishing and malware to users, and now users aren't even allowed to protect themselves? Users don't drop trackers and malware onto your servers, so why do you drop trackers and malware onto users' machines?

Because you are working for a corporation that joined the World Wide Web Consortium, which literally says this in the Ethical Web Principles?

> People must be able to change web pages according to their needs. For example, people should be able to install style sheets, assistive browser extensions, and blockers of unwanted content or scripts. We will build features and write specifications that respect people's agency, and will create user agents to represent those preferences on the web user's behalf.

https://www.w3.org/TR/ethical-web-principles/#render

If you cannot maintain your service, paywall your features instead of forcing malware and trackers on users. No one forced you to serve 1080p, 1440p or 4K videos to everyone for free. You were the one who literally "advertised" yourself as a "free" service at the beginning, in order to hoard as many users as you could. And now that you cannot control your own costs, you push malware and trackers on users? The mentality of hoarding users with "baits" like "free" is the real poison of the internet, for both you and your users, NOT the users who are doing exactly what the World Wide Web Consortium tells them.

Where are all the MBAs in your corporations? The ones bragging about themselves on LinkedIn, when the only resolution you can think of is pushing malware and trackers on users? All the finance classes in your colleges should be simplified to advertisement classes, I guess? That would save a lot of resources for everyone.


Let me remind you that Ad-Block Plus collapsed when its users revolted over its plan to whitelist simple vetted advertisements in a truce with advertisers.

ABP foolishly believed its users were trying to make a statement about invasive ads. Really their users just didn't want to see any ads at all, ever, regardless of the circumstances.


ABP didn't even address any tracking with their program. It was just pure cosmetics.

> Really their users just didn't want to see any ads at all

Because the internet was filled with malicious ads long before content blockers caught on? Remember the Windows XP/7 era, when malware spread through ads made the news like clockwork, every day? Internet ads have been doomed since those times. It's ingrained in everyone's mind that browsing with ads is walking through a minefield. If a business is entirely dependent on those ads, that business should not exist. Doing business is hard, right? I mean, like most other ethical jobs in the world.

Users are just doing what World Wide Web Consortium says.


And most people don't have an issue with static ads like those in newspapers and magazines. People would be fine if a business wants to advertise something, but the tech ads industry doesn't just want to promote things to you; it wants to capture your attention as well, regardless of whether you want the thing or not. They think they can only do so by knowing you as well as your mother does.

If a business has something to sell, just let people know. No need to help create a surveillance state.


You forgot to mention it involved paid whitelisting, and the requirements for compliance were so weak that even major malvertising vectors, like Google, were considered acceptable.


Quite frankly, at this point, I just want any business that runs on ads to crash and burn. The whole business model is insanely toxic and sociopathic and shouldn't be tolerated at all.


Those corporations with MBAs will find another way-- you're only hurting the independents and destroying the open web with your hipster nonsense


> Those corporations with MBAs will find another way

Not YouTube.

That "nonsense" is from the World Wide Web Consortium; users are just doing what it says. The "hipsters" are the ones not respecting those Ethical Web Principles. Users are not injecting trackers and malware into those independents' servers. Why do those independents inject trackers and malware into users' machines?

The ones who destroyed the open web are the businesses, independents and corporations alike, with their mentality of luring ever more users to their "free" services without any plan to control the cost, ETHICALLY and MORALLY. Scale, scale, scale, more users, a more beautiful number; until their pockets are burnt, and now their resolution is to push those trackers and malware to compensate for the cost.

Ads, malvertisements and trackers are not the open web.


It's such an incredible science fantasy universe, probably my favorite piece of fiction from the past decade.

The political history, nature of reality and shape of the world, even things about how computers ("radiocomputers") work are all fascinating to me. It's a shame what happened to the studio. There deserve to be more stories told in that universe.


> It's a shame what happened to the studio.

And there's an extremely funny winking self-reference in the game:

https://fayde.co.uk/dialojue/4600609

For context: ZA/UM was, for a time, called Fortress Occident.


ZA/UM is also the name of a physical artefact in the Disco Elysium universe. It gets used in the novel Kurvitz wrote: Sacred and Terrible Air.


> ZA/UM was, for a time, called Fortress Occident

Hah! I've spent some time reading trivia about the game recently, but I hadn't come across this. That makes a lot of sense.


I bought a Samsung Odyssey G9 on a whim when visiting a Microcenter. It's a 49" 5120x1440 32:9 display, essentially two 1440p screens combined. You can use it as a single screen or use picture-in-picture to split the screen (50:50 and 70:30). For gaming stuff it's also 240hz with Gsync/Freesync and supports HDR.

I have it on a VESA mount. I love it, I typically use two screens anyway and now I get a seamless screen without bezels and half the cables I'd otherwise need.


Which are that the government can't censor speech. Forcing a private entity to support any form of speech is actually against the first amendment, it is compelled speech.


Who's forcing what? They are just inspired by it, at least that's what the comment you are replying to is saying. If a private corporation wants to support or enable free speech, they are allowed to, just like you said. This is literally from Facebook itself so I'm not sure who is compelling who.


Nobody is forcing anyone to do anything.

Meta, as a large and powerful entity with the ability to censor as much or more than many governments, is opting to allow free speech on its platforms. So is X. That spirit is inspired by the Bill of Rights, not Elon Musk.


But the client is designed to not trust the server, that's why encryption is end-to-end. So does it matter?


In some sense, no - the protocol protects the contents of your messages. In another sense, yes - a compromised server is much easier to collect metadata from.


Metadata, yes. Of course, the protocols, and thus all the inconveniences of the Signal app people constantly complain about, are designed to minimize that metadata. But: yes. Contents of messages, though? No.


If Signal, the service, was designed to minimize metadata collection, then why is it so insistent on verifying each user's connection to an E.164 telephone number at registration? Even now, when we have usernames, they require us to prove a phone number which they pinky-swear they won't tell anyone. Necessary privacy tradeoff for spam prevention, they say. This isn't metadata minimization, and telephone number is a uniquely compromising piece of metadata for all but the most paranoid of users who use unique burner numbers for everything.
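(For reference, E.164 is the international phone-number format: a "+", a country code, and a subscriber number, 15 digits total at most. A minimal format-level sketch, for illustration only - this is not Signal's actual validation, which presumably also applies country-specific numbering rules via a full phone-number library:)

```python
import re

# E.164 shape: "+" followed by 2-15 digits, the first of which is non-zero.
# This only checks the format, not whether the number is actually assignable.
E164_RE = re.compile(r"^\+[1-9]\d{1,14}$")

def is_e164(number: str) -> bool:
    return bool(E164_RE.fullmatch(number))

print(is_e164("+14155552671"))  # True: "+" plus 11 digits, leading digit non-zero
print(is_e164("0800123456"))    # False: no "+", leading zero
```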


This is the most-frequently-asked question about Signal, it has a clear answer, the answer is privacy-preserving, and you can read it over and over and over again by typing "Signal" into the search bar at the bottom of this page.


The answer is not privacy-preserving for any sense of the word "privacy" that includes non-disclosure of a user's phone number as a legitimate privacy interest. Your threat model is valid for you, but it is not universal.


The question you posed, how Signal's identifier system minimizes metadata, has a clear answer. I'm not interested in comparative threat modeling, but rather addressing the specific objection you raised. I believe we've now disposed of it.


I don't believe there has been any such disposition in this thread. There have been vague assertions that it's been asked and answered elsewhere. Meanwhile, the Signal source code, and experience with the software, clearly demonstrates that a phone number is required to be proven for registration, and is persisted server-side for anti-spam, account recovery, and (as of a few months ago, optional) contact discovery purposes.


Yes. There are also libraries that do this, like libsignal.

