
Because it doesn't presently have memory or look things up in a table or on the internet.

You will notice that both are easy fixes: computers have perfected retrieval over the past five or so decades.


Just stick Google's pre-search tools in front of the current version and it would solve a large chunk of those problems. The right tool for the job, essentially. After all, you wouldn't ask your English professor to solve a math problem either.


Imagine if this were Trump. That, for me, puts the reaction in tech circles in perspective.

We would see two impeachments, wall-to-wall news coverage, protests by employees at Facebook, Apple and Google, all asking for the deplatforming of everyone associated, and so much more.

It's ridiculous that we're unable as a society to separate actions from political affiliation.


Imagine if what were Trump? Per the article, the Trump administration sought "help from the tech companies to combat misinformation."


They live in this alternate universe where the COVID pandemic didn't involve Trump at all.


That's dishonest. Look at the interactions highlighted in the article: most of them were from the Biden administration.


It's not at all dishonest; the article itself highlights that Trump's administration was also concerned about misinformation, especially early in the pandemic with regard to things like economic stability and availability of goods.

> The United States government pressured Twitter to elevate certain content and suppress other content about Covid-19 and the pandemic. Internal emails that I viewed at Twitter showed that both the Trump and Biden administrations directly pressed Twitter executives to moderate the platform’s content according to their wishes.

> At the onset of the pandemic, the Trump administration was especially concerned about panic buying, and sought “help from the tech companies to combat misinformation,” according to emails sent by Twitter employees in the wake of meetings with the White House. One area of so-called misinformation: “runs on grocery stores.”

It goes on to talk mostly about the Biden administration, but that's an editorial decision. The author in no way claims that Trump's administration wasn't sending moderation requests to social media companies. This is not a Republican vs Democrat thing; every government under every administration coordinates with social media companies.

If you're going to approach censorship through a purely partisan lens, you're not going to get very far.


I should have been more specific, so that asinine comments like these, from people who didn't read the story before commenting, don't show up:

Imagine if the Trump administration had people who donated predominantly to its cause making decisions at the highest levels of social media companies and directly shaping policy on which posts get taken down, such as Jim Baker asking why "don't be afraid of COVID" wasn't taken down.

Imagine if employees at social media companies took the view that they knew better than abortion and LGBT activists, overruled coverage of their protests, flagged their tweets, and locked their accounts out of gaining any distribution.

Imagine if the Trump administration established a department within the San Francisco field office of the FBI to EXCLUSIVELY monitor the social media feeds of its political opponents and report them to Twitter, under government pressure, for takedown.

I thought all of this was implied and anyone reading this comment would at least acknowledge the severity of these revelations, but here we are.


The hackers got access to their dev environment and source code back in August and used that information to hit them harder this time around: https://blog.lastpass.com/2022/12/notice-of-recent-security-...

We may not be seeing the whole picture yet, but it's very concerning that the dev environment had access to production database backups stored in a manner that was easily decryptable :/ The whole point of a dev environment in a security-focused company is that you reduce the surface area for access and failure.


Re: dev environment containing database backups, here’s the full text on that point from the blog:

> some source code and technical information were stolen from our development environment and used to target another employee, obtaining credentials and keys which were used to access and decrypt some storage volumes within the cloud-based storage service.

It sounds like they didn’t literally have backups in the dev environment (which would be absolutely terrible if they did).

I’m guessing they learned enough about the architecture from the dev environment to social engineer their way into getting someone to give them credentials to production.


Wooo.

> dev environment had access to Production database backups stored in a manner that was easily decrypt-able

Say what you like about old dinosaur companies like banks, but when I worked in the IT department of a bank back in the early 2000s, developers absolutely couldn't touch production systems, data or backups. Ever. They always had to go through ops, even during major incidents.

Of course banks aren't immune to fuckups, but at least on paper they tend to have sound security policies.

Startups are great at moving fast because there are no guardrails. Like lemmings, corps, banks, and government entities externalize their risk to a third party because they recognize the name and logo. "Should we go with SecureCompany(tm)?" "Yeah, everybody uses them."

It's great for startups. Without red tape and heavy policy your lean team can create products that the hamstrung bigcorp could only dream of (just don't look under the UI!). They slowly build their customer base, and when they're ready to IPO, they run through some security and process certification theatre, and the investors and a handful of executives can make billions.

Overall, both sides seem to be happy with the situation. Bigcorp gets to cut tech staff and outsource, and they have a convenient finger to point when the regulator asks questions. The SaaS apologises and maybe loses a customer or two, but based on stock prices after major issues like this in the past, nothing bad really happens.


This does not address the inherent conflict/manipulation possible with the HFT relationship:

1. Imagine a Robinhood user places an order to sell 100 $GME.

2. Robinhood doesn't send it to an exchange first; it holds onto it for a few milliseconds.

3. Meanwhile, Citadel/Virtu or others execute orders below what the bid would have fetched a few milliseconds ago.

4. Now the order is routed to the exchange priced above what the market is currently trading at, and thus almost certainly goes unfilled.

5. The 300 milliseconds allowed per the regulation elapse, and the order comes back to the HFT firm to fill at the new, worse price (a rough simulation of this sequence is sketched below).
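
For the skeptical, here's a toy numerical sketch (Python) of the sequence above. Every price, timing, and behavior in it is invented for illustration; it is not a claim about how any particular broker or wholesaler is documented to operate.

    # Toy simulation of the hold-and-fill sequence above. All prices,
    # timings, and behaviors are made up for illustration only.
    from dataclasses import dataclass

    @dataclass
    class SellOrder:
        symbol: str
        qty: int
        limit: float  # priced off the bid at the moment of submission

    bid_at_submission = 100.00                    # step 1: user clicks sell
    order = SellOrder("GME", 100, bid_at_submission)

    # steps 2-3: order is held for a few ms while trades print lower
    bid_after_hold = 99.90

    # step 4: order reaches the exchange priced above the current market
    filled_on_exchange = order.limit <= bid_after_hold   # False: unfilled

    # step 5: the 300 ms window lapses; the wholesaler fills at the new price
    fill_price = order.limit if filled_on_exchange else bid_after_hold
    slippage = (bid_at_submission - fill_price) * order.qty
    print(f"filled at {fill_price:.2f}; seller worse off by ${slippage:.2f}")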

The bigger issue here is the favoritism in providing wholesale prices to HFT firms. The stock exchanges do this, and so does Robinhood. In an open and competitive market, the playing field should be regulated to be equal for all.

Or, in fact, as motorsports participants know very well, the cost of access to markets should INCREASE with size, not decrease... If you're the current Formula 1 champion team, you pay SIGNIFICANTLY more to enter next year's championship than the last-place team.


The last paragraph of the story literally says that they're using it as an alternative to the iCloud scanning plan.


The last paragraph of the story is a concluding statement by the author on the difficulty of countering CSAM, and it says no such thing.

The announced opt-in feature for iCloud family accounts (Communication Safety for Messages) will scan content that is sent and received by the Messages app, and alert the associated parent or caregiver directly, without informing Apple.


This was the last paragraph before WIRED edited the article to add commentary from RAINN as the last paragraph:

>"Technology that detects CSAM before it is sent from a child’s device can prevent that child from being a victim of sextortion or other sexual abuse, and can help identify children who are currently being exploited,” says Erin Earp, interim vice president of public policy at the anti-sexual violence organization RAINN. “Additionally, because the minor is typically sending newly or recently created images, it is unlikely that such images would be detected by other technology, such as Photo DNA. While the vast majority of online CSAM is created by someone in the victim’s circle of trust, which may not be captured by the type of scanning mentioned, combatting the online sexual abuse and exploitation of children requires technology companies to innovate and create new tools. Scanning for CSAM before the material is sent by a child’s device is one of these such tools and can help limit the scope of the problem.”

Those quotes are a continuation of the statement from "The Company", i.e. Apple.


Ah, I think you're confused by the way the preceding paragraph ends ("Apple told WIRED that it also plans to continue working with child safety experts [...]").

The paragraph you're quoting ("'Technology that detects...scope of the problem.'") is entirely commentary from Erin Earp at RAINN, and is what was added by WIRED with the edit.

And, sorry to nitpick, but "Countering CSAM is a complicated and nuanced endeavor [...]" has always been the last paragraph (both before and after the edit).


Has no one read the article?

>"Technology that detects CSAM before it is sent from a child’s device can prevent that child from being a victim of sextortion or other sexual abuse, and can help identify children who are currently being exploited,” says Erin Earp, interim vice president of public policy at the anti-sexual violence organization RAINN. “Additionally, because the minor is typically sending newly or recently created images, it is unlikely that such images would be detected by other technology, such as Photo DNA. While the vast majority of online CSAM is created by someone in the victim’s circle of trust, which may not be captured by the type of scanning mentioned, combatting the online sexual abuse and exploitation of children requires technology companies to innovate and create new tools. Scanning for CSAM before the material is sent by a child’s device is one of these such tools and can help limit the scope of the problem.”

I see so many top-level comments about how they're stopping on-device scanning of every photo, which would be more intrusive. This is not true. It appears the opt-in is limited to receiving alerts and blocking the sending of harmful images, but the implication here is that Apple is moving iCloud scanning of media onto the device.

If anyone is going to vote on this comment or respond, please at least go to your iPhone settings -> Accessibility -> Hearing -> Sound Recognition. Those are on-device ML models working on an audio buffer that is ALWAYS ON. The idea is to do the same for visual media. Any images you take, or that even hit the image buffer of your device, could potentially be classified by increasingly powerful ML models.
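
For a sense of what that looks like structurally, here is a minimal sketch (Python, since the concept is language-agnostic) of an always-on, on-device classification loop. Every name in it is invented; this is the general shape of such a feature under my assumptions, not Apple's actual implementation.

    # Hypothetical sketch of an always-on, on-device classification loop.
    # All names here are invented; this is the shape of the feature, not
    # Apple's code.
    import time

    class SoundModel:
        """Stand-in for an on-device ML classifier."""
        def classify(self, chunk: bytes) -> str:
            return "no_match"  # or a label like "siren", "doorbell", ...

    def capture_buffer() -> bytes:
        """Stand-in for the OS handing over the latest audio buffer."""
        return b"..."

    def respond(label: str) -> None:
        print("detected:", label)  # the configured response for a match

    def run_loop(model: SoundModel) -> None:
        while True:                   # the buffer is always being refilled
            chunk = capture_buffer()  # content never has to leave the device
            label = model.classify(chunk)
            if label != "no_match":
                respond(label)
            time.sleep(1.0)           # classify on a fixed cadence

    # run_loop(SoundModel())  # would loop forever, classifying each chunk

Swap the audio buffer for an image buffer and the sound model for an image classifier, and the same loop scans visual media.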

It doesn't matter if the actual content is then end-to-end encrypted, because the knowledge of the content is now available to Apple. This is DANGEROUS.


I think you're confusing two different things: the optional nudity detection and the mandatory iCloud CSAM scanning.

They abandoned the latter, and that's what the article is about.

The first one is a child safety feature that scans content and warns the user, if enabled.

The second one scans all of everyone's content, and if it detects more items than some threshold, your data is decrypted and you are reported to the police. A deeply disturbing, dystopian feature where your personal devices become the police and snitch on you. Today they say it's CSAM, but there's no technical reason not to expand it to anything.


I'm not confusing anything.

iCloud CSAM scanning does not do what you described in your last paragraph; you're describing the proposed on-device CSAM scanning. There are many "product features" built from a finite set of technical building blocks. Let's break it down:

-----------------------

Encryption, Storage and Transport:

- Presently, this is considered "strong" on-device, where Apple claims they cannot access content on your device and it is encrypted.

- Data sent to their servers as part of the automatic (opt-in) photo backup service (iCloud Photos) is considered fair game to scan, and they do that routinely.

- This data is encrypted "end to end". What that means is up for debate, since it's taken on a commercial-brand shape and isn't a technical guarantee of anything.

------------------

Explicit consent for access and storage:

- This is the point. Your device belongs to you. No one should be snooping around there.

- Apple is presently retiring the aforementioned iCloud photo scanning and has simultaneously released a statement that is very vague about what precisely they mean.

- They indicate that scanning on the server is waayyy too late to prevent CSAM, and that they think the best way to prevent CSAM is at the point of origin.

- Of course, what that means is: "scan everything; when something matches, do an action".

- The "opt-in" here is for the warnings on a child's device, but it is an OS-LEVEL FEATURE.

- If some media is being scanned, ALL media is potentially being scanned.

- Presently, for audio, one easy way to reduce the bandwidth costs of backing up, scanning on a server, etc. is to move everything onto the device itself.

- This has been demonstrated by the Accessibility feature I already called out in my parent comment. You have an audio buffer on your iPhone TODAY that is ALWAYS ACTIVE. When the accessibility toggle is turned on, the contents of this buffer are regularly classified against a trained model.

- When a match occurs, the OS responds with the configured response.

THIS IS A DANGEROUS FRAMEWORK. Swap the media type to any generic type, and swap what you're looking for from CSAM to a mention of a political phrase such as abortion (see the sketch below). You're asking us to TRUST that the company will never be compelled to do that by authoritarian governments, or any hostile entity for that matter? No fucking thank you.
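
To make the "swap" point concrete, here is a toy sketch. Nothing in it is Apple's code; every name is invented. It just shows that a scan -> match -> act pipeline is agnostic to what it is pointed at:

    # Toy illustration of why a scan -> match -> act pipeline generalizes.
    # Everything below is invented for illustration.
    def scan_and_report(items, is_banned, report):
        """Generic pipeline: test each item, act on any match."""
        for item in items:
            if is_banned(item):
                report(item)

    # Configuration A: match image fingerprints against a known-bad set.
    photos = ["img_001", "img_002", "img_003"]
    bad_hashes = {hash("img_002")}
    scan_and_report(photos, lambda p: hash(p) in bad_hashes,
                    lambda p: print("reported photo:", p))

    # Configuration B: the SAME pipeline, repointed at text for a phrase.
    messages = ["see you at 5", "the protest is at noon"]
    banned_phrases = {"protest"}
    scan_and_report(messages,
                    lambda m: any(w in banned_phrases for w in m.split()),
                    lambda m: print("reported message:", m))

The pipeline never changes; only the configuration does. That is the whole concern.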


That’s not how Apple’s system worked. The matching only happened server side. Your device had no knowledge of whether a photo was csam or not.


Nope. In the proposed system, the scan was performed by the device; the entity that evaluated the result of the scan is an implementation detail. Technically it was also Apple that would call the police, but that's also an implementation detail.

It’s just your device scanning your files for content deemed illegal by the authorities and snitching on you to the authorities, but with extra steps.


Your device attached an encrypted fingerprint of each photo as you uploaded it. The scanning and matching happened server side.
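
For what it's worth, a drastically simplified sketch of that split, under heavy assumptions: sha256 stands in for the perceptual hash, and none of the real design's machinery (NeuralHash, a blinded hash database, private set intersection, threshold secret sharing) is reproduced here.

    # Drastically simplified sketch of the client/server split described
    # above. sha256 is a stand-in for a perceptual hash; Apple's actual
    # protocol was far more involved.
    import hashlib

    def device_fingerprint(photo: bytes) -> str:
        # The device computes and attaches this at upload time. It never
        # sees the match list, so it cannot learn whether the photo matched.
        return hashlib.sha256(photo).hexdigest()

    def server_match(fp: str, known_bad: set) -> bool:
        # Only the server holds the database and performs the comparison.
        return fp in known_bad

    known_bad = {device_fingerprint(b"previously flagged content")}
    upload_fp = device_fingerprint(b"a vacation photo")
    print(server_match(upload_fp, known_bad))  # False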


And how do you think that fingerprint is attached exactly? By scanning your files on your device.

Anyway, what is the point of this conversation? Are we going to argue over what "scanning" means?

I’m sorry, but I find this intellectually dampening. Okay: not scanning, but analyzing and sending the results of the analysis to Apple, where Apple can scan. So?


Because you said this: “personal devices become the police and snitch on you”

Which makes it seem like Apple’s approach is worse than what Google does, where they actually do scan through everyone’s full-resolution photos on the server, doing what they please with the results.

My point in making this distinction is that if a company is going to do server-side scanning, Apple’s approach is far more privacy-preserving than a company like Google’s, and that point is being lost.


> all content

It was only content going to iCloud.


Correct, but this means all content if iCloud is enabled; Photos doesn't give you an option to create a folder or album where you can store your photos on-device only.

You also can't have a drop-in replacement cloud provider if you are not OK with being scanned and reported to the authorities for government-disallowed content (there's no technical reason for the reported content to be CSAM only; it could be anything), because alternative apps can't have the same level of iPhone integration as iCloud.


> Correct, but this means all content if iCloud is enabled

That's a pretty big distinction between all content and all content on its way to iCloud.

> (there's no technical reason for the reported content to be CSAM only; it could be anything)

Now we're back to all sorts of assumptions. There's also no technical reason Apple can't scan phones right now.

Apple did say at the time that the hash DB would be checked against different sources to prevent random content checks. And now with E2E, Apple's proposed method was more secure than what happens in the various clouds today, where LEO could show up and ask them to search for anything.

Obviously the most secure is not to check at all, but the law may force the issue at some point.


Because the author seems to have a bone to pick.


Large accounts (>$1MM spend) routinely get zero support on platforms such as Facebook. That side of the business also goes through ridiculous re-orgs.

This quote makes it sound as though they had an AE/AM visit them every week? Most advertisers count themselves lucky to hear from the ads teams once a quarter outside of regular scripted emails.


That’s not my experience at all. My employer is way below $1m annual spend on FB and our in-house ad manager has regular check-in calls with a FB rep who is local to our city. Same with Google and LinkedIn.


Those teams help grow revenue. The more help they give, the more the advertiser is likely to increase their spend. They do a lot of heavy lifting to educate ad teams about new ad products and software changes.


It's sad that censorship decisions are treated differently in the US media based on political affiliation.

Throughout the pandemic, I witnessed people questioning the science behind lockdowns and government overreach being banned, censored, and de-platformed from social media sites, with the common response in the media being "start your own platform".

"The shoe on the other foot" is such a bad place to find ourselves as a society, it speaks to us being incapable of reconciling our differences for a common good.

Since the 2016 election, it feels like the US has gone batshit insane. It was crazy before, but it's incredibly hard to have any sort of conversation about touchy issues these days without an overwhelmingly negative, non-critical response in the form of downvotes, aggressive sorting into political buckets, pariah treatment, etc.

I hope, at least on HN, we're able to hold discussions on political topics without such tribalism.


This isn't a case of partisanship from the critics. If one side claims to be on the side of anti-censorship, then engages in censorship once in power, that is a case of hypocrisy. It doesn't take tribalism to call that out.


[flagged]


A man can be a side. Your pedantry is showing.


I genuinely lol'd at the hypocrisy of you calling me pedantic while coming out with that. Thanks.


You're welcome.


I've had a mixed experience. I've had people call me a mass murderer and a fascist because I identify myself as a communist. (Not a Tankie, fuck tankies.)

But I've also had some great discussions here with people who listen, trade book recommendations, and at least try to learn before writing me off.

A lot more of the former these days, unfortunately.


To be fair, being considered a threat to national security and held as a pariah for having communist ideas is hardly a recent development in the US.


Yeah, turns out if one thinks "maybe the relentless pursuit of growth above anything else is bad", that's pretty scary to some folks?


If that's the defining characteristic of being a communist, then the defining characteristic of being a capitalist must be thinking "maybe the relentless pursuit of financial equality above anything else is bad".

If both of those things are true, then I would guess that at least 90% of people in the world are simultaneously capitalists and communists.


There’s no point in discussing US politics. It’s not possible to do that anywhere US people are allowed to post, because people in the US have nearly literally lost their minds. All because a mentally ill real estate huckster was elected president, found the cracks in America’s foundation that conservatives spent 40 years widening (with, to be sure, much help from the liberals/centrists), and stuck TNT in them.

Buckle up; the next decade is going to make the 1960s look pastoral.


I wish more people would start holding politicians accountable for their promises.

All these sacrifices that residents of California have had to make under the premise of a better environmental future: banning straws, mandating a specific complex fuel blend that forced in-state refineries out of business and left people at the pump paying a lot more for the imported blend, etc. All of these are decisions that have consequences.

The premise that the "good" outweighs the "bad" in these decisions by the California legislature has thus far never been challenged.

