I don't want to be the devil's advocate here, but just because the camera is on doesn't mean anything is being recorded. As an iOS developer who has worked with apps that access the camera frequently, I can confidently say that it's just a UX optimization. Starting the camera takes a few hundred milliseconds, which is easily noticeable from a user's perspective. By keeping the camera ready, they are making the experience smoother. I think this is it. Just because Facebook is an evil company doesn't mean this is directly a malicious move.
I don't understand why people read yet another story about FB's bad behavior and somehow conclude that all of big tech are the bad guys. Is it really so hard to keep them straight?
I mean, there are plenty of stories about Google, Amazon, Apple behaving in ways that could lead someone to that conclusion. Microsoft has the best reputation to me, but I'm young enough that the EEE days don't hold much resonance.
You would think so, but I think the general consensus is that every SV tech company will do evil in a heartbeat if it will improve adoption, retention, or any other KPI, making privacy-respecting companies the exception, not the rule.
Incentives. FB behaves like this because it's profitable for them to do so. Do you expect this sort of behavior is somehow not profitable for other companies of their scale?
Because only one of these companies needs to make a mistake before they all should learn from it. It would be like watching every construction company make the same egregious safety error one after the other. The first one to make the mistake is not as guilty as the 10th company to make the mistake after having just watched the previous 9.
This could be a disconnect between engineers who wanted to make a smooth experience on all devices and a PR manager who does not want anything that could be construed as unethical behavior. IMO, if this does affect performance, it should probably be an option.
This. It's a handwavy familiar term (without defending its usage).
Remember that nowadays people call soundless videos "GIFs", browsers curtail web addresses, and companies report leaked passwords as being encrypted when they were actually hashed.
Tangentially related. Big tech (FB, Google, etc.) used to be the favourite workplaces for programmers. Now what are the exciting workplaces for engineers, considering big tech is doing a lot of unethical things?
Everyone will have a different experience - it’s difficult to fathom the scale of how much work is being done at these companies from the outside.
Consumers often have the mistaken assumption that they know what a large tech company is basically doing. In reality, there are thousands of projects, most of which you’ll never hear about.
In my experience people are more affected by their manager and working group than by the company clashing with their ideals.
If you are passionate about privacy, I believe one of the best things you could do in practice is to join a big tech co and work on E2E encryption, security, privacy controls, homomorphic encryption, etc. Change is easier from within.
> As an iOS developer who has worked with apps that access the camera frequently, I can confidently say that it's just a UX optimization.
Here's what Facebook does once you give it access to the iOS Photos library. Every time you open the Facebook app, it asks you: "Would you like to post these photos on Facebook?" Basically, they are reading the Photos library repeatedly to scan for recent (or past) photos along with their metadata. It could be just UX optimisation. It could also be UX optimisation for harvesting data.
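To make that concrete, here is a minimal PhotoKit sketch of the kind of "recent photos" query being described; the function name and fetch limit are illustrative, not anything from Instagram's or Facebook's actual code:

```swift
import Photos

// Illustrative sketch: any app granted Photos access can run a fetch like this.
func recentPhotos(limit: Int = 20) -> [PHAsset] {
    let options = PHFetchOptions()
    options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
    options.fetchLimit = limit

    let result = PHAsset.fetchAssets(with: .image, options: options)
    var assets: [PHAsset] = []
    result.enumerateObjects { asset, _, _ in assets.append(asset) }
    // Each PHAsset exposes creationDate and location, which is exactly the
    // metadata that makes a "post these recent photos?" prompt possible.
    return assets
}
```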
> to have people post more photos and keep those metrics up
But... that is data harvesting. When you upload photos to Facebook, you are giving them that data. That's not a conspiracy, that's just how Facebook's TOS works. It gets to look at the photos you upload and use that data to help build its internal profile for you.
What's your definition of data harvesting? I would say that getting people to upload useful data to your servers qualifies.
Photos and videos of people are not generally useful for ads. You know what is valuable? Actual purchase decisions sent in by advertisers. Photos and videos of people’s dogs and food and kids etc. are strictly to entertain the other Facebook users and keep them coming back.
An advertiser is willing to pay tons more to serve an ad to someone who has bought similar products in the past, and is likely to buy their product again, or has recently abandoned a non-empty shopping cart. Or has even just interacted with a previous ad. That is the data that increases ad revenue. Not facial expressions while scrolling or group photos at restaurants.
You're saying this with a lot of confidence, but I'm not certain I believe you.
When you have facial recognition software, photos are a contact graph. When they're photos that a user explicitly uploads, they often come with location data. All of that stuff is useful to advertisers.
Facebook can and does target ads based on who you know. If you upload a photo, and the metadata says it was taken today, and it has your location attached to it, and it shows you smiling next to your friend who's recently searched for the new Avengers movie, then yeah, it makes a lot of sense to show you an ad for the new Avengers movie, because maybe the two of you will go to it.
It doesn't even need to be that detailed. Just knowing that you're that person's acquaintance means that now Facebook can suggest that person as a friend, it means that Facebook can add that person to its social graph if they're not currently on the service. Just knowing your location means that Facebook can advertise you something from a nearby store.
All of that is stuff that Facebook wants to know, all of that falls pretty squarely in the middle of the category of data harvesting. Not all data harvesting is just to serve ads, some of it is to know how to tailor the service to you to make you more active (so that you look at more ads), or to suggest friends (which increases your investment in the platform), or how to engineer the service in general (so that more people use it and look at ads).
> Not facial expressions while scrolling or group photos at restaurants.
Facebook used to analyze unposted status updates (where you type out the status and then delete it before you press post) so it could figure out why people were getting cold feet before sharing their thoughts[0]. You really, genuinely believe that they're not interested in your facial expressions? You really, genuinely believe that nobody at Facebook is looking into research like identifying branded products inside of user-uploaded photos?
> Photos and videos of people are not generally useful for ads
If that's true, then how come there was a big hoo-ha a few years ago about Facebook putting pictures of people's friends in ads for unrelated companies, as if the friends were endorsing those products?
Sounds like a very useful way to use someone's photos for ads.
Also, with the state of machine learning, it's hard to imagine how image processing wouldn't be useful for advertising.
I can type "cat" into my phone, and its on-device learning shows me all the pictures with cats in them.
Facebook has the technical ability to scan people's pictures and tell all the cat food advertisers, "Here's all the people interested in cats." Or, even more precisely, scan the photos for products or logos and tell an advertiser, "Here's all the people who use your competitor's product."
It's not hard to see where this is going, assuming it's not already there.
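For illustration, here is a rough sketch of that kind of on-device classification using Apple's Vision framework; the confidence threshold is an arbitrary assumption, and this is not a known Facebook implementation:

```swift
import Vision
import CoreGraphics

// Sketch: classify the contents of a single photo on-device.
// Filtering the returned labels for, say, "cat" is how a list of
// "people interested in cats" could be derived purely from uploaded photos.
func labels(in image: CGImage, minimumConfidence: Float = 0.8) throws -> [String] {
    let request = VNClassifyImageRequest()
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    let observations = request.results ?? []
    return observations
        .filter { $0.confidence >= minimumConfidence }
        .map { $0.identifier }
}
```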
Photos and videos of someone and where they are is valuable for ads. The logo on the beer you're holding in your photo with your friends at the bar, the faces of those friends and what drinks they have, the geolocation information in the image metadata, the date and time the photo was taken, and so on. All of that is valuable data to Facebook, and they're operating at a scale that allows them to fully harvest and take advantage of all of that and more.
At Facebook's scale, even tiny, inconsequential data can be useful in aggregate.
I would imagine that on a large enough scale, analysis of photo contents, timestamp, location, and correlation with any other FB/app activity at that given time/location can provide a ton of new data points to add to their behavior modeling algorithms.
There may not be obvious one-to-one connections that we would make from the information we see in a photo, but the more data points that can be fed into a model, the more accurate the model becomes.
> Photos and videos of people are not generally useful for ads.
They're great for facial recognition, finding out more about relationships that exist in real life, and building sets of training data. Facebook is in all of those markets.
We're talking about people thinking "this company has done a lot of underhanded stuff in the past, maybe they're doing another underhanded thing that they clearly have the ability and motivation to do."
I fail to see how that is a conspiracy theory at all, much less one on par with believing the earth is flat.
> Or, more likely, it's an engagement optimization to have people post more photos and keep those metrics up.
You're responding to a statement regarding impact with a statement regarding intention. You can both be correct in this situation, and comparing your parent to flat-earthers is completely inappropriate.
At best, facebook is willfully neglectful of the impact of their data harvesting activities.
Not all of them are conspiracy theories. Facebook was using a "free VPN" app to MITM and read all of users' web traffic, including HTTPS. Google had to change Android key management to hardcode the browser keys so nobody would do it again.
Doesn't keeping the camera on increase battery drain, though? Also, with Facebook being evil and all, what's stopping them from eventually hopping on the "hey, the camera is already on, let's not waste that, and use the incoming visual data for something while we're at it" line of thinking?
It does, indeed. Not like constantly recording, though. I think they apply some heuristics that keep the camera on in the background (e.g., if you open the story camera frequently and you're on the main screen, but not while you're on a screen where the camera isn't accessible). Otherwise the device would become noticeably hot, but I haven't encountered such an issue.
The way that would make sense, for that particular example, is to start the camera up when the user begins the interaction to open it. Like how https://instant.page/ works. If it takes 200ms to get the camera running, do it while the gesture to swipe over to the camera is in progress. It will take more than 200ms for the user to physically move.
iOS has events for gesture start, in progress, finished and cancelled.
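As a sketch of that gesture-driven approach, assuming an AVCaptureSession whose inputs are already configured elsewhere (the class and method names are hypothetical, not Instagram's actual code):

```swift
import UIKit
import AVFoundation

// Start the session the moment the swipe toward the camera screen begins,
// and release it again if the gesture is cancelled.
final class CameraSwipeHandler {
    private let session: AVCaptureSession   // inputs already configured elsewhere
    private let sessionQueue = DispatchQueue(label: "camera.session")

    init(session: AVCaptureSession) {
        self.session = session
    }

    // Attach to the pan gesture that swipes over to the camera screen.
    @objc func handleSwipe(_ gesture: UIPanGestureRecognizer) {
        switch gesture.state {
        case .began:
            // startRunning() takes on the order of 100-200ms and blocks,
            // so run it off the main thread while the finger is still moving.
            sessionQueue.async { self.session.startRunning() }
        case .cancelled, .failed:
            // The user backed out of the swipe: give the camera back.
            sessionQueue.async { self.session.stopRunning() }
        default:
            break
        }
    }
}
```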
Assuming that this happens only when the user is actively using the app, I'd guess that the display would consume dramatically more energy, making the camera's usage of the battery negligible, also compared to the benefit of then having a very responsive camera.
In any case, I do think that providing visual feedback to the user is very important. Like a small picture-in-picture-overlay.
> I don't want to be the devil's advocate here, but just because the camera is on doesn't mean anything is being recorded.
This is about consumer trust.
To the vast majority of consumers, a smartphone is a magic box with a lens and a display. What happens inside - how data is stored and processed, how the OS functions, how apps work,... - has been abstracted away behind those "UX optimized" interfaces you just mentioned.
Consumers don't buy black-box devices because they trust the few big companies that manufacture the hardware and software: they buy them because there aren't any alternatives that provide the same level of convenience and can solidly guarantee a due level of privacy as to what happens behind the screens.
When consumers buy a smartphone or download an app, they have to blindly put their trust in manufacturers acting in good faith. And that trust has been damaged by plenty of scandals in the past decade.
So, next time you're in public, try to pay attention to this: how many people have taped off the camera of their devices? And why is this product a thing? [1]
It's because there's a fundamental trust issue. Consumers tape those things on their devices because they don't feel in control of what a device does or doesn't record. Covering the camera is literally the only way they feel they are 100% in control.
So, that should leave you with two questions here. (a) If there's a fundamental trust issue about "magic" technology, do you think people will remotely accept any rationalization about "UX optimization" and "not recording" (i.e., your asking them to take your word for it)? And (b) if people go so far as to tape off their camera lenses because of a deep trust issue, how much sense does it even make to implement such a UX optimization in the first place? Isn't that a tone-deaf act of sorts?
I totally understand why this happens, and I can see the convenience of having a snappy camera. It's just that the technical solution to enable this flies in the face of what people perceive as acceptable.
> So, next time you're in public, try to pay attention to this: how many people have taped off the camera of their devices?
I usually do pay attention and so far I have seen one person with a camera taped over. This is just not a thing, no matter how the media wants it to be.
My company's health insurance company spiffed everyone camera covers for their computers and phones. When we were in the office, I saw them everywhere, on both company-owned equipment, and on people's personal gear.
They had the health insurance company's logo on them, so when I saw one on a barista's laptop at the Starbucks down the street, I assume she got it from one of our employees and thought it was a good idea. Maybe people just don't know they're available, or cheap.
> Consumers don't buy black-box devices because they trust the few big companies that manufacture the hardware and software: they buy them because there aren't any alternatives that provide the same level of convenience and can solidly guarantee a due level of privacy as to what happens behind the screens.
They still consent to it, though. People do perceive it as acceptable, and use it to share their private information. If they don't want to use smartphones, they're free to move to another country.
It looks to me like the media are still bitter about Facebook allegedly helping Russians elect Trump, and so they make up this non-news and instill panic and fear into people.
Pretty sure that now most people will be convinced that Facebook is recording them secretly.
Russians did not elect Trump. Americans did. It seems astonishing to me how Facebook is credited with putting Trump in office, merely by being a means to communicate ads, like it's _supposed_ to be in the first place. If propaganda and ads targeted at swaying the voters one way or another are considered illegal, the whole of election campaigns, ads and party propaganda - because it's propaganda, from both parties - should be banned. Plus, American voters are implied to be, through these accusations, stupid kids that have no judgement of their own and will fall off a cliff if an ad tells them to. What is true in all this? Can you, if you have first-hand experience from the US, enlighten me?
It's one thing to enjoy benefit of doubt, it's another to just make up utterly unbelievable fringe crap that can be explained by hundreds of other explanations.
Snowden revealed a lot of stuff that was just the domain of crazy fringe theories before. Anyone morally okay with Facebook's known problems would also most likely be morally okay with spying on users like this. Hence, I agree that giving them the benefit of the doubt is irrational.
If you missed my point: it's one thing to give benefit of the doubt (which they don't deserve) and it's another thing to just make up dream rantings about what companies are supposedly doing with the data which can be easily checked for truth with basic understanding of underlying mechanisms.
A developer went to their tech lead and said, 'Hey, we can make the app much better by leaving the camera on all the time; it saves 200ms on camera startup time.'
Tech lead: this is brilliant no potential problems here
Tech lead to management team: got this great new feature, we leave the camera on all the time, which makes the app 200ms faster.
Management: wow that's great can't see any problem with that, no chance of reputational damage there.
And it's all innocent and no one has any idea why anyone would get the wrong idea?
I see you don't have much experience in development. It probably went more like this:
business: "Opening the camera seems slow", dev: "it takes 200ms for the system to start it", "Can you make it faster?", "Not really, unless we keep the camera on", "Do that!", "Won't it have privacy issues?", "Don't worry about that".
EDIT: And in the end business is correct, because nobody really cares about this. Every happy instagrammer keeps happily instagramming.
This is so true that at other companies it often takes a lawyer (in-house counsel) reminding product that something is legally precarious to stop it, and they still push.
For something that has no real legal ramifications, there’s no way you’re stopping it.
The implicit one we can never admit to is something like: the Instagrammer has a different sensitivity to 200ms than the dev does, and never cared.
The dev has to justify his egregious salary by manufacturing statistics to “experiment with engagement”. Never mind that the literal reality of having such a gadget is titillating as is; manufacturing the belief that a specific dev making the camera respond 200ms faster is what really made the app is where that paper is.
Camera start up time actually is quite important in my opinion. For example, I stopped using snapchat primarily because it felt laggy to get the camera open. That really grates on you when an app is mostly used for spontaneous image/video capture.
> Management: wow that's great can't see any problem with that, no chance of reputational damage there.
Absolutely agree, but it also wouldn't surprise me if they didn't think about it or didn't care.
If 2020 has taught me anything, it's that a surprisingly high number of people / companies / groups managers and/or leaders, do not think about or care about long-term repercussions of the vast majority of their decisions.
> If 2020 has taught me anything, it's that a surprisingly high number of people / companies / groups managers and/or leaders, do not think about or care about long-term repercussions of the vast majority of their decisions.
2020? I was thinking 1980. To my memory, it seems to have started with the Savings and Loan scandals, and gotten worse.
Companies went from optimizing for 100-year growth to optimizing for the next three months.
Not sure why someone is assuming Facebook did this for an honest reason when Facebook is the best example of an extremely dishonest large tech company
*
The founder Zuckerberg literally wrote: "They trust me, the dumb fucks."
He wrote this in college
He has shown a pattern of treating his users as 'dumb fucks'
Again and again more and more data comes up that Facebook is a dishonest company. not a 'by mistake' dishonest, but a fundamentally dishonest company
And white knights like this guy always show up claiming 'it is to optimize loading speed'.
*
Now Facebook is finally getting into trouble with even the left, because instead of helping Obama get elected it seems now they are helping Trump get elected
So their own dishonesty is so extreme that they are trying to help both Democrats and Republicans thinking both sides are dumb fucks that will forget what Facebook did to screw them over in past elections
Amoral does no less harm overall than evil, it's just done to 'the masses', not particular targets.
No one at Facebook gets to claim they were just following orders if they have _any_ understanding of the possible consequences of their work.
(If you believe the authors/operators that specifically maximize profits over responsibility are not responsible for their outcomes, you should read back into the posting about PG&E in CA: deliberately ignoring the wear & tear on 100 year old support hooks on high-tension power lines, thus sparking the fire that burned down Paradise, CA.)
It's true that initializing camera hardware takes noticeable time. We had the same complaint from app developers in Firefox OS and we implemented a small trick to improve the experience: when launching an app that has the "camera" permission listed, we start to initialize the camera hardware in parallel with other app startup tasks. That doesn't trigger stream acquisition so it's fully safe, but we gain a few hundred ms overall when the app actually needs the camera preview.
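Firefox OS itself wasn't written in Swift, but a loose Swift analogue of that trick might look like the sketch below; the point is that the session gets configured during startup without ever starting the stream:

```swift
import AVFoundation

// Hypothetical warm-up helper: configure the camera concurrently with the
// rest of app startup, but never acquire any frames.
final class CameraWarmup {
    let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "camera.warmup")

    // Called from application launch, in parallel with other startup work.
    func preconfigure() {
        queue.async {
            self.session.beginConfiguration()
            if let device = AVCaptureDevice.default(for: .video),
               let input = try? AVCaptureDeviceInput(device: device),
               self.session.canAddInput(input) {
                self.session.addInput(input)
            }
            self.session.commitConfiguration()
            // startRunning() is deliberately NOT called here, so no frames
            // are captured until the UI actually needs a preview.
        }
    }
}
```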
One of the most annoying things is to try and launch your camera on a phone and have to wait multiple seconds before you can hit the shutter, at which point the thing you want to photograph might already be gone. So I can definitely understand the logic behind this, even if it's creepy.
IIRC, Apple has invested specifically in reducing startup latency for the camera, but I don't know if that applies to apps.
From what I can tell the startup time is more or less the same for both the built-in camera app and third-party apps. IIRC the built-in camera used to start a bit faster in the past, but that's not an issue anymore. The difference is that when we launch the camera app, the icon animates from the home screen to fill the device window, so the latency is less noticeable. In Instagram, for example, the app is already open in full screen, so starting the camera from a user action, even though it takes the same time as launching the camera app, is more noticeable from a user's perspective.
Your point is well said and accurate, and was the first thing I thought of as well.
That said, I think we're well past the point of no return vis-a-vis giving Facebook the benefit of the doubt on basically anything privacy-related. I don't know anyone who would argue that Facebook wouldn't do this if they thought they could get away with it; the "Facebook is listening to my conversations and showing me ads" conspiracy theory gained so much traction, not because people want to believe in conspiracy theories, but because people have no difficulty imagining that Facebook would do something like that if they could.
Sounds like iOS's fault for not providing a way to keep the camera on standby at the expense of battery life, without compromising privacy. Just like how there should be more privacy-preserving methods for querying clipboard contents, as in the similar TikTok scandal when a UX optimisation was mistaken for a data collection issue.
Very good point. User experience is very important. This is also why Amazon periodically breaks into your home and makes sure they have a complete inventory of what you own, so they can ensure a speedier and more appropriate ordering workflow. At face value this seems unethical, but you have to remember that they are improving your user experience and making money, so it’s all ok. And I mean it’s not like they are stealing anything anyway, and you can’t really expect them to respect your privacy when you’ve signed the eula.
Oops, sorry, that seems to be a message out of Bizarro World that crossed over.
We've gone from "it's a service to share photos with friends" to "Sure, the camera is on, but it's not recording or transmitting!"
Advocating for the Devil generally gets you sent to hell. At this point I don't even really care to hear legitimate technical reasons for this because time and time and time and time again, Facebook continually acts in antisocial ways, and it does so with impunity. I'm so sick of it, and I'm so sick of the apologetics.
Facebook is incentivized to do bad things. Facebook indeed does bad things. Any action they take needs to be examined through that lens. Facebook started out as a social network (well, originally it was a tool to stalk women), and rapidly grew to be a democracy-ending ad machine.
So please don't sit there and make excuses for this company.
It seems like a dumb move, which is why I don't think they're doing what they're being accused of. It could certainly just be a technical glitch. I don't think they are spying on users like that. They have -plenty- of data already and are getting more every day.
Yes. For example, Webex on Windows seems to keep the camera on (led lit) even though I'm not in a call. Irritating. One more reason why I don't love that software. Teams (and many others) handle the camera correctly.
Why is there no LED light when the camera gets power on a smartphone, like on a MacBook? It's insane that smartphone manufacturers thought it was a good idea to not indicate when your camera(s) activated. I've never understood this antipattern...
Starting from iOS 14, which was released just a few days ago (I don't know about the Android side), it does. It also displays which apps have used the camera/microphone in Control Center recently, in case you've missed the subtle dot. I think they've nailed it this time. (Though I personally think the design can improve.)
> As an iOS developer who has worked with apps that access the camera frequently, I can confidently say that it's just a UX optimization. Starting the camera takes a few hundred milliseconds, which is easily noticeable from a user's perspective.
Look at it from a user's perspective:
As a user, I don't know you. I have no reason to believe you. If I see my camera being accessed, I have no way to tell whether the data stream gets processed by the app or sent straight to /dev/null. I'd have to assume the latter on trust. But what reasons does your app give me to do that? The loads of ads assaulting me? Or perhaps the malicious UX dark patterns sprinkled everywhere? The regular media stories about omnipresent surveillance capitalism don't reassure me either.
It may very well be a performance optimization in Instagram, or in many other applications. But users have no way to know it, no way to be sure of it, or (in most cases) no way to even conceive that such an optimization is needed. Meanwhile, they have perfectly good reasons to assume a nefarious purpose.
Yup, I don't trust Facebook either. But many regular users (not the people here on HN or Reddit) don't even know what that means, let alone care. Facebook knows only a minority would make this an issue whatsoever, so they keep it. Not defending right or wrong, but I think this is just it.
Let’s assume for a moment the story turns out to be true: would people stop using Instagram because of it? Looking at the history of privacy violations and the market’s response to it, I doubt it. There is a minority of users who will feel rightfully violated in their privacy (imagine browsing Instagram naked while possibly being ogled at by some creep at FB). The worst that will happen is that FB will be sanctioned to pay a laughably low fine. I think we need much harsher repercussions against these practices, up to barring them completely from operating.
People still smoke even though everyone knows it's bad for you. Maybe like smoking we need a minimum age for social media that's actually enforced and heavily penalized?
A lot of people would probably deny camera permissions to Instagram (and upload by sharing from gallery if they upload content), and their reputation would take a significant hit (currently they aren't known to covertly gather any personal data beyond generic OS-provided statistics).
Also, Instagram isn't really essential for communicating and learning about events in the way the main Facebook service is, so giving it up is highly feasible.
Agreed. I just checked my settings and apparently I already do this "deny and post from camera roll" by default. I have no doubt I was just making a privacy-conscious choice.
Of course people would stop using Instagram. A major app provably spying on you from your camera is an extremely big deal, and is something every non-tech user can easily understand.
The truth is though that as in ~100% of such sensationalist headlines there is nothing alarming in what the app is doing, which is why users don't care.
Blame has been placed on a bug, meaning you can safely assume there is irrefutable proof of camera access. (Why else would the claim of a bug need to be made?)
> would people stop using Instagram because of it?
Honestly, what other people do with their data/privacy concerns doesn't affect my life. So, if they are cool with that, that's on them.
For me (my family), we are fairly anti-technology at home. 'Anti' in the sense that we actually enjoy the simple life, and retirement for us is on a farm, off the grid. So this behavior, for me, is the last straw. I will be removing IG within the next 12 months. (I won't get FB, WhatsApp, etc.)
This is why iOS is badly in need of a “Task Manager” and an “Event Viewer”, so that users may easily audit what resources each app is currently accessing, and which resources they have accessed in the past, including logging each instance of camera or microphone use by any app.
Apple has the power to easily mitigate these concerns - yet they do not.
Because my mother is using an iPhone and nothing you described would make sense for her.
She is pretty good with tech and understands how computers and phones work, but she just does not want to browse a process list to see which process is using the camera.
> but she just does not want to browse a process list to see which process is using the camera.
How do you know until you've tried?
At the risk of provoking a tangential discussion, I don't think keeping users "in the dark" about what's happening is ever a good idea. They don't have to learn if they really don't want to, but deliberate opaqueness is bad --- if anything, it only leads to more learned helplessness and ignorance. Of course, that's probably what companies want in general, since it means they can control users more easily...
Additionally, even if the main user of the device will never pull up a task manager or usage log, that info can be useful for a friend, relative, or hired technician who is trying to help them figure out what's going on with their device.
Instead we have to explain to people that their photos aren't "in" the photos app and their music isn't "in" iTunes. It's the thing you mentioned: file managers will just confuse people so let's not include one. Easier to have them think of files as "living" in the apps we assign to open them.
The comment you're replying to does not describe a process list.
Users do understand the concept of apps, obviously. A log of "App X used the camera", "App Y accessed your location" is well within what non-tech-savvy smartphone users can understand.
My main point is that you need to present this information in a way that even people who can't or don't want to know about process lists can interpret it, because they are the majority.
It's the same as how you have an engine temperature gauge in most cars. I'm not really interested in what's going on inside the engine; I just want to know if the engine overheats so I can check basic things (cooling system leaks, etc.), and if I cannot see what's wrong I wait and try to get to the nearest garage or call roadside assistance.
As a power user, I see value in a process list but I see why it does not take priority over other features.
Not my intention for this to come off like a plug, but Android has been doing this for quite a while now. It simply shows any process being run as a silent notification and then goes away when it's done. I agree keeping a running log is probably unintuitive even for an advanced phone user, but the way Android has pulled this off is surprisingly easy to wrap your head around.
You make a good point with 'your mom' but this would be an extraneous feature, not necessary and therefore not in the 'regular case path' of behaviours.
So I think it's a valid thing.
Though iOS is getting out of hand with complexity these days.
> Though iOS is getting out of hand with complexity these days.
That's because iPhones were always full-featured general-purpose computers, just severely locked-down and restricted from user control. We are simply seeing the results of that opaqueness.
Not saying they shouldn't do it, but in a company with finite resources dedicated to the needs of 100,000,000 users, <1% of whom would ever look at such a thing, not getting around to that feature would make a non-zero amount of sense.
Reminder: this company that you say can't possibly spend a few hundred thousand dollars to add a feature (one their underlying OS most likely already supports, and a definite net good for security researchers and industry watchdogs at a minimum) is worth two trillion dollars and has absurdly high profit margins.
iOS 14 shows a green or amber dot in the top right if the camera or microphone is active. But yes, I generally agree that more robust monitoring tools are still needed. iPad is "a computer", right, Apple?
There could be an industry-wide agreement to reserve the top two pixel rows for realtime visual feedback of sensor use. The 1st quarter of the display refers to the microphone, the 2nd to the camera, the 3rd to GPS and so on, with standardized ways of notifying what is happening, i.e., blinking means it's currently active, red that it has been used at least once during the last 5 minutes or so, with more detailed information in the notification bar.
> There could be an industry-wide agreement to reserve the top two pixel rows for realtime visual feedback of sensor use.
Apple already has this. Little orange dot indicates audio. There's a blue arrow for location access. I don't know if there's something for video or not.
You can then go into the Settings app and see which apps were using those features.
Or iOS could show an icon similar to the location services one. A full camera icon means it’s in use; a hollow icon means it’s been used recently. Ideally it would have a list of apps, just like location services.
In the new iOS 14 there's a green (camera) or orange (microphone) dot in the upper right-hand corner of the phone. If you swipe down from the top right it'll show you what apps are using the input and which apps have used it recently.
Being the lawful-evil person I am, I once imagined a “news” app whose displayed content or suggestions change ever so slightly and unnoticeably depending on how the user behaves, as judged by the device camera.
A facial-expression ML model would be trained beforehand, so camera data would not have to be collected; the model would only be pushed to users along with app updates. Content would be analyzed and encoded server-side to determine what users are meant to feel, along with hints to aid learning and correlation between different stories, all subconsciously and automatically.
As for what to do with a system like that... I don’t know, make boatloads of cash by forcing in-app purchases? Push your favorite but little-known novels and comics? Make a group of people I like, to my specifications? Promote good posture and regular exercise and force people to look up? I wouldn’t want to use it so that specific elected officials pass or resign, or so that the world’s atrocities become accepted...
Then I got bored. But could it have been something like this?
I wonder if people make distinct enough facial expressions while reading online content for this to work. For myself, I pretty much stare blankly at the screen the whole time without much change in my expression. The ML would have to be very good at detecting slight nuance.
To your point, one of the things I worry about with apple news, for example, is that the underlying ai won’t understand _why_ I dislike a particular news story. Generally I downvote hyperbole, clickbait, and paparazzi content, but I would guess it’s more common for people to downvote things based on the substance of a story, e.g. downvoting a well-written, truthful article because the truth in that case is inconvenient for their political stance, or because it’s about a loathsome-yet-newsworthy person. A facial recognition system would have to know the difference between my face being frustrated at a high-quality story about the fires in the western states vs my frustration that apple news seems to find Andy Borowitz to be hilarious and relevant. Maybe possible, but lots of nuance under the surface.
Wasn’t there a proof of concept for exactly this about how social media apps were probably already doing this? I believe I saw it on HackerNews. It was a site where you gave them camera access and then ran something like a 60 second experiment. Seemed academic.
The word you are looking for is "sociopath". Oliver Wendell Holmes' "bad man" is an analytical tool, not a recommendation.
There's nothing inherently evil about the mood-detection feature you have described, provided it is deployed with continuing and informed consent from the user.
The feature is already deployed on billboards in public spaces, although this instantiation is evil as "users" are generally unaware of it, have not consented to participation, and cannot opt out.
Please don't throw mental health diagnosis willy-nilly around on the internet. Not only is it impossible for you to make such a diagnosis, it is rude to both the community you are using as a punch line and to the poster.
Existing implementations only detect moods and collect data for analysis; my idea is that if you're just going to do open-loop brainwashing, you don't have to know the users at all, so it could be done without "invading privacy".
I was making a general statement that the ideology of "maximize profits by any lawful means" is a profoundly antisocial one.
I don't think many people actually subscribe to that ideology, but enough do that it's worth challenging whenever it is articulated (as it seems to have been in your comment).
> ok, I mean no harm to humanity, but, um,
Sorry to gate keep but then you're not lawful evil. A lawful evil character is completely okay with harming people (through legal means of course) as long as it benefits them.
Is the main reason the App Store still has public support this belief that Apple instruments the apps it reviews? Because it looks like they don't. If that's the case, it's trivially easy to get malware onto the iPhone, and the only thing the App Store really does is give Apple power.
This is well known among iOS developers. Apple's approval process is far too cursory to catch malware. Also a lot of crapware gets through. The best you can say about it is that it filters out the really obvious stuff.
Is this the same scandal from before: where the app had the camera on because when you swipe to record a story they wanted it to be instantly available? Did it just take a while to reach lawsuit status or is this about some new thing?
One of us would have had to write this code, a PM would have had to authorize and break it down, a UX designer might have had to design its internal dashboard for "how to perv on users most efficiently" including flows and user stories, and then that code would have had to be pushed to the Instagram git repo, which surely has some ~hundred devs on it who saw it go in, or may notice it at some time in the future.
Now I know it takes a healthy dose of cognitive dissonance to work for facebook in this day and age but the main reason this doesn't seem likely to me is that that's way too many people involved in something beyond shady (even for Facebook) for this to not get out. There's even a [formula](https://journals.plos.org/plosone/article?id=10.1371/journal...) to calculate how likely it is for a conspiracy to remain a secret depending on the number of people involved.
Multiple (horrible) events through history can provide the insight that as long as you partition each task enough, people will happily work and contribute to a horrible end goal and later claim they were only following orders and had no idea of the side effects of their individual actions.
You have a very optimistic outlook on corporate culture. Do you code review every piece of code that goes into a corporate project? I work at a relatively small company with three apps and 5 iOS devs, and I look at maybe 20% of things committed. If every dev looked at every line of code every other dev produced, 100 devs would get nothing done, ever.
The reality is this was probably a marketing initiative ("Can we tell what users are most interested in on the screen?") and the answer is yes: on-device eye tracking is very simple to achieve using Apple's Vision framework.
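As a hedged sketch of how little code such a signal takes on iOS: Vision's face-landmark request exposes pupil positions per frame. True gaze estimation (e.g. ARKit's ARFaceAnchor.lookAtPoint) is more involved; this is only a rough approximation, not anyone's known implementation:

```swift
import Vision
import CoreGraphics

// Pull pupil positions out of a single camera frame. Comparing these across
// frames gives a crude "where is the user looking" signal.
func pupilPositions(in frame: CGImage) throws -> [(left: CGPoint, right: CGPoint)] {
    let request = VNDetectFaceLandmarksRequest()
    try VNImageRequestHandler(cgImage: frame, options: [:]).perform([request])
    return (request.results ?? []).compactMap { face in
        guard let left = face.landmarks?.leftPupil?.normalizedPoints.first,
              let right = face.landmarks?.rightPupil?.normalizedPoints.first
        else { return nil }
        // Points are normalized to the detected face's bounding box.
        return (left: left, right: right)
    }
}
```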
I suspected for a long time that they are doing the same on Android.
There must be a very small mechanical part in my phone's camera (probably the lens when it focuses) that makes a subtle noise when activated. I noticed the noise was present every few minutes when I browsed on Instagram, but only when I forgot to turn off camera permission.
It is creepy, no matter what the reason behind this.
I personally do not accept app camera permissions for any app that I don't fully trust. I stopped trusting the Facebook app years ago when it launched itself constantly in the background.
On a side note, IG works perfectly fine being denied the Camera permission. In fact, it doesn't even complain about it until you try to take a picture in-app... and most users' workflow is taking pics with their phone's camera app (which is almost always better than IG's) and uploading the actual files to IG (you can still apply filters and everything else this way).
Lately there was also this thing with apps always checking our clipboard. I wonder how many websites do the same and check your clipboard all the time. And with more and more APIs in browsers, I wonder how many websites will start to listen to and watch you.
It's not just websites. Desktop apps also have access to your clipboard without any permissions. Can't wait until it becomes more common practice to provide an audit trail of any apps snooping on the clipboard.
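For perspective, on iOS the equivalent snooping is a one-liner with no permission prompt (iOS 14 at least flashes an after-the-fact "pasted from" banner); a minimal illustration:

```swift
import UIKit

// Any foregrounded iOS app can read the shared pasteboard with one call.
func clipboardContents() -> String? {
    UIPasteboard.general.string
}
```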
Even if this was true, which I doubt, what would Facebook actually gain by doing this? They'd see a person's face from an unflattering angle with an unflattering expression.
Or they'd see whatever the front camera sees at the time, which in my case right now is the back of my phone case or possibly the floor.
They'd be able to track what your eyes focused on and for how long, whether you wore glasses, were bald, your approximate age and ethnicity, whether you have facial hair, your reactions to content, and a lot of other behaviors and metrics that would never cross your mind.
Instagram already does all of the above, but so does a lot of digital signage in shopping districts, in addition to tracking your BT HWID, to monitor trends in real time. Then they sell this info to e.g. financial companies that operate things like mutual funds.
That's what I was doing with mine in the mid-2000s, anyway.
Yes. Have you ever used one of those face swap filters that turns you into a funny character that mimics your facial expressions?
Well, how does it know?
A mirth-filled squint of the eyes, dilation of pupils at something unexpected, a tightening of the mouth signalling ennui, the movements that accompany a brief nasal exhalation at something on the nose to your taste in humor... all add to something.
One very useful thing would be running facial recognition on all the faces the camera incidentally sees, in order to infer new connections between people.
With the VR/AR initiatives they have, people will voluntarily share what they are looking at. Make sharing information an integral part of the experience and you get everything without any wrongdoing.
I wonder how unfettered Mark’s access is to all their data and systems. The answer to this question should be public in order to bolster trust, but as far as I know it is not.
More on topic, I doubt this particular feature is nefarious, but those who would spy often seem to rely on others’ doubt that it is happening.
This thing where Facebook collects a lot of data in ways to which most people would object for years and years, and then when they get caught they claim it was a "bug", is getting really old.
To make matters worse, they usually tend to re-introduce the "bug" as some kind of feature a few years later, as their way of saying "yes, we really do think our users are stupid enough not to realize what we've done, why do you ask?!"