Mics without physical disable switches, ones that actually short the input to ground through a resistor (an open circuit can still leak), have always been a problem. It's something of a "Darwin in action" issue: during the critical period when people could have demanded this real and effective solution (the 1990s and early 2000s), they didn't seem to care.
Then laptops took off as primary computers, people stopped installing their own peripherals, and nobody demanded external circuit-interrupting switches, not even for cameras.
Back in the 1990s I was telling people, your plain speakers are not safe either (from being used as listening devices) as long as they do not have their own amps and are driven directly by a chip in the computer. How much do you know about that chipset and who makes it? One of the major early chipset players was Realtek.
Then some Ben-Gurion University researchers burst onto the scene in 2016 ( https://www.usenix.org/system/files/conference/woot17/woot17... ) with a real proof of concept: they used the jack-retasking (assignment matrix) facility in (you guessed it) a Realtek chip to turn an idle pair of headphones into a listening device.
I'd be interested to hear anyone who heard the other shoe drop. I don't doubt that an internal assignment matrix facility exists in many modern chipsets. It would take a concerted and deliberate effort to mitigate this kind of vulnerability. Has anything like this been done?
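For anyone curious what that retasking looks like in practice: on Linux, HD Audio pin widgets can be rewritten at runtime with hda-verb from alsa-tools. The sketch below is illustrative only; the device path and node ID are hypothetical examples (real node IDs are listed in /proc/asound/card0/codec#0), and per the HD Audio spec the pin-control bit 0x20 enables input while 0x40 enables output.

```
# Inspect the codec to find the headphone pin's node ID (hypothetical: 0x21)
cat /proc/asound/card0/codec#0

# Retask the output pin to input: clear OUT_EN (0x40), set IN_EN (0x20)
hda-verb /dev/snd/hwC0D0 0x21 SET_PIN_WIDGET_CONTROL 0x20
```

A single register write like that is essentially what makes an output jack behave as an input; mitigating it would mean codec firmware or drivers that refuse runtime retasking.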
In the U.S., it's definitely illegal to listen in anywhere there's an expectation of privacy.
Disney recently argued that the TOS for a Disney+ free trial subscription could be used to force into arbitration the husband of a woman who died from anaphylactic shock after her allergy requirements were mocked and disregarded by staff at a Disney restaurant, because he had signed it and it included a forced arbitration clause.
So, if you've given apps permission to use your mic, their lawyers may be telling them it's legal to use them at any time for any purpose.
For a contract to be binding, there is supposed to be a meeting of the minds, a shared understanding between the parties of what is being agreed to. Their argument is you signed the 50 page contract, so you knew.
If that doesn't sound right to you, talk to your congress person.
Yeah, some people believe Facebook listens to their conversations and uses them for ad targeting. In theory it's possible, but in practice it would mean uploading billions of audio files (consider how many conversations happen on an average day near how many phones with active FB/IG accounts on them) and having the computing power to transcribe them.
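To put rough numbers on "billions of audio files": a quick back-of-envelope estimate, where the device count, capture hours, and bitrate are all assumed round figures for illustration, not measured values.

```python
# Back-of-envelope: raw upload volume if every active device streamed its
# surroundings. Every input below is an assumption, not a measured figure.

devices = 2_000_000_000          # assumed active installs with mic permission
hours_per_day = 4                # assumed ambient capture per device
bitrate_kbps = 24                # assumed speech-grade codec bitrate

bytes_per_device = hours_per_day * 3600 * bitrate_kbps * 1000 // 8
total_petabytes = devices * bytes_per_device / 1e15

print(f"{bytes_per_device / 1e6:.1f} MB per device per day")
print(f"{total_petabytes:,.0f} PB per day across all devices")
```

Even at a modest speech bitrate, that is tens of petabytes of inbound audio per day before a single second is transcribed, which is the crux of the "in practice" objection.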
And even if they had the resources, being caught doing it would be too damaging...
Then again, some dumb ad salesman at some cable network might believe the myth, think "If FB can do it, why can't we?", and try to implement this surveillance-capitalism-for-ads idea.
Even a few years ago it would be perfectly possible to do audio based content ID on known content to uncover viewing preferences on streaming media - a proven business model, ref https://samsungads.events/acrguide-pr
These days you could easily run Whisper locally on each cable box and report back whatever information advertisers will pay for. It will run a bit hot, but what do they care if they waste your electricity.
Yeah, listening for and matching specific songs or words is trivial to do locally. It has been done commercially since the early 2000s (for ASCAP) on mobile phones, back when they were dumb bricks. The idea that every single company is taking some righteous non-profit-seeking stance is sort of laughable. Maybe FAANG have decided it's not worth it (they have other ways), but that doesn't mean they couldn't, or that other apps don't.
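How trivial local matching is can be shown with a toy Shazam-style fingerprint: take the dominant FFT bin per frame and compare signatures. This is a deliberately simplified sketch (real systems hash constellations of spectral peaks); the "jingle" and sample rate are made-up test values.

```python
import numpy as np

def fingerprint(signal, frame=1024):
    """Toy acoustic fingerprint: dominant FFT bin in each frame."""
    frames = len(signal) // frame
    sig = signal[:frames * frame].reshape(frames, frame)
    spectra = np.abs(np.fft.rfft(sig, axis=1))
    return tuple(int(b) for b in spectra.argmax(axis=1))

rate = 8000
t = np.arange(rate) / rate
jingle = np.sin(2 * np.pi * 440 * t)    # known reference clip (hypothetical)
other = np.sin(2 * np.pi * 1000 * t)    # unrelated audio

# "Captured" mic audio: the jingle plus background noise
captured = jingle + 0.05 * np.random.default_rng(0).normal(size=rate)

db = {fingerprint(jingle): "jingle", fingerprint(other): "other"}
print(db.get(fingerprint(captured), "no match"))
```

A lookup like this runs in milliseconds on even very weak hardware, which is why on-device matching against a small target list was feasible two decades ago.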
Unless laws exist to prevent this, you can bet that ALL BigTech are doing this, to varying levels. If they are not doing it in your country, you can bet they are doing it in other countries. The US government incentivised them to do this with the PRISM program ( https://en.wikipedia.org/wiki/PRISM ) and showed them the value of collecting user data even if they do not need it (right now) for their services and products.
That services like Alexa, Siri, Cortana, Assistant etc. exist with 24x7 listening devices shows that cost is no longer a hurdle in deploying this at scale. And note that they do not have to upload the audio files (which by the way, they can - I remember Google showcasing an audio codec - https://github.com/google/lyra - that is suitable for very low bandwidth, so neither bandwidth nor storage is a big issue today). Today's phones also have enough power to transcribe the audio on device itself (e.g. Google Live Transcribe feature now works when offline - https://9to5google.com/2022/03/10/google-live-transcribe-fea... ). Why do you think there is a sudden push now to put AI on SoCs and thus on device? It's partly because BigTech want to offload more and more processing on to your device. We are at a stage where hardware has outpaced system software development and is actually underutilised.
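The bandwidth point is easy to quantify: the Lyra project's stated target is on the order of 3 kbps for intelligible speech. At that rate, a full day of continuous audio per device is tiny.

```python
# Storage/bandwidth at a Lyra-class bitrate (~3 kbps, per the project's
# stated target) for 24 hours of continuous audio from one device.

bitrate_bps = 3000
seconds_per_day = 24 * 3600
mb_per_day = bitrate_bps * seconds_per_day / 8 / 1e6
print(f"{mb_per_day:.1f} MB per device per day")
```

Roughly 32 MB per device per day, less than a few minutes of video streaming, so neither bandwidth nor storage is the bottleneck it once was.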
(And we are also at the techno-cultural cusp where the ownership of most of our devices are questionable, and moving towards a dystopian future where we will no longer be able to claim rights on these computing devices).
I know they aren't; at least, I believe they aren't listening.
But sure as hell feels like it sometimes!
Still, I'm smart enough and knowledgeable enough to know that combined targeting signals, plus my own confirmation bias, can make it seem like we are being listened to.
We can sometimes talk about something and then literally see an Ad for it.
Well there are strange cases when we talked with a friend about something and then suddenly the algorithm decided to push content or ads that are related to the topic. While we didn't search stuff on the topic.
Yeah, saying "some people" believe they experience this is an understatement. People my age seem to generally be aware their phones send them targeted ads based on their conversations. It happened to me frequently with my Nexus in the Google News app. Happens to my girlfriend on her Android in the Facebook app. I switched to iPhone, never installed Facebook, and haven't seen the same effect in Apple's news app.
> And even if they had the resources, being caught doing it would be too damaging...
Would it really, or is your comment the response to them already having been caught? How would we really know the difference if nobody wants to believe it? This is exactly how things were before Snowden. Your argument begs the question, in other words.
I share the author's skepticism that Cox would have ever been capable of this. Especially not at the scale that was claimed in the deleted B2B marketing materials.
While there are reasonable concerns about microphone access in various devices found in consumers' households, especially cheap IoT devices (although I'm EXTREMELY skeptical of claims that any agent has unfettered microphone access on smartphones, given the restrictions that have been put in place on microphone access), I don't believe a cable company of limited scale (compared to any big tech firm) could have the engineering chops to pull this off.
Things that would be required:
* getting this system to work with an exceptionally large variety of devices from nearly as many brands.
* having nation-state-quality malware to get access to the microphones on many of these devices (since manufacturers would either not want to work with them or have no incentive to)
* battery-powered devices not showing user-noticeable power draw (think a DualShock controller)
* having advanced enough AI (bigger issue 2 years ago than it would be now) at a very large scale filtering out noise and categorizing conversations such that they are useful to advertising partners (probably possible now with the vectorization techniques in LLMs)
* having the AI be cost-effective at that scale without local processing (unlikely to process locally on the devices themselves due to processing-power limits; unlikely to process locally on their modems, because cable modems are made as cheap as possible to deliver the services being sold, without a FLOP to spare); this is a much bigger roadblock than the advanced-enough AI
* actually hiring talented enough engineers to accomplish these things (especially the AI)
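The cost-effectiveness bullet is the one that's easy to sanity-check with arithmetic. Every number below is an assumption chosen for illustration (the subscriber base, hours of ambient audio, and per-hour transcription price are not quoted figures):

```python
# Rough cost of server-side transcription at cableco scale.
# All three inputs are assumptions for illustration, not quoted prices.

households = 6_000_000        # assumed subscriber base
audio_hours_per_day = 8       # assumed ambient audio per household per day
cost_per_hour = 1.0           # assumed cloud speech-to-text $/audio-hour

daily_cost = households * audio_hours_per_day * cost_per_hour
print(f"${daily_cost/1e6:.0f}M per day, ${daily_cost*365/1e9:.1f}B per year")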
And frankly, they are such a small player in the grand scheme of things that they would be idiotic not to also sell this tech to every cableco/telco in a developed nation, to get their money's worth. Spending that much money would be 'discuss it at the shareholder meeting' important, since they are otherwise limited to where they can physically roll out Cox cable.
Frankly, it's a bunch of marketing hot air (which they've since removed).
You're getting too far into the weeds with IoT and smartphones. TFA talks about mics in cable boxes and smart TVs. I wouldn't put it past them to monitor mics in cable boxes.
Or have 470 really shady apps with a small install base that actually include a listening SDK (with the OS microphone warning showing), then pad the rest with garbage data from other sources to pretend their solution covers far more users than the very few it does.