Microtargeting as Information Warfare [pdf] (army.mil)
95 points by donohoe on Jan 11, 2022 | hide | past | favorite | 25 comments


Sure, microtargeting could be potent.

But its effectiveness depends so much on the message, we could be calling anything information warfare.

I feel that the automated microtargeting part is often overestimated. We are routinely exposed to a huge range of content and are pretty resilient; the delivery doesn't radically change that [1].

[1] An exception is new media such as radio and TV in their infancy, and perhaps Facebook for elderly people today - see the paper by Andy Guess, Josh Tucker, et al.: https://www.science.org/doi/10.1126/sciadv.aau4586


Agreed. And the dangers of over-regulating information exposure are pretty severe. The whole point of a free and open society is to avoid over-reliance on a central authority and allow for emergent authority. Having a DoD regulatory program determining what is and isn't information warfare seems infinitely worse than targeted advertising.

One little-talked-about counter-strategy is just giving those same people targeted ads with better information. If you need to prevent certain messages from reaching certain people entirely and can't counter them, maybe that means they have some validity that needs to be addressed to make counter-messaging viable. Jumping straight to regulation rather than a change in counter-messaging is a huge red flag, and it reflects poorly on the level of humility and introspection I think is needed to prevent these kinds of problems without making things worse.


Sorry but I feel like this is a bit of a straw-man.

Micro-targeting != information exposure

Let’s be clear what we are talking about here. We are talking about a persistent, ubiquitous surveillance apparatus paired with the ability to push media to people’s devices, and using it to conduct population-scale behavioral experiments and modification.

We are not talking about banning books, websites, music, advertisements, etc.

Put another way…

They want to regulate your ability to push content to others, not their ability to request content from you. The big difference between these situations is the consent and intent of the person receiving the information. I do not feel that anyone has a “right” to bombard me with information I did not ask for.


> We are not talking about banning books, websites, music, advertisements

Regulating micro targeting implies restricting what people can advertise to specific people. If you restrict people’s ability to advertise knowledge of something’s existence you’re effectively banning it without making it illegal to access.

Your notion of “bombardment” is legitimate, but I think it should apply after initial contact. If a user doesn’t want certain advertisers to send them certain information, they should be able to decide what they do and don’t want to mute, but they can’t know what they do and don’t want until someone reaches out.

The primary point of advertising is to introduce a user to something they might want to either have or know about but were not previously aware of. At least that’s the point I find most legitimate. That means giving users information they didn’t specifically ask for. The type of behavior modification that’s more contentious is not about presenting a user with something they didn’t know about, it’s usually about reinforcing something they already wanted.

If you ban information you think users don’t want or that shouldn’t be advertised to them, you’re reducing the discovery aspect of advertising. A user can’t know if they want to have certain ads or information presented until they have it presented at least once.

Preventing initial contact, i.e. stopping someone from getting a product or idea to a person they think would be receptive but isn’t yet aware of it, seems to me like it’s not preventing either bombardment or behavioral modification, and would probably leave people even more siloed than they already are. If the DoD has the power to steer how people are siloed, that seems even worse.

I agree that the behavioral modification capabilities and the scale of surveillance are extremely worrisome and that things should be done to mitigate them; I just don’t see how a lot of what I’m hearing in documents like this would make it any better. The impulse seems to be to create something top-down, which would harm discovery.

I wouldn’t have a problem with regulations that force companies to give users more access to the knobs of what gets shown to them or what advertising networks they want to be part of, for example. But I don’t see a lot of evidence that’s the direction things are going.


> Micro-targeting != information exposure

Yes it is. Say you have been exposed to some data that creates a knee-jerk reaction; consciously and subconsciously, you have now been manipulated. You may even choose to share that data yourself. You don't know if it's real or not, because it could be a photoshopped image, or some data that plays on one of your concerns, like a health worry. The point is, whatever that information is, whatever that data is (data and information are the same thing here), it is now manipulating you whether you realise it or not.

Search engines "bombard" you with information you didnt want by presenting some links, you click one and find its not the website or data/information you are expecting or looking for, however the citation in the SE results was enough to tease you into clicking that link where you found it was not what you were looking for. You then go back to the results and click another link. The last link you click also helps train the Search Engine, pushing the last link you clicke don up the list of results next time someone else enters the same search term or one close to it. That is basically how to fine tune a Search Engine results page.

I also think people don't realise, or can't quantify, the effects of their words and actions. But with people killing themselves because of social media bullying, as one example, and parents in particular paying little attention to what their kids do online, I can see the justification for wanting to regulate it.

https://www.theguardian.com/technology/2021/oct/05/facebook-...

A business's primary Maslow-hierarchy need is money; money is to a business what food and shelter are to a human. Once that is met, it can bring in lawyers, accountants and other specialists, and outsource anything controversial or questionably legal to keep it at a safe distance, which is something the law in its current implementation fails to address. If they can't regulate this, then maybe the law needs to be changed so that shareholders and the board are held accountable, i.e. tackling the problem from the other end.

Another problem with law is that an institution like a business rarely has its culture held accountable. Only individuals can be held to account under the law; a culture never can. On a countrywide basis, wars usually settle those disputes, but there is nothing to tackle flawed corporate culture, except perhaps some bad press, and that rarely works, as we see with Facebook and more and more "experts" coming out against them, like in the news article above.


> Search engines "bombard" you with information you didnt want

Not for long. If a search engine returns bunk results I use an alternative.

If I type a query into a search engine and press enter I am requesting information. If I type a url into a search bar, I am requesting information.

In both of these cases there is clear intent on my part to retrieve the content and consent to receive it. I am the agent that initiates the transaction. If the search engine or webpage sends me something I do not like then I can opt out and go somewhere else. They will not continue to send me content.

There is a big difference between this and pushing content onto someone’s device.


> The whole point of free and open society is to avoid over reliance on a central authority and allow for emergent authority.

To me, "free and open society" suggests that the people of the society intentionally put limits on any authorities that can influence their society, whether centralized or emergent. Currently we've got two major emergent authorities: Alphabet and Meta. Neither of them has significant accountability to the people who use their services to obtain news and learn and communicate, and both of them have strong incentives to get people to spend more time using their services so that they can sell more ads.

> One little talked about counter strategy is just giving those same people targeted ads with better information.

Who gives the "better information", and who decides what information is "better" in the first place?

As a potential alternative (... very much spitballing here), what if there was a government-provided website where citizens could vote on potential regulations to be applied to companies that, say, provide media services and get over a billion hits per month? It could also be set up to let people provide supporting and opposing arguments and vote them up and down, and proposed regulations would have a limited period of time for voting on the regulation overall.

Something like that. And of course, companies would be able to say, "No, we really cannot change our software in [x] way because we would go out of business," but there could be room for discussion there, too. And I don't think these would be too unreasonable, because these companies generally succeed because tons of people use them and appreciate many of their services. The overwhelming majority of people are unlikely to vote for things to try to destroy these companies.

I think something like that coupled with an enforcement mechanism would provide a way to steer away from the underlying issues that have been polarizing our society and guiding people down rabbit holes to steal their attention, which prevents our society from being free and open. Things like the youtube autoplay button, targeted advertising, and the damnable youtube autoplay button.

And it would actually be able to keep up with rapid changes by tech companies, unlike legislation.


I’m not sure voting on something universal at that scale would be practical, but I do think your underlying point that the people should be deciding what information is “better” is the right thing to emphasize. Allowing people to have more control over the types of networks that advertise to them and what they want to opt in and out of would I think be a lot safer and more practical than trying to come up with some universal criteria for what is and isn’t “better” advertising.

My point in advocating that the DoD should maybe try counter-signaling with better info was meant to imply that the people receiving the advertisements are the ones deciding what information is better. If the DoD can’t make an honest counter against certain types of alleged information warfare, that should prompt them to ask “what's wrong with our message?”, not “what do we do to make sure users only see our message?”.


I ran this game on the local political establishment in 2015, and it was scarily effective. I assume they are more resilient today, but at the time, entry-level social media advertising techniques were able to have a massive influence on politicians' perceptions of their constituents' concerns. I wasn't surprised at all by the impact social advertising ended up having in the 2016 election.


That is super interesting to me. Can you provide any additional details? Was it age range specificity ( "40-45 consider you weak on X" )?


Not the poster, but

When I was at $STARTUP we had ads on Instagram and what have you. I am told we targeted a few at the group which was essentially [$STARTUP employees and family]. Unlike our real targets ($N million per year income - it’s a boutique investment product), it was not very expensive, and there were few of us, and it left us more aware of how the company was positioning itself.

This was a small company. Targeting this group is remarkably narrow.

Imagine that your senator can be targeted and goes online and sees ads of your choosing… I don’t know, imagine just a tremendous number of ads addressing $STATE residents and offering help with the record high price of heating oil (then touting insulation, all year payment plans, new boilers, anything).

Will he think people are worried about the price of oil? Could that affect his input on some climate bill?

And that is without planting or promoting anything that even looks like politics or citizens actively complaining.


> But its effectiveness depends so much on the message

I think it's best to think of this as messages (plural) when talking about microtargeting. Everybody could get a different message but, in aggregate, the set of (Target, Message) tuples could add up to moving the needle towards some desired outcome (for elections, in particular: activating some voters, discouraging others).

For example, we might look at the infamous pre-election 2016 meme that cited the fake "Crime Statistics Bureau – San Francisco" and think it's not an effective message because it's so easily disproven. But the real question is whether or not it's an effective message for the subset at which it was aimed.

A better phrasing might be "its effectiveness depends so much on the messages in aggregate," maybe.


To rearrange into a (more-or-less-sargable) extension of your tuple: (Time, Target, Channel, Message)

This is an elaborate form of what advertisers call a "campaign" -- a sequence of information unravelled in front of an audience (large or small) to produce an effect.
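That tuple view can be sketched concretely: model a campaign as a list of (time, target, channel, message) impressions whose small per-recipient effects sum by segment. The segment names and effect scores below are purely hypothetical; in practice these would come from experiments, not a lookup table.

```python
from collections import namedtuple, defaultdict

Impression = namedtuple("Impression", ["time", "target", "channel", "message"])

# Hypothetical per-(message, target-segment) effect scores.
EFFECT = {
    ("turnout_reminder", "likely_supporter"): +0.02,
    ("discouraging_meme", "opposition_leaner"): -0.01,
}

def aggregate_effect(campaign):
    """Sum the tiny per-impression effects by target segment;
    no single message matters much, but the aggregate can."""
    totals = defaultdict(float)
    for imp in campaign:
        totals[imp.target] += EFFECT.get((imp.message, imp.target), 0.0)
    return dict(totals)

campaign = [
    Impression("t1", "likely_supporter", "social", "turnout_reminder"),
    Impression("t2", "likely_supporter", "social", "turnout_reminder"),
    Impression("t1", "opposition_leaner", "social", "discouraging_meme"),
]
print(aggregate_effect(campaign))
```

This is exactly the "moving the needle in aggregate" point: each tuple is individually negligible, but the per-segment totals are the campaign's actual output.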


> We are routinely exposed to a huge range of content and are pretty resilient, the delivery doesn't radically change that

Microtargeting has never happened before; that's an enormous change. Regarding our resiliency, the results in our society seem opposite to your optimistic prediction.


If it is as you say, how come Trump became president?


If it is as you say, how come Trump isn't still president?


"The Department of Defense must place greater emphasis on defending servicemembers’ digital privacy as a national security risk."

What stood out for me was: "The objective of surveillance capitalism-enabled advertising and information warfare is the same: to influence an individual’s behavior change in support of someone else’s goals."


Also see Christopher Wylie’s book “Mindfuck”.

2018 interview with the author here: https://www.theguardian.com/news/2018/mar/17/data-war-whistl...


I found I was being targeted with a lot of pro-Pakistan propaganda on TikTok. I am Indian, and Pakistan is friendly with a big neighbour.


Turns out the military is solidly 20-25 years behind the cypherpunks and the EFF, and the failure to set the stage at a higher level has led to ready exploits.


Username checks out.


> One now notorious story about successful digital targeting of advertising is the story of a father who received advertising for babies only to discover that his daughter was pregnant. The algorithm knew before she had told him.[44]

You could do this with supermarket loyalty cards, which here in the UK were introduced in the '90s, but using other metadata, namely dietary changes.

Water companies can spot it from hormones detected in waste water; you can then narrow it down by where the sample was taken, what type of properties are around, and the time-of-day window of the sample. Living in a big tower block generally offers a level of privacy from that intrusion.

There is so much metadata we give away. Early adopters, or dedicated followers of fashion, are the easiest to manipulate, whether that's the latest car, smartphone, laptop, computer game, or clothes; we just give away far too much without realising it.

Now consider device fingerprinting (e.g. https://browserleaks.com/) and more: states setting up fake or pop-up advertising companies to access the data from the likes of Google AdWords and other networks, which collect a lot of data in the first place and present it to advertisers to bid on. Nobody has any real privacy online. Even VPN users can still be detected; if you've noticed a VPN going slow, let's just say some people have the ability to throttle your communication over a VPN to work out who you are!
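The fingerprinting idea can be illustrated with a toy sketch: a handful of ordinary browser attributes, hashed together, already yield a stable identifier that survives cookie-wiping. The attribute names here are just examples; sites like browserleaks.com enumerate the real signals.

```python
import hashlib
import json

def fingerprint(attributes):
    """Hash a dict of browser/device attributes into a stable ID.
    No cookies needed: the same attribute combination yields
    the same identifier on every visit."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visit_1 = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "1920x1080",
    "timezone": "Europe/London",
    "fonts": ["Arial", "DejaVu Sans"],
    "canvas_hash": "9f2d1c",  # canvas rendering quirks differ per GPU/driver
}
visit_2 = dict(visit_1)  # same device, later session, cookies wiped

print(fingerprint(visit_1) == fingerprint(visit_2))  # → True
```

Real trackers use fuzzier matching so the ID survives small attribute changes, but the principle is the same: the combination of mundane attributes is the supercookie.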

Cambridge Analytica was spun out from the British Military, which might be why Hollywood is fixated on British Spies! I think Hollywood is trying to tell the world something.

I think that sums up some of my knowledge.

Edit: > There is no way for any individual to tackle the surveillance economy.[73]

Yes there is, but they need to be aware of the panopticon. That means people's CCTV in their homes (see Shodan.io); being aware of people's normal routines so you can blend in when possible, like neighbours who use the internet intermittently on a device (new neighbours, preferably immigrants/refugees, can help the most here, as the spooks need to establish patterns); and having a mobile phone where you can control which cell towers you connect to (there is/was an Android app that let you do this). I guess this is what would be called spycraft.

> Furthermore, the algorithms being used are opaque and not widely understood.

You can troll algorithms to establish what data sharing is taking place behind the scenes; knowledge of the advertising networks' bidding process can help here. Always work on the basis that your (device) fingerprint is like a supercookie. With sufficient resources you can disappear into the crowd, but most people are just not that interested in privacy, because where would they get their special offers, like heavily discounted products?

An example: replacing a vehicle battery. With privacy you are looking at nearly £600; without privacy you can get the price down to just over £100. And that's simply down to whether you wipe cookies during/after a browsing session or not, and what a search engine gives you. Same search terms, just one with cookies always enabled and one without.

Privacy is expensive, but it's now possible to start quantifying the cost.


So, are people who work for microtargeting platforms (FB etc) war criminals?


I think the intelligence work referred to here is not within the scope of war as defined for war crimes.

But it could be collaboration with foreign intelligence, or acting as an agent for foreign intelligence, which is already punishable. The knowing-or-unknowing factor is usually important in qualifying these crimes.


I think so yes.



