I reported some of those and got a response saying I could block the item in question, but that it was basically green-lit by them, that it doesn't break their community standards, or some wording like that. At this point I'm surprised there aren't more lawsuits coming up; this is egregious behavior that needs to be penalized fast.
This seems like the kind of thing that can be resolved by emailing a screenshot of that to zuck@fb.com with the head of the DEA on the CC list. I'm sure one of them will be happy to act.
Advertising networks get away with so much, it's quite ridiculous. The same company that will ban your account for posting about such content will regularly run ads for that very content. Worse, there's nothing you can do about it except run ad blockers (which they in turn will punish you for with various measures).
Guess we didn't learn anything from Cambridge Analytica.
I'm trying to get more into self hosting and things like federation that can allow technical people to be admins of low-maintenance services for their friends and family.
It's a slow push but it's gonna happen. Microtransactions are stuck in deployment hell, advertisements are a deal with the devil, and this is the only third option I know of
It's a pure Rust server so it's extremely lightweight to self-host compared to things like Mastodon, plus it supports events (with iCal/Google Calendar export). I'm also happy to host an instance on my DigitalOcean Kubernetes cluster for anyone interested and integrate it into my CI/CD pipeline, which is an option on my GitHub sponsors page (and I'm working on a load balancer which should eventually bring the cost of this down as well).
I appreciate the feedback! While I haven't had plans to ship binaries to platform-specific package managers, I'll certainly consider it. (I do have plans to publish it to Cargo, but have yet to get to that.) PRs for this would be very welcome, as I have no experience shipping things to apt/yum/etc. :)
Agreed; there's also a lot of room for improvement in this general area. I remember when Mastodon instances used to be a giant pain to get up and running, and now there are one-click deploys and even entire businesses that will manage your server.
I don't think widespread self-hosting will ever be easy enough, since anything that makes it easy enough for anyone to do necessarily gives its operator as much control over your account as Elon Musk has. Plus it costs a few bucks a month. But what might be possible is widespread co-op hosting - run one instance for your whole family or friend group.
I'm surprised that cops haven't started arresting people for receiving ads about contraband, which would imply (to a cop brain) that they must be creating the demand for such items.
There are loopholes in the US currently for drug advertisements. As a very simplistic example (so don't take it as legal gospel): while you cannot advertise ketamine as a pharmaceutical in print or on TV, you can advertise "direct to consumer" on social media/web as long as your company doesn't specifically pack/distribute the drugs itself.
There are many ads on Instagram that make me wonder how they're legal; for example, I saw a fake site that pretended to be Marine Layer where the clothes were suspiciously cheap. I reported the ad, but they dismissed the report.
And don't get me started on all the "mushroom alternative" and "caffeine alternative" drugs. And it's like everyone and their grandmother has their own Viagra or hair-loss drug company now.
Google got hit over internet pharmacy ads and forfeited all the revenue they made from them.
I’m sure Google had no problem paying the same number of dollars in 2011 but really liked that money in their earlier days when there wasn’t as much big $ legit ad demand.
It's not just YouTube. I unsubscribed from the LA Times after their email newsletter had one too many "Doctors don't want you to know about this!!!!"-esque ads. Pure poison marketing. I can't believe people greenlight this crap to run on their platform. Maybe they already fired whoever greenlights ads.
For me it was Washington Post and Daily Beast newsletters filled with ads for "Grunt Style" brand QAnon clothing, sometimes all 5 or 6 of the ads in a given newsletter email. I ended up DNS-blackholing some FQDNs, but I suspect that Grunt Style was targeting its ads to "own the libs".
There's a sucker born every minute. I suppose that when Googlers look at their huge paychecks it's easy to rationalize building tools that help scammers steal from suckers. From a legal standpoint they have plausible deniability but most YouTube advertising is just so slimy, like even more unethical than TV infomercials.
I never get ads for "real" drugs, just the normal OTC/prescription stuff and occasionally some legal analogues (non-psilocybin mushroom gummies come to mind). Very curious what they look like as well, and what their sites say about the purchase process (what forms of payment they accept, how they ship, etc.).
I'm pretty sure I've seen some for actual psilocybin before. I've seen some for switchblades too, which are illegal in a bunch of places, but those at least drum up interest by pretending the product is a "switchblade spoon"; then when you click through to the site they say "it's not a spoon", which is annoying because all of the comments on the Facebook side are people saying they actually wanted a switchblade spoon.
I find it really interesting that Facebook collects money from someone to run these ads, and that they can continue to do so without getting into legal trouble.
When I moved countries it took six months for Facebook to stop showing me ads for the old country, ads that were very clearly only supposed to be shown to people in that country. Again, people are paying money to run these ads and Facebook is clearly showing them to the wrong people.
YouTube has been hitting me up with "ads" which are Ethereum scams¹ recently. Of what I've reported, 60% are still up. Some of these were reported in March.
Recently I also got an "ad" for what was almost certainly Sovereign Citizen propaganda, which … wow, I didn't even know they ran YT ads?²
Today's ads (since I checked just to see for this comment) is all normal stuff.
¹they allege to have written an "arbitrage" trading bot with ChatGPT, usually, claiming it earns them passive income of varying amounts, but often on the order of $1M USD/y. As the saying goes, if it sounds too good to be true, it probably is. The code actually just steals your money.
²(not that I really think SCs are a formal collective of sorts, I just … don't really think that would be a thing I'd see.)
On my phone, only about half of the YouTube ads look legal. They have it all: crypto scams, Ponzi schemes, fake mobile games, get-rich-quick schemes...
Some stuff can always slip through, but to get a result like this there has to be a systemic lack of checks at Google.
I wonder how they decide who to show the scam ads to?
I get them, but only very rarely. What I'm getting recently are mainly ads for Old Spice, Walmart, Consumer Cellular, Verizon Visible, Exxon, a couple upcoming movies, Target, automobile insurance, and fundraising for the Harris campaign.
The one thing almost all the ads I have gotten on YT have in common is that they are useless. Some are useless because they advertise something I have no interest in. The ones advertising something I am interested in are either for something I've already bought or for something I've already evaluated before buying a competitor's product instead.
I did recently switch from T-Mobile Connect to Verizon Visible, and I learned about Visible from a YT ad, but I don't think the ad influenced my decision to switch.
The reason I left T-Mobile Connect is that I'm planning on replacing my Series 4 Apple Watch soon. I'm going to get a model with cellular this time, and T-Mobile Connect does not support Apple Watch cellular, so I had to look for a new carrier. That led me to a page at Apple that lists all US carriers that have watch plans [1]. Visible is on that list, so I would have ended up looking at them even if I hadn't heard about them earlier from that YT ad.
[1] Appalachian Wireless, AT&T, C Spire, Cellcom Wisconsin, Consumer Cellular, Cricket Wireless, GCI, Nex-Tech Wireless, Spectrum Mobile, T-Mobile USA, UScellular, Verizon Wireless, Visible, and Xfinity Mobile.
I recently got an ad for some kind of scammy garden hose nozzle that supposedly turns your garden hose into a "military grade" (whatever that means) pressure washer, but the ad had a weird conspiracy-theory theme and kept talking about how the big corporations and Congress were conspiring to make the garden hose nozzle illegal because it was too powerful for civilian use. It was like it was trying to appeal to the 2nd Amendment crowd, but it's a garden hose nozzle that you can buy from any Lowe's.
Also, as an aside, all of these things are sort of scams even when they don't take the weird conspiracy-theory route. While it is true that they can get decent velocity through the nozzle, it's nowhere near what a real pressure washer does, because fundamentally the nozzle isn't adding any energy to the fluid.
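A rough back-of-the-envelope makes the point (my own numbers, assuming ideal lossless flow, a typical 60 psi household supply, and a 2000 psi pressure washer pump):

```latex
% A passive nozzle can only trade supply pressure for velocity (Bernoulli):
% v_max = sqrt(2p / rho), with rho = 1000 kg/m^3 for water.
v_{\text{house}}  = \sqrt{2 \cdot 4.1\times10^{5}\,\mathrm{Pa} / 10^{3}\,\mathrm{kg/m^3}} \approx 29\ \mathrm{m/s}
\qquad
v_{\text{washer}} = \sqrt{2 \cdot 1.4\times10^{7}\,\mathrm{Pa} / 10^{3}\,\mathrm{kg/m^3}} \approx 166\ \mathrm{m/s}
```

A pump adds energy and raises that pressure budget; a nozzle alone can't, so no attachment will close that gap no matter how "military grade" it is.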
There are enough people with mental issues to make it a business. What you are seeing are old techniques for targeting these lower/disabled intellect individuals.
You see it and think it's ridiculous. Then you smoke enough crack and meth to develop schizophrenia and you find yourself buying this shit every day from weird YouTube ads.
So your solution is for YT/Google/Meta to check the code, operations, finances, and business models of every advertiser that comes to them? Even if YT/Google/Meta could do this at scale, do you want them to have that power?
If Google/Meta/YouTube want to be multi-trillion-dollar monopolies that have invaded every aspect of our lives, then there should be some very basic corporate responsibility that comes with that power, even if it's difficult and maybe even costs them a little bit of money. Lately it seems like the phrase has been inverted to "with great power comes less responsibility". If your small local TV station ran ads for cocaine and synthetic opioids, nobody would be making these kinds of excuses.
Aside from YouTube, there are alternatives for all of the services provided by Google and Meta: Fastmail, X (Twitter), a billion different messenger apps, Microsoft apps for office applications, etc.
A company doesn't need a literal 100% market share to have monopolistic power. Products like Google Search or Gmail that dominate their respective markets count.
Google and Facebook are ads companies. Everything they build, deploy, do, share, sell, and buy is about selling ads. Youtube is a platform to sell ads. Gmail is a platform to collect data to sell ads. Facebook is a platform to collect data to sell ads on Facebook. Both companies have "analytics" products that exist to collect more data for ads.
What competition is there in internet advertising?
Yeah, all the socialized losses from business have fallen to governments and citizens, who have to pay for them and figure out how. Superfund sites always spring to mind, but we seem to have socialized costs oozing out of every business. Capitalists should abhor this type of gross inefficiency. In theory the business could solve the issues internally: cheaper, faster, and better than the government could ever hope to, lowering the overall tax burden and ultimately preventing problems at the source.
Regulation is very expensive because you have to enforce it. You can tax anything, though. Maybe that is a better route to promoting good behavior "voluntarily."
What do you mean, "power"? It's not like the parent is asking for them to have special powers, only to vet their advertisers, like a newspaper would. If they can't do that at scale, that's their problem and they should downsize accordingly.
Since half of the ads I see are very obvious scams, I'm saying there are easy improvements over the wild west we have now. Even basic checks would improve the situation.
At some point Google must bear some responsibility for this mess; they can't reasonably claim they don't know anymore.
> So your solution is for YT/Google/Meta to check the code, operations, finances, and business models of every advertiser that comes to them?
Sure, why not?
I literally don't care if this causes them to go bankrupt. If your business doesn't work without breaking the law, your business doesn't work, and we're not obligated as a society to form our laws to make your business work.
I mean seriously, if your local convenience store was letting people sling cocaine out the back because they couldn't make a profit selling candy and sandwiches, would you be like "Oh that's okay."? Obviously not. I'm not even for drug prohibition, but applying the laws we have fairly seems even more fundamental.
That said, fears that this would cause them to go bankrupt are likely overblown.
> Even if YT/Google/Meta could do this at scale, do you want them to have that power?
They already have that power. They can put whatever conditions they want on doing business with advertisers.
Ads can be Premiumed/uBlocked. The countless fake channels posting fake videos for all possible search terms can't. Also, those account hijacks that run crypto scams as "SpaceX livestreams" are still happening, and it takes Google days or weeks to remove them.
For me it was ads from a shady guy telling me that schools are how the "elites" make sure the masses don't know the truth. Then he explained how his online re-education courses would open a new world to me (and get me rich).
I reported it and YT said the ad wasn’t violating any TOS. Meanwhile I was getting this exact ad like 20% of the time.
When corporations are complicit with the spread of conspiracy theories, we’re headed for a dystopian and fascist future.
> You have to hint to the YouTube AI to give you different ads.
I mean, I could, but there's nothing in that for me except the work of doing it.
Even reporting them to YT is a charity I perform mostly on the off hope that they'll get taken down so that some other person isn't scammed, but as you can see, that is met with limited success.
I've been seeing "AI" apps which offer to take a pic/vid you upload and make it a nude. These ads often have nudity in them, which is wild. Likewise, I saw some ads seriously offering drugs. I reported them and got a reply from Facebook: "We found this post does not go against our community guidelines"???
This is part of a larger pattern for Facebook and other "large scale" advertisers where it's clear they are breaking laws at some rate and are protected from paying a real consequence by the cost of discovering the exact rate of misbehavior. Facebook recently settled a lawsuit related to racially discriminating in housing ads (a federal crime)[1]. I suspect this will get added to the pile of illegal services Facebook provides but argues it shouldn't be held responsible for providing.
This is, I think, the real moral hazard you saw back in the 2007 financial crisis: companies can reach a scale where it's very costly to definitively assess who is to blame for crimes and can therefore commit any profitable crime up to a certain threshold. It both makes a mockery of the rule of law as a concept (along with many other things in the US legal system) and is an enormous competitive advantage for large companies. I'd include Uber's grey-area stalking[2] and the eBay stalking campaign[3] in this category.
The flaw here is in expecting Facebook to be the police.
Here's what happens if you make Facebook do it. They become aware of someone running an illegal ad and ban them. But the advertiser is a criminal and still wants to make money, so they make a new account or find some way to avoid detection, and the ad ends up back in the system, and the advertiser can do this forever because they're making a net profit by continuing to play the cat and mouse game.
Here's what happens if you have law enforcement responsible for enforcing the law. Someone runs an illegal ad on Facebook, law enforcement subpoenas Facebook to find out who it was, and the advertiser gets arrested. This provides an actual deterrent because there is a penalty in excess of a ban, so it exceeds the profits from the criminal activity, and criminals dumb enough to not be deterred go to jail where they can't run any more ads.
Why are we trying to do it the stupid way while expecting a non-stupid result? Corporations are not law enforcement. Stop trying to make it otherwise.
I fully agree it's a hard problem, but Facebook is the one telling us they can be the police. If you really feel you can't operate a business without doing crimes, you shouldn't operate the business. It's understandable: some businesses take too much skill to operate without doing a crime, and so people don't operate them (there are lots of things that banks won't do that fall into this category). Facebook, by operating in this area, is saying they think they can do it.
> Someone runs an illegal ad on Facebook
The problem, to me, is that everything you said applies downstream too. You can't be sure the person purchasing the ad is trying to commit a crime either (perhaps they were hacked, perhaps they were dumb, etc). If you are promoting the "this is complex, actually" view, it's complex at every level.
The problem for society is that Facebook, as the company offering the service that occasionally breaks the law, is in a nice position. They get to profit off law breaking every once in a while and, as you say, it's hard to feel like it is possible to offer this service in such a way to perfectly avoid doing all crimes. So you get into this situation where Facebook (and everyone else at that scale) can do some crimes, but not so many that it's a big part of their business. It seems bad.
> If you really feel you can't operate a business without doing crimes, you shouldn't operate the business.
It isn't Facebook doing crimes, they're a company whose customers are doing crimes.
> The problem, to me, is that everything you said applies down stream too. You can't be sure the person purchasing the ad is trying to commit a crime either (perhaps they were hacked, perhaps they were dumb, etc).
Which is why we have investigators and courts, to sort this out. When the police execute the warrant and find the drugs, the person's claim that they were hacked is not going to hold a lot of water. Whereas if they find no drugs but seize the computer and find malware on it, then they can investigate the malware network and find the actual perpetrators.
This is law enforcement's job. To figure out who actually did the crime and charge them with it. Facebook can't do that and shouldn't be expected to.
> So you get into this situation where Facebook (and everyone else at that scale) can do some crimes, but not so many that it's a big part of their business. It seems bad.
Why is it bad? Why is it even expected to be bad? It's true of every business whatsoever. A major hardware store that sells duct tape will have as customers some number of kidnappers who use it to tape the mouths of their victims. The kidnappers will go to a gas station and buy gas. These companies are thereby profiting in some small way from crime. But so what? Go arrest the kidnappers, the hardware store is irrelevant.
Aiding and abetting requires intent. If I am a bus driver and a bank robber happens to ride on my bus on his way to rob a bank I am not aiding and abetting because I had no intent or knowledge of the crime.
> To convict as a principal of aiding and abetting the commission of a crime, a jury must find beyond a reasonable doubt that the defendant knowingly and intentionally aided and abetted the principal(s) in each essential element of the crime
We are talking about a bank robber with the cartoon outfit with big bags with dollar signs on them boarding the bus to and from the bank repeatedly. But he pays for the ticket so I guess it is fine in this proto-dystopian "late stage capitalism".
> We are talking about a bank robber with the cartoon outfit with big bags with dollar signs on them boarding the bus to and from the bank repeatedly.
Okay, let's proceed with your cartoon example.
The bank robber goes to the ticket machine, puts in money, gets a subway ticket, swipes the ticket through the turnstile and rides the subway. If the subway operator posted guards at all the entrances to the subway they could see the guy, but they don't, because that would be crazy expensive when they're not the police and their concern is just to make sure people pay the fare, and they can do that by installing automated floor-to-ceiling turnstiles that block entry to the subway unless you pay the fare.
Why is it the subway operator's obligation to investigate this crime, instead of the police? The subway operator could investigate it, the crime is happening in public view, but so could anyone else. Moreover, we don't want random common carriers to be denying service to innocent people based on scant evidence just for CYA purposes. We want penalties to be handed out in court once the prosecution has met their burden of proving the crime beyond a reasonable doubt.
If somebody uploads illegal content (like CSAM) today, does Facebook simply delete the account and call it a day? I'd hope for a system where the police handle enforcement and Facebook simply reports it.
If this was happening on my platform, I would want to know about it. Facebook is the police, of their own concerns. I'm sure they are genuinely trying to identify all these people and report them to law enforcement. What is the miss, here?
Any decent journalism on this subject? NPR takes too much money from Facebook to go rooting around in their business, and they've pretty much stopped covering Facebook other than mentions and basic AP press releases. The same is probably true of other media interests as well.
Nonetheless, I doubt Facebook wants these people and it certainly threatens their business by overrunning it with creepy drug ads, so what we see is the tiny bit that manages to bubble up through the cracks. Everyone wants Facebook to magically seal all the cracks.
> I'm sure they are genuinely trying to identify all these people and report them to law enforcement. What is the miss, here?
The miss is assuming any company would do something out of genuine concern for the law. Their shareholders don't reward them for morally upstanding behavior. They are concerned about money, and the law is only a concern insofar as it impacts their real concern. We've seen this story a million times, people! It is what for-profit companies are. It is how they operate, and that will never change.
What can change is government regulation and enforcement. That is the one and only answer to this problem.
Facebook is the only one with access to all the information to know that a crime has even happened. Practically, there is no way for someone on the outside to be able to detect many of the illegal things that Facebook is facilitating. Unless you want all of big tech's data to be explicitly funneled to law enforcement, this is not a solution, especially when Facebook's incentivized to turn a blind eye and keep collecting checks.
This is obviously not the case, otherwise how is anybody detecting the crimes in order to accuse Facebook of them?
If someone is running an ad to sell drugs or violate some other law, the users will see the ad and can report it to law enforcement. Law enforcement then investigates to find out who placed the ad and goes to arrest them.
I don't believe GP was suggesting Facebook do it; rather just pointing out that the second option is extremely costly and difficult, which is a consequence of the scale and influence of Facebook
> the second option is extremely costly and difficult, which is a consequence of the scale and influence of Facebook
Is it? Suppose that instead of Facebook we had a federated social media with ten thousand independent operators and ten thousand ad networks. Then there would be ten thousand ad agents (really automated websites) who compete on price and whose job it is to place your ad in the ten thousand ad networks.
Drug dealers would still try to run drug ads, wouldn't they? What would be any different? The internet has the scale that it does whether the service is centralized or not. Centralization/monopolies are the cause of many problems but not every problem.
Since ads are targeted, only Facebook can know when an illegal ad is run. Are you suggesting Facebook should be reporting the people running ads to law enforcement?
They claim the ads are targeted. Given that I've had ads for:
• Both dick pills and boob surgery
• A lawyer specialising in renouncing a citizenship I don't have for people who have moved to a country I don't live in
• Local news for a city in Florida that I didn't know existed (and I'm not an American resident or citizen)
• A library fun run in a city 903 km away with an entire extra country between me and it
• An announcement, from a government of a country I don't live in, that a breed of dog I've never heard of, is to be banned (and I'm not a dog owner)
I think that the claim "Meta knows how to target ads" is itself in the set of scams. The contents of my email's junk folder are significantly less wrong than this.
If everybody is using an ad blocker then it doesn't really matter what ads they run, does it?
If some users don't use an ad blocker then there is obviously someone other than Facebook who knows when an illegal ad is run and can report it to law enforcement.
> law enforcement subpoenas Facebook to find out who it was
See, here's the catch - this requires law enforcement to be a functional organization. Judging by how widespread drug abuse and drug dealers are, I don't think this would be a winning approach.
Also, I wonder how newspapers dealt with this. Did humans vet everything?
The last thing I'd complain about is YouTube not doing enough copyright enforcement. If anything, they do far too much, beyond what is reasonable or necessary! And in a lot of cases, the studios don't care, because they slurp up most of the profits, not Google.
They basically do copyright enforcement in two cases:
1. Automatically, based almost entirely on sound, using ContentID - this is heavily weaponized, gamed, and generally over the top.
2. "Manually" (but really automatically) in response to requests from "verified" (sure) rights owners - this one is also heavily gamed, but seems to make up a much smaller fraction of takedowns.
YouTube is probably a bad example in terms of copyright. They in fact have the most restrictive system for reusing others' work in your own, as a large portion of the videos that get auto-struck would probably lean toward fair use.
I'm not in this space, but from the user's perspective I thought that YouTube's "demonetization" doesn't mean the ad doesn't show, just that the channel doesn't get any benefit from the ad being shown.
So they're actually incentivized to have many creators struck, because struck media means ad impressions whose revenue they need not share.
> So they're actually incentivized to have many creators struck, because struck media means ad impressions whose revenue they need not share.
This revenue would go to the copyright claimant. E.g., if Warner Bros. struck you for showing their movie, the ad revenue from the video would go to Warner, not simply to YouTube.
I'm not privy to their deal, but while I'm confident that Warner Bros. has negotiated a revenue-sharing agreement better than the typical YouTube Partner's 55% rate, I also suspect it's not 100%.
Yes and no. YouTube's moat is its content creators. A greedy algorithm might make them more money in the short run, but it would destroy their moat as content creators migrate to other platforms.
I would disagree, on the simple fact that there are terabytes of copyrighted content on there, uploaded by people who don't own the content and policed by a fairly weak black box.
Seems pretty simple to me. They should pay crippling fees until their business shrinks to a point they can operate it legally. It’s unclear what benefit there is to society that an organization like Meta can scale beyond responsible operation. There is a natural self-regulation to hyperscaling. At some point you’re too big to exist. I see no reason we should forgive Meta’s incompetence. They’re not a natural monopoly. Let them fail.
"This is, I think, the real moral hazard you saw back in the 2007 financial crisis: companies can reach a scale where it's very costly to definitively assess who is to blame for crimes and can therefor commit any profitable crime up to a certain threshold. It both makes a mockery of the rule of law as a concept (along with many other things in the US legal system) and is an enormous competitive advantage for large companies. I'd include Uber's grey area stalking[2] and the eBay stalking campaign[3] in this category.
"
If a company is too big to be managed properly, it shouldn't exist. We saw that in 2008 with "too big to fail" banks. I also remember the AG back then stating that some companies are too big to prosecute, which is also a big problem. It seems health insurers have also reached the scale where they can screw with people without consequences.
> companies can reach a scale where it's very costly to definitively assess who is to blame for crimes and can therefore commit any profitable crime up to a certain threshold
Succinctly put. Just as LLCs function as a legal device for distributing and limiting financial risk (but strangely, not profits), they increasingly perform the same function for other kinds of legal liability. It's the worst of both worlds.
Meta has an obligation to ensure those they do business with are not criminal enterprises. The "advertising" you mention doesn't have a specific illicit enterprise paying for a call-out in the songs; it's just people expressing themselves.
Merely _hundreds_? At Facebook's scale that seems like a really low number, no? Like, even if they were doing manual review of every single ad (which they probably aren't) I'd expect at least a few to slip through merely due to laziness or incompetence on the part of the reviewers, wouldn't you?
The headline "Facebook's ad filtering is so good it only lets 0.0000001% bad add through" doesn't sounds like good click bait to me, despite it being an impressive achievement if true. So here we are.
Facebook has about 3 billion daily users. I expect each will see many ads - probably 10 or more. But if we assume it's just one, 100 displayed in a single day would give you 0.0000001%. The study didn't look at one day of course - the observation period was over 100 days.
As many others have stated, I've reported all kinds of sketchy drug sales to Facebook and been told they won't do anything about it. I'd love to see them raked over the coals for this.
Most of what's being discussed here comes down to subjective views on propriety. Not many years ago, God forbid one should see an ad for cannabis. Even what is and is not considered a scam is quite subjective, is it not? To what extent are these platforms themselves legalised scams? Or the oh-wow-look-what-we-found false editorial strategy that most rags market?
There are ads on there for ecommerce shops which clone the websites of known brands, with the exact logos and colours and names of those brands but which don't actually have anything to do with those brands.
These websites ask for your credit card number to purchase items which are advertised significantly below the retail price. I don't know what they do with the credit card number once they have it, but I can guess.
Setting aside the legality and free-speech philosophizing and corporate apologism, don't you find it tasteless, doesn't it make you angry, that there's a hot granny-sex prostitution / blackmail outfit that's taken over your local library? It used to be a nice place, man. I know we can still check out the books, but after 4 hours of abusive conditioning where they pin your eyeballs open and jab the monitoring device into your brain stem, who's got the energy left to read anything anyway?
I would argue that. It's probably the least offensive thing, compared to the alcohol you can buy everywhere. DMT and other entheogens are far less harmful than the alcoholic drinks on offer. This is confirmed by many academic papers, and DMT is not something people use for recreational purposes.
Moreover, some of these "psychedelic drugs" are used by people who have serious illnesses, whether life-threatening like cancer or all kinds of PTSD. It is very well known that these "drugs" help prevent suicides, and very often they make people's lives less miserable.
While I agree with the narrative, and am totally against unregulated drug sales, I can't say WSJ put a lot of thought into the article, posting a picture of DMT powder as the main evidence to make the whole thing look scarier. The average Trader Joe's stocked with vodka looks many times scarier to me.
There is no euphoric effect, or any other effect that makes it recreational. It's more of a self-reflection tool that helps you break through into a very different perception of things. And it actually helps break bad habits: alcohol, tobacco, aggression, etc. Also, because it's not euphoric, it is not addictive the way other drugs are (read: alcohol and the above).
It's definitely a tool to break bad habits, but the psychedelic effect can also be recreational, especially below breakthrough doses - not that I use it myself, but I know of some people who do ; )
I've been getting a lot of these and reporting them all. They never take them down. It really surprised me at first because I ran a CBD business from 2019 to 2021 and they refused to run my ads even though it was legal. Same thing with Google. Google really screwed me over because I had my ads running for a bit and then they stopped them because it went against their rules for hemp/CBD products. Eventually they changed the rules and allowed those ads. So they started running my ads again but I didn't get a notification until my $300 bill came. And this was after I had already shut down my business. Of course I couldn't get a hold of a real person and I paid the bill because I didn't want my account to get banned.
These are likely for muscimol gummies as opposed to psilocybin gummies. Despite having psychoactive components, muscimol-containing mushrooms (Amanita muscaria, etc.) are legal, as is muscimol itself.
> Perhaps the most plausible possible cause identified so far is yet another compound identified by Diamond Shruumz itself.
> In its recall notice, the company reported that third-party lab testing of some of its candies identified higher than normal amounts of muscimol, a psychoactive compound found in hallucinogenic Amanita mushrooms, including the iconic toadstool A. muscaria.
> These mushrooms contain a combination of muscimol and the related ibotenic acid, both of which resemble neurotransmitters.
> Together, they could cause the symptoms seen in the cases so far, including seizures, central nervous system depression (loss of consciousness, confusion, sleepiness), agitation, abnormal heart rates, hyper/hypotension, nausea, and vomiting.
And in pharmaceutical manufacturing, diluting things is definitely a science. Gotta worry about demixing, adsorption, absorption, granulations, cracking or creaming of emulsions (if present), caking... lots of fun when you need to make something consistent at low concentrations.
The link I initially posted was about 4-aco-dmt being found in gummies and chocolate — types of candy. It came in packaging with recommended dosing on it.
>You won't find any of them are conspicuously sold for human consumption, it's easier to stamp "not for human consumption" on the packaging.
That's not the get-out-of-jail-free defense that idiots think it is. Gummies aren't made, packaged, and distributed for any purpose other than human consumption, and judges aren't stupid.
How does one get these ads anyway? A search history of "drug rehab" or "drug overdose"? If you are addicted to heroin, cocaine, or opioids like fentanyl, these ads are a test of your resolve.
They can prove, to civil-court standards, that their servers weren't part of drug dealing? Because that's the standard when a person gets pulled over and the cops (then the feds) take their cash without any criminal charges.
They won’t have to prove anything. Cops pull people over and take their money assuming the person can’t or won’t defend themselves because it’s too expensive or time consuming.
I know someone who has had actual drug money seized in California and it took them less than a month to get it back by court order because they knew a lawyer specializing in those cases. There are a lot of judges that don’t take kindly to asset forfeiture without a conviction and a legit corporation like Meta with a big legal team would have their stuff back by the weekend.
I'm actually really glad that some companies like this are not really beholden to advertisers anymore. I don't like how that influences social norms per country outside of the legal system, based on what advertisers believe. I like that there is no collective voice possible that could get enough advertisers to leave Meta over disagreeing with a Meta practice.
I'm into lawsuits from private persons and the government. Just not social mobs.
My favorite Meta ad that I keep getting is a deepfake video of Elon Musk saying he will be handing out free money for “Neuralink AI” to the first people who will invest. They’re saying thousands of dollars PER WEEK are guaranteed! Nice!
And I just got the same exact scam on FB, but this time they used a deepfake video of my country’s president, saying they’re working with Elon Musk to give money to all citizens. Yay!
This is the reason I gave up on Google ads in my Android apps back in 2015. I turned off all 18+ content and it kept on showing sex and gambling ads to users. The advertisers had probably categorized themselves as entertainment and didn't check the correct boxes, and I couldn't be bothered hunting them all down.
There's just no excuse for this anymore - why don't they apply their own state-of-the-art LLMs to analyze these and flag the ads as pushing potentially illegal substances?
Because, as is painfully obvious by now, Meta doesn't care; they're happy to take the money.
LLMs generate text. They don't understand it in the traditional sense.
LLMs are backwards for this. Much like how Stable Diffusion is text-to-image, not image-to-text, you'd need to rework the neural network entirely to do what you're asking.
-------
It's a neat trick that you can train LLMs to, perhaps, ingest text followed by a question (e.g., "Is the previous paragraph spam?") and have the text generator produce a yes-or-no response that is somewhat pleasing.
But it's not quite the same as what you're asking for.
Have you... actually used the image recognition features of even something like the web-accessible ChatGPT 4 interface, or are you just guessing here?
I feed it unstructured images of all kinds of things and it generally does a rather good job of describing them.
Just now, I found (what is allegedly) a screenshot of a Facebook ad about microdosing mushrooms. I fed the image to the bot and asked it to identify whether or not it was an ad, and the content of it, and to rate the legality of it on a scale of 1 to 50, with 1 being perfectly legal and 50 being criminally punishable.
It nailed all three questions and gave it a score of 45.
I then did the same thing with a screenshot of a Facebook ad for an Audi Q5, and it nailed all three questions and gave it a score of 1.
I followed this up with some human-made drawing of 4 horses that I found fairly randomly. It also nailed all three questions and gave it a score of 1.
Satisfied with the result, I then stopped. I don't get paid enough to make a study of this. I have little doubt that these results are reproducible.
(And is this an LLM trick or something else? Perhaps a combination of things? Is it AI, LLM, LMNOP, or some kid in a sweatshop overseas? I don't really know -- that distinction is also beyond my pay grade.)
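(For anyone who wants to repeat this kind of spot check programmatically rather than through the web UI, here's a minimal sketch using the OpenAI Python client's image input. The model name, prompt wording, and the 1-to-50 scale are just illustrations of the experiment above, not a vetted review pipeline.)

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rate_ad_screenshot(path: str) -> str:
    """Ask a vision-capable model the same three questions as above:
    is it an ad, what is it for, and a 1-50 'legality' score."""
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model would do here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Is this image an advertisement? What is it "
                          "advertising? Rate its legality from 1 (perfectly "
                          "legal) to 50 (criminally punishable).")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

print(rate_ad_screenshot("fb_ad_screenshot.png"))  # hypothetical file
```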
> (And is this an LLM trick or something else? Perhaps a combination of things? Is it AI, LLM, LMNOP, or some kid in a sweatshop overseas? I don't really know -- that distinction is also beyond my pay grade.)
That's the thing: the hidden layers that process the text _MUST_ be figuring out some kind of language structure. Otherwise none of ChatGPT's behaviors make sense.
Interpreting those layers and surfacing them to the user in some way should be the core focus, IMO anyway.
On the other hand, there's also the problem of brute force. If they paid 50,000 cheap "Mechanical Turks" to manually teach an AI that the "top left" of an image is pixel coordinate (0,0), that's... trickery. It's no longer intelligence. It's disconnected and not innate to the model anymore.
The nature of the training set is what would determine how useful any of this actually is, and how well it generalizes to new tasks.
If I understand correctly, every LLM has an embedding function which can reduce a block of text to a coordinate in high-dimensional space. That coordinate can be used to determine how likely something is to be talking about selling illegal drugs and automatically flag posts for manual review.
But given that the problem seems to be in the "manual review" (users constantly report these ads and get brushed off), that "neat trick" you wrote about seems like it could really help explain to manual reviewers exactly why a particular ad violates Meta's content guidelines.
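For what it's worth, the flagging half of that is only a few lines. A minimal sketch, assuming an off-the-shelf sentence-embedding model; the seed phrases and threshold here are made up for illustration:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")

# Seed phrases describing the behavior we want to catch (illustrative only).
seeds = [
    "buy psychedelic mushrooms online, discreet shipping",
    "order prescription pills without a prescription",
]
seed_vecs = model.encode(seeds, normalize_embeddings=True)

def flag_for_review(ad_text: str, threshold: float = 0.5) -> bool:
    """Embed the ad text and flag it if it sits close to any seed phrase."""
    vec = model.encode([ad_text], normalize_embeddings=True)[0]
    similarity = float(np.max(seed_vecs @ vec))  # cosine sim on unit vectors
    return similarity >= threshold

print(flag_for_review("Microdose gummies, ships to all 50 states, no ID"))
```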
> If I understand correctly, every LLM has an embedding function which can reduce a block of text to a coordinate in high-dimensional space. That coordinate can be used to determine how likely something is to be talking about selling illegal drugs and automatically flag posts for manual review.
Sounds like Word2Vec, which isn't necessarily an incompatible technique. I'm not exactly on the cutting edge of Language Models here.
I'm sure the information is sitting in the "hidden layers" somewhere. But LLMs have very many hidden layers. It's not something that a programmer can just "pluck" out of the network.
I use LLM embeddings all the time at my day job (and I work in a very, very low-tech "tech" job, basically an IT department). It's not even remotely difficult and works incredibly well for sifting through absurd amounts of data to filter it down to something that is easy to human-review.
> I think you may be commenting "confidently" on technologies that you're not very familiar with. It is something that we all struggle with sometimes.
Your snarkiness aside, I stand by what I said earlier.
LLMs are a network of neurons / self-learning units that point toward the "next" word to be predicted. This is fundamental to the architecture of this entire system.
I recognize that there's some kind of parsing and understanding from tokens -> hidden layers of the network. That's just how the math works.
Perhaps you should educate yourself upon neural networks and the fundamental math at play here before caustically criticizing everyone who contributes to a discussion.
I admit that I don't know what an embedding is but I know it's something that is created by a neural network's output layer (or the post-processing of the output layer). This is once again, fundamental to the math of the neural nets at play here. So anything I know about output layers applies to whatever an embedding is.
---------
Not everyone in this field is working off of just using the damn thing. Some of us have experience in the layout and underlying theory of neural nets. And I'm confident in what I've said.
I’m sorry that I came across so different from how I meant to. I was intending to evoke a tone more similar to one I had recently here[0]. That is my fault and I should work on my communication.
> I admit that I don't know what an embedding is but I know it's something that is created by a neural network's output layer (or the post-processing of the output layer).
The embedding layer comprises the first layers of a transformer model, definitely not the "output layer". It is typically trivially easy to isolate the output of the embedding layer because it generates the "actual" input to the LLM. The LLM has a certain number of input dimensions (typically 768) and some length of text usually won't fit precisely into 768 dimensions (it will be longer or shorter than that). All the embedding layer does is map arbitrary-length text into 768-dimensional vectors that can actually be passed to the input of the transformer.
Ever since the paper “Attention is all you need”, the embedding layers have always been the first layers of a transformer to act on the text input. Most of the output is also sent back to the embedder, but this is only so that it can be used as input again to the “attention” layers.
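A toy sketch of that first-layer lookup in PyTorch (all sizes illustrative; 768 is the common BERT-base hidden size, and the token ids are made up):

```python
import torch
import torch.nn as nn

vocab_size, d_model = 50_000, 768            # illustrative sizes
embed = nn.Embedding(vocab_size, d_model)   # the embedding layer: one row per token id

token_ids = torch.tensor([[101, 7592, 2088, 102]])  # a made-up 4-token input
x = embed(token_ids)
print(x.shape)  # torch.Size([1, 4, 768]): one 768-dim vector per token,
                # which is what the attention layers then consume;
                # sentence-embedding models pool these into a single vector
```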
That's exactly what I'm asking for. Yes, technically LLMs generate text, but with some prompt engineering they can be used to classify things, especially tasks that require natural-language understanding. We have literally used them for highly targeted classification tasks and they've done a great job.
Image recognition too, of course (using Machine Learning). My point is that, all the stuff FB has spent billions on in the last few years, they have the ability to fix these issues in their sleep.
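A minimal sketch of that kind of prompt-engineered classifier, assuming the OpenAI Python client; the prompt, label set, and model name are illustrative, not any platform's actual policy pipeline:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set

LABELS = ("ALLOW", "REVIEW", "BLOCK")

def classify_ad(ad_text: str) -> str:
    """Force the model into a single-word policy label for an ad."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "You are an ad-policy classifier. Reply with exactly "
                        "one word: ALLOW, REVIEW, or BLOCK. BLOCK anything "
                        "offering illegal drugs or controlled substances."},
            {"role": "user", "content": ad_text},
        ],
    )
    label = resp.choices[0].message.content.strip().upper()
    return label if label in LABELS else "REVIEW"  # fail closed to human review

print(classify_ad("Premium shrooms, cash app only, ships same day"))
```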
As others have said, the key is that regulators are absent.
Because the cost of running the classifier is higher than the fines imposed (0, whether in euros, dollars, or crowns). The moment the regulator asks them to do something, the cost of running the classifier will instead be compared to legal expenses and lobbying attempts.
Apparently it is, literally - get this - marketing without putting your face in front of a camera. Because we all know that was the only way to do marketing before. All that juice from just one bag of oranges!
Are drug ads bad because of the war on drugs? Are they bad because we won’t let tobacco make ads?
The only way this seems shockingly bad is that you can’t actually control what you see as a user of these ad-supported services or if you support the war on drugs.
This. They message billions of people every, what, hour, and take on how many ad contracts every minute? If the problem really is of this order of magnitude, they're doing an incredible job, aren't they?
I used to run a business in Canada that sold grow equipment and our primary customer base was marijuana growers. When legalization came we started running advertisements for our shop on facebook. Got our hand slapped really hard, really fast and our advertising account banned.
A few months later I see advertisements for chocolates with psilocybin mushrooms in them. Mushrooms remain illegal in Canada. The ads continued for several months.
Every single thing about the drug war is anathema to a just and rational society.
Even those who don't care about justice or rationality must care that, in exchange for the billions of dollars spent and the extreme loss to personal liberty, drugs have only become more potent, more readily available and much cheaper.
I find myself developing an inability to respect people who believe we should stay the course despite the overwhelming evidence that the drug war is beyond lost, and always has been.
... and in the meantime Meta won't let me create an Instagram account from either my home or work computer. I have very little history w/ either since I deleted my Facebook account in 2016 but started a new one to use with an Oculus VR headset. Yes I have run a webcrawler at home but never against Meta properties and I don't believe I've ever made trouble of any kind for Meta.
I badly want to support a friend's social media marketing efforts and it's just ridiculous that I'll probably have to create an account with false information despite me sending multiple emails to Facebook asking if they can clear whatever flag is on my account and/or IP addresses.
World's easiest problem to solve if we just get the political will to do so:
Pass a law requiring a human being to sign off in "writing" (can be eSigned through an internal moderation platform) on any ad that will be run that the platform will profit from and that will be pushed to users by choice of the platform as opposed to request by the user. This human serves as an agent who will be liable along with their firm if there are any illegal issues that should have been obvious to a "reasonable person" reviewing the ad. Require anyone doing this job to be supervised by a licensed attorney, who must document their hiring and supervision practices and can be disbarred for failing to run a reasonably tight ship.
Problem solved. Bam. So easy.
It is not fair that platforms get to pretend that being able to automate running ads means they have no responsibility for their choices. It's like saying you can't murder with a handgun, but if you set up an automated machine gun turret, now it's OK.
I don't think it's actually that easy. In the US, we have Section 230[1], which says
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Seems to me, that says if you have a problem with the advertisement, you need to go to the source, not the ad network. You're proposing an exception for advertisement, which is reasonable, but you've got to define advertisement. It's not just money for distribution; I'm not an advertiser just because I pay to use a forum, which is money for distribution.
Section 230 isn't some immutable force, so yes, political force is all that's needed, but it's been the norm since 1996, and has been enabling ad networks to show ads without regard to their content since 1997 [2].
> Section 230 isn't some immutable force, so yes, political force is all that's needed,
Exactly. Glad you agree. It's just an act of congress. We pass many every year, including far bigger ones.
> you've got to define advertisement.
I did in my post. I'll repeat myself for you:
(1) the platform is profiting from it
(2) the platform is pushing and selecting the content to the user at the platform's whim, not the user's request.
Seems reasonable to me but these details can be worked out if the political will for the big picture change ever actually appears.
My definition would allow the main thrust of Section 230 to remain intact. Section 230 was, in my view, most nobly intended for cases like ISPs hosting content, such as GeoCities or Cloudflare or Google Sites. This is content that the end user specifically seeks and requests. Ads as well as algorithmically recommended content don't meet this test. If I go to criminalhatespeech.com on purpose, that's on me. If TikTok advertises criminalhatespeech.com to me or puts it in my For You for whatever reason they think it'll engage me, that's on them. Simple, isn't it?
All we're doing is putting humans and robots on an even footing instead of privileging robots over humans as the current law does (never thought I'd type that sentence someday, but it's basically true - the platforms are getting away with doing things they could not get away with if the code were being "executed" by "hand" by humans with flow charts instead of automated with CPUs and network cards).
That's a workable definition at least, but what you're proposing is effectively a nationwide ban on algorithmically recommended content. Or is that your whole point?
The only algorithmic content that would be eliminated is that which is already illegal speech and/or that which is too valueless to be worth spending the approximately $2 of time it costs for someone to look over a 10 second video and sign off that it's OK.
Who pays that can be decided by the market, but probably the poster could pay it and if they choose not to, the platform will algorithmically decide which "free" content seems worth moderating at the platform's expense.
I'm literally just actually requiring a modicum of moderation, not banning anything.
Or is it that you believe that the vast majority of the content on the platforms today is ultra low value and/or illegal content? Frankly, if you do believe that, that's even more damning for the platforms, isn't it?
However, I don't actually believe that. Most content I see appears to be legal and appears to already have thousands of views before I see it. Algorithmic content is already mostly stuff that is popular enough that the tiny expense of vetting it wouldn't quash it.
As for how a newbie gets their foot in the door, this actually makes it easier for them. Yes, now you might have to pay $2 to post your first video if you want any shot of anyone besides your friends seeing it - but now you're only competing with others who were willing to do the same. All the ultra-low-effort AI-generated videos suddenly become totally unviable. In fact, this generally biases a platform towards higher-production-effort content. If I spent 2 minutes conceiving of and making a low-effort video, shot in one take and published 4 minutes after I thought of the idea, then $2 seems like a lot. But if I spent two days planning, shooting, and editing, then suddenly $2 is nothing.
This will make everything better and nothing worse, except for bad actors who are parasites on our once-wonderful internet ecosystem.
> that which is too valueless to be worth spending the approximately $2 of time it costs for someone to look over a 10 second video and sign off that it's OK
The problem is that's basically all user generated content. Who's going to spend $2 vetting this comment that I'm writing right now before approving it to be shown to users by the algorithm that ranks it based on upvotes?
Sure, there will still be some high-value content out that earns enough revenue to be worth the effort. But the vast, vast majority of user generated content doesn't fall under that category.
There are details, but we can get them right if the political will is there. The big picture is that platforms should be responsible for their arbitrary and self-serving publishing and promoting decisions, while being shielded from liability when they act more like a simple common carrier.
For example, I think I'd be OK with some sort of exemption for pure-text content. Most of the problematic content out there isn't pure text.
And/or maybe trivially describable rules like upvotes, linear historical feeds, etc. needn't count as an algorithm - if it's obvious to a "reasonable person" with a high school education exactly how the content is being selected, it does not need to count as the platform making publication choices.
Focus on the big picture and the details will follow. It in no way is a takedown of a big idea to point out that implementation will require some attention to detail. The big picture of social security is "If you're too sick or old to work, we should pay you" but the details fill volumes and volumes. This is nothing new; it's how laws work.
Section 230 was drafted at a time where algorithmic (and/or ML-driven) curation of content (whether ads or actual content) wasn't a thing.
If you use algorithms to selectively choose which content the user gets exposed to (with the objective of increasing your profits, obviously), that is literally the definition of a publisher.
We are way past the "we just provide a dumb pipe for content between publishers and users". The pipe is no longer dumb and hasn't been for over a decade.
This has a good chance of being one of the additional cracks in 230 that bring the whole thing down, IMHO.
The cracks were created years ago when Facebook and Google did a 180 on the SOPA/PIPA censorship. I knew then that the cracks would only get bigger.
The fediverse will be used more and more in the future, I feel - maybe not by the masses of grandmas, but certainly, as more gets censored on the main portals, people will nose around elsewhere.
So just rewrite or repeal it. If we make social network and UGC companies common carriers, then they'll be subject to different kinds of regulation, too bad. Right now they are having their cake and eating it, to the general detriment.
Yeah, it's probably not going to be easy just because Section 230 is or at least was essential for the growth of the internet.
Is it as important now? I'm not sure, but I wonder how essential it is for big companies like Google or Meta to have this sort of protection when they don't have as much competition to incentivize moderation and may have enough money to do better research on what content they're hosting.
Section 230, if it's even remotely applicable to companies doing their own advertising on their own platforms, requires a "Good faith effort" to moderate that platform.
If a court is willing to call the moderation google and facebook do currently in advertising "good faith", that court is flat out wrong.
> (2) Civil liability
> No provider or user of an interactive computer service shall be held liable on account of—
> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
> (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
Good faith is only required to limit civil liability in case of taking actions to restrict access.
There's no good faith requirement for 230 (c) (1) which is where platforms get immunity for content from users. I don't believe the platforms are being accused of advertising their own drugs for sale; they're passing through drug advertisements from users (advertisers).
Because there is absolutely zero evidence that there are drugs being physically trafficked at Meta's offices?
What should happen is that a court should be able to drag responsible executives in front of a judge, issue massive fines for what has already happened, and order that Meta stop running ads until they can present an agreeable plan to prevent these kinds of ads from running.
Then they should be under a consent decree and subject to regular government inspection. If the agreed-upon mitigation plan isn't being followed and these kinds of ads are running again, the same executives should be dragged back in front of the court, Meta should be prohibited from running ads for months (as punishment and to force them back to the drawing board to fix the issue permanently), and the responsible executives should be given significant personal fines and a short jail sentence for violating court orders.
Rinse and repeat until either Meta is permanently barred from running ads, putting them out of business, and/or the responsible executives (including the CEO if applicable) have enough strikes that they get a decade or so in prison to make an example of what happens when you refuse to operate a business within the law and kill people with black market drugs.
>Because there is absolutely zero evidence that there are drugs being physically trafficked at Meta's offices?
Well that's not why they would raid Meta. They would raid them to preserve evidence and documents of Meta's complicity with the drug trafficking that does occur on their platforms.
They can do that with a court order. Meta would end up caught with their pants down if they deleted evidence. There's little to no chance that they could do that without either leaving behind damning evidence, or having an employee blow the whistle.
Yes, they can do that with a court order. Was that in question? As to whether or not they could get away with it, I don't agree. Depends what you mean by get away with it. They'd probably rather take a default on the issue than release any internal discussions making them look complicit.