I think there’s a good chance it was ghostwritten by a content farm for someone who wanted to publish a book in a popular category to make a quick buck. I recently learned that this is a soft-scam technique advocated by a couple of guys called the Mikkelsen Twins. They say you can make money by finding a popular topic on Audible, hiring a ghostwriting service to write your book, hiring a voice actor to record it, and publishing the result on Audible, after which you too can become fabulously wealthy, as the Mikkelsen Twins claim to be. Except what the Mikkelsen Twins actually make money on is selling you a “money making course” where they tell you to do the above. So there are all kinds of people out there trying to follow this method, it seems.
This excellent video [1] from Dan Olson at Folding Ideas is, in my opinion, a fascinating deep dive into this whole scam. I don’t want to spoil everything in the video, but Dan does an exceptional job exploring how this ghostwriting works, and why it produces the kind of nonsense writing that looks like GPT-3 but is actually the work of an overworked human trying to write an entire book in three weeks or so on a topic they haven’t had time to properly research.
[1] https://youtu.be/biYciU1uiUw
> I think there’s a good chance it was ghostwritten by a content farm for someone who wanted to publish a book in a popular category to make a quick buck.
The article literally says
> My first theory was that the writing must have been outsourced to a content farm, where multiple individuals worked on portions of the text without caring to understand the context. But the explanation had a flaw: there’s nothing you can find on the internet if you search for “NFP chemotherapy” or “NTF Strategic Development Group”. The text didn’t merely describe a garbled version of our reality; it invented its own.
Yes, and the parent literally generalizes this one-off observation, puts it in context, and gives examples and even a source with a deep dive on the issue.
I don't understand how you think this is responsive to the issue of fabrication. The article already considered the possibility of a human content farm and concluded it was unlikely, because the text contained information that was completely fabricated.
The fact that the commenter proceeded to generalize, share a link, etc. is not responsive to that. A responsive answer would be along the lines of "well, a content farm might fabricate too because _____."
"If you find that interesting, you might find this interesting" seems like a perfectly reasonable motivation for an HN comment. I think it's worth forgiving that and enjoying the broader horizons rather than treating HN threads as a way for lots of smart people to try and craft the most definitive statement about one narrow issue, while policing any deviation therefrom.
If I were a content farm I'd probably use text cut and pasted from text models, but I'd never heard of the Mikkelsen Twins and it was interesting to learn about them.
My rationale was that, before I had seen this very well-produced video, I was not familiar with the way content farms operate. The idea that fabricated words mean it must be GPT-3 seems spurious to me. A person in a huge hurry, with a poor grasp of the English language, could easily produce fabricated words.
The video I linked is extraordinary in its level of research and quality, and interesting on its own merits. If someone is not fully aware of the extent to which content farms squeeze out content, it’s easy to assume some weird prose is GPT-3. I felt the article’s dismissal of the content-farm idea was too quick, and perhaps the author or readers might not be familiar with how such farms operate. So I provided a summary as well as a link to this very high-quality investigative source.
Well, it's "responsive" for me. In fact it's your comment that was off-topic and meta; the parent was perfectly on topic and responsive to the issue.
>A responsive answer would be along the lines of "well, a content farm might fabricate too because _____."
A responsive answer doesn't need to follow a particular formula, much less that one. It just needs to be relevant to the issue and provide more information and/or the commenter's own take (which could just be their feelings about the issue).
Also that, but they still claim that that's what produced the book in the OP, a possibility the OP only brought up in order to pretty conclusively rebut it. Which means they went off on a tangent based on what they already knew, without actually finishing the paragraph their comment was responding to.
Of these patients, 44 were treated with HAIC and 20 with sorafenib. HAIC involved cisplatin (50 mg fine powder in 5-10 ml lipiodol) and a continuous infusion of 5-fluorouracil (FU) (1,500 mg/5 days), which is referred to as new 5-FU and cisplatin therapy (NFP).
> Except what the Mikkelsen Twins actually make money on is selling you a “money making course” where they tell you to do the above. So there are all kinds of people out there trying to follow this method, it seems.
How do people still become wealthy selling "money making courses", when enough people have created "money making courses" that they are, themselves, a commodity?
As the saying goes, there’s a sucker born every minute¹.
An important feature of these scams is to frame success and failure as being entirely in your hands: it’s not the system’s fault that you didn’t achieve success, it’s yours and yours alone, you didn’t want it bad enough. Someone who falls in that hole is bound to do it again, this time for a different grift which looks easier. They’ll do as much as it takes until they strike gold, go bankrupt, or finally understand it’s all a scam. That last one can take a long time to come.
In sum, money making courses don’t need an infinite supply of new chumps when they can keep reeling in the same ones. Offer some discounts and rewards for bringing in new members, and you’ve gone full MLM².
Oh, no, I do get why people would buy into money-making courses. I'm basically wondering why the market for money-making courses isn't a winner-take-all market, with the wealthiest course creators spending their advertising budgets (earned by building their money-making-course empires when the field was greener) to outcompete upstarts. Presumably, people can only be actively trying the techniques of one of these courses at a time — so why aren't most people focused on learning and putting into practice the techniques of a few very popular courses?
Because the techniques don't work, and when people are conned once they are likely to fall for the next con artist selling fundamentally the same thing but with different branding.
A lot of the marketing behind these scams is that they're different from everything you've tried before - the point being that the number of people who fall for these scams isn't enormous, but they will fall for scams over and over again. Just probably not from the same person selling them.
It's the same as any kind of cult messaging. It's not surprising that the size of a cult designed to exploit people has an upper limit, but it is shocking that the same people will join multiple cults time after time.
But if you know that, then — as a cult leader — why not come up with a scalable approach to founding cults that all share underlying infrastructure (marketing, accounting, etc.) but use different figurehead cult leaders, different language, and different outreach techniques, and so form distinct, isolated cult communities? Different marques of the same company, in other words — like how every "competing" soap or cereal in the grocery store aisle is still making the same company money in the end.
(I do know of at least two real examples of this meta-approach to scamming: 1. there are meta content farms, which hire people and train them to run seemingly independent content-farm YouTube / TikTok / etc. channels; and 2. there are serial Kickstarter/Indiegogo scammers who follow a loop of forming a new shell company, hiring a new actor to pitch a fake idea, stealing people's money, and disappearing.)
And, while they're at it, why not constantly seek to acquire the pitch+content from smaller, nascent money-making course creators, and run their content through your existing business engine? Why not be the EA of scams?
Being a scammer doesn’t mean you’re a criminal mastermind.
It takes capital, time, and work to become “the EA of scams”, at which point you’re inviting further scrutiny on yourself, meaning even more work (and staff, who you’ll have to trust) to not be in hot water with the law.
Scammers, like the people they scam, are mostly after a quick, easy buck. You’re posing "why not" questions as if they’re straightforward to accomplish. They’re not. They take effort and risk without a guarantee of success. Most scammers don’t become "the EA of scams" for the same reasons most game companies don’t become EA.
No, I meant: why aren't a few very popular courses the only ones people hear about, such that people are de facto only focused on those? Money buys advertising and brand recognition; why aren't there big-name money-making-course brands outcompeting the new upstarts? Why isn't the market consolidating around the guys who won big first and then reinvested in making their brands the biggest?
What gets me is: if this really is a money-making opportunity, and it truly is as simple as they make it out to be, aren’t they literally creating their own competitors in this space by revealing their secret? I’m sure the argument is something like “the market is so vast that helping others do the same won’t affect my income stream”, but that suggests the correct answer is to scale up, not to populate the landscape with competition.
Yes. It turns out that there are a whole lot of aspects of their scheme that would turn off any informed, rational person. But there are enough people who fail that test to keep a constant flow of new customers buying into their scheme.
In fact, much as with “obvious” scams, this becomes a feature: non-marks deselect themselves from the pool at no cost to the scammer, which means the scammer gets a much higher average “prospect quality” than a random sampling would give.
As far as I understood from the video, the Mikkelsen Twins did get rich from that money-making scheme, until Audible basically found out and started trying to stop them (most of the books they "published" are no longer on the site). They milked the cow dry, then made a money-making course and created a new way to get money.
And now, with the competition, basically no one who bought their course can actually make money.
The argument can also be that it actually benefits them by making other people richer and therefore increasing the number of their own customers (and the amount of money those customers are willing to spend).
insert a random comment about how "market economy is not a zero-sum game"
Well, it's only logical: what they propose is barely legal, and sooner or later (more likely sooner) the scheme will be burnt, so it's better to teach others how to make a quick buck than to stay in the same place for too long.
I don't think legality is a significant part of the problem – I simply doubt there is really a lot to make regardless of legality.
Most get-rich-quick schemes are really either “make a few pennies here or there, a few $/€/£ if you are lucky, a few hundred if very lucky” or “make-a-bit-slowly-with-lots-of-effort”, or both. The rest, not included in that “most”, are complete bunkum: you'll never make anything unless maybe you join the pyramid and resell the scheme.
There is more easy money to be made selling make-money-easily schemes to those who lack critical thinking than there is to be made via the schemes themselves. Even if no one you sell the scheme to makes anything, even if they all make a loss in the grand scheme of things, the scheme seller still has some revenue.
A scheme seeming illegal/immoral/both might make some marks easier to catch: they think they are being clever, knowing that crime does in fact sometimes pay, so their guard is lowered further by confirmation bias on the matter.
They are so rich that they are retired but want to pay it forward. But it's not free because they only want to teach people who take it seriously and respect the idea.
Can't tell if this is a joke, but the most likely explanation isn't nearly so noble. They realize that the stream for this particular scam has already run dry (thanks to Audible catching on), but that others don't know it yet.
You make courses about how to make money making courses.
This reminds me of an episode of Reply All. They ordered some cheap Rolex rip-off, and it was even worse than they expected. They found out these were all being sold on Shopify, and that if you sign up for a new Shopify account you'll fairly quickly start getting spam emails helping you set up a shop selling all this crap. Then they'd try to sell you lessons on how to make it more successful. I forget whether the rabbit hole went deeper.
Probably because the ultimate end of these courses isn't for you to make money by yourself, but to get you engaged in an MLM and let that do the work.
There's a company engaging in MLM sales that pays some news agencies to promote its scheme every day during the news (a subtle sponsor, so to speak; it's on regular TV, so SponsorBlock won't help). Since my family often watched the news on this channel, I came to recognize how it works. It would promote an online course along the lines of "Make Money with a Smartphone", advertising a course it claimed was worth $249.99 for just $2.49 (the price may vary depending on who the presenter is, though the courses are exactly the same).
The news program is careful not to show the name of the company ultimately behind it, presenting only the person who will be teaching. Sharp-eyed viewers, however, can notice that those adverts are meant to direct the prospective "client" to an MLM.
John Oliver (just the presenter, ha!) on Last Week Tonight also did an episode about how easy it is for a scam to get a fawning advertorial piece on local TV news segments.
Or, alternatively, you create a money-making-course generator generator. If that goes well, your next project should be a generator generator generator.
There's a famous old scam where someone put an ad in the newspaper that said "Learn how to make money, send $10 to XXXX". Sending in the $10 got you a letter that said: "Place an ad in the newspaper that says 'Learn how to make money, send $10 to [your address]'."
I've watched the video as well and wholeheartedly agree! His 'Line Goes Up' documentary about crypto is also exceptional; very critical but in educated and actually nuanced ways (https://youtu.be/YQ_xWvX1n9g).
If you haven’t gone through the entire Olsonverse yet, don’t sleep on “In Search of a Flat Earth”. That was my gateway video to Folding Ideas and it remains stellar.
That was my first too! Really wonderful creator and I did find myself going through his whole back catalog after that! And I agree with the comment upstream, “Line Goes Up” is a masterpiece.
Given the way that OpenAI sources their training corpus and the number of people using GPT-3, I would not be surprised if GPT-4 winds up getting trained on a large amount of GPT-3 output.
Think about it - the best niche GPT-3 has is generating plausible spam. If you just need a lot of text, but you don't care about what it says[0], you're going to write it using the cheapest possible tool. OpenAI's training corpus is sourced through web crawls, so all of that spam is destined to get recycled back into the next generation of GPT.
[0] For example, if you want to be able to post a bunch of political spam that looks like genuine comments on a web forum. See GPT-4chan[1] as a practical example of this.
[1] A fine-tuned language model (based on GPT-J) using 4chan's politics board as its training corpus.
Same with most AI image-generation models. In 10 years, 99% of images on the internet will be AI-generated. Would it not be regressive for the models to train on their own outputs? Should regulation require AI-generated media to label itself, or is the responsibility on us to train detectors that can intuit the difference between model output and reality better than humans can?
This is a brilliant (but very sad) video. The ghostwriters get about $2500 for a 10,000-word book researched and written in a month. I guess this is cheaper than using GPT-3 and having to prune out the flights of fancy/complete nonsense that GPT-3 can sometimes generate.
From the video, it is $2500 for a 25,000-word book written in 25 days. That is $100 per day, writing 1,000 words per day. If this takes you 8 hours a day, that is $12.50 per hour. If you can do this every month, that is $30,000 per year. A comfortable wage for some people, and a poverty wage for others. It also seems like a difficult pace to research a topic and write 1,000 words every day, and it sounds like miserable work.
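Just to sanity-check those figures (a rough sketch; the 8-hour day and the one-book-per-month cadence are assumptions layered on the rates quoted in the video, not facts from it):

    # back-of-the-envelope check of the ghostwriting pay quoted above
    pay_per_book = 2500            # USD per book, per the video
    words_per_book = 25_000
    days_per_book = 25
    per_day = pay_per_book / days_per_book   # 100.0 USD/day, ~1000 words/day
    per_hour = per_day / 8                   # 12.5 USD/hour at an assumed 8 h/day
    per_year = pay_per_book * 12             # 30000 USD/year at one book per month
    print(per_day, per_hour, per_year)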
Yes, sorry - 25,000 words, too late to edit my comment. But it does sound like miserable work. Writing on something you are an expert in, or are interested in, sounds like a great way to improve your skills and make a bit of money. But researching, writing, and editing content on a made-up topic for some charlatan book publishers sounds awful.
Corporate strategy consulting. 1 day to write 5k-10k words on a topic you have to research from scratch is not uncommon. It's all within a broad field of expertise and a sector you're familiar with, but could be products / companies / deals you'd never heard of.
I think the point here was that the ghostwriting is of really low quality. Overnight 10k-word reports are never groundbreaking or enlightening additions to human knowledge, but they need to be 90%+ accurate and polished enough to inform.
Are those 5k-10k words completely new, or are document templates part of this count?
I know from experience (of online commenting; don't judge) that this word count is entirely possible to achieve, but I can't imagine sustaining it long-term as part of a job.
I suspect the first few chapters were well written and researched, with the rest slowly filling up with bulk filler; I find that with a lot of books I read the first few chapters, get the gist, and stop reading. I suspect a lot of people do that.
That said, there are a few books in my library / to-read list that are great content cover to cover. Timeless classics, etc.
Homer Simpson's adage that "you're both right" might really apply in this case.
Outsourcing to a content farm might very well result in lots of the text being rendered by GPT-3 and similar generative AIs. The two are certainly not mutually exclusive.