Apart from the images, this post was created without any help from AI. But, beyond just trusting me, how would you know?
This Week’s Sponsor
Get this playbook, launched by Amplitude
Keep acquisition costs low and maximize your return on advertising and marketing spending. Take a page (or two!) from our Acquisition Playbook and crush the competition. Learn: acquisition fundamentals, how to lower CAC and maximize ROAS, and tips from industry experts. Learn more here.
Last week I wrote about using AI for SEO content. A commenter replied:
I have clients that ask me to edit AI but I do not and don’t intend to ever generate AI and let that be the end product. I also have clients that still want human writing.
Regardless of how good a piece of AI content is, there will be people who want a human to create it (see last Thursday’s post on Proof of Work ($)). This is good if the human can increase the quality of the AI’s work, but what if they can’t? What if the AI’s work is better than what the human can come up with? In this case you have a few options:
1. Ignore the request and create it with AI anyway, then say it was done by a human
2. Have a human involved, but have them do nothing substantial
3. Have a human involved and lower the quality of the work
If #2 or #3 is the answer, the question becomes “how much effort and/or quality shall we sacrifice in order to say a human was involved?” The good news is that this question has already been quantified.
MIT ran a study on the perceptions of AI-generated work.
They created four types of content:
For the human-only approach, they enlisted professional content creators from Accenture Research to draft the marketing copy and campaign goals.
The augmented human approach first used AI (GPT-4) to generate ideas. Human consultants then shaped them into final products.
The augmented AI approach worked the other way, with humans creating drafts and generative AI being used to mold them into final products.
The AI-only approach had GPT-4 complete the task on its own.
The study then had participants rate the quality of the content. They did so under three conditions:
No knowledge that AI was involved at all
Told what the four different types of content were, but not told which content was made under which condition
Every piece of content was identified by how it was created (from the list above)
The researchers reached two conclusions:
When people had no information about the source of the marketing or campaign copy, they preferred the results generated by AI.
When people were told the source of the content, their estimation of work in which humans were involved went up — they expressed “human favoritism,” as the researchers put it. Their assessment of content created by AI, though, didn’t change, undermining the notion that people harbor a form of algorithmic aversion.
The theoretical becomes reality: the AI content was better than the human-generated content. But when participants KNEW content was created by a human, it felt more authentic, even if it wasn’t as good. People did not like the AI content any less; they simply started liking the human content more, because it was human.
This is a little like seeing a painting and thinking it is “okay,” but then deciding it is great upon being told it was painted by Picasso. Only replace Picasso with “any human.”
Unfortunately, it is easy to lie and say something was created by a human when it was not. Maybe the next level of the NFT is something that helps authenticate content as human-generated. It seems like people would be willing to pay for it.
Keep it simple,
Edward
I'm worried we're overstating the results of the study. The paper can be downloaded for free from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4453958 and if you go look at Section 4 ("Results"), Figures 1 and 2, you'll note that:
1. The 95% confidence interval error bars almost all overlap each other...
2. ... and this is even after they truncated the y-axis to make the differences more visible (i.e. appear bigger).
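The overlapping-error-bars point is easy to make concrete. Below is a minimal sketch using made-up rating data (the group means, spreads, and sample sizes are all assumptions chosen for illustration, not numbers from the paper): it computes 95% confidence intervals for two rating conditions and runs a two-sample t-test.

```python
# A toy illustration of the commenter's point, using made-up data.
# None of these numbers come from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 1-10 quality ratings for two content conditions
human_ratings = rng.normal(loc=6.9, scale=1.8, size=100).clip(1, 10)
ai_ratings = rng.normal(loc=7.2, scale=1.8, size=100).clip(1, 10)

def mean_ci95(x):
    """Return the sample mean and its 95% confidence interval."""
    half = stats.t.ppf(0.975, df=len(x) - 1) * stats.sem(x)
    return x.mean(), x.mean() - half, x.mean() + half

for name, ratings in [("human", human_ratings), ("AI", ai_ratings)]:
    m, lo, hi = mean_ci95(ratings)
    print(f"{name:>5}: mean={m:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")

# With intervals this wide relative to the gap between the means, a
# two-sample t-test will usually fail to reject equal means at p < .05.
print(stats.ttest_ind(human_ratings, ai_ratings))
```

Overlapping 95% intervals are not a definitive test (slightly overlapping intervals can still hide a significant difference), but when nearly every error bar overlaps, as described above, the honest reading is that the study's differences are small.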