
What Happens When Human Marketers Are Replaced, But Human Viewers Are Not

And why the problem is even worse when those human viewers are kids.

By Laura Smith

Artificial intelligence is making waves. Everyone – from just about every major news outlet to the White House – is abuzz with discussions of AI, a term generally defined as “the capability of computer systems and algorithms to imitate intelligent human behavior.”

And rest assured that such “intelligent human behavior” includes marketing. In fact, the vast majority of major marketers are already using AI. And the global value of AI in marketing is expected to climb from $27.4 billion in 2023 to $107.4 billion in 2028. In other words, AI is here to stay.

The issue, though, as far as TINA.org is concerned, is not necessarily the fact that AI is being used in marketing, but rather that it is being used in deceptive marketing. And not just that. AI is being used in deceptive marketing that targets some of the most susceptible consumers: kids.

Here’s a recap of the AI issues TINA.org has flagged so far, how they impact consumers (including some of the youngest), what regulators have done (and not done) to address these issues, and what needs to happen to make sure marketers know the bounds of acceptable AI use and consumers are adequately protected.

Virtual Influencers

In 2020, when the FTC was reviewing its Endorsement Guides, TINA.org submitted a comment that flagged, among other things, virtual influencers as a growing marketing issue and one that needed to be addressed by the agency. In conjunction with this comment, TINA.org conducted a review of more than two dozen virtual influencer Instagram accounts collectively promoting more than 80 companies and brands (including Amazon, Puma, Lexus, Toyota, Dr. Pepper, Porsche, Calvin Klein and KFC, to name a few). Not only do many of these virtual influencers look human, they also act human. Instagram posts collected by TINA.org show them moving, talking and even eating. Some have boyfriends, New Year’s resolutions and elaborate backstories, while others support worthy causes like cancer research. Some are shown spending time with real people, and many emulate human influencers – posing in fashionable clothes next to expensive cars and going to red-carpet events like the Grammys.

Since then, virtual influencers have continued to spread their digital wings and accumulate massive followings. Lil Miquela, for example, who now has 3.5 million followers on TikTok, nearly 3 million followers on Instagram and more than 39 million views on YouTube, has been spending her time engaging in a plethora of human behaviors: drinking soda, eating candy and frequenting restaurants, even though she can’t drink or eat; and applying makeup and testing out mattresses, even though she doesn’t have a real face and doesn’t sleep. And it’s hard to tell which of her posts are sponsored and which are not.


(Of note, while Lil Miquela isn’t truly an AI creation and isn’t powered by AI, her success has inspired some venture capitalists to invest heavily in AI-controlled virtual influencers that need no human assistance at all once they’re released into the wild.)

And there are others. Take Imma and AYAYI, for example. Imma, who was created in Japan and has promoted big-name brands that include Fendi, can be seen on Instagram hanging out with friends and even “eating” (and promoting) chocolates. Similarly, AYAYI, who was launched in China, can be seen on Instagram riding horses and promoting a Dyson hairdryer, as well as Tiffany’s jewelry.


So what’s the problem? As TINA.org previously explained to the FTC, some of these digitally created influencers can seem so real that many consumers don’t know when they’re interacting with a bot that may be trying to sell them something. In a 2019 study, researchers found that 42 percent of those surveyed had followed a virtual influencer without knowing it was a bot account.

Four years later, many consumers remain confused. A study published in 2023 found that nearly a third of participants (29 percent) could not tell whether a virtual influencer was human or not, and an even larger percentage (32 percent) mistook a virtual influencer for a human.

(If those percentages don’t sound huge, recall that the FTC takes the position that to be deceptive, an act or practice must be likely to mislead a significant minority of reasonable consumers under the circumstances, a threshold that can be as low as 10.5 percent.)

Not convinced that consumers have trouble identifying virtual influencers? Let’s take a look at one illustrative example.

Remember Lil Miquela’s promotional post for Isamaya Beauty? It doesn’t look a whole lot different from other promotional posts (featuring actual humans) for the subversive, progressive and off-kilter makeup brand known to push boundaries.


In addition to their human-like appearance and behavior, virtual influencers frequently fail to disclose when their posts are sponsored (as some of the examples above show). And once virtual influencers are controlled by AI, this problem is not likely to get better, at least not on its own. A university study conducted earlier this year analyzed more than 1,000 AI-generated ads from across the web and found that AI ads are labeled as such only about half the time. This led researchers to conclude that AI technology “has the potential to influence consumer behavior and decisions without viewers understanding whether the content was an advertisement or if it was developed by humans or bots.”

And for kids, it’s even harder to tell, and even more harmful.

As one expert explained, the mixed reality of virtual influencers presenting themselves as if they were human “can be very difficult for children.” Another mental health expert has explained that such “fake influencers are an example of computers being used to mimic a deep psychological process which allows people to trust others.” So it may not come as a surprise that, according to advertising industry sources, “artificial intelligence w[ill] allow virtual influencers to generate their own fresh Instagram posts using machine learning to analyse data about followers and work out how best to manipulate them.”

But AI manipulation is already happening.

Which brings us to…

AI Bots

Last year, TINA.org filed a complaint with the FTC against Roblox, the multibillion-dollar metaverse gaming corporation, for, among other things, surreptitiously pushing advertising in front of millions of consumers, including more than 25 million children and adolescents. One of the ways Roblox exposes its users to advertising without their knowledge, TINA.org found, is by allowing companies that publish sponsored video games on its platform to covertly use AI-controlled promotional bots within their advergames.

These AI bots, which have been programmed by brands to engage with Roblox users in promotional interactions, function as undisclosed brand avatar influencers. (There are more than 40 million different games on the Roblox platform. Not all of them use AI bots. Below are some examples from TINA.org’s Roblox complaint to the FTC.)


In Roblox’s Nikeland, these agenda-driven artificial influencers, which look just like other avatars, have given away promotional items such as backpacks and caps, while others have acted as barkers trying to attract users to the Nikeland stores.

In addition to the generic staff bots found in Nikeland, there have also been avatar bots for real-life NBA stars Giannis Antetokounmpo and LeBron James. In December 2021, Antetokounmpo encouraged his more than 2 million Twitter followers and more than 12 million Instagram fans to “[c]ome find me” in Nikeland because he was giving away “free gifts.” However, it appears that neither Antetokounmpo nor James ever controlled their avatars in Nikeland – rather, the look-alike avatars interacting with other users were simply AI-controlled agents of Nike.

Undisclosed AI-controlled avatars are present in other Roblox games as well. In the NASCAR showroom in Jailbreak, for example, simulated personas controlled by AI have hung out in the virtual store letting consumers know which areas of the showroom were off limits and that NASCAR was giving away a free car. And in the Hot Wheels Open World, AI-controlled avatars have been seen urging players to upgrade their cars.

As TINA.org explained to the FTC in its complaint, users participating in the Roblox metaverse – a large percentage of whom are children – have the right to know when they are interacting with an AI-controlled brand avatar.

TINA.org has also flagged the issue of AI bots in the context of fake reviews, arguing, in a comment filed with the FTC last month, that reasonable consumers have the right to assume that online reviews and ratings of companies, products and services come from other genuine consumers — not bots, imitators or generative AI.

And with respect to stealth marketing directed at kids, which, as explained above, includes AI-generated marketing, TINA.org filed a separate comment to the FTC last summer addressing, among other things, children’s (lesser) capacity to identify and understand advertising; the efficacy of digital disclosures for young children; the harms that stealth marketing inflicts on children; and possible measures that could be taken to minimize the harms of such advertising on this young population.

What’s Been Done By Regulators So Far

In June 2023, following its review of the comments it received regarding its Endorsement Guides, the FTC issued revised guides, which, among other things, changed the definition of “endorsements” to include virtual influencers.

This means that virtual influencers are now subject to the Endorsement Guides, which require, among other things, that influencers properly disclose their material connections to brands that they promote and be bona fide users of the product(s) they are advertising.

A couple of months later, in September 2023, the FTC issued a Staff Perspective and Recommendations report regarding ways to protect kids from stealth advertising in digital media. The report, which references TINA.org’s comments filed with the agency, discusses children’s use of gaming platforms, many of which allow them to interact with influencers, avatars and emerging forms of AI. According to the FTC, these interactions allow kids to form “parasocial relationships” that “may be supercharged by the use of artificial intelligence” and, as a result, kids are even less equipped to defend themselves against deceptive marketing.

The September report recommends, among other things, that there should be “clear separation between kids’ entertainment/educational content and advertising.” The report also states that it “likely will be impossible to confidently identify many types of blurred advertising without the cooperation of the content creator and the advertiser” and that “[p]latforms could have a role in education as well, given their unique ability to reach large audiences.”

On that front, TikTok, for example, updated its community guidelines in April of this year to inform its users that it “welcome[s] the creativity that new artificial intelligence and other digital technologies may unlock” but cautions that “AI can make it more difficult to distinguish between fact and fiction, carrying both societal and individual risks.” The platform then tells users that “Synthetic or manipulated media that shows realistic scenes must be clearly disclosed. This can be done through the use of a sticker or caption, such as ‘synthetic’, ‘fake’, ‘not real’, or ‘altered’.” The rate at which TikTok enforces this community guideline, however, is unknown.

Most recently, on Oct. 4, the FTC hosted a virtual roundtable on AI and content creation. Much of the roundtable focused on the issue of AI’s use of copyrighted material and the impact on creators. However, at the start of the roundtable, Chair Khan said that there is no AI exemption to the laws that are already in place (“all of the laws that already prohibit unfair methods of competition or collusion or discrimination or deception, all of those laws still entirely apply”) and that the FTC will not tolerate exploitative or deceptive business practices, a sentiment echoed by Commissioners Slaughter and Bedoya.

What Hasn’t Been Done

While small steps have been taken to address some of the issues TINA.org has raised and which are outlined above, no state or federal regulator has yet taken an enforcement action against any entity for deceptively using virtual influencers or AI-generated promotional avatars in marketing. Unfortunately, these issues aren’t standing still – AI technology continues to evolve every day, and millions of children, as well as older consumers, continue to be misled by the manipulative and deceptive use of AI.

In fact, since TINA.org filed its Roblox complaint with the FTC last April, the platform has continued to grow – the number of children under the age of 13 who use the platform on a daily basis is now up to more than 28 million.

In other words, we simply do not have the luxury of time to nibble around the edges of this rapidly expanding, complicated and harmful issue. The time to act is now.

Laura Smith

As Legal Director, Laura is responsible for overseeing TINA.org’s overall legal strategy. She believes that efficient and ethical markets only work if there is complete – and accurate – information…
