‘Free’ is rarely free.
Why the FTC should consider virtual influencers as it reviews its Endorsement Guides.
Virtual influencers (also known as CGI or computer-generated imagery influencers) are like human influencers without all the baggage. Like human influencers, they promote brands and their value is measured in followers and “likes.” But compared to their human counterparts, these “carefully curated” personas are easier to control, cheaper and less regulated — at least for now.
The term “virtual influencer” does not appear in the FTC’s Endorsement Guides, which the agency uses to tackle, among other things, deceptive influencer marketing — ranging from undisclosed ads to the promotion of products the influencer doesn’t actually use. Last year, in a statement to the New York Times, the FTC acknowledged that it “hasn’t yet specifically addressed the use of virtual influencers.” The good news is the guides, which were first enacted in 1980 and updated in 2009, are up for review this year. As part of the review process, TINA.org has submitted a comment to the agency suggesting several updates and additions, including that the FTC confront this latest trend in social media endorsements.
This comes after TINA.org conducted a review of more than two dozen virtual influencer Instagram accounts, identifying a number of potential legal issues posed by these brand-loving bots. But before we get into those, a little more on the recent rise of virtual influencers.
For many, “influencer” has become a dirty word. It conjures up a person who takes orders blindly from multiple brands (sometimes mistakenly copying and pasting those orders into the sponsored post itself) and who may not even use the products they’re promoting (a violation of the FTC’s Endorsement Guides). So perhaps it’s not surprising that a 2019 study titled “Can CGI Influencers Have Real Influence?” found so little separating the perceived authenticity of a computer-generated character from that of an actual living, breathing human being. The study, conducted by the social content company Fullscreen, found that while 41 percent of those surveyed said they had never heard of CGI influencers before, 23 percent said they would describe a CGI influencer as “authentic.” Only 18 percentage points more — 41 percent — said they would describe a human influencer as “authentic.”
Mukta Chowdhary, director of strategy and cultural forecasting at Fullscreen, who led the study, said this indicates “two major shifts.”
“First, younger generations (the survey canvassed Gen Z and millennials) are open to developing relationships with bots and AI, and to some, that bot relationship can fill the need for human connection,” she said in an email. “Second, we have been ‘botting’ ourselves with filters and photo-editing and turning ourselves into avatars and some celebrities look practically like CGIs themselves, so we can understand how a CGI could feel authentic.”
The Fullscreen survey also found that 42 percent of respondents followed a virtual influencer without knowing the account was a virtual influencer, which is understandable when you see how realistic some of the virtual influencers in TINA.org’s sampling look.
Playing the part
And not only do some virtual influencers look human, they act human. A handful in TINA.org’s sampling have the ability not only to move but to dance, talk, eat ice cream, decorate cookies at Christmas (even if they’re not religious) and hang out with humans. Some even have boyfriends, New Year’s resolutions and “Life Hacks.” Others have dreams and support worthy causes like cancer research. And like us, they’ve been trying to make the best of the situation when it comes to being “housebound” due to COVID-19.
Setting aside the legal question of whether these “lifestyle” posts are commercial speech, such posts boost the marketing value of virtual influencers by making them appear more real and popular. And for virtual influencers created primarily to monetize posts by promoting brands, lifestyle posts may be specifically designed to increase the character’s monetary value.
Virtual influencers also emulate human influencers — posing in fashionable clothes next to expensive cars, going to red-carpet events like the Grammys and the British Academy Film Awards and forgetting to include #ad in sponsored posts — to the point where it can be difficult to tell them apart. (See slideshow below comparing virtual influencers’ posts on the left with human influencers’ posts on the right.) It’s no wonder that numerous big-name companies — including Amazon, Puma, Lexus, Samsung, Dior, Toyota, Dr. Pepper, Porsche, Calvin Klein and KFC — have incorporated bots into their marketing campaigns.
Better than the original?
But unlike with human influencers, brands don’t have to worry about virtual influencers missing photo shoots or coming to the sudden realization that they are a Pepsi person instead of a Coke person. (Fun fact: In 2016, the most-liked Instagram post at the time, with 5.5 million “likes,” was an undisclosed Coke ad, which later became a disclosed ad after TINA.org contacted Coca-Cola.) AirAsia can send its homemade virtual influencer Miss Ava wherever it wants — what else does she have going on? Likewise, KFC can share its handsome, computer-generated Colonel Sanders with brands like Casper and Dr. Pepper. Would the human versions of the KFC spokesman, previously played by comedians like Jim Gaffigan, have agreed to be used for such purposes? Based on Gaffigan’s sunny take on “why summer vacations stink,” we’re not so sure.
Virtual influencers — some created by the promoted companies themselves and others created by independent third parties — are also generally perceived as cheaper than their human counterparts. While some virtual influencers are backed by big money — and in the case of Lil Miquela, Silicon Valley money — the cost to use them isn’t likely to come close to the hundreds of thousands of dollars human influencers with large followings can charge per post.
However, the rise of virtual influencers has not come without controversy. Last year, Calvin Klein apologized for “queerbaiting” after airing a 30-second online ad that showed real-life supermodel Bella Hadid, who identifies as heterosexual, kissing “female” virtual influencer Lil Miquela. Lil Miquela was also criticized in 2019 for posting a vlog that described her as the victim of a sexual assault in a rideshare. Meanwhile, Shudu (whose posts claim in easy-to-miss hashtags that she is the world’s first digital supermodel) has been cast as a “white man’s digital projection of real-life black womanhood.”
By all accounts, though, these public controversies have done little to dissuade brands from using virtual influencers. And the biggest benefit for brands may be that virtual influencers are not subject to the same level of scrutiny as human influencers. But that could change — and soon.
The issues the FTC needs to address
First, the FTC’s Endorsement Guides should expand the definition of “endorsement” to specifically include virtual influencers. A failure to do so could be taken as an indication that the guides don’t apply to virtual influencers (and the companies behind them).
Second, the guides should make clear that virtual influencers that promote products or services are subject to the same “material connection” disclosure requirements that humans are. That is, if a material connection exists between the creator and/or owner of the virtual influencer and the promoted brand — which, under current guidelines, can range from the gift of a free product to a lucrative endorsement deal — that must be clearly and conspicuously disclosed.
Third, virtual influencers do not have opinions, beliefs or real-life experiences, and they generally cannot be genuine users of the products and services they promote. Yet the guides require that an endorsement reflect the honest opinion of the influencer and that the influencer be a “bona fide” — that is, genuine — user of the product at the time the endorsement is made. If virtual influencers are permitted to sidestep these principles, the guides should address what type of disclosure(s) is necessary in their place. The issue arises frequently when virtual influencers promote products meant to affect or change a human’s appearance, such as makeup.
Finally, to avoid consumer confusion and the risk of deception, the guides should require virtual influencers engaged in influencer marketing to disclose that they are not human. Specifically, the FTC should require virtual influencers to clearly and conspicuously disclose that they are bots in all promotional posts. And while research is likely needed to determine how best to convey that to consumers (for example, a hashtag such as #bot may not be enough), virtual influencers should not go two years before being “outed,” as was the case with Lil Miquela.
Time to act
It has been more than a decade since the FTC last updated its Endorsement Guides in 2009. At that time, not only did virtual influencers not exist, but the platforms on which they have recently thrived, such as Instagram and Snapchat, had yet to launch.
Now, in a COVID-19 world, the use of virtual influencers who are unaffected by travel restrictions, stay-at-home orders and the basic truths of mortality will likely continue to rise. The multibillion-dollar influencer marketing industry has already proven itself a driver of sales. There’s every sign that the emergence of virtual influencers will only amplify that.
Simply put, the use of virtual influencer marketing demands the FTC’s attention. The time for the agency to act is now.
This article was updated on 6/23/20.