2025 Deceptive Ad Trends

A closer look at what we’ll be monitoring in the new year.

Consumers have much to watch out for in terms of deceptive advertising trends in 2025. Here are five that TINA.org will be keeping an eye on this year.

Junk Fees

If 2024 was about complaining about junk fees, 2025 could be the year when companies that use this bait-and-switch pricing tactic finally have to answer for their deceptive marketing. After receiving more than 60,000 comments about a proposed rule banning these types of surprise charges, the FTC announced in December that it had finalized its Junk Fees Rule, which allows the agency to seek civil penalties of up to $51,744 per violation. Unfortunately, the final rule is much narrower in scope than the proposed rule and addresses junk fees only in the live-event ticketing and short-term lodging industries. As TINA.org noted in its February 2024 comment in support of the proposed rule, which would have banned junk fees in all industries, hidden and deceptive fees are pervasive and permeate the entire economy, costing consumers billions of dollars each year. Still, as FTC Commissioner Rebecca Slaughter put it, a rule targeting hidden fees in two industries is “better than having no rule at all.” We’re eager to see it put to work.

Software Tethering

Last February, Oral-B discontinued the app that allowed users of its Guide toothbrush to connect the product’s base to Alexa, turning the smart toothbrush into just a regular toothbrush if users lost Wi-Fi. Just a few months later, in May, Spotify notified purchasers of its Car Thing music player that the device would stop working in December, not even two years after the product launched. Both were examples of software tethering, in which a manufacturer uses software to control how a connected device functions after purchase, if it continues to function at all. As consumers fill their homes with smart devices – according to the latest numbers available, the average home has 21 connected products – this practice of taking away features that were advertised at the time of purchase or completely “bricking” a device through software updates has both consumer groups and the FTC taking a closer look at its harms. In fact, after a coalition of consumer groups called on the FTC to issue “clear guidance” to “help alleviate the worst outcomes of software tethering,” the FTC published a staff report in November indicating that the practice could violate multiple laws. What’s next?

Marketing to Kids

Deceptive marketing tricks even some of the savviest adults. So what chance do kids, who depending on their age may not even be aware of the concept of advertising, have? While parents may have the ultimate say in purchasing decisions, there is a common misconception that mom and dad serve as an effective filter for ads that target minors. Look no further than the “Sephora Kid,” a moniker that emerged in 2024 to describe tweens obsessed with skincare. It’s not all harmless fun. In November, Sephora was called out for allegedly marketing anti-aging skincare products to children. The products, which contain ingredients potentially dangerous for kids, appeared in search results on Sephora’s website for “skincare for kids,” “products for children” and “gift for tween,” among other search terms. Children are among the most vulnerable consumers, which makes them easy targets. Teens, in particular, spend hours a day on social media where, according to one estimate, they are exposed to hundreds of ads on a daily basis.

AI-Generated Reviews

Fake online reviews were already an issue before artificial intelligence. AI, however, has accelerated the spread of fake reviews on the internet. As with a lot of things, AI can do it faster and at far greater scale. Dishonest businesses don’t have to struggle to come up with something that sounds like it came from a real user; they can just have AI do it. Recognizing this problem, the FTC last year finalized a Fake Reviews Rule that prohibits fake reviews, including those that are AI-generated. Before the rule went into effect in October, the FTC sued a company called Rytr for allegedly selling an AI-enabled “writing assistant” that “generates genuine-sounding, detailed reviews quickly and with little user effort.” This resulted in a consent order against the company barring it from advertising or selling its AI-generated reviews tool. As with the Junk Fees Rule, the Fake Reviews Rule allows the FTC to seek civil penalties of up to $51,744 per violation. TINA.org filed multiple comments in support of the Fake Reviews Rule, in addition to a comment regarding the FTC’s proposed consent order against Rytr. Given the amount of trust consumers place in online reviews, it’s crucial that what they’re reading comes from a bona fide user and not a bot.

TRAPs

Last August, a district court issued an order blocking the FTC from enforcing a ban on noncompete clauses in employment contracts that could also be applied to training repayment agreement provisions, or TRAPs. TRAPs require workers to pay employers back for purported training expenses, even in cases where the training was advertised as a perk of the position, if they leave the job before a certain date. While the prevalence of TRAPs has been difficult to nail down, partly because they are often buried in employment agreements, their use has expanded from the financial services sector, where they first appeared in the 1990s, into new industries, including healthcare, transportation and retail. PetSmart and Smoothstack are just two examples of companies that have been sued over their alleged use of TRAPs. The FTC has appealed the district court’s order preventing it from enforcing the ban. We’ll be keeping tabs on the case in the new year because there’s nothing worse than being stuck in a job you can’t quit.

See previous years’ deceptive ad trends posts here.

