Pet stars of Instagram
Adorable animals that have gone viral on Instagram have also won their owners some lucrative sponsorships. Richard Schlesinger talks with Loni Edwards, whose firm, The Dog Agency, represents all kinds of pets whose social media stardom can bring some big bucks. (This story originally aired on April 15, 2018)




Meta’s platforms showed hundreds of “nudify” deepfake ads, CBS News investigation finds

Meta has removed a number of ads promoting “nudify” apps — AI tools used to create sexually explicit deepfakes using images of real people — after a CBS News investigation found hundreds of such advertisements on its platforms.

“We have strict rules against non-consensual intimate imagery; we removed these ads, deleted the Pages responsible for running them and permanently blocked the URLs associated with these apps,” a Meta spokesperson told CBS News in an emailed statement. 

CBS News uncovered dozens of those ads on Meta’s Instagram platform, in its “Stories” feature, promoting AI tools that, in many cases, advertised the ability to “upload a photo” and “see anyone naked.” Other ads in Instagram’s Stories promoted the ability to upload and manipulate videos of real people. One promotional ad even read “how is this filter even allowed?” as text underneath an example of a nude deepfake.

One ad promoted its AI product by using highly sexualized, underwear-clad deepfake images of actors Scarlett Johansson and Anne Hathaway. Some of the ads’ URLs redirected to websites that promote the ability to animate real people’s images and get them to perform sex acts. And some of the applications charged users between $20 and $80 to access these “exclusive” and “advance” features. In other cases, an ad’s URL redirected users to Apple’s app store, where “nudify” apps were available to download.

Meta platforms such as Instagram have marketed AI tools that let users create sexually explicit images of real people.

An analysis of the advertisements in Meta’s ad library found that there were, at a minimum, hundreds of these ads available across the company’s social media platforms, including on Facebook, Instagram, Threads, the Facebook Messenger application and Meta Audience Network — a platform that allows Meta advertisers to reach users on mobile apps and websites that partner with the company. 

According to Meta’s own Ad Library data, many of these ads were specifically targeted at men between the ages of 18 and 65, and were active in the United States, European Union and United Kingdom. 

A Meta spokesperson told CBS News the spread of this sort of AI-generated content is an ongoing problem and they are facing increasingly sophisticated challenges in trying to combat it.

“The people behind these exploitative apps constantly evolve their tactics to evade detection, so we’re continuously working to strengthen our enforcement,” a Meta spokesperson said. 

CBS News found that ads for “nudify” deepfake tools were still available on the company’s Instagram platform even after Meta had removed those initially flagged.


Deepfakes are manipulated images, audio recordings, or videos of real people that have been altered with artificial intelligence to misrepresent someone as saying or doing something that the person did not actually say or do. 

Last month, President Trump signed into law the bipartisan “Take It Down Act,” which, among other things, requires websites and social media companies to remove deepfake content within 48 hours of notice from a victim. 

Although the law makes it illegal to “knowingly publish” or threaten to publish intimate images without a person’s consent, including AI-created deepfakes, it does not target the tools used to create such AI-generated content. 

Those tools do violate platform safety and moderation rules implemented by both Apple and Meta on their respective platforms.

Meta’s advertising standards policy says, “ads must not contain adult nudity and sexual activity. This includes nudity, depictions of people in explicit or sexually suggestive positions, or activities that are sexually suggestive.”

Under Meta’s “bullying and harassment” policy, the company also prohibits “derogatory sexualized photoshop or drawings” on its platforms. The company says its regulations are intended to block users from sharing or threatening to share nonconsensual intimate imagery.

Apple’s guidelines for its app store explicitly state that “content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy” is banned.

Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell University’s tech research center, has been studying the surge in AI deepfake networks marketing on social platforms for more than a year. He told CBS News in a phone interview on Tuesday that he’d seen thousands more of these ads across Meta platforms, as well as on platforms such as X and Telegram, during that period. 

Although Telegram and X have what he described as a structural “lawlessness” that allows for this sort of content, he believes Meta’s leadership lacks the will to address the issue, despite having content moderators in place. 

“I do think that trust and safety teams at these companies care. I don’t think, frankly, that they care at the very top of the company in Meta’s case,” he said. “They’re clearly under-resourcing the teams that have to fight this stuff, because as sophisticated as these [deepfake] networks are … they don’t have Meta money to throw at it.” 

Mantzarlis also said his research found that “nudify” deepfake generators are available to download on both Apple’s app store and Google’s Play store, and he expressed frustration with these massive platforms’ failure to enforce their own rules against such content.

“The problem with apps is that they have this dual-use front where they present on the app store as a fun way to face swap, but then they are marketing on Meta as their primary purpose being nudification. So when these apps come up for review on the Apple or Google store, they don’t necessarily have the wherewithal to ban them,” he said.

“There needs to be cross-industry cooperation where if the app or the website markets itself as a tool for nudification on any place on the web, then everyone else can be like, ‘All right, I don’t care what you present yourself as on my platform, you’re gone,'” Mantzarlis added. 

CBS News has reached out to both Apple and Google for comment as to how they moderate their respective platforms. Neither company had responded by the time of writing. 

Major tech companies’ promotion of such apps raises serious questions about both user consent and about online safety for minors. A CBS News analysis of one “nudify” website promoted on Instagram showed that the site did not prompt any form of age verification prior to a user uploading a photo to generate a deepfake image. 

Such issues are widespread. In December, CBS News’ 60 Minutes reported on the lack of age verification on one of the most popular sites using artificial intelligence to generate fake nude photos of real people. 

Despite visitors being told that they must be 18 or older to use the site, and that “processing of minors is impossible,” 60 Minutes was able to immediately gain access to uploading photos once the user clicked “accept” on the age warning prompt, with no other age verification necessary.

Data also shows that a high percentage of underage teenagers have interacted with deepfake content. A March 2025 study conducted by the children’s protection nonprofit Thorn showed that among teens, 41% said they had heard of the term “deepfake nudes,” while 10% reported personally knowing someone who had had deepfake nude imagery created of them.


Student visa applicants advised to tread lightly as U.S. expands social media vetting

Counselors who work with foreign students eager to attend college in the U.S. are advising them to purge their social media accounts of posts that could attract the attention of U.S. State Department officials.

“Any new student who comes on board — especially an international student who doesn’t have a U.S. passport — we would be going through their social media with them and talk to them about what they are saying on Snapchat, in group chats,” said Kat Cohen, founder and CEO of IvyWise, an educational consultancy firm for college admissions. “Because, if the information comes off as being radical or anti-American in some way, it is not going to help them.”

The focus on international students’ online profiles follows a new push by the Trump administration to scrutinize social media accounts as part of the evaluation process for student visa applications. In a cable dated May 27 and obtained by CBS News, the State Department said it was preparing to expand social media screening and vetting. The agency did not specify exactly what type of content it would be looking for.

“President Trump will always put the safety of Americans first, and it is a privilege, not a right, to study in the United States,” White House spokesperson Anna Kelly said in a statement. “Enhanced social media vetting is a commonsense measure that will help ensure that guests in our country are not planning to harm Americans, which is a national security priority.” 

The new vetting measures build upon an April statement from United States Citizenship and Immigration Services announcing that the agency will be taking into account “antisemitic activity on social media” as “grounds for denying immigration benefit requests.” 

No politics

Advisers who cater to international students applying to U.S. schools told CBS Moneywatch they are reluctant to advise them to delete their social media accounts outright. But they are urging students to eliminate political-themed posts, especially if they relate to controversial topics such as the wars in Gaza and Ukraine. IvyWise also discourages foreign students from reposting any information they haven’t verified themselves, given that it might be inaccurate. 

“We don’t think students should delete their social media accounts completely,” Cohen said. “But we do need to make sure we go through their social media accounts with them to make sure that they are presenting themselves in the best possible light.”

Mandee Heller Adler, founder of International College Counselors, also recommends that students weed out potentially controversial posts, including any opinions or content related to politics. 

“I’m not saying that they have to get rid of the whole thing altogether, but certainly delete any political posts,” Adler told CBS MoneyWatch. “This is kind of an easy way for kids to protect themselves.”

Sasha Chada, who has led Texas-based college admissions counseling group Ivy Scholars for over a decade, said that asking students to delete their social media would be a “tall order” given how deeply ingrained the platforms are in their lives. Over half of U.S. adults between the ages of 18 and 34 report using TikTok, according to Pew Research.

Chilling effect?

Some critics think the State Department’s scrutiny of international students’ social accounts will inhibit their freedom of expression. 

“While social media vetting of visa applicants isn’t new, should the administration’s ‘expanded vetting’ consider political viewpoints, it will certainly scare some would-be applicants into silencing themselves on any topic they feel might contradict the views of President Trump, or his successors,” said Robert Shibley, special counsel at the Foundation for Individual Rights and Expression, which promotes free speech on college campuses.

The State Department did not respond to CBS MoneyWatch’s request for comment. “We take very seriously the process of vetting who it is that comes into the country, and we’re going to continue to do that,” State Department spokesperson Tammy Bruce told reporters this week, when asked about student visas. 

Mahsa Khanbabai, an immigration attorney based in Massachusetts whose firm assists with student visas, said she has spoken to dozens of foreign students — both overseas and in the U.S. — some of whom have decided to delete their social media accounts or change them from public to private for protection. 

Students, she said, are not just concerned about posts on political flashpoints like Gaza, but also their personal views on topics like climate change and reproductive rights advocacy. Recent consultations Khanbabai has had with foreign students have been focused, she said, on helping them determine how strongly they feel about publicizing their views, and giving them a sense of the potential trade-offs when deciding to post or not to post. 

“I meet with students to ask them, ‘Are you willing to pause engagement on social media to achieve longer-term goals like your career and education, knowing that in the short term you’re ultimately kind of maybe sacrificing some of your ethical or moral values?'” she said.


Your Brain Wasn’t Built for This. Algorithms Know It.

As algorithms guide our every move, the line between convenience and control gets blurrier. Unsplash+

In March 2021, a driver in Charlton, Massachusetts, plunged his car into Buffumville Lake while following GPS directions. Rescue teams were called to recover the completely submerged vehicle from 8 feet of water. The driver thankfully escaped with just a few minor injuries. When asked why he drove into a lake despite being able to see the water ahead, his answer was simple: the GPS told him to go that way. We’ve all heard these stories, and let’s face it, they sound ridiculous. But here’s the thing: We are all somewhere on this spectrum of conveniently handing over decisions to our friendly bots.

The Silent Surrender of Decision-Making

As a society that prizes autonomy and independence, it’s surprising that we’ve gradually outsourced more of our decision-making to algorithms, often without even realizing it. What began with navigation has now expanded into nearly every aspect of our lives. We defer to recommendation engines for what to watch, read, eat and believe. We consult A.I. for career advice rather than developing our own criteria for meaningful work. We ask chatbots about relationship compatibility instead of honing our emotional intelligence. It’s hard to see where the machines end and our minds begin. Digital tech is now a literal extension of our minds, and we urgently need to treat it as such.

The convenience is undeniable. Why struggle with choices when an algorithm can analyze thousands of variables in milliseconds? Why develop your own expertise when you have access to a myriad of geniuses in your pocket? But this convenience comes with a subtle cost: our agency as human beings. We think of technology as supercharging what we want to do anyway, but there’s a thin line between facilitating what we want and manipulating it. The way this happens is often so subtle that we barely notice it happening, explains Karen Yeung, a scholar researching what she calls “hypernudging,” or how A.I. shapes our preferences.

Music streaming services don’t merely serve what you like; they gradually shift your musical taste toward more commercially viable artists by controlling your exposure. News aggregators don’t just deliver information; they subtly emphasize certain perspectives, slowly molding your political opinions. Media theorist Marshall McLuhan recognized this dynamic decades ago when he shared the observation that first we shape our tools and then our tools shape us. Today’s algorithms don’t just respond to our choices; they actively and intimately shape them.

Living in Narrowing Information Landscapes

Any skilled delegator will tell you that one of the most satisfying things about outsourcing decisions is that it frees up the mind. And it’s true. If Sunday mornings are always pancakes, you don’t have to think (or negotiate!) with anyone about what’s for breakfast. The problem is, when we delegate our primary information feeds—news, search and social media—it starts narrowing down our core understanding of reality. To some extent, this is essential for our sanity, as there is simply too much information to process. But what happens to our ideas, motivations and actions when what we perceive as the world—our reality—is increasingly limited?

Multiple algorithmic effects are at play simultaneously. Despite the illusion of infinite choice, our information landscape narrows through personal filtering and cultural homogenization, leaving us with increasingly limited perspectives. In Filterworld, Kyle Chayka explains how algorithms have flattened culture by rewarding certain engagement patterns. Content creators worldwide chase similar algorithmic rewards, producing remarkably similar outputs to maximize visibility. TikTok-optimized homes, Instagram-friendly cafés and Spotify-formatted songs are all designed to perform well within algorithmic systems.

“This one’s for Algorithm Daddy!” explains actress and activist Jameela Jamil, as she posts a selfie in a revealing dress—a gamified move she feels she has to make whenever she notices the algorithms suppressing her more substantive social justice content. Cultural diversity suffers similarly because content not in English is less likely to be included in A.I. training data.

Hundreds of interviewees have described this paradoxical feeling: overwhelmed by choice, yet a bit suffocated by algorithmic recommendations. “There are endless options on Netflix,” one executive said, “but I can’t find anything good to watch.” How can we make truly informed choices when our information diet is so tightly curated and narrowed?

Our Gradual Brain Atrophy

A famous study of London taxi drivers showed that their hippocampi—the brain regions responsible for spatial navigation—grew larger as they memorized the city’s labyrinthine streets. Thanks to neuroplasticity, our brains constantly change based on how we use them. And it works both ways: when we stop navigating using our senses, we lose the capacity to do so. For example, when we rely on A.I. for research, we don’t develop the core skills to connect ideas. When we accept A.I. summaries without checking sources, we delegate credibility evaluation and weaken our critical thinking. When we let algorithms curate our music, we atrophy our ability to develop personal taste. When we follow automated fitness recommendations rather than listening to our bodies, we diminish our intuitive understanding of our physical needs. When we let predictive text complete our thoughts, we start to forget how to express ourselves precisely.

In The Shallows, Nicholas Carr explores how our brains physically change in response to internet use, developing neural pathways that excel at rapid skimming but atrophy our capacity for sustained attention and deep reading. The philosopher-mechanic Matthew Crawford offers a compelling antidote in Shop Class as Soulcraft, arguing that working with physical objects—fixing motorcycles or building furniture—provides a form of mental engagement increasingly rare and precious in our digital economy. These are important and tangible trade-offs that fundamentally change us, and while they may seem inevitable in our digital worlds, recognizing how we’re shaped by every tool we use is the first step toward becoming more aware and intentional about technology.

Reclaiming Our Algorithmic Agency

The good news is that there are ways to regain control and maintain human agency in our digital lives. First, recognize that defaults are deliberate choices made by companies, not neutral starting points. Research consistently shows that people rarely change default settings. Did you know you can view Instagram posts chronologically rather than by algorithm-determined “relevance”? How many people utilize ChatGPT’s customization features? These options often exist for power users but remain largely unused by most. It’s not just about digging into the settings; it’s a mindset. Each time we accept a default setting, we surrender a choice. With repetition, this creates a form of learned helplessness—we begin to believe we have no control over our technological experiences.

Second, consider periodic “algorithm resets.” Log out, clear your data or use private browsing modes. While it’s convenient to stay logged in, this convenience comes at the cost of increasingly narrow personalization. When shopping, consider the privacy implications of centralizing all purchases through a single platform that builds comprehensive profiles of your behavior. Amazon Fresh, anyone? Third, support regulatory frameworks that protect cognitive liberty. As A.I. becomes able to read and manipulate thoughts, Professor Nita Farahany is among those making the case for a new human rights framework around the commodification of brain data. Without one, Farahany believes, “our freedom of thought, access and control over our own brains, and our mental privacy will be threatened.”

The algorithmic revolution promises unprecedented benefits. But many of these threaten to come at the cost of our agency and cognitive independence. By making conscious and intentional choices about when to follow algorithmic guidance and when to do our own thing, we can stay connected to what we value most in each situation. Perhaps the most important skill we can develop these days is knowing when to trust the machine and when to trust our own eyes, instincts, and judgment. So the next time a bot tries to steer you into a metaphorical lake, please remember you’re still the driver. And you’ve got options.

Menka Sanghvi is a mindfulness and digital habits expert based in London. She is a globally acclaimed author and speaker on attention, tech and society. Her latest book is Your Best Digital Life – Use Your Mind to Tame Your Tech (Macmillan, 2025). Her Substack newsletter, Trying Not To Be A Bot, explores our evolving relationship with A.I.




Mark Zuckerberg called to testify in Facebook parent Meta’s antitrust trial

Meta Platforms CEO Mark Zuckerberg took the stand Monday in a Washington, D.C., courtroom to defend his social media company from federal allegations that the technology giant is a monopoly. 

Meta, the parent company of Facebook, Instagram and WhatsApp, is facing off with the Federal Trade Commission on Monday for day one of a landmark antitrust trial that could result in the company’s breakup. For Zuckerberg, the case could determine whether the business empire he started building while still a student at Harvard University will be forced to break apart.

The trial will be the first big test of the FTC’s willingness under President Trump to challenge Big Tech, a long-time target of Republicans. The lawsuit was initially filed against Meta — then called Facebook — in 2020, during Mr. Trump’s first term, before being amended in 2021.

In its complaint, the FTC accuses Meta of “anticompetitive conduct,” alleging that the company’s ownership of Instagram and WhatsApp gives it excessive control of the social media market.

A courtroom sketch shows Meta CEO Mark Zuckerberg on the stand on the first day of the technology company’s federal antitrust trial on April 14, 2025, in Washington, D.C.

Dana Verkouteren


“There’s nothing wrong with Meta innovating,” said Daniel Matheson, lead attorney for the FTC, in his opening statement for the agency Monday. “It’s what happened next that is a problem.”

During his testimony Monday, Zuckerberg defended his decision to buy Instagram and pushed back against FTC claims that he did not invest in developing the app.

Purchased by Facebook in 2012 and 2014, respectively, Instagram and WhatsApp have grown into social media powerhouses.

To restore competition, Meta must part ways with Instagram and WhatsApp, the government agency says in court filings. The FTC also wants Meta to provide the government with prior notice for any future mergers and acquisitions. 

With the landmark trial underway, here’s what you need to know.

How long will the Meta trial last, and who will testify?

The trial, which begins Monday in federal court in Washington, D.C., is expected to last several weeks. 

U.S. District Judge James Boasberg will preside over the case, which could see a range of witnesses including Meta CEO and founder Zuckerberg; former Meta Chief Operating Officer Sheryl Sandberg; former Meta Chief Technology Officer Mike Schroepfer; Instagram co-founder Kevin Systrom; and executives from rival social media platforms.

What is happening in court today?

The FTC and Meta made their opening arguments on Monday, with witness testimony expected to start in the afternoon.

In his opening statement, lead FTC attorney Matheson said Meta was struggling to compete with the fast-growing WhatsApp and Instagram platforms, and that in buying them, the tech giant was “eliminating immediate threats” to its market.

Meta said the company “did nothing wrong” by acquiring Instagram and WhatsApp. Meta attorney Mark Hansen said in his opening statement that the two apps have grown substantially under the tech company’s ownership and that there is no evidence to prove Meta is a monopoly. 

If Meta had monopoly power, it would exercise control over pricing in the social media space, Hansen said. But Meta’s services are free, he noted. “How can the FTC maintain this monopolization case when [Meta] has never charged users a cent?” Hansen said.

After the opening statements, Zuckerberg was the first person called to the stand to testify. His testimony largely focused on Facebook’s efforts to build a rival photo app that could compete with Instagram, before it ultimately decided to buy the company in 2012.

Zuckerberg admitted Facebook was struggling with mobile users in the early 2010s. “Our whole company had been built up to that point” for desktop, he said.

Referencing email communications sent by the Meta executive, the FTC pointed out that Zuckerberg had been eyeing the progress of competitor Instagram. Facebook at the time was working on launching a new photo app.

According to the FTC, Zuckerberg sent an email in February 2011 saying “Instagram seems like it’s growing quickly,” and mentioning the app’s user numbers and uploads.

In 2012, Zuckerberg sent another email in which he described his rationale for buying Instagram, saying it was a good photo-sharing network, according to the FTC. At that point Facebook was “so far behind that we don’t even understand how far behind we are,” he said, adding, “I worry that it will take us too long to catch up.”

Zuckerberg is scheduled to return to court Tuesday morning to continue testimony.

What’s at stake?

The showdown is the most significant legal challenge brought against Meta in the company’s roughly 20-year history. If the FTC is successful, Meta could be forced to divest Instagram and WhatsApp. Instagram, which Meta has owned for over a decade, accounts for half of the company’s overall advertising revenue.

“Instagram has also been picking up the slack for Facebook on the user front, particularly among young people, for a long time,” Emarketer analyst Jasmine Enberg told the Associated Press. 

“The trial also comes as Meta is trying to bring back OG Facebook in an effort to appeal to Gen Z and younger users as they join social media. Social media usage is far more fragmented today than it was in 2012 when Facebook acquired Instagram, and Facebook isn’t where the cool college kids hang out anymore. Meta needs Instagram to continue growing, especially as more advertisers think Instagram-first with their Meta budgets,” she added.

Meta, headquartered in Menlo Park, California, earned over $164 billion in revenue in 2024. Facebook and Instagram are the two most profitable social media platforms in the world.

In a statement issued Sunday, April 13, Meta said the “stakes could not be higher in this trial for U.S. consumers and businesses.”

What is Meta saying?

The social media company has called the FTC’s case “weak” and said it “ignores reality,” adding that it faces stiff competition from TikTok and YouTube. Both platforms outrank Facebook and Instagram in terms of how long users spend on each.

“Ultimately, an ill-conceived lawsuit like this will make companies think twice before investing in innovation, knowing they may be punished if that innovation leads to success,” Meta’s statement reads. “On top of it, this weak case is costing taxpayers millions of dollars.”

“The FTC’s lawsuit against Meta defies reality,” a Meta spokesperson told CBS MoneyWatch. “The evidence at trial will show what every 17-year-old in the world knows: Instagram, Facebook and WhatsApp compete with Chinese-owned TikTok, YouTube, X, iMessage and many others.”

“Regulators should be supporting American innovation, rather than seeking to break up a great American company and further advantaging China on critical issues like AI,” the spokesperson added.

The FTC did not reply to a request for comment.

When did this case get started?

The history of the Meta case stretches back several years. The FTC initially filed the suit in 2020 during President Trump’s first term in office. 

In June 2021, U.S. District Judge James Boasberg dismissed the antitrust lawsuit brought by the FTC, ruling it was “legally insufficient” and did not supply enough evidence to prove Facebook was a monopoly.

But the federal judge later cleared the path for the case to proceed after the FTC introduced more evidence in an amended complaint, according to The Washington Post.



EU hits Apple and Meta with hundreds of millions of dollars in new fines, enforcing digital competition rules

London — European Union watchdogs fined Apple and Meta hundreds of millions of euros Wednesday as they stepped up enforcement of the 27-nation bloc’s digital competition rules. The European Commission imposed a 500 million euro ($571 million) fine on Apple for preventing app makers from pointing users to cheaper options outside its App Store. The commission, which is the EU’s executive arm, also fined Meta Platforms 200 million euros ($228 million) because it forced Facebook and Instagram users to choose between seeing ads or paying to avoid them.

The punishments were smaller than the blockbuster multibillion-euro fines that the commission has previously slapped on Big Tech companies in antitrust cases.

Apple and Meta have to comply with the decisions within 60 days or risk unspecified “periodic penalty payments,” the commission said.

The decisions were expected to come in March, but officials apparently held off amid an escalating trans-Atlantic trade war with President Trump, who has repeatedly complained about regulations from Brussels affecting American companies.

The penalties were issued under the EU’s Digital Markets Act, also known as the DMA. It’s a sweeping rulebook that amounts to a set of do’s and don’ts designed to give consumers and businesses more choice and prevent Big Tech “gatekeepers” from cornering digital markets.

The DMA seeks to ensure “that citizens have full control over when and how their data is used online, and businesses can freely communicate with their own customers,” Henna Virkkunen, the commission’s executive vice-president for tech sovereignty, said in a statement.

EU Commission Vice-President for Technology Henna Virkkunen gives a speech during a press conference on secure and sustainable E-commerce communication, in Brussels, Belgium, Feb. 5, 2025.

Dursun Aydemir/Anadolu/Getty


“The decisions adopted today find that both Apple and Meta have taken away this free choice from their users and are required to change their behavior,” Virkkunen said.

Both companies indicated they would appeal.

“The European Commission is attempting to handicap successful American businesses while allowing Chinese and European companies to operate under different standards,” Meta Chief Global Affairs Officer Joel Kaplan said in a statement provided by the U.S. tech giant. “This isn’t just about a fine; the Commission forcing us to change our business model effectively imposes a multi-billion-dollar tariff on Meta while requiring us to offer an inferior service. And by unfairly restricting personalized advertising the European Commission is also hurting European businesses and economies.” 

Apple accused the commission of “unfairly targeting” the iPhone maker, and said it “continues to move the goal posts” despite the company’s efforts to comply with the rules.

In the App Store case, the Commission had accused the iPhone maker of imposing unfair rules preventing app developers from freely steering consumers to other channels.

Among the DMA’s provisions are requirements to let developers inform customers of cheaper purchasing options and direct them to those offers.

The commission said it ordered Apple to remove technical and commercial restrictions that prevent developers from steering users to other channels, and to end “non-compliant” conduct.

Apple said it has “spent hundreds of thousands of engineering hours and made dozens of changes to comply with this law, none of which our users have asked for.”

“Despite countless meetings, the Commission continues to move the goal posts every step of the way,” the company said.

Apple has also faced a broad antitrust lawsuit in the U.S., where the Justice Department alleged that the California company illegally engaged in anti-competitive behavior in an effort to build a “moat around its smartphone monopoly” and maximize its profits at the expense of consumers. Fifteen states and the District of Columbia have joined the suit as plaintiffs.

The EU’s Meta investigation centered on the company’s strategy to comply with strict European data privacy rules by giving users the option of paying for ad-free versions of Facebook and Instagram.

Users could pay at least 10 euros ($11) a month to avoid being targeted by ads based on their personal data. The U.S. tech giant rolled out the option after the European Union’s top court ruled Meta must first get consent before showing ads to users, in a decision that threatened its business model of tailoring ads based on individual users’ online interests and digital activity.

Regulators took issue with Meta’s model, saying it doesn’t allow users to exercise their right to “freely consent” to allowing their personal data from its various services, which also include Facebook Marketplace, WhatsApp and Messenger, to be combined for personalized ads.

Meta rolled out a third option in November giving Facebook and Instagram users in Europe the option to see fewer personalized ads if they don’t want to pay for an ad-free subscription. The commission said it’s “currently assessing” this option and continues to hold talks with Meta, and has asked the company to provide evidence of the new option’s impact.

The European Commission has also slapped Google with antitrust penalties several times, including a record $5 billion fine levied in 2018 over the search engine’s abuse of the market dominance of its Android mobile phone operating system.
