As chief privacy officer of Meta, Erin Egan has a challenging role. Leading the privacy and global policy operations for the biggest social networking platform in the world has always been a complex job, with the company’s platforms frequently coming under the scrutiny of regulators globally over their handling and use of personal data.
But with generative artificial intelligence becoming mainstream in the past few years, it feels like the stakes have ratcheted up a notch.
“We are in a moment of a technological revolution akin to the revolution we had with the internet in the 1990s,” Egan says.
There is opportunity here for companies, that much is clear. Meta has big ambitions when it comes to AI. The Facebook owner is aiming to integrate the technology into hundreds of millions of businesses, bringing the benefits of AI to companies that might otherwise have missed out.
And it is spending big to do it, investing billions in the technology in 2025 alone as it builds data centres and expands its capacity to handle what it expects will be a surge in activity.
If chief executive Mark Zuckerberg’s vision is realised, AI could be used in the future for everything from creating advertising to preventing loneliness in society.
The company already has close to a billion monthly active users for its AI technology, and it is continuing to develop open-source models – dubbed Llama – although the latest version, Behemoth, has reportedly hit a snag in its roll-out that could delay it by a few months.
But there is also apprehension from consumers about what this new technology means for privacy, and how their data will be used to train the technology. For EU citizens, the introduction of new legislation is intended to help allay those fears, and provide a safeguard for how services can operate in the region.
Those safeguards can be at odds with what businesses see as important.
Meta originally launched its AI in the US market in 2023. It had planned to expand to the EU last year, but opposition from privacy regulators put that on hold, and it is only in recent weeks that users in Ireland have seen Meta AI pop up as a chatbot in WhatsApp and Messenger.
It is an interesting time to head privacy at the social media giant. Egan has been with the company since 2011, joining from a successful law practice to lead then-Facebook’s new privacy policy team. She has since seen the company through several high-profile battles with regulators worldwide. She is preparing for more.
Relations between US tech companies and the EU are frayed at present, with the current administration viewing action taken by the EU as discriminatory against US companies.
But keeping the lines of communication open is important on both sides. Egan was in Europe recently to attend the Venice Privacy Symposium in Italy, a visit she subsequently described as “productive”.
Yet it seems there is a long way to go before Big Tech and the EU are on the same page.
Last year, when Meta announced it would start using the public posts, comments and other data of EU users to train its AI models, there was immediate pushback. The company offered an opt-out form for users who wanted to object, but regulators still asked the company to hold off on its plans while it looked into the legality of the situation.
Meta hit pause for EU users, while moving ahead with introducing the AI technology in the UK.
Egan has been vocal on her opinion of the situation. If not resolved, the EU risks becoming the “museum of the world”, she says.
“It is part of an attempt by a vocal minority of activist groups to delay AI innovation in the EU and that ultimately harms consumers and businesses who can benefit from these technologies,” she says. “Their arguments were wrong on the facts and they’re wrong on the law.”
Meta feels singled out by regulators, pointing out that the type of training the company is doing is already being carried out by other companies.
It is a familiar complaint for the company. It has felt the sting of Europe’s regulations governing everything from data transfers to the use of consumer data on several occasions.
Since GDPR was introduced, Meta has been hit with a number of fines for various infringements of the rules. That includes the largest fine to date levied under the regulations, €1.2 billion, which was imposed in May 2023. On the list of the top five GDPR fines, Meta accounts for three.

Now it is facing even more rules, with the Digital Markets Act coming into play.
Meta insists it supports regulation and that all it wants is for things to be less complex.
“We need to look at all fundamental rights. Obviously we have fundamental rights to privacy but we also have fundamental rights of the freedom to do business,” Egan says. “So it’s important, as we think about regulation in this space and how to interpret it and apply it, that we’re applying it through a lens that recognises, yes, there’s risk, but we need to balance and we need to be focused on where there’s high risk and high harms and not do things in a way that they’re going to affect the technological revolution that’s in front of us.”
Egan points to the sheer number of regulations – more than 100 applying to the digital sector – noting it is much more complex in the EU than in any other region or country she works in today.
“There are over 70 new pieces of digital legislation that have been adopted in the last six years,” she says. “This is, for us, the most difficult place for us to operate. Therefore, it must be insurmountable for small businesses who are trying to innovate.”
It’s not just about the number of regulations, though. Meta has complained about the goalposts shifting. A case in point is the “pay or consent” model that Meta introduced in 2023. That model required Facebook and Instagram users to consent to the processing of their personal data for advertising purposes or pay a fee to not be shown personalised ads.
European consumer organisation BEUC described the changes to the policy as “cosmetic”.
“In our view, the tech giant fails to address the fundamental issue that Facebook and Instagram users are not being presented with a fair choice and is making a weak bid to argue it is complying with EU law while still pushing users towards its behavioural ads system,” said BEUC director Agustín Reyna.
BEUC said Meta breached EU law on several counts, accusing it of using misleading practices and unclear terms to steer users towards Meta’s preferred option, and degrading the service for users who do not consent to the use of their personal data.
The European Commission recently fined Meta €200 million for the practice under the Digital Markets Act, a move that has frustrated the company.
“We support protecting consumers through legal means. We support legal frameworks. But in the EU it’s such a challenge,” says Egan. “The regulatory landscape has evolved, the puck keeps moving. We have the GDPR, it’s been interpreted in an evolving way; we now have the DMA that’s coming into force.
“We introduced a subscription model which the highest court of the land in Europe said was a legal model. It’s pay versus consent, it’s a model relied on by other players in Europe. And now we’re being told we have to do something else,” she says. “This makes it difficult to launch innovative products in Europe and it hurts people and consumers.”
Egan also argues that there could be a negative effect on businesses. How exactly?
Restricting personalised advertising, she argues, limits the ability of small businesses to reach targeted markets, and the ability of people to receive information about products that interest them, which will hinder economic growth.
Meta’s advertising is a billion-dollar juggernaut, with recent research – carried out by Meta – linking it to more than €200 billion in economic activity and roughly 1.5 million jobs across the EU.
The complexity of regulations in the EU has seen the region face a lot of criticism, and not just from Meta. In September last year, economist and politician Mario Draghi published a report on competitiveness in the EU and warned of the widening gap between the EU and the US. In his address to the European Commission, Draghi said hurdles needed to be removed, noting that Europe did not lack ideas, but that it was failing to translate innovation into commercialisation, with growing companies hindered at every stage by “inconsistent and restrictive regulations”.
“The next step is encouraging innovative start-ups to scale up in Europe by removing regulatory hurdles,” he said. “This is not about deregulation: it is about ensuring the right balance between caution and innovation, and ensuring that regulation is consistently applied within Europe.”
That echoes what Meta has said about consistency in applying the rules.
There has been some speculation that Meta could throw in the towel in Europe, and decide the landscape is too complex to continue its operation here. It isn’t too far-fetched: Meta warned in 2022 that if no new framework was adopted for data transfers between Europe and the US, it would probably have to stop operating in the region. That ultimately did not come to pass. But concerns persist.
“In the EU, I am concerned about the direction of travel,” Egan says. “We want to continue to invest significant resources in Europe and we will always invest resources to comply with regulation. But my message is that we want to urge the EU data protection commissioners and authorities to avoid tying the European economy in knots with complexity and delay.”
Egan says Europe has many advantages: universities, talent in research and development, the single market model.
“There’s so much possibility and opportunity in operating in Europe,” she says. “It doesn’t have to be this way.”