This online safety update summarises the significant developments in Ofcom’s implementation of the Online Safety Act 2023 and the regulator’s enforcement strategy over the past year, and their potential impact on tech companies.
While Ofcom has been implementing the Act in stages, we can expect the regulator to be increasingly ambitious over the coming months, taking greater enforcement steps against social media and search services, which will remain in the spotlight alongside adult/pornographic services.
1. New duties for platforms and risks to senior managers
To implement the Online Safety Act (“OSA”), Ofcom prepared a multi-stage plan which came into effect throughout 2025. Platforms are now subject to new obligations around age verification, the prevention of illegal harms and the mitigation of risks to children. To meet these duties, online service providers must carry out risk assessments for each of these areas and, depending on the nature of their service, may be required to submit the assessments to Ofcom for review.
Non-compliance carries significant financial risk. The primary sanction is a fine of up to £18 million or 10% of the service provider’s qualifying worldwide revenue, whichever is greater (so, for illustration, a provider with £1 billion of qualifying worldwide revenue would face a maximum fine of £100 million). Ofcom also has the power to seek a court order imposing business disruption measures, creating further operational risks for platforms that fail to meet their obligations.
To enforce these new obligations, Ofcom has been granted new information gathering powers, including the power to issue information notices, which require platforms to supply the information necessary for Ofcom to evaluate compliance with its online safety duties. In December 2025, Ofcom imposed a fine of £20,000 on a file-sharing service that failed to comply with such a notice. Ofcom had issued the service with binding information requests given that platforms of this kind can be used for the widespread distribution of child sexual abuse material (“CSAM”).
Directors and senior managers must also be aware of their own potential exposure under the OSA. The Act provides that certain offences, such as threatening or false communications, can lead to personal criminal liability where they are committed by the body corporate and it is proved that the offence was committed with the consent or connivance of a corporate officer (i.e. a director, manager, associate, secretary or other similar officer), or is attributable to any neglect on the part of a corporate officer. In addition, senior managers could be held criminally liable if they fail to comply with an information notice or fail to take all reasonable steps to prevent such a failure from occurring.
For further details on corporate criminal liability under the OSA please see our article here.
2. Ofcom’s enforcement drive
With its new guidance and the associated duties for corporates coming into force, Ofcom has adopted a proactive and assertive approach to enforcement. At the beginning of 2025 it launched its enforcement programme, which is designed to review industry compliance with the OSA.
Throughout 2025, Ofcom launched numerous investigations into platforms for suspected non-compliance with age assurance requirements and failures to protect users against illegal harms and harms to children. In October 2025, in its most recent update on OSA investigations, Ofcom stated that it had opened 21 investigations since March 2025, although it has since announced several further investigations into alleged breaches of the OSA.
This follows a trend of Ofcom taking further regulatory action against tech companies, specifically those at higher risk of facilitating online harms. By way of example, in November 2025, the regulator opened investigations into 20 additional pornography sites and exercised its regulatory powers to fine the AI deepfake “nudification” site, Undress.cc, £50,000 for failing to implement age checks. Undress, which leverages generative AI to create nude images of people, is operated by Itai Tech Ltd. Reports indicate that Ofcom has imposed additional penalties on Itai Tech Ltd for failing to comply with a statutory information request.
While current enforcement activity is focused on age verification measures, these robust steps illustrate that Ofcom is only beginning to exercise the full breadth of its OSA powers. There is significant political will behind the regulation of the UK tech sector, with the former Home Secretary Yvette Cooper stating that “AI is putting online child abuse on steroids” and senior leadership at Ofcom emphasising that age verification implementation to protect children from harmful content is “non-negotiable”.
In December 2025, Ofcom published its new guidance for tech companies aimed at tackling online harms against women and girls. The guidance focuses on, among other things, online misogynistic abuse and harassment, stalking, domestic abuse (including coercive control), and intimate image abuse (also referred to as image-based sexual abuse). Its stated aim is to ensure “a safer online experience for millions of women and girls in the UK”. Dating app and social media services are therefore likely to face heightened scrutiny and enforcement measures. Ofcom is expected to provide further guidance to tech companies on how to proactively protect their users from unsolicited sexual images and media.
Importantly, the guidance sets out examples of “good practice steps” that companies can adopt, and Ofcom has already written to a number of sites and app providers, making it clear that they are expected to “start to take immediate action in line with the guidance”.
For a deeper analysis of the new guidance, please see our previous article here.
3. The challenges and concerns raised about the OSA regime
The implementation of the new OSA obligations has attracted criticism and challenge, with many questioning the adequacy of the Act and its impact on free speech.
One of the most prominent challenges came from Wikipedia, which sought a judicial review of Ofcom’s categorisation framework. The framework would classify Wikipedia as a Category 1 service, subjecting it to enhanced duties, including requirements to collect additional data about its contributors. Although Wikipedia’s application was ultimately dismissed, the case highlights the continuing debate around the scope and proportionality of online safety regulation.
Data privacy concerns have also been widely raised, prompting a rise in the use of virtual private networks (VPNs) to circumvent the OSA regulations. When the new rules came into force, VPN providers saw an increase in downloads, with several claiming the top spot in app store download charts. Proton VPN, for example, reported a 1,400% increase in sign-ups, with other providers seeing similar increases. VPNs enable users to bypass the regulations, most notably the age verification/assurance measures that platforms are required to implement in compliance with the Act.
The ease with which users can access such tools to circumvent the legislation raises questions about its overall effectiveness and the practical challenges Ofcom and regulated platforms face in ensuring compliance.
4. From “incels” to “rage baiting”: tackling emerging threats in male-dominated online spaces
With growing attention on the risks to children, as well as to women and girls, we also anticipate steps being taken by Ofcom to address the specific risks to, and from, young boys.
The huge popularity of the Netflix drama Adolescence has played a significant role in bringing the dangers and real-world consequences of the “manosphere” and its online influence into mainstream discussion. These online spaces, often frequented by young boys, promote misogynistic behaviour and harmful ideologies. As illustrated through the character of Jamie Miller, there is a heightened awareness of how such groups can influence boys’ views of the world. Terms such as “incels”, “red pill” and “black pill” have become part of popular vocabulary.
For a deeper dive into the themes raised in Adolescence and the role of criminal defence solicitors in the context of youth cases, please see our previous article here.
Ofcom and the NCA have each published studies highlighting the risks associated with the “manosphere”. The findings emphasise not only the risk of harm directed towards women and girls but also the vulnerability of the boys and young men drawn into these communities. Those who are socially isolated are particularly susceptible to the hierarchical structures and rhetoric within these groups, leaving them at risk of deeper immersion, worsening mental and physical health, and exposure to more extreme ideologies. Academic research reveals that only a small number of users in online incel forums and groups regularly post extreme content, but these posts have the effect of radicalising the wider social network of young boys and vulnerable men.
The NCA has also launched a campaign to tackle sextortion among teenage boys, which revealed a notable lack of understanding: 74% of the boys questioned did not fully understand what sextortion was. The NCA, which deems sextortion an “emerging threat”, reports that teenage boys are increasingly joining online groups to share extreme material.
In its 2025 National Strategic Assessment, the NCA identified online networks (“Com networks”) engaging in a wide range of online offences, including grooming, blackmailing and threatening victims into carrying out extreme acts, such as sharing sexual material and self-harming. Vulnerable young victims are targeted and groomed online (for example, through social media and gaming services) and controlled through manipulation tactics to extort imagery and cause harm. Europol’s Internet Organised Crime Threat Assessment (2024) similarly found that “self-generated sexual material constitutes a significant share of the child sexual abuse material (CSAM) detected online”. These networks typically attract young men promoting nihilistic and misogynistic views, who attempt to gain status with other users by committing or encouraging harmful acts.
5. From scams to sextortion: how deepfakes are reshaping online threats
Deepfakes have increasingly become part of mainstream awareness, with the associated risks continuing to grow.
As part of the Criminal Justice Bill 24-26, an amendment was introduced aiming to prevent AI from being exploited to create deepfake CSAM. This would effectively make the creation of sexually explicit deepfakes illegal, in line with recommendations by experts and campaign groups within the violence against women and girls (VAWG) sector. Europol also notes that AI-generated CSAM is likely to become more prominent in the near future. AI-generated CSAM presents a significant challenge for law enforcement, as it becomes more difficult to distinguish real victims from synthetic subjects.
The proposed legislation aims to empower AI developers and child protection organisations, such as the Internet Watch Foundation (IWF), to test AI models for safeguards against generating CSAM, extreme pornography and non-consensual indecent images.
While public figures remain the main targets, we have also seen the increased use of deepfakes in fraudulent activities against companies and private individuals. More stories about romance fraud (or “pig butchering” scams) have been publicised: fraudsters identify victims using dating apps, cultivate relationships with them over time, leveraging deepfake videos via remote calls and video conferencing platforms, and eventually extort money from them. With this new generation of “catfishing” becoming much harder to identify, it falls to users to become more wary and technologically savvy, keeping up to date with the latest fraudulent tactics and trends as part of their day-to-day online activities. Following its review into romance fraud in October 2025, the FCA called on banks to take greater action to prevent romance fraud, revealing that it accounted for losses of £106 million in the 2024/25 financial year.
For more detail on the latest romance fraud tactics, please see our previous article here.
We have also seen increased risks to corporates, not only through fraudulent financial transactions but also disinformation. This ushers in a new age of identity fraud in which fraudsters can fabricate image, video and audio material convincing enough to induce victims to carry out their instructions. UK regulators and cybersecurity agencies have issued several warnings about deepfake-enabled fraud in financial transactions and executive impersonation.
6. New priority offences and wider enforcement action
The regulator may be increasingly ambitious over the coming months in its enforcement steps against tech companies. Notably, Ofcom has stated its intention to strengthen industry codes to reflect cyberflashing becoming a priority offence in 2026, an offence most often perpetrated via social media platforms, dating apps and file-sharing services. Online content depicting non-fatal strangulation or suffocation in pornography is also set to be designated a priority offence under the Act, with such depictions becoming illegal content by way of an amendment to the Crime and Policing Bill.
Ofcom’s investigation into Grok, the AI chatbot on X, shows that the regulator is prepared to take swift action against platforms and to use the full suite of enforcement tools available. Users reported that Grok was being used to create non-consensual sexually explicit deepfakes of adults and deepfake CSAM, and this month Ofcom launched a formal investigation, requiring X to outline the steps it had taken to protect UK users and ensure compliance with the OSA.
What’s next in online safety regulation?
The year 2025 has seen Ofcom take significant steps in relation to online safety regulation. While the OSA’s implementation has received some criticism from the tech industry, many acknowledge that the new regime marks a positive step toward protecting online users.
While recent enforcement actions have targeted the adult sector and platforms producing predominantly pornographic content, Ofcom’s actions illustrate a strong intention to police platforms more broadly on online harms and illegal content, and to use the full extent of its regulatory powers, including significant financial sanctions.
Tech companies can therefore expect to be increasingly required to take proactive measures to detect and remove such illegal material online before it reaches service users.
If you have any questions regarding this blog, please contact Nicola Finnerty or Alice Trotter in our Criminal Litigation team.
About the authors
Nicola is a leading defence lawyer specialising in high profile and complex Government enforcement cases, proceeds of crime, white collar crime, fraud, asset forfeiture, investigations and AML in the UK and internationally.
Alice is an Associate in the Criminal Litigation team. Alice’s practice includes all areas of criminal litigation, with particular expertise in Online Safety, serious and general crime, and white-collar crime. She represents individuals and corporate clients from the initial stages of an investigation through to trial.
Isabella is a trainee solicitor at Kingsley Napley and is currently in her third seat with the Criminal Litigation team.