Ofcom Enforces Online Safety, Requires Platforms to Assess Content Risks
UK media regulator Ofcom has launched an enforcement program to ensure compliance with the Online Safety Act, requiring social media and other online platforms to assess and mitigate risks related to illegal content. Tech firms have been given a March 31 deadline to submit their first risk assessments, detailing how users might encounter harmful material on their services; the requirement applies to any platform operating in the UK or serving UK users.
These assessments will play a key role in shaping platforms’ safety measures, helping them identify potential risks and implement strategies to combat illegal content. The new rules fall under Ofcom’s broader regulatory framework, which includes strict codes of practice on issues such as child sexual exploitation, terrorism, hate crimes, content promoting suicide, and fraud.
Failure to comply with the Online Safety Act could result in substantial fines – either 10% of a company’s global turnover or £18 million, whichever is higher. In extreme cases, Ofcom can also seek court orders to block non-compliant platforms in the UK. Suzanne Cater, Ofcom’s enforcement director, emphasized the importance of this process, calling the risk assessments a “vital first step” in making platforms “safer by design.”
Critics of the regulation argue that the codes of practice could create a checkbox approach to compliance, allowing some platforms to do the bare minimum rather than taking proactive steps to improve user safety. Nonetheless, Ofcom has stated it is prepared to take swift action against any platform that fails to meet its obligations. While some platforms may not change their global safety measures significantly, the added regulatory pressure could prompt many international social media sites to crack down on the content they permit on their platforms.