Fact-checking activists say the UK's upcoming social media regulation bill, the Online Safety Bill, should require platforms to provide real-time data on "disinformation." The activists also raised concerns about over-reliance on AI in content moderation.
If the Online Safety Bill passes, social platforms would be required to state clearly and accessibly in their terms and conditions what types of content and behavior are acceptable. The bill also suggests that these companies should inform users how they will handle legal but potentially harmful content.
Additionally, Category 1 tech platforms, those with millions of users such as Facebook, Instagram, TikTok, and Twitter, will be legally required to publish transparency reports on the measures they are taking to handle harmful content. Many of them already do.
If they fail to do so, Ofcom, which will have oversight over online platforms, can fine them 10% of their global annual turnover or £18 million, whichever is higher. Ofcom will also have the authority to shut down access to a platform that does not comply with the rules.