Kenya is taking steps to protect children online. New guidelines from the Communications Authority (CA) require “age verification mechanisms” for digital platforms, including social media, but what does this really mean for Kenyan internet users?
Let’s Break It Down For You
The CA’s official language calls for companies to “implement age verification mechanisms” for digital services. The guidelines don’t specifically mandate ID verification for social media access.
However, a CA insider who requested anonymity revealed, “At the beginning, we will allow the service providers to accept user-entered ages, but ultimately we will require everyone to verify that and there’s only one way of doing age verification, and that’s through an ID.”
If implemented as described, Kenya would become the first country in the world to enforce ID verification for social media access.
The Full Picture of CA’s Guidelines
The guidelines require that digital products aimed at anyone under 18 have safety features built in from the start, not bolted on as an afterthought. This “safety by design” approach is similar to European data protection laws but goes further by applying across the entire digital ecosystem.
Unlike Western countries that mostly regulate big platforms like Facebook, Kenya’s plan covers everyone: phone companies, TV and radio stations, device makers, and content creators.
Companies that make phones and computers must ship their products with strong security settings turned on by default and clear instructions for safety features.
Phone companies have specific requirements too: SIM cards used by children must be properly registered under Kenya’s telecommunications laws, with adults declaring who will be using the SIM cards they purchase.
Companies Will Be Held Responsible
Accountability is also a big part of the new rules. Companies are required to:
- Appoint specific employees to oversee child safety
- Publish their plans for protecting children
- Create clear ways for users to report problems
- Send regular reports to the Communications Authority
The CA plans to check these reports and publish company ratings every three months, something few regulators in the world have tried. Companies must also find, report, and block material showing child abuse and help law enforcement when needed.
Children Will Still Retain the Right to Information
CA’s approach is unique because it intends to protect children’s right to information while also keeping them safe. Instead of banning kids from social media like some countries are considering, Kenya wants to create safer access through what they call “empowerment over policing.”
This means teaching digital skills and responsible online behavior rather than just blocking content. The guidelines see children as digital citizens with rights to information and expression, which is a more balanced view than many Western approaches that focus mainly on avoiding risks.
Right now, platforms like Facebook, TikTok, and Instagram require users to be at least 13 years old, but kids easily get around this by entering fake birthdays, a system that clearly doesn’t work.
Still, Many Challenges Remain
Like any ambitious plan, this one has its fair share of challenges, and some have no proven solutions to draw on, at least for now.
The guidelines don’t clearly say what counts as acceptable age verification. Could it be biometrics, ID cards, AI tools, or just users declaring their age? This lack of clarity creates the same problem the UK had with its own verification attempts.
Without strong penalties for companies that don’t follow the rules, some might treat these guidelines as optional suggestions.
Unlike European laws that can fine companies up to 4% of their global revenue, Kenya’s plan to publicly name non-compliant companies might not scare big tech firms. As the saying often attributed to American showman P.T. Barnum goes, “There’s no such thing as bad publicity.”
Global platforms like Meta, Google, and TikTok might resist Kenya-specific rules. Without ways to enforce rules across borders, content hosted outside Kenya could still reach children.
The guidelines assume a level of digital literacy that many homes and schools lack, especially in poorer and rural areas. Without government programs teaching basic digital skills, many of the safety tools might go unused.
Small tech companies and startups will struggle more than big companies to comply with all these rules. If requirements are not scaled to company size, innovation from local developers could suffer.
Kenya currently scores poorly on global measures of child online protection, according to the DQ Institute, with below-average ratings for regulations, infrastructure, and safe technology use by children.
The new guidelines build on Kenya’s Data Protection Act of 2019, which required “data protection by design.” However, the two frameworks may pull against each other: the Act’s data minimization principle, which requires companies to collect as little personal data as possible, sits uneasily with extensive age verification and content monitoring.
Kenya defines a child as anyone under 18, matching both its Constitution and Children Act of 2022. This offers broader protection than the US, where online children’s privacy laws only cover kids under 13.
These changes follow another recent rule requiring social media companies to open offices in Kenya, which is part of a larger effort to hold platforms accountable in a country where more young people are going online every day.
Kenya’s experiment could become a model for other developing countries with young populations who mainly access the internet through mobile phones. There’s a caveat, though.
Whether it succeeds or fails will depend on whether the CA can close the gaps in enforcement, verification standards, and digital education that currently limit its effectiveness.