
On July 17, 2017, news broke that the Communications Authority of Kenya (CA) was planning to regulate social media, including WhatsApp groups, during this election period. The organization said that it would hold group admins liable for any form of hate speech posted in their groups.

In Techweez tradition, we reached out to both CA and National Cohesion and Integration Commission (NCIC) to get more answers and after what seemed like forever, they replied.

CA and NCIC are mandated to protect consumer interests in telecommunications services and to promote national cohesion and integration, respectively. The two government bodies have consulted with industry stakeholders such as Mobile Network Operators, Content Service Providers, and Facebook (yes, they explicitly mention Facebook) to create guidelines that will prevent the transmission of undesirable political content via social media platforms.

Hate Speech

First things first, the guidelines define “hate speech” as a message designed to degrade, intimidate, or incite violence or prejudicial action against a person or group of people based on their race, gender, ethnicity, nationality, religion, political affiliation, language, ability or appearance.

Social media platforms are also defined as all forms of, but not limited to: online publishing and discussion, media sharing, blogging/microblogging, social networking, document and data sharing repositories, social bookmarking and widgets (an application, or a component of an interface, that enables a user to perform a function or access a service).

Read More: Google Launches Suite to Safeguard Media During the Kenya Election Period

Regulating WhatsApp groups

Now, the big headline was that CA and NCIC sent out a warning to WhatsApp group admins that they would be held responsible for any form of hate speech shared in their groups. It was revealed that the two bodies had sent out warnings to 21 group admins on the same. The guidelines read, “It shall be the responsibility of the administrators of a social media platform to moderate and control undesirable contents and discussions that have been brought to their attention on their platform.”

Looking at the guidelines, WhatsApp group admins fall under the category of “Social Media Platform Administrators”; account owners on various social media platforms such as Twitter and Facebook also fall into this category.

The question was, how feasible is it for CA and NCIC to regulate WhatsApp groups? WhatsApp’s privacy policy includes this paragraph:

We may collect, use, preserve, and share your information if we have a good-faith belief that it is reasonably necessary to: (a) respond pursuant to applicable law or regulations, to legal process, or to government requests…

It is worth noting that WhatsApp is owned by Facebook, and the guidelines mention that Facebook was consulted in their creation.

So the answer is, yes, CA and NCIC can regulate WhatsApp groups thanks to… *drum roll* …Facebook!

Read More: Can Communications Authority of Kenya and NCIC Monitor WhatsApp Groups for “Hate Speech”?

The Penalty

If a WhatsApp group admin or an individual is found guilty of spreading hate speech, the guidelines clearly state that the said person(s) shall be liable to a fine not exceeding one million shillings, or to imprisonment for a term not exceeding three years, or to both, according to the NCI Act. The guidelines add that the person(s) shall also be penalized according to the Penal Code and “other relevant laws”.

To protect yourself as an admin, it is prudent to post these guidelines in your group and warn your members against posting any form of hate speech. You could take it as far as removing any violators from the group. As an individual, just don’t post anything stupid.

The Communications Authority of Kenya informed us that Kenyans can report any form of hate speech through [email protected]

You can download the full guidelines on CA’s website.
