TikTok Pulled Down Nearly 600,000 Videos in Kenya in Q2 2025: Here’s Why


TikTok removed nearly 600,000 videos in Kenya between April and June 2025 for violating its community guidelines, marking one of its most aggressive enforcement actions in the country to date. The company revealed the data in its latest Community Guidelines Enforcement Report, highlighting growing efforts to tackle harmful content and restore public trust amid mounting regulatory scrutiny.

According to the report, 92.9% of the removed videos were taken down before they were ever viewed, while 96.3% were removed within 24 hours of being posted. TikTok attributed the swift response to its increasingly advanced moderation systems, which rely on a combination of AI-powered detection and human content review teams to identify rule-breaking content such as misinformation, hate speech, and sexual exploitation.

Rising Enforcement in Kenya

The latest figure marks a notable escalation over previous periods. In the first quarter of 2025, TikTok removed more than 450,000 videos in Kenya for similar violations, up from 360,000 in the first quarter of 2024. The steady rise underscores TikTok’s heightened focus on maintaining community safety and compliance in one of its most rapidly expanding African markets.

Kenya has become a key battleground for digital content moderation. Following a BBC investigation earlier this year that exposed disturbing incidents of children being exploited during sexualized live streams, the Communications Authority of Kenya (CA) issued five formal demands to TikTok in March 2025. These included directives to remove exploitative content, improve detection systems, address previous moderation failures, and educate Kenyan users about digital safety.

Since then, TikTok has intensified its safety and compliance efforts in the region, introducing new local review teams, training moderators on cultural context, and improving user reporting systems.

Global Context

Globally, TikTok deleted over 189 million videos during the same reporting period, representing about 0.7% of all content uploaded to the platform. Of these, 99.1% were detected proactively, 94.4% removed within 24 hours, and more than 163 million were automatically flagged by AI moderation tools.

In addition to content removals, TikTok also deleted 77 million fake accounts and 26 million accounts suspected to belong to users under 13 years old. The company says these measures are crucial for maintaining “platform integrity” and ensuring the safety of younger users.

TikTok stated that its focus remains on maintaining a “safe, inclusive, and trustworthy space” for creators and audiences. “Taking down content that spreads misinformation, hate speech, or exploitation is vital in protecting the community and preserving trust in our platform,” the company said.

Industry analysts believe the crackdown shows TikTok’s commitment to staying ahead of regulators in Kenya and across Africa. The platform is investing heavily in local moderation expertise, community outreach, and partnerships with safety organizations to align its global policies with regional realities.

Critics, however, warn that automated moderation systems may sometimes overreach, resulting in the removal of legitimate content. Yet, TikTok appears to be prioritizing safety over leniency, particularly given its massive young user base in Kenya. The company has acknowledged the ongoing challenge of balancing freedom of expression with protection against digital harm.

As Kenya’s online safety regulations tighten, TikTok’s approach may serve as a model for how global social platforms can adapt to local demands. The recent figures make one thing clear: the company is sending a strong message that harmful content will not be tolerated and that its future in Africa will hinge on trust, transparency, and responsibility.
