The Community Guidelines provide general guidance on what is and is not allowed on the platform, keeping TikTok a safe place for creativity and joy, and are localized and implemented in accordance with local laws and norms. TikTok's teams remove content that violates the Community Guidelines and suspend or ban accounts involved in severe or repeated violations. Content moderation is carried out through a combination of policies, technologies, and moderation strategies to detect and review problematic content and accounts, and to apply appropriate penalties.

Technology: TikTok's systems automatically flag certain types of content that may violate its Community Guidelines, enabling swift action and reducing potential harm. These systems take into account signals such as patterns of behavior to flag potentially violative content.

Content moderation: Technology today is not advanced enough for a platform to rely on it alone to enforce its policies. Context can be important when determining whether certain content, such as satire, is violative. TikTok therefore has a team of trained moderators who help review and remove content. In some cases, this team removes evolving or trending violative content, such as dangerous challenges or harmful misinformation. The team also moderates content based on reports it receives from users; a simple in-app reporting feature lets users flag potentially inappropriate content or accounts to TikTok.

In its most recent Transparency Report, TikTok shared the global volume of videos removed for violating its Community Guidelines or Terms of Service, which showed that Pakistan is among the five markets with the largest volume of removed videos. This demonstrates TikTok's commitment to removing potentially harmful or inappropriate content reported in Pakistan.