The microblogging service Koo announced the release of new proactive content filtering tools designed to give users a safer social media experience.
According to Koo, the new in-house technology can proactively recognise and block nudity or child sexual abuse material in under five seconds, label falsehoods, and hide toxic comments and hate speech on the site.
Koo, a Twitter competitor, said it is committed to providing a secure and positive experience for its users as an inclusive platform built around a language-first strategy.
Announcing the proactive content moderation features, Koo said they are intended to give users a safer and more secure social media experience. Co-founder Mayank Bidawatka said the platform’s objective is to create a welcoming social media space for constructive discourse.
“While moderation is an ongoing journey, we will always be ahead of the curve in this area with our focus on it. Our endeavour is to keep developing new systems and processes to proactively detect and remove harmful content from the platform and restrict the spread of viral misinformation. Our proactive content moderation processes are probably the best in the world.”
Mayank Bidawatka, co-founder of Koo
Koo’s proprietary ‘No Nudity Algorithm’ detects and blocks any attempt by a user to post a picture or video containing child sexual abuse material, nudity, or sexual content; detection and blocking are completed in less than five seconds.
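Koo has not published how the ‘No Nudity Algorithm’ works internally. Purely as an illustration of the general pattern such proactive screening follows (score an uploaded image and block the post before publication if the score crosses a threshold), here is a minimal, hypothetical sketch; the Post type, screen_image function, and placeholder classifier are all invented for the example.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Post:
    post_id: str
    image_bytes: bytes


def screen_image(post: Post,
                 classify: Callable[[bytes], float],
                 threshold: float = 0.9) -> bool:
    """Return True if the post should be blocked before it is published.

    `classify` stands in for a real image model that returns the probability
    that the image contains explicit content.
    """
    score = classify(post.image_bytes)
    return score >= threshold


# Placeholder classifier; a production system would call a trained model here.
def placeholder_classifier(image_bytes: bytes) -> float:
    return 0.0  # always "safe" in this sketch


if __name__ == "__main__":
    post = Post(post_id="example", image_bytes=b"")
    blocked = screen_image(post, placeholder_classifier)
    print("blocked" if blocked else "published")
```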
To detect and block impersonating profiles, Koo’s in-house ‘MisRep Algorithm’ monitors the site for profiles that use the content, photographs, videos, or descriptions of well-known individuals. When such photographs and videos are found, they are quickly removed from the offending profiles, and those accounts are flagged for ongoing monitoring of bad behaviour.
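The ‘MisRep Algorithm’ itself is proprietary. Solely to illustrate the kind of check described, matching a profile’s name and description against well-known individuals, the hypothetical sketch below uses simple string similarity from the Python standard library; the KNOWN_FIGURES registry and the threshold are assumptions made for the example.

```python
from difflib import SequenceMatcher

# Hypothetical registry of verified public figures to compare new profiles against.
KNOWN_FIGURES = [
    {"name": "Jane Doe", "bio": "Author and public speaker"},
]


def looks_like_impersonation(display_name: str, bio: str,
                             threshold: float = 0.85) -> bool:
    """Flag a profile whose name or description closely matches a known figure."""
    for figure in KNOWN_FIGURES:
        name_sim = SequenceMatcher(None, display_name.lower(),
                                   figure["name"].lower()).ratio()
        bio_sim = SequenceMatcher(None, bio.lower(),
                                  figure["bio"].lower()).ratio()
        if name_sim >= threshold or bio_sim >= threshold:
            return True
    return False


print(looks_like_impersonation("Jane D0e", "Author and public speaker"))  # True
```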
In addition, the safety features actively detect and hide or remove toxic comments and hate speech in less than 10 seconds, ensuring they are not visible to the public.
Users who upload sexually explicit content are immediately barred from posting, from being discovered by other users, from being featured in trending posts, and from engaging with other users in any way.
Koo’s other algorithm, the ‘Misinfo and Disinfo Algorithm’, monitors all viral and reported fake news in real time against public and private sources of known fake news, detecting and categorising misinformation and disinformation on a post. This helps keep viral misinformation from spreading on the platform.
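Koo has not disclosed how posts are matched against its fake-news sources. As a rough, hypothetical illustration of the matching step described above, a sketch might compare a post’s text against a feed of already-debunked claims and label it when the similarity is high; the DEBUNKED_CLAIMS list and threshold here are assumptions made for the example.

```python
from difflib import SequenceMatcher
from typing import Optional

# Hypothetical feed of claims already debunked by public or private fact-check sources.
DEBUNKED_CLAIMS = [
    "drinking hot water cures the virus",
]


def match_debunked_claim(post_text: str, threshold: float = 0.7) -> Optional[str]:
    """Return the debunked claim a post resembles, or None if nothing matches."""
    normalised = post_text.lower().strip()
    best_claim, best_score = None, 0.0
    for claim in DEBUNKED_CLAIMS:
        score = SequenceMatcher(None, normalised, claim).ratio()
        if score > best_score:
            best_claim, best_score = claim, score
    return best_claim if best_score >= threshold else None


print(match_debunked_claim("Drinking hot water cures the virus!"))
```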