We are leaders in implementing safety and security policies, ensuring that our properties remain among the safest platforms online for our communities and are at the forefront of combating illegal content.
These comprehensive measures for verification, moderation, and detection include:
Only users who have been ID-verified and authenticated by a third-party digital ID company can upload content to Aylo's content-sharing platforms.
Safeguard, Aylo’s proprietary image recognition technology, was developed and deployed to combat child sexual abuse material (CSAM) and non-consensual content by preventing the re-upload of previously fingerprinted content to our platforms.
Aylo digitally fingerprints material found to violate its policies so that such content cannot return to our platforms. Fingerprinting technology is also available proactively to users and creators to prevent the unauthorized upload of their content.
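The re-upload-blocking pattern described above can be sketched in miniature. Safeguard itself is proprietary and reportedly relies on perceptual image fingerprints that tolerate re-encoding and resizing; the sketch below substitutes an exact SHA-256 digest purely to illustrate the general flow of fingerprinting content and checking new uploads against a blocklist. All names and data here are hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a digest used as the content's fingerprint.

    Illustrative only: a production system would use a perceptual hash
    that survives re-encoding, not an exact cryptographic digest.
    """
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist of fingerprints taken from previously removed content.
blocked_fingerprints = {fingerprint(b"previously-removed-content")}

def allow_upload(data: bytes) -> bool:
    """Reject any upload whose fingerprint matches known violating content."""
    return fingerprint(data) not in blocked_fingerprints

print(allow_upload(b"previously-removed-content"))  # re-upload is blocked: False
print(allow_upload(b"new, unrelated content"))      # unknown content passes: True
```

An exact-hash blocklist like this one is trivially evaded by changing a single byte, which is why real deployments pair cryptographic hashes with perceptual fingerprints and human review.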
Aylo maintains relationships with leading non-profit organizations around the world to learn from their expertise, assist in their missions, and help inform platform policy.
Moderation practices include scanning all content against NGO hash lists to block known child sexual abuse material and non-consensual content, multiple layers of AI to detect unknown child sexual abuse material, banned-term lists in multiple languages, AI text moderation, and an extensive team of human moderators dedicated to manually reviewing every upload before it is published on any platform.
Aylo platforms provide easy-to-use, robust systems for flagging, reviewing, and removing illegal material reported by users. This includes a content removal request form and an industry-leading Trusted Flagger Program that spans more than 35 countries and has over 50 members. Both systems instantly and automatically disable reported material.
Aylo platforms display deterrence messaging to users who attempt to search for potentially illegal material, informing them that their search may have been inappropriate and directing them to the help they need to change their behaviour.