Calileo has an automated content moderation system that recognises illegal or inappropriate content that is not acceptable online or anywhere else in the world.

For transparency, the exact levels of content filtering are detailed below.

Image Moderation

Image Redaction

The Image Redaction API automatically hides unwanted content in images. Through the API, we define the types of content to be hidden, such as explicit or illegal content and personal information.

The following concepts can be detected and hidden with the Image Redaction API. Any combination of these concepts can be selected to customize the results.

nudity-raw: Hide raw nudity only (sexual activity, sexual display and erotica)
face-minor: Hide faces of minors (children under 18)
license-plate: Hide license plates
gore: Hide gore and horrific imagery
profanity: Hide written profanity (insults, racial slurs, inappropriate language)

By default, the engine pixelates any unwanted areas of the image (this applies to areas detected by face-minor and/or license-plate).
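As a minimal sketch of how a combination of concepts might be selected and validated before a redaction request is sent, the snippet below builds the request parameters. The parameter name `concepts` and the comma-separated format are assumptions for illustration, not the documented API.

```python
# Concept names come from the list above. The "concepts" parameter
# name and the comma-separated encoding are assumptions.
REDACTION_CONCEPTS = {
    "nudity-raw", "face-minor", "license-plate", "gore", "profanity",
}

def build_redaction_params(concepts):
    """Validate a combination of concepts and build request parameters."""
    unknown = set(concepts) - REDACTION_CONCEPTS
    if unknown:
        raise ValueError(f"Unknown concepts: {sorted(unknown)}")
    # Sort for a stable, reproducible parameter string.
    return {"concepts": ",".join(sorted(set(concepts)))}

# Example: redact minors' faces and license plates in one pass.
params = build_redaction_params(["face-minor", "license-plate"])
```

Validating the concept names client-side gives an immediate error for typos instead of a failed request.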

Model Reference

Nudity (NSFW) Content

In addition, since we have not yet implemented NSFW badges on content, some levels of nudity are not allowed:

nudity.sexual_activity: blocked
nudity.sexual_display: blocked
nudity.erotica: blocked
nudity.suggestive: can publish
nudity.none: can publish
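The policy above can be expressed as a simple lookup from the detected nudity class to a publish decision. This is a hedged sketch; the class names match the list above, but the fallback for unknown classes is an assumption.

```python
# Publish policy for nudity classes, taken from the table above.
NUDITY_POLICY = {
    "nudity.sexual_activity": "blocked",
    "nudity.sexual_display": "blocked",
    "nudity.erotica": "blocked",
    "nudity.suggestive": "can publish",
    "nudity.none": "can publish",
}

def can_publish(detected_class):
    # Unknown classes are blocked as a safe default (an assumption,
    # not part of the documented policy).
    return NUDITY_POLICY.get(detected_class, "blocked") == "can publish"
```

Defaulting unknown classes to "blocked" means new model classes fail closed rather than slipping through.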

Offensive Content

The offensive model determines whether an image or a video contains offensive or hateful content. In addition to detecting such content, it reports the position and type of each offensive element found.
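Since the model reports both a position and a type for each detection, a consumer typically filters by confidence and groups results by type. The response field names (`type`, `confidence`, `box`) below are assumptions for illustration, not the documented response schema.

```python
# Hypothetical response items: each detection carries a type, a
# confidence score, and a bounding box (field names are assumptions).
def summarize_detections(detections, min_confidence=0.5):
    """Keep detections above a confidence threshold, grouped by type."""
    summary = {}
    for d in detections:
        if d["confidence"] >= min_confidence:
            summary.setdefault(d["type"], []).append(d["box"])
    return summary

sample = [
    {"type": "hate-symbol", "confidence": 0.92, "box": (10, 20, 80, 90)},
    {"type": "gesture", "confidence": 0.30, "box": (5, 5, 40, 40)},
]
result = summarize_detections(sample)
```

With the default threshold, only the high-confidence detection survives, and its bounding box can then be passed to a redaction step.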

The offensive content detected falls broadly into the following categories: