How Mollom works

How does Mollom work?

Mollom employs three specific technologies to detect spam and malicious content:

[Diagram: Mollom decision flow]

  • Machine learning. Mollom uses sophisticated machine learning techniques to block spam and malicious content automatically. Its reputation-based system maintains a continually evolving archive of user profiles, allowing it to immediately gauge an individual’s propensity to submit spam. This applies to everything from user registration forms to blog entries.
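The reputation idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Mollom's actual algorithm: the weights, thresholds, and function names are all assumptions.

```python
# Hypothetical sketch of reputation-based scoring. All names, weights,
# and thresholds are illustrative -- not Mollom's real implementation.

def reputation_score(author_id, reputations, default=0.5):
    """Return a spam propensity in [0, 1]; unknown authors get a neutral default."""
    return reputations.get(author_id, default)

def classify(content_score, author_id, reputations):
    """Blend a content-analysis score with the author's track record."""
    score = 0.6 * content_score + 0.4 * reputation_score(author_id, reputations)
    if score < 0.3:
        return "ham"
    if score > 0.7:
        return "spam"
    return "unsure"
```

The point of the blend is that a known good author can get borderline content published, while a known spammer is blocked even when a single message looks clean.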

  • Protection against profanity. Using text analytics, Mollom detects harmful content such as profanity and other spam-related material. Its language support stops unwanted content in 75 languages.

  • Centralized CAPTCHA service. Mollom provides a centralized CAPTCHA service that stops known spammers. Approved users are not required to solve a CAPTCHA.

    The CAPTCHA is invoked for three specific use cases:

    • Upon user registration, when no content can be classified
    • When Mollom is unable to classify a user
    • When a site owner using Mollom opts for more privacy, and Mollom isn’t allowed to audit all content
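The three trigger cases above can be expressed as a simple decision function. This is a hedged sketch of the logic as described, not Mollom's API; the event names, parameters, and the `privacy_mode` flag are assumptions.

```python
# Hypothetical sketch of the three CAPTCHA triggers; field and
# function names are illustrative, not part of Mollom's API.

def captcha_required(event, classification, privacy_mode):
    """Decide whether to show a CAPTCHA.

    event          -- e.g. "registration" or "post"
    classification -- "ham", "spam", "unsure", or None when no
                      content could be classified
    privacy_mode   -- True when the site owner withholds content
                      from analysis for privacy reasons
    """
    if event == "registration" and classification is None:
        return True   # user registration: no content to classify
    if classification == "unsure":
        return True   # Mollom is unable to classify the user
    if privacy_mode:
        return True   # content could not be audited
    return False
```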

Mollom audits content quality by classifying each submission into one of
three categories: Ham, Spam, and Unsure:

  • Ham is considered positive content and is automatically published.
  • Spam is negative content and is automatically blocked.
  • Unsure is anything in between: Mollom does not recognize the user, so they are shown CAPTCHAs, and the customer decides whether such content is automatically published, blocked, or sent for manual moderation.
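The three outcomes above map naturally to a small dispatch function. A minimal sketch, assuming a site-owner policy setting for the "unsure" case; the `unsure_policy` knob and all names are illustrative, not a real Mollom setting.

```python
# Hypothetical sketch of the publish/block/moderate decision flow;
# the "unsure_policy" option is illustrative, not Mollom's actual config.

def handle_post(classification, unsure_policy="moderate"):
    """Map a Ham/Spam/Unsure classification to an action.

    unsure_policy -- the site owner's choice for "unsure" content
                     (shown a CAPTCHA first): "publish", "block",
                     or "moderate" for manual review.
    """
    if classification == "ham":
        return "publish"      # positive content goes live automatically
    if classification == "spam":
        return "block"        # negative content is blocked automatically
    return unsure_policy      # anything in between: the owner decides
```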

Want to learn more?

If you want to know more, we recommend checking out our technical whitepaper or reading the Mollom API documentation.