The science behind Mollom: humans entering CAPTCHAs

Mollom is based on the idea that if we are not 100% confident whether your content is ham or spam, we present a CAPTCHA to the user. This improves the classifier's performance and eliminates the need for a moderation queue. And because these CAPTCHAs are only shown in the "gray zone", only a very limited number of real users have to fill in a CAPTCHA, while most of the CAPTCHAs are actually shown to spam bots.
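The gray-zone idea can be sketched as a simple three-way decision on the classifier's spam score. The threshold values and function names below are illustrative assumptions, not Mollom's actual internals:

```python
# Sketch of a "gray zone" decision rule: confident ham is accepted,
# confident spam is blocked, and everything in between gets a CAPTCHA.
# HAM_MAX and SPAM_MIN are assumed thresholds for illustration only.

HAM_MAX = 0.3   # below this spam score: confidently ham (assumed)
SPAM_MIN = 0.9  # above this spam score: confidently spam (assumed)

def decide(spam_score: float) -> str:
    """Map a classifier's spam probability to an action."""
    if spam_score < HAM_MAX:
        return "accept"    # confident ham: publish immediately
    if spam_score > SPAM_MIN:
        return "block"     # confident spam: reject outright
    return "captcha"       # gray zone: challenge the poster

print(decide(0.05))  # accept
print(decide(0.95))  # block
print(decide(0.60))  # captcha
```

The point of the middle branch is that a wrong guess in the gray zone costs a legitimate user one CAPTCHA rather than a lost comment.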

Here is actual data from the Mollom server showing the number of requests of each type:

Humans that had to enter a CAPTCHA

This plot shows Mollom's server statistics for a particular day: ham messages in green, spam messages in orange, and unsure messages (where a CAPTCHA was shown) in gray. Also notice the thin brown line just on top of the ham region (look closely, it is barely visible :), which denotes the human users that had to fill in a CAPTCHA.

In numbers it boils down to this: on average, 80% of all content is spam, of which about 40% is shown a CAPTCHA and 60% is blocked directly. Of the user-generated ham content, the remaining 20% of the whole, only 5% was classified as unsure and had to fill in a CAPTCHA to get accepted. Not too bad... And thanks to both the image and audio CAPTCHAs, there are no accessibility issues.
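To make the percentages concrete, here is the same arithmetic worked through for a hypothetical batch of 10,000 submissions (the batch size is an assumption; the ratios are the ones quoted above):

```python
# Working through the quoted percentages for an assumed batch of
# 10,000 submissions.
total = 10_000

spam = int(total * 0.80)            # 80% of all content is spam
spam_captcha = int(spam * 0.40)     # ~40% of spam is shown a CAPTCHA
spam_blocked = spam - spam_captcha  # the other ~60% is blocked directly

ham = total - spam                  # remaining 20% is legitimate content
ham_captcha = int(ham * 0.05)       # only 5% of ham posters see a CAPTCHA

print(spam_captcha)  # 3200 spam posts challenged
print(spam_blocked)  # 4800 spam posts blocked outright
print(ham_captcha)   # only 100 legitimate posters inconvenienced
```

So out of 10,000 submissions, only about 100 real users (1% of all traffic) would ever see a CAPTCHA.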

More statistics?

Benjamin, those are very interesting and impressive numbers.
If only 5% of the non-spamming users have to fill in the CAPTCHA, that means a lot fewer people are frustrated.

Do you have statistics on the number of wrongly classified messages?

1) Messages that were classified as ham (both those classified directly and those that passed the CAPTCHA), but were marked as spam by the administrator later on (false negatives)?

2) And, even more importantly: messages marked as spam by Mollom that were actually ham (false positives)?

The percentage of ham

The percentage of ham messages that are later reported as spam is what is used to generate the classification performance shown on the main page. This currently fluctuates around 99.7%.

Ham messages wrongly marked as spam cannot be detected directly, because Mollom does not allow administrators to mark content as ham (to prevent obvious abuse). However, the Mollom backend is set up such that this should never happen: a message is only marked as spam if the text-based classification is very confident that it is spam, and if the poster's internal reputation is very poor, meaning that he has already been posting spammy content for quite a while. This two-stage process makes sure that humans do not get blocked. In an upcoming blog post I will elaborate some more on this reputation aspect.

Very effective

I've been using Mollom for the past two weeks and it's managed to snarf every spam attempt (1,058 of them according to the stats) on my site, while not inconveniencing legitimate posters with unnecessary CAPTCHAs.

Good job!
