Call for testers for Mollom's hosted moderation interface

As the amount of user-generated content on the web increases exponentially, the need for swift, responsive and effective moderation of that content increases with it. Spam is not only annoying to a site's visitors; it also has a substantial influence on both visitor behavior and site reputation.

• A classifieds website containing many fake items for sale, for instance, will deter potential customers from buying an item or from posting classified ads of their own.

• A discussion forum filled with spam directly degrades an otherwise high quality discussion.

• Reputable news sites lose credibility when they do not, or cannot, block spammers from commenting on news items.

• Fans of great performers and artists become increasingly frustrated when their blog posts are lost in an avalanche of spam messages.

To date, no useful off-the-shelf services exist that enable moderators to control comments, blogs, forum posts and other user-generated content in an effective and efficient way.

No available system makes it easy to manage and moderate content spread across several sites, letting moderators switch from one site to another in a fraction of a second through a single backend and a consolidated user interface.

There is no intelligent service on the market that can accurately scan huge amounts of content and preemptively classify it according to predefined variables, making the work of moderators easier and faster.

Or is there?

• Imagine a publisher of several dozen websites that can steer its moderation team, track the team's performance and empower moderators to control content, all through one single backend for all of its sites, so that the team works more quickly and effectively than before.

• Imagine a social network empowered to track and manage the reputation of each individual user, from the posting IP address to the quality and nature of the content itself. Imagine if such a network could undertake specific actions on the site’s behalf, whenever necessary.

• What about a discussion forum with hundreds of thousands of topics, where moderators have tools that scan and identify potential flashpoints within seconds, allowing them to exercise just-in-time, targeted control?

• Or visualize a community site for kids, where all profanity is blocked automatically before children can read it.

Call for testers

Mollom is about to launch a new add-on product in private beta: effectively a "hosted moderation interface". Our goals are to:

  • Provide an optimized and intelligent moderation interface: sort and bulk-moderate comments by spam score, profanity and more.
  • Make it easier to moderate multiple websites: moderate all your sites from a single, unified moderation interface.
  • Make it easier to support moderation teams: create moderation teams, define their workflows and track the performance of individual team members.
  • Provide moderation as a service: seamlessly outsource the moderation of your site, fully or partially, to a dedicated team.

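To make the first goal concrete, here is a minimal sketch of what sorting and bulk-moderating comments by spam score might look like on the client side. The `Comment` fields, the 0.0-1.0 score scale and the thresholds are illustrative assumptions, not Mollom's actual data model or API:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    id: int
    body: str
    spam_score: float        # assumed scale: 0.0 = ham, 1.0 = certain spam
    contains_profanity: bool

def triage(comments, spam_threshold=0.9):
    """Sort comments by spam score (worst first) and split them into
    auto-delete, manual-review and publish queues."""
    delete, review, publish = [], [], []
    for c in sorted(comments, key=lambda c: c.spam_score, reverse=True):
        if c.spam_score >= spam_threshold or c.contains_profanity:
            delete.append(c)     # confident spam or profanity: bulk-delete
        elif c.spam_score >= 0.5:
            review.append(c)     # uncertain: queue for a human moderator
        else:
            publish.append(c)    # likely ham: publish immediately
    return delete, review, publish
```

Ranking by score means a moderator sees the most suspicious content first and can act on whole queues at once instead of reviewing comments one by one.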
The product is a work in progress, but we'll soon be accepting a limited number of private beta test users. If you're interested in being an early beta tester, sign up here.
