Social Websites Challenged By Spam

Just as social sites begin to build their target audiences, spammers can rush in and undermine those efforts.

The simplest method of controlling spammers is the manual approach, in which other users flag content as spam. Once a certain threshold is passed, such as enough downvotes on Digg or Reddit, the spam content simply goes away.
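The threshold idea can be sketched in a few lines. This is a minimal illustration, not how Digg or Reddit actually implement it; the threshold value and vote-count model are assumptions chosen for the example.

```python
# Hypothetical cutoff: real sites tune this value and may weight votes.
DOWNVOTE_THRESHOLD = 5

def is_hidden(downvotes: int, threshold: int = DOWNVOTE_THRESHOLD) -> bool:
    """Content disappears once community downvotes pass the threshold."""
    return downvotes >= threshold

print(is_hidden(3))  # False: still visible
print(is_hidden(7))  # True: hidden as likely spam
```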

Automated prevention, like the ubiquitous CAPTCHAs seen on forms all over the Internet, helps stop robotic spamming. Social websites could also use rank-based methods, similar to how search engines work, to bury spam deep within social search results, where it is unlikely to be viewed often.
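Rank-based demotion can be sketched as a scoring tweak: instead of deleting suspected spam, the site penalizes its ranking so it sinks in the result list. The `spam_score` field and the multiplicative penalty below are illustrative assumptions, not a method from the report.

```python
def ranked(results: list[dict]) -> list[dict]:
    """Sort by relevance, discounted by an estimated spam probability,
    so likely-spam items drop toward the bottom of the results."""
    return sorted(
        results,
        key=lambda r: r["relevance"] * (1 - r["spam_score"]),
        reverse=True,
    )

results = [
    {"id": "a", "relevance": 0.9, "spam_score": 0.8},  # relevant but spammy
    {"id": "b", "relevance": 0.6, "spam_score": 0.0},  # clean
]
print([r["id"] for r in ranked(results)])  # ['b', 'a'] -- spam sinks
```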

The researchers also looked at tagging and its challenges. Since tagging is subjective, a spammer can attach an inappropriate tag to an item so that, when the tag is clicked, it leads to the spammer's content.

Countering that means developing a spam model that can quickly identify spam content. This entails defining "good" tags for a given piece of content and assessing other tags for their suitability.
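A crude version of that suitability check can be sketched as follows, assuming the site can derive a set of "good" tags for an item (supplied by hand here); a real model would score similarity rather than require an exact match.

```python
def tag_is_suspect(tag: str, good_tags: set[str]) -> bool:
    """Flag a tag that does not appear among the known-good tags
    for the item (case-insensitive comparison)."""
    return tag.lower() not in {t.lower() for t in good_tags}

good = {"photography", "camera", "lens"}
print(tag_is_suspect("Camera", good))       # False: fits the content
print(tag_is_suspect("cheap-pills", good))  # True: likely a spam tag
```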

The report's authors noted that pre-existing spam solutions, combined with the detailed logs of user interaction that social websites maintain, can work even better on the social Web. As with anything in security, implementing them means striking a balance between user convenience and adequate spam protection.
