Russ is an expert source and speaks regularly on the topic of Internet Safety. Continue reading Russ’ informative blog regarding inappropriate content on social media networks.
“Social Networks Contain Inappropriate Content. Who’s Watching Out For It?
Have you ever seen or read something inappropriate on Facebook? It’s very likely. What about on Instagram, Pinterest, Tumblr, or Vine? Who’s really tracking that type of stuff? Do any of these social networks have “content” police? Did you see last week that Pinterest will now allow “artistic” nudity?
Most reputable social networks have instituted “community reporting” to keep content in line with established Terms of Service. Facebook, with its community of over 1 billion users, attempts to enforce its use policies by allowing users to send reports about inappropriate content. Thus, anyone can report inappropriate or abusive things such as pornography, bullying, hate speech, graphic violence, and threats–and Facebook will likely remove them.
A policy of “community reporting” implies that Facebook doesn’t pretend to be able to review all content therein. Is that unreasonable? Maybe so, but with 350 million new images posted on Facebook EVERY DAY, no one can keep up.
Most social networks have set a minimum age of 13 for users. However, not all pre-teens tell the truth. In fact, about 5 million children under the age of 13 have Facebook accounts. When a user indicates she is under 18 years old, Facebook automatically blocks “mature” ads. This is its first line of defense.
But if your 10-year-old claims to be 13 when signing up, she’ll be 15 when Facebook starts allowing mature ads.
Due to sheer volume, Facebook has four task forces to moderate and remove potentially harmful content. The “Abusive Content” Team handles sexually explicit content; the “Safety” Team deals with vandalism, graphic violence, credible threats of violence, and illegal drug use; the “Hate and Harassment” Team, of course, manages hate speech and harassment; and the “Access” Team handles hacked and imposter accounts.
In recent news, Twitter’s Vine app has been bombarded with hardcore pornographic material. However, Vine says that “users can report videos as inappropriate within the product if they believe the content to be sensitive or inappropriate (i.e., nudity, violence, or medical procedures).”
When a user stumbles upon inappropriate material on Vine, they encounter a warning message they must click through to access the material.
If a video violates guidelines, it can be reported and, in some cases, the account of the user who uploaded it may be terminated. Unfortunately, Vine’s Terms of Service do not explicitly mention nudity or pornography. So if a video contains inappropriate material, a user must report it. And even when some are reported, others are not, leaving adult material accessible.
Like Facebook and Vine, other social media applications are taking stances on inappropriate material.
Instagram requires users to be 13 years old; its sign-up form doesn’t even offer 1999 as a birth-year option. It allows users to block other users and report objectionable material.
Pinterest doesn’t allow “nudity or hateful content.” Its Terms of Service allow users to submit content for review. It also requires users to be 13 years old.
Twitter allows users to report spam and pornography to Spam Watch, an account that collects inappropriate material reports. You can also block users from following you. If enough users block an account, Twitter can suspend it.
If you come across inappropriate content on Facebook, you have a few options:
- 1) Hide it from your News Feed
- 2) Send a message to the person who posted the inappropriate content/image and ask them to take it down
- 3) Unfriend or block the responsible person
When posting things on Facebook, you automatically grant Facebook the right to show it or remove it. When you post something using the Public setting, you allow everyone to see that information and to associate it with you.
Users cannot post content or take any action on Facebook that infringes someone else’s rights or violates the law. If someone repeatedly infringes other people’s intellectual property rights, their account can be disabled.
The moral of this story: parents need to be vigilant when it comes to monitoring their child’s access to social network content. The problem is that parents don’t have time to monitor one social network, let alone three or five.
A new category of software has emerged called “facebook monitoring” or “facebook parental control software.” These types of products help parents keep track of a child’s activity on social networks. Go to TopTenReviews.com, PCMag.com, or CNET.com to get lists.”
Russ Warner is CEO of ContentWatch, makers of the top-rated parental control software, Net Nanny. He is used as an expert source in local and national press and speaks and blogs regularly on the topic of Internet Safety. Warner has been quoted in or written for articles found on CNN, The NY Post, Forbes, HLNtv, and is a columnist on Huffington Post. He has a Master’s Degree in Business Administration. ContentWatch is actively involved in Internet safety campaigns for the local media and a national audience of customers. (The opinions expressed in this article are his own.)