By: Parker Roberson
Almost a week ago, Facebook released a set of tools aimed at helping prevent suicides. Users are now able to report any content that causes them to believe that someone is having suicidal thoughts. Once the post has been reported, it will be reviewed and Facebook will decide whether to reach out to the individual to offer support and advice.
If Facebook decides to contact a user, they will be sent this message:
The message lets the user know that a friend is genuinely concerned for them. Once the user receives the message, they can choose whether they would like to receive support or talk with someone about the situation. Because this is such a sensitive subject, only the user can see the message.
Facebook has partnered with several mental health organizations to ensure that it handles these situations in the correct manner and uses appropriate language when contacting users. These organizations include Forefront, the National Suicide Prevention Lifeline, Now Matters Now, and Save.org.
While I applaud the efforts Facebook is making to prevent tragedy among its users, I feel this might cross a privacy line. I'm not sure that having Facebook reach out to users is the most effective way to handle the situation. If I saw a post that made me worry about a friend's life, my first reaction would be to reach out to that friend myself, not push the responsibility onto a social media site.
And I wonder about the legal implications. What happens if someone reports a post, but Facebook doesn't offer the user the help they requested in a timely manner, and the worst happens? Is Facebook to blame?
I'm really glad to see these options being offered to those who need them, but as a social media site, is Facebook overstepping its boundaries?