
Facebook Updates Its Suicide Prevention Tools

The social-media platform has made it easier for users to reach out to their connections whose activity on Facebook indicates they are at risk for self-harm or suicide.

By Laura Entis

Opinions expressed by Entrepreneur contributors are their own.

Reuters | Robert Galbraith

Yesterday, Facebook announced that it is launching a new tool that will make it easier for users to intervene if they are worried about a friend's risk of suicide.

While the social network has allowed users to report potentially suicidal posts on the platform since 2011 (by submitting a screenshot of, or a link to, the concerning post), the updated feature allows users to flag troubling posts directly.

After flagging a post, users will receive a message with links to three available options: directly message the potentially suicidal person, reach out to another Facebook friend for support or connect with a professional at a suicide hotline.

Image credit: Facebook Safety

According to the blog post announcing the changes, trained teams at Facebook will review each flagged post and, if they decide it's necessary, will send the person who posted the message a note encouraging them to speak with a mental-health expert at the National Suicide Prevention Lifeline.

Related: Facebook Basically Shrugs Off User Outrage Over 'Emotional' Experiment

"We have teams working around the world, 24/7, who review any report that comes in. They prioritize the most serious reports, like self-injury, and send help and resources to those in distress," the Facebook post reads.

This is certainly an admirable update; one could even argue that it is a necessary one. As Facebook continues to consume more of our time and account for an increasing percentage of our interactions with friends, family and acquaintances, it's inevitable that expressions of real pain and distress will be shared over the platform. For better or worse, Facebook is a big part of how we communicate. By making it easier for users to reach out to connections on the platform who appear to be at risk for self-harm or suicide, Facebook will undoubtedly save lives.

In that sense, the update is an unequivocal success.

At the same time, it raises some prickly ethical questions for the company, namely how exactly Facebook's team will determine which reports are valid and require a response and which do not. Even with the most extensively trained experts on hand, oversights will occur.

Additionally, with the announcement of this new feature, Facebook risks wading back into some very uncomfortable territory. This summer, as you probably remember, the company faced near-universal backlash after the revelation that it had manipulated content seen by more than 600,000 users to find out whether the changes would affect their emotional state. And this kind of tweaking is in no way an isolated incident.

Related: Facebook Updates Its Privacy Policy, But Does That Mean Anything?

As revealed by a recent episode of NPR's Radiolab, the company is constantly tinkering with the way it words calls to action on its platform. For example, in an attempt to sort through the onslaught of photo-removal requests it receives every day, Facebook asked users to select, from a list of responses, a reason why they wanted a particular photo removed ('embarrassing,' 'makes me sad' and 'bad photo of me,' to name a few). Typically, only 50 percent of respondents selected one of the available options.

Interestingly, all it took was the addition of a single word -- it's -- for the response rate to jump 28 percentage points, from 50 percent to 78 percent. So, instead of the option being "embarrassing," it was tweaked to "it's embarrassing."

Arturo Bejar, the director of engineering for the Facebook Protect and Care team, revealed to Radiolab that he and his team make these subtle adjustments, which often result in significant changes in response rates and/or follow-up actions, all the time.

This brings us back to Facebook's updated suicide prevention feature, which reaches out to at-risk users in the form of a carefully worded message:

Image credit: Facebook Safety

Will Facebook experiment with tweaking this wording, too, in order to make it more effective?

Rolling out this new initiative has the potential to save lives, but when the stakes are this high, the prospect of any kind of experimentation -- even when the intentions are so pure -- still feels a little chilling.

Related: OkCupid Founder: 'If You Use the Internet, You're the Subject of Hundreds of Experiments'

Laura Entis is a reporter for Fortune.com's Venture section.
