Facebook Using Artificial Intelligence to Identify and Flag Suicidal Behavior

When a person decides to take their own life, there are often subtle signs in their actions and speech that the people around them rarely pick up on. Facebook is now trying to catch those signs, using artificial intelligence in its algorithms to detect behavior that suggests a user may be at risk.

Notably, this is the first time Facebook has used AI technology to review messages in this way, and the system is being tested in the United States for now.

The algorithm first identifies any activity that looks suspicious, which is then flagged for the company’s human review team. Once the review is done, Facebook contacts the user to suggest ways in which they can seek help.
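To make that flow concrete, here is a minimal sketch of a flag-then-human-review pipeline in Python. It is purely illustrative: the function names, the keyword-based scoring stub, and the review step are assumptions made for this example, not Facebook's actual implementation, and a real system would use a trained classifier rather than keyword matching.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch of the flow described above: a model flags a post,
# a human reviewer decides, and only then is the user contacted.
# All names here are illustrative assumptions, not Facebook's real API.

class ReviewDecision(Enum):
    NO_ACTION = "no_action"
    OFFER_RESOURCES = "offer_resources"

@dataclass
class Post:
    user_id: str
    text: str

def classifier_score(post: Post) -> float:
    """Stand-in for a trained model: returns a risk score in [0, 1].
    A keyword heuristic is used here purely so the example runs."""
    risk_phrases = ("can't go on", "no reason to live")
    return 1.0 if any(p in post.text.lower() for p in risk_phrases) else 0.0

def human_review(post: Post) -> ReviewDecision:
    """Placeholder for the human review queue; a real reviewer would
    read the post in context before deciding."""
    return ReviewDecision.OFFER_RESOURCES

def send_support_resources(user_id: str) -> None:
    # In the real product this would surface crisis-support options to the user.
    print(f"Offering crisis-support resources to user {user_id}")

def triage(post: Post, threshold: float = 0.8) -> None:
    # Step 1: the model flags activity that looks suspicious...
    if classifier_score(post) >= threshold:
        # Step 2: ...which is routed to the human review team.
        decision = human_review(post)
        # Step 3: only after human review is the user contacted.
        if decision is ReviewDecision.OFFER_RESOURCES:
            send_support_resources(post.user_id)

triage(Post(user_id="u123", text="Lately I feel like I can't go on"))
```

The key design point the article describes is that the model alone never triggers contact with the user; every flag passes through the human review step first.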

Facebook CEO Mark Zuckerberg last month announced his company’s intent to use such algorithms to spot posts by terrorist organizations, as well as other types of content which indicate that some form of action is required.

This is certainly one of the more practical applications of artificial intelligence, and there will also be tools in place for users to flag and take action on their friends’ posts in such cases. The latter has been around for a while; this is the first time that AI is being brought into the picture to help both the affected user and their contacts.

Facebook has been active in educating its users, and has announced the feature on Facebook Live, its video streaming and broadcast tool. The company has also partnered with several mental health organizations in the United States so they can offer assistance to those in need.

This is not an easy task even for AI. There are so many variables that go into detecting suicidal behavior that a human component is always essential. Only after a thorough review by a human team can any sort of action be taken.

Dr. John Draper, director of the U.S. National Suicide Prevention Lifeline, lauded Facebook’s efforts, but added that a lot more needs to be done:

“It’s something that we have been discussing with Facebook. The more we can mobilise the support network of an individual in distress to help them, the more likely they are to get help. The question is how we can do that in a way that doesn’t feel invasive. I would say though that what they are now offering is a huge step forward.”

Though it might appear that way, Facebook isn’t going to barge into people’s lives and pretend it knows how to handle the situation. It is merely making the information available to professionals, who can assess the situation and decide whether or not any sort of action needs to be taken.

Facebook has already developed the tools for content to be flagged by other users, and these are being rolled out to Facebook Live worldwide. The flagging options cover cyberbullying, violence, harassment, nudity/sex acts, hate speech, spam and unauthorized sales.

On Facebook Messenger, the company has already included an option for people to contact crisis counselor helplines directly on the platform. This is currently being released to U.S. users, but a wider rollout is expected once Facebook can ensure that organizations in other markets are able to cope with a possible spike in demand for their services.
