Friday, December 1, 2017

Suicide prevention with Artificial Intelligence

This week, Facebook announced that it is screening videos, status updates, and live streams on users' profiles to identify suicidal behavior. Facebook uses artificial intelligence to scan all posts by all users and flag key phrases and comments such as “Are you okay?” and “Can I help?”. The program then flags these users for review by a team Facebook has put together. If the team deems the situation serious enough, it reaches out to first responders to provide immediate help for the individual. If the team deems there is moderate concern about suicidal tendencies, it reaches out via Facebook Messenger with links to the Crisis Text Line, the National Eating Disorder Association, and the National Suicide Prevention Lifeline. The AI reviews every post and ranks what it deems most serious for review by the human team. In the past month, this AI-flagging and team-review process has led to wellness checks for 100 different Facebook users.
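Facebook has not published the details of its system, but the pipeline described above — scan posts and comments for concerning phrases, score them, rank the results, and queue the most serious cases for human review — can be sketched in rough form. Everything below is a hypothetical illustration in Python: the phrase list, weights, threshold, and function names are my own inventions, not Facebook's actual classifier or workflow.

    # Hypothetical sketch of a flag-and-rank pipeline like the one described
    # above. Phrase weights and the threshold are illustrative inventions.
    from dataclasses import dataclass, field

    # Example concern phrases, weighted by how strongly they suggest distress.
    CONCERN_PHRASES = {
        "are you okay": 1.0,
        "can i help": 1.0,
        "i want to end it": 3.0,
        "goodbye everyone": 2.0,
    }

    @dataclass
    class Post:
        user_id: str
        text: str
        comments: list = field(default_factory=list)  # comments by other users

    def score_post(post: Post) -> float:
        """Sum the weights of concern phrases found in the post and its comments."""
        corpus = [post.text.lower()] + [c.lower() for c in post.comments]
        return sum(
            weight
            for phrase, weight in CONCERN_PHRASES.items()
            for chunk in corpus
            if phrase in chunk
        )

    def triage(posts: list, review_threshold: float = 2.0) -> list:
        """Rank flagged posts by score so human reviewers see the most
        serious cases first; posts below the threshold are dropped."""
        flagged = [(score_post(p), p) for p in posts]
        flagged = [(s, p) for s, p in flagged if s >= review_threshold]
        return sorted(flagged, key=lambda pair: pair[0], reverse=True)

    if __name__ == "__main__":
        posts = [
            Post("u1", "Feeling great about the weekend!"),
            Post("u2", "I want to end it all.",
                 comments=["Are you okay?", "Can I help?"]),
        ]
        for score, post in triage(posts):
            print(f"user={post.user_id} score={score} -> queue for human review")

A real system would presumably use a trained classifier rather than a keyword list, but the sketch captures the flag-then-human-review shape the article describes: the software only ranks and queues; people decide what happens next.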

The way Facebook screens these posts definitely raises some ethical concerns. The first and most obvious is the policing of the day-to-day life that users broadcast online. Facebook is compiling each user's information and then ranking users by the apparent severity of their mental-health risk. From there, strangers review your profile and reach out to provide help based strictly on that ranking. Even though it is for a good cause like suicide prevention, this is still a large corporation combing through all of your information to determine whether it should step in. As a company, Facebook is taking it upon itself to help combat mental illness, which starts a slippery slope in which it interferes in more and more of your privacy and personal life. To play devil's advocate, however, users are broadcasting this in a very public setting, and the artificial intelligence is largely searching for comments made by other people. In that sense, it is just another person reaching out to help someone who seems to be in need. It is an interesting new development that could spark broader use of AI to scan user content for other purposes.

http://money.cnn.com/2017/11/27/technology/facebook-ai-suicide-prevention/index.html

2 comments:

  1. This is a very interesting post, and I can definitely see where the controversy lies. Personally, I agree with Facebook's approach to trying to decrease the number of suicides. If a person is willing to put out the information that they aren't doing well mentally, or posts consecutive red-flag phrases, I think we should have a better system of recognizing it. I also feel this way because younger generations seem more comfortable posting things on social media than talking to their parents, elders, or anyone else who can actually help them. Potential suicide victims can also control who sees their posts, which can be detrimental: if the viewers of a post are younger, they might not take what the person is saying seriously. Personally, my best friend committed suicide last year, which came as a total shock to me. Sometimes he mentioned how stressed he was, but I never in my life thought he would do it. Because of this, I feel it is pretty necessary that social media companies step up and take action, since they are the only ones with complete access to users' profiles and can have a positive effect on their lives. I do agree this could make people more selective about what they post, which isn't necessarily a bad thing, but could it isolate potential suicide victims even more? I haven't had a Facebook account in about six years, so I'm a little out of date, but how would the Facebook team know a person's location and be able to call the appropriate county to respond? Would they even know exact locations? Or would there be some way to contact other people close to that person to get their location?

  2. With the rise of social media, more and more people are relying on it to communicate things that would otherwise be communicated in person. We've all seen the "over-sharer" on Facebook who seems to post everything they do on the website. As it is not uncommon for someone considering suicide to give off some sort of warning sign (though it should be noted that this is by no means a guarantee), it makes sense that these warning signs would manifest themselves on social media. As such, I believe an algorithm that detects this potential is a good thing, as it allows for the detection of warning signs and even potential active suicide attempts. It is important to note as well that this algorithm is not operating in a vacuum. The AI detects the threat, but there are still humans deciding what the course of action should be. The AI is not simply calling the police or EMS on people on its own. Humans are determining whether the person needs immediate help, or whether they simply need resources provided to them. While some might say that no one is well equipped to make that decision from a distance, consider the alternative. If the AI, or the "Team", simply called 911 on every single person the AI singles out, they would likely get a large number of false positives, as well as place unneeded strain on the 911 system. In the same vein, while a person with a "plan" to commit suicide can be detained for their own safety, suicidal thoughts and ideation are not enough grounds to suspend someone's rights, and there is, in all honesty, not much emergency providers can do unless the person wants to be committed. I have been on many calls where the person didn't meet the criteria and didn't want help, and we legally had to leave the person alone. I believe Facebook is making the right decision by using a combination of AI and human input to try to help people. By agreeing to post to Facebook in the first place, you are agreeing to be monitored.
