The development of social media in our digital era has been beneficial for business, networking, creativity and, for some, building a personal brand; however, as social media use has increased amongst teens, so has the incidence of mental health problems.
According to an article published on theodysseyonline.com, those who spend more than 3 hours per day on social media are at heightened risk for mental health problems in general, and teens who spend 5 or more hours a day on their electronic devices are 71% more likely to exhibit suicide risk factors.
This dark side of social media can be traced to several factors, one being the focus on “likes”. Teens and young adults are constantly focused on producing content in the hope that it will gain a lot of positive attention. The need to gain “likes” on social media can cause teens to make choices they would otherwise not make, including altering their appearance, engaging in negative behaviors, and accepting risky social media challenges. Cyberbullying and trolling can result, causing depression, anxiety, and elevated risk of suicidal thoughts.
The ability to edit pictures also leads users to compare themselves to others, even though these displays of beauty and lavish lifestyles are often false.
So, what are these social media companies doing to help?
Popular video-sharing app TikTok has put out at least half a dozen initiatives this year alone to further safety and privacy, primarily for teen users. For example, TikTok started offering guides and tools in the search results when a user searched terms related to eating disorders. Director of Policy for TikTok US, Tara Wadhwa, stated “While we don’t allow content that promotes, glorifies or normalizes suicide, self-harm or eating disorders, we do support people who choose to share their experiences to raise awareness, help others who might be struggling and find support among our community.”
In addition, user searches are being closely monitored and, as a result, search intervention has increased. If a user searches for terms related to mental illness or suicide, for example, the app will direct them to resources such as the Crisis Text Line.
Some videos flagged for sensitive content will also display a warning message, giving users the choice to view or skip the content.
In 2018, Instagram created a ‘well-being’ team, which has introduced features such as hiding the number of likes on posts and allowing users to flag posts that may be inappropriate or raise concern.
Twitter is another social media platform that can contain constant negative news and opinion, leading to ‘doomscrolling’. Wikipedia defines doomscrolling as the act of spending an excessive amount of screen time absorbing dystopian news. Increased consumption of predominantly negative news may trigger psychophysiological responses in some people. Marketwatch.com reported that earlier this month, Twitter debuted “Safety Mode”, which uses artificial intelligence to automatically block users who are being aggressive or hateful.
Social media can be a scary place when you delve in deep; it is never going to go away and will only continue to develop and grow. As users, we need to be more mindful of the content we release to the world and help the teams behind these top social media platforms identify inappropriate content and accounts. Most importantly, we need to BE KIND ONLINE.