The Merge of Social Media and AI - Implications on Adolescent Brain Development

 

Written By Rouida Siddiqui

Cover Art: Royalty Free on PickPik

Introduction

After a long day at school, many adolescents come home, grab a snack, lie in bed, pull out their phone, and scroll on TikTok. Immediately after opening the app, they see the first video. Putin is doing the “renegade” dance. Funny. Scroll. Biden is plunging a toilet. Funny. Scroll. Zuckerberg is doing cartwheels. Funny. Scroll. Will Smith is drinking bottles of ketchup. Funny. Scroll. Dwayne Johnson is eating rocks. Funny. Scroll. 

Suddenly, things take a darker turn. Videos discussing the dark side of artificial intelligence begin popping up. “An AI-generated pornographic video of Selena Gomez and Justin Bieber…” Scroll. “China’s President declares invasion of Taiwan…” Scroll. “Principal screams racist slurs at students…” Scroll. “Kim Jong Un declares war with the US…” Scroll. And just like that...two hours pass by. 

All of the content this adolescent just consumed consists of hyper-realistic AI-generated videos known as “deepfakes.” As artificial intelligence advances rapidly, its infiltration of social media networks has increased tremendously. From deepfakes and Snapchat’s new AI chatbot to ChatGPT and AI-managed social media algorithms, the impact of this infiltration is massive, fueling misinformation, falsified claims, and impaired decision-making in both adults and adolescents. To truly understand the impact the merge of social media and artificial intelligence will have on adolescents, it is essential to first understand the psychology of the young mind.

Social and Psychological Effects of Social Media on Adolescent Brain Development 

Extensive research indicates that peer environments and impulsive emotions have a powerful effect on adolescents’ executive functioning and decision-making. The amygdala, a part of the brain responsible for emotional processing, is poorly connected to the brain’s judgment center during adolescence, amplifying the effect of immediate emotion and sidelining logic in teenage decision-making (“The Teenage Brain: Under Construction,” 2023). 

Additionally, the socioemotional system, involved in creating instinctive emotional responses, matures earlier in adolescence than the cognitive-control system, which is in charge of responses based on logic and reasoning (“The Teenage Brain: Under Construction,” 2023). This indicates that adolescents are more likely to make decisions and engage in behaviors based on emotion rather than on logic and a critical evaluation of benefits versus risks. fMRI scans of adolescents have shown decreased activity in the fronto-parietal circuitry, which is vital for cognitive control, and increased activity in the ventromedial prefrontal cortex, which oversees the brain’s emotional processing areas (“The Teenage Brain: Under Construction,” 2023). 

The psychological effects of social media on the adolescent brain revolve heavily around a risk-versus-reward association in which decisions are based on peer validation. According to the American College of Pediatricians, “In a study of 306 individuals between 13 and 24 years of age, researchers found risk-taking and risky decision making increased when in peer groups” (“The Teenage Brain: Under Construction,” 2023). On social media, peer validation and “reward” are often associated with “likes.” More “likes” on a post correlate with stronger feelings of reward, whereas fewer “likes” correlate with weaker feelings of reward. 

This idea was proposed by UCLA scientist Lauren Sherman and her team, who conducted an experiment in which the brains of a few dozen adolescents were scanned while they viewed their Instagram feeds. Their data showed that photos with more likes caused significantly greater activity in the adolescents’ nucleus accumbens, a structure that is part of the brain’s reward circuitry (Vedantam, 2016). Notably, this was true for all photos, including those the adolescents had posted themselves and those posted by others. 

Figure 1. Nucleus Accumbens Activity Levels in Adolescents’ Brains In Response To “Like” Ratios on Certain Images (Sherman et al., 2016)

Social science correspondent Shankar Vedantam, discussing Sherman’s experiment in an interview on NPR’s Morning Edition, summarized the conclusions: “The brains of teenagers responded very strongly to the pictures deemed popular, regardless of which pictures they were…And they found that if a teen saw a picture with lots of likes, she tended to like it herself. If another teen saw the same picture with only a few likes, he tended to not like it himself” (Vedantam, 2016). These findings indicate that teenagers are heavily influenced psychologically by their social contexts and often make decisions based on impulsive emotion and peer pressure. 

Dangers of the Merge of AI and Social Media: Deepfakes, Snapchat AI, and AI-Run Algorithms

With the rise of social media and the rapid advancement of artificially intelligent technology, integrations between the two are becoming more and more prominent. Some common forms of this merge include the creation of deepfakes, Snapchat AI, and AI-run algorithms used by notable social media apps like TikTok. 

Deepfakes:

In 2019, a video was released on social media of Mark Zuckerberg, founder of Facebook, stating that due to safety concerns, Facebook would be deleted (Ali et al., 2021). The video garnered over 72 million views across various platforms and was believed to be authentic by many (Ali et al., 2021). 

In 2020, researchers at the Massachusetts Institute of Technology (MIT) released a video of Richard Nixon nervously delivering a speech after the Apollo 11 mission in which he stated, “Good evening, my fellow Americans. Fate has ordained that the men who went to the moon to explore in peace will stay on the moon and rest in peace” (Ali et al., 2021). In reality, the astronauts had safely returned to Earth from the Moon; the video fabricated a speech implying that they had died (Ali et al., 2021). This video, too, was believed to be authentic by many. 

Both of these videos are examples of deepfakes. Deepfakes use a type of AI called generative adversarial networks (GANs) to produce hyper-realistic videos (Ali et al., 2021). While GANs have many positive uses, such as for more precise healthcare imaging and the creation of compelling artwork, they also have a dangerous capability of generating and spreading false information (Ali et al., 2021). 
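To make the adversarial idea concrete, here is a deliberately tiny, hypothetical sketch in plain NumPy with the gradients worked out by hand: a two-parameter “generator” learns to mimic samples from a Gaussian by fooling a logistic “discriminator.” This is only an illustration of the GAN training loop; real deepfake systems use deep networks over images and audio, and nothing below reflects any production tool.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_real(n):
    # Toy "real" data: one-dimensional samples from N(4, 1).
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: maps noise z ~ N(0, 1) to w_g * z + b_g.
w_g, b_g = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w_d * x + b_d), its estimate that x is real.
w_d, b_d = 0.1, 0.0

lr, batch, steps = 0.05, 64, 2000
z0 = rng.normal(size=batch)
start_gap = abs(np.mean(w_g * z0 + b_g) - 4.0)  # distance of fake mean from real mean

for _ in range(steps):
    real = sample_real(batch)
    z = rng.normal(size=batch)
    fake = w_g * z + b_g

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w_d * real + b_d), sigmoid(w_d * fake + b_d)
    w_d -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    b_d -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w_d * fake + b_d)
    w_g -= lr * np.mean((d_fake - 1) * w_d * z)
    b_g -= lr * np.mean((d_fake - 1) * w_d)

z1 = rng.normal(size=batch)
end_gap = abs(np.mean(w_g * z1 + b_g) - 4.0)
print(f"distance of fake mean from real mean: {start_gap:.2f} -> {end_gap:.2f}")
```

Over the training loop, the generator’s outputs drift from a mean near 0 toward the real data’s mean near 4: the same adversarial dynamic that, scaled up to faces and voices, yields convincing fakes.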

Creating deepfakes is extremely easy and can be done by anybody using tools such as Reface and FakeApp (Ali et al., 2021). With just a few photos and voice clips, a deepfake creator can make a person say anything they want on video. Deepfakes are often put to malicious purposes, most commonly swapping celebrity faces into explicit videos and manipulating the words of world leaders in speeches (Ali et al., 2021).

Because social media has made it very easy to circulate deepfakes across the internet rapidly, the misinformation shared by these videos has harmful consequences for decision-making. According to scientists, “Children are considered vulnerable populations (Mechanic & Tanner, 2007) and are more at risk of believing deepfakes given their lack of exposure to such manipulated media and their lack of knowledge to contextualize new information. Furthermore, children form several social and political opinions during their formative years and can become targets of disinformation spread by convincing deepfakes” (Ali et al., 2021). 

Unfortunately, studies suggest teenagers are not skilled at recognizing even conventional fake news. In one study, students were asked to evaluate the credibility of a phony website; only 11% of the students recognized that the website was a hoax (Leu et al., 2007). Another study using the same website found that 65% of students claimed it was reliable (Pilgrim et al., 2019). This is particularly concerning because 75% of teenagers rely on the internet for news (Breakstone et al., 2018), making them prone to faulty decision-making and poor judgment due to the misinformation they are fed. 

The implications of this inability to distinguish fake news and deepfakes from authentic content are dire, as it can further undermine adolescents’ decision-making as they grow older. A lack of this ability in adulthood can be extremely consequential for democracy, the political landscape, future employment (especially given how easily a person’s words and actions can be manipulated through deepfakes), and society as a whole. 

Snapchat’s “My AI”:

The introduction of “My AI” on Snapchat in early 2023 surprised many of its users. Built to be “your digital friend,” My AI uses OpenAI’s GPT technology to engage in conversation with users across a range of subjects (Simplilearn, 2023). Very quickly after its release, controversies arose regarding My AI’s data privacy and security, its suspicious responses to questions, and its lack of a content filter. The system’s ability to “quickly extract and process vast amounts of personal data [exposes] children to cyber threats, targeted advertising, and inappropriate content” (Garrate, 2022). This is especially important considering 57.8% of Snapchat’s users are between 13 and 24 (Lin, 2023). 

To test the dangers “My AI” poses to children on Snapchat, a journalist went undercover, posing as an underage teenager, and asked “My AI” a series of suspicious questions. By the end of their conversation, the journalist had gotten “My AI” to give guidance on “how to mask the smell of pot and alcohol, how to move Snapchat to a device parents wouldn’t know about, and how to plan a ‘romantic’ first sexual encounter with a 31-year-old man. Brief cautions were followed by cheerful support” (Haidt, 2023). 

The company’s website states that it is “...constantly working to improve and evolve My AI, but it’s possible My AI’s responses may include biased, incorrect, harmful, or misleading content…you should always independently check answers provided by My AI before relying on any advice, and you should not share confidential or sensitive information” (Raj, 2023). Given the underdevelopment of the brain during adolescence and teenagers’ hindered ability to verify sources through independent research, concerns have arisen about whether young users will heed this warning and whether Snapchat’s My AI should continue to be offered. 

Adolescents are prone to engaging in risky behaviors because of the heavy influences of social context, peer pressure, and the desire for psychological and social approval. With the introduction of unfiltered chatbots such as “My AI,” acting on these impulses becomes easier, as adolescents can obtain step-by-step guidance on how to accomplish such tasks, all of which poses a great danger to their mental, physical, and emotional health. 

AI-Run Algorithms: TikTok and Instagram Reels

Many social media networks utilize artificially intelligent algorithms to capture users’ attention and present content personally tailored to what they enjoy watching. These algorithms monitor user engagement, relevance, timing and frequency, recency, content type, virality, watch time, and more (Adisa, 2023). Since TikTok’s creation, a long-standing debate has pointed to heavy bias and data security concerns in the content its algorithm presents to viewers. 
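As a purely hypothetical sketch of how an algorithm might combine such signals, the function below computes a weighted score per post with an exponential recency decay. The signal names, weights, and half-life are invented for illustration; real platform ranking systems are proprietary and vastly more complex.

```python
# Hypothetical signal weights -- invented for illustration, not any platform's real values.
WEIGHTS = {"engagement": 0.35, "relevance": 0.30, "watch_time": 0.25, "virality": 0.10}
HALF_LIFE_HOURS = 6.0  # assumed recency half-life

def rank_score(post, now_hours):
    """Weighted sum of normalized signals (each in [0, 1]), decayed by post age."""
    base = sum(WEIGHTS[k] * post[k] for k in WEIGHTS)
    age = now_hours - post["posted_at_hours"]
    decay = 0.5 ** (age / HALF_LIFE_HOURS)  # exponential recency decay
    return base * decay

posts = [
    {"id": "a", "engagement": 0.9, "relevance": 0.8, "watch_time": 0.7, "virality": 0.9, "posted_at_hours": 0},
    {"id": "b", "engagement": 0.4, "relevance": 0.9, "watch_time": 0.5, "virality": 0.1, "posted_at_hours": 10},
    {"id": "c", "engagement": 0.2, "relevance": 0.3, "watch_time": 0.2, "virality": 0.0, "posted_at_hours": 11},
]
now = 12.0
feed = sorted(posts, key=lambda p: rank_score(p, now), reverse=True)
print([p["id"] for p in feed])  # ['b', 'a', 'c']
```

In this toy example, a recent, moderately engaging post (“b”) outranks an older viral one (“a”), showing how the choice of weights and decay, not user intent, determines what appears at the top of a feed.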

One central allegation pressed against TikTok is that its algorithm warps users’ views and opinions through heavily filtered political content, propaganda, misinformation, and biased content (Glosserman, 2023). One survey estimated that 20% of TikTok videos portraying news events contain misinformation and that 90% of ads containing electoral misinformation are approved by TikTok (Glosserman, 2023). With an estimated 30% of people under thirty using TikTok to obtain their news, the potential damage caused by false news and misinformation is huge (Glosserman, 2023). These statistics not only point to problems in TikTok’s content verification; an AI-managed algorithm that consistently pushes misinformation toward the youth is itself a concern for the adolescents who consume this content. 

Political bias is also often seen in TikTok’s AI-run algorithm. For example, during the Russia-Ukraine War, Russian TikTok was heavily filtered to show pro-Russian content to its viewers (Haidt, 2023). But algorithmic bias is not limited to TikTok. The same political influence can be seen with Instagram Reels in regard to current events: the platform has been accused of “shadow-banning” pro-Palestinian content (Paul, 2023). Shadow-banning occurs when an individual is discreetly excluded from a social media platform or online forum, usually by rendering their posts and comments invisible to other users without the user’s awareness. With this in mind, questions arise about what American social media platforms would look like if China were to invade Taiwan (Haidt, 2023). Would pro-Taiwan content be shadow-banned to justify China’s actions? For adolescents, this means the formation of political opinions can be dangerously shaped by biased content and misinformation rather than by authenticated sources and objectively presented information. 

Additionally, with the existence of Instagram bot followers, it has become easier than ever to accumulate large numbers of fake likes, comments, views, and thus greater publicity through automated accounts (Howson, 2018). Manipulated content can be pushed out faster and more widely by the AI-managed algorithm because of the artificial popularity these bots generate. 

The impacts of the AI-based algorithms used in popular social media networks can be considered a form of manipulation. Because a staggering number of adolescents rely on networks such as TikTok as their news source, and because these algorithms can be heavily tampered with, adolescents can effectively be steered toward the beliefs and opinions that governments and other actors want them to hold. This undermines their ability to make their own decisions based on independent research and reliable sources. 

The Extrapolation of the Bridge Between AI and Social Media: Possible Implications for Future Generations of Adolescents

As the infiltration of various forms of artificially intelligent technology within social media becomes increasingly prominent, it is essential to consider the implications this merger will have on the adolescents maturing in this world. How will it impact their emotional, mental, and physical health, social relationships, skill sets, and decision-making ability? More importantly, will the negative impacts outweigh the positives? The escalated spread of misinformation through social media networks can harm adolescents’ ability to make decisions. Adolescents are prone to being manipulated toward certain opinions and ideologies, especially those promoted by accounts with large followings. 

Consider the following scenario. Because of the significant influence of social context on the adolescent brain, an adolescent who sees a deepfake of their favorite celebrity promoting a harmful behavior such as vaping may be more likely to engage in that behavior. If the deepfake also received hundreds of likes and views through Instagram bot followers, the chance of the adolescent participating would likely jump significantly. The same scenario can be applied to underage sex, violence, skipping school, shoplifting, drinking, drug use, and other activities harmful to adolescent socioemotional health. 

Despite how easily content can be manipulated via AI, positive outcomes may also emerge from the combination of AI and social media. After prolonged exposure to deepfakes and falsified information, adolescents may gain the experience and skills needed to distinguish authentic information from fabricated information more broadly. Some school systems are even developing digital media literacy curricula that use generative modeling to help adolescents spot deepfakes and false information (Ali et al., 2021). 

Apart from malicious uses, deepfakes can also be used to create videos made purely for entertainment. For example, videos of world leaders such as Putin, Obama, and Trump doing TikTok dances circulate the internet frequently. While these do not appear to significantly affect adolescents’ mental and emotional health, they tend to leave viewers with a good laugh. 

Questions also arise as to whether it is AI itself that is “bad” or the people who use AI to manipulate others and push false ideologies. One often-overlooked point is that AI is a tool to be used at its user’s discretion. Just as a hammer can be used to build a house or to hurt someone, artificial intelligence can be used for good or evil. Labeling artificial intelligence as a collectively malicious tool therefore does not do justice to its positive uses, such as medical advancement, harmless comedy, and technological improvement. 

The positive and negative implications of the merging of social media and artificial intelligence for adolescent health are a complex topic. Further research and discussion are required to fully understand the depth of the effects this merger will have on adolescent health. 

Conclusion

With the rapid advancement of artificially intelligent technology and its increasing infiltration into popular social media networks used by adolescents, the implications of this overlap have yet to be researched thoroughly. From existing research, a few theories can be drawn about where this merger will lead the next generation of adolescents. Though the implications can be both positive and negative, it appears that the negative repercussions outweigh the positive. Amid the occasional laughs AI-generated TikTok videos give their viewers, there are layers of misinformation and content manipulation that can hinder viewers’ decision-making and news-processing abilities. 

As curiosity about what the combination of artificial intelligence and social media will bring remains, a few questions continue to linger. Will future generations of adolescents face heightened data privacy and security threats? Will the next world leaders be master manipulators who win votes by creating falsified yet realistic content? Will adolescents’ minds be manipulated to behave the way society wants them to? Can deepfake videos of ordinary individuals ruin their lives permanently? Current research suggests that if schools and academic institutions do not mentor students in filtering fake from authentic content and in the dangers and responsible use of social media and artificial intelligence, major consequences will arise in the world’s political and social landscapes, especially in democratic countries like the US. Ultimately, further research must be conducted to answer these questions and deepen our understanding of the role AI plays in adolescent perceptions of social media. 

References

Adisa, D. (2023, October 30). How to Rise Above Social Media Algorithms. Sprout Social. https://sproutsocial.com/insights/social-media-algorithms/

Ali, S., DiPaola, D., Lee, I., Sindato, V., Kim, G., Blumofe, R., & Breazeal, C. (2021). Children as creators, thinkers and citizens in an AI-driven future. Computers & Education: Artificial Intelligence, 2, 100040–100040. https://doi.org/10.1016/j.caeai.2021.100040

American College of Pediatricians. (2023). The Teenage Brain: Under Construction. Issues in Law & Medicine, 38(1), 107–125.

Breakstone, J., McGrew, S., Smith, M., Ortega, T., & Wineburg, S. (2018). Why we need a new approach to teaching digital literacy. Phi Delta Kappan, 99(6), 27-32.

Garrate, C. (2022, June 25). The impact of artificial intelligence on kids and teens. AI Magazine; BizClik Media Ltd. https://aimagazine.com/machine-learning/the-impact-of-artificial-intelligence-on-kids-and-teens 

Centre for Data Ethics and Innovation. (2019, September 12). Snapshot Paper - Deepfakes and Audiovisual Disinformation. GOV.UK. https://www.gov.uk/government/publications/cdei-publishes-its-first-series-of-three-snapshot-papers-ethical-issues-in-ai/snapshot-paper-deepfakes-and-audiovisual-disinformation

Fowler, G. A. (2023, March 14). Snapchat tried to make a safe AI. It chats with me about booze and sex. Washington Post; The Washington Post. https://www.washingtonpost.com/technology/2023/03/14/snapchat-myai/ 

Glosserman, B. (2023). AI and social media are a dangerous combination. The Japan Times; The Japan Times. https://www.japantimes.co.jp/opinion/2023/03/28/commentary/world-commentary/tiktok-ai-dangers/ 

Haidt, J., & Schmidt, E. (2023, May 5). AI is about to make social media (much) more toxic. The Atlantic. https://www.theatlantic.com/technology/archive/2023/05/generative-ai-social-media-integration-dangers-disinformation-addiction/673940/ 

Howson, N. (2018, August 25). Instagram Bots - What They Are and Why You SHOULDN’T use them. AIM Social Media Marketing. https://aimsmmarketing.com/instagram-bots-why-you-shouldn-use-them/

Leu, D. J., Reinking, D., Carter, A., Castek, J., Coiro, J., Henry, L. A., & Zawilinski, L. (2007). Defining online reading comprehension: Using think aloud verbal protocols to refine a preliminary model of Internet reading comprehension processes. D. Alvermann (Chair) 21st Century Literacy: What is it, How do students get it, and how do we know if they have it.

Lin, Y. (2023, September). Which age group uses Snapchat the most? Oberlo. https://www.oberlo.com/statistics/which-age-group-uses-snapchat-the-most#:~:text=Together%2C%20these%20two%20age%20groups,are%20in%20this%20age%20range. (Accessed November 12, 2023). 

Mechanic, D., & Tanner, J. (2007). Vulnerable People, Groups, And Populations: Societal View. Health Affairs, 26(5). https://www.healthaffairs.org/doi/10.1377/hlthaff.26.5.1220

Paul, K. (2023, October 18). Instagram users accuse platform of censoring posts supporting Palestine. The Guardian. https://www.theguardian.com/technology/2023/oct/18/instagram-palestine-posts-censorship-accusations

Pilgrim, J., Vasinda, S., Bledsoe, C., & Martinez, E. (2019). Critical Thinking Is Critical: Octopuses, Online Sources, and Reliability Reasoning. The Reading Teacher, 73(1), 85–93. https://doi.org/10.1002/trtr.1800

Raj, A. (2023, August 18). Technical glitch panics Snapchat AI users. Tech Wire Asia. https://techwireasia.com/2023/08/did-snapchat-ai-just-go-rogue/

Sherman, L. E., Payton, A. A., Hernandez, L. M., Greenfield, P. M., & Dapretto, M. (2016). The Power of the Like in Adolescence. Psychological Science, 27(7), 1027–1035. https://doi.org/10.1177/0956797616645673

Simplilearn. (2023, August 24). Snapchat’s New Friend: How My AI is Changing the Game. Simplilearn. https://www.simplilearn.com/my-ai-on-snapchat-article#:~:text=My%20AI%20on%20Snapchat%20is%20an%20innovative%20chatbot%20that%20integrates,across%20a%20range%20of%20subjects.

Torney-Purta, J. (2017). The Development of Political Attitudes in Children. Google Books. https://books.google.com/books?hl=en&lr=&id=qFUPEAAAQBAJ&oi=fnd&pg=PP1&ots=JD5zS5Uw5T&sig=A_QjYW7jLS0NHNEyD07ahQVAWJY#v=onepage&q&f=false

Vedantam, S. (2016, August 9). Researchers Study Effects Of Social Media On Young Minds [Radio broadcast]. NPR. https://www.npr.org/2016/08/09/489284038/researchers-study-effects-of-social-media-on-young-minds

 