Detecting AI
Anna Cheng
Grade 7
Presentation
Hypothesis
If individuals have background information, then they are more likely to accurately distinguish between AI-generated and real content, as the additional facts can help identify inconsistencies that reveal the content's authenticity.
Research
AI is everywhere.
AI in the news has been an ongoing discussion for years. Since 1995, documented cases of AI-generated images and facts have been reported in the news. This has become a large problem because of the harmful ways AI is being used. In the news, people can use AI to create fake stories that support a particular opinion, which leads others to form opinions or take actions based on false information. For example, multiple deepfakes (images or videos artificially altered to make one person look like another) have been used to create bias in elections and other crucial decisions. When deciding who to vote for or where your opinion stands, people often look to trusted friends, family, and the news to see how the candidates are doing and treating others. This is a problem because "news" is no longer trustworthy and can contain nonfactual information.
Many engineers and scientists have been working on ways to stop this harmful manipulation of our minds, but have yet to find an answer. While our hopes rest on the future, what we can do right now is try to recognize when we see something untrue. Humans can judge the quality of an image and fact-check it, but how often do we actually do that? Unfortunately, because of the growing problem of fake news, scientists have been forced to turn to AI itself to detect the problem: AI.
Variables
Manipulated
Whether or not there is a news story to go with the images
Responding
The percentage of participants who correctly identify the AI-generated image
Controlled
Images, stories, time allowed, and revealing the answers only after testing
Procedure
1. Gather 3 AI-generated news stories with AI-generated images and 9 real stories with real images.
2. Separate the 12 stories and images into 3 groups of 4, with one fake story/image in each group.
3. Label the items in each group with the letters A-D, E-H, or I-L.
4. Set up a survey for each participant with 6 identical pages of 4 sections each.
5. Hand out the group A-D images only (no stories) to participants.
6. Set a timer for five minutes and let participants study the images; they then guess which image was AI-generated and record their answer on the survey.
7. Repeat steps 5 and 6 with the other two image groups, E-H and I-L.
8. Repeat steps 5-7 with the same image groups, but for each test add the 4 corresponding news stories to go with the images.
9. Collect the data from each trial and calculate the percentage of correct answers for each trial (see the sketch after this list).
10. Analyze the results.
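The tallying in step 9 can be done by hand or with a short script. Below is a minimal sketch of that step, assuming each participant's guess is recorded as a single letter; the "correct" letters and example responses are hypothetical placeholders, not the real survey data.

```python
# Minimal sketch of step 9: tally the guesses and compute the percentage correct.
# NOTE: the correct letters and the responses below are hypothetical placeholders.

correct = {"A-D": "C", "E-H": "F", "I-L": "J"}  # which item in each group was AI-generated (hypothetical)

responses = {  # one guessed letter per participant, per group (hypothetical example data)
    "A-D": ["C", "B", "C", "A"],
    "E-H": ["F", "F", "G", "H"],
    "I-L": ["J", "I", "I", "L"],
}

for group, guesses in responses.items():
    right = sum(1 for guess in guesses if guess == correct[group])
    percent = right / len(guesses) * 100
    print(f"{group}: {right}/{len(guesses)} correct = {percent:.2f}%")
```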
Observations
Results:
| Data Set | # Right (of 16) | % Right |
| --- | --- | --- |
| Image only: A-D | 11 | 68.75% |
| Image only: E-H | 9 | 56.25% |
| Image only: I-L | 4 | 25.00% |
| Text & image: A-D | 8 | 50.00% |
| Text & image: E-H | 11 | 68.75% |
| Text & image: I-L | 4 | 25.00% |
Notes:
- There were 16 participants tested.
- There were 12 data sets, labeled A through L. Participants were given a set of 4 images (with or without accompanying text) at a time. Each set contained 1 "fake" (AI-generated) image.
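How the percentages were calculated (worked from the table above, assuming all 16 participants answered every trial):
- Each percentage is the number of correct guesses divided by the 16 participants; for example, 11 correct guesses for set A-D gives 11 ÷ 16 × 100 = 68.75%.
- Across all three image-only trials, 11 + 9 + 4 = 24 of 48 guesses were correct (50.00%); with the text added, 8 + 11 + 4 = 23 of 48 were correct (47.92%).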
Analysis
I was quite surprised to discover my hypothesis was incorrect. I was wrong by only one measly point, but honestly, I had expected a greater margin in my direction. What I also didn't anticipate was how many participants changed their answer to the wrong one after reading the news story. Instead of helping people detect AI, the real stories were more likely to lead people astray. Participants appeared to be influenced by stories that went against what they believed to be true. For example, the story "In My Daddy's Belly" seemed to confuse many individuals: many people believed the story could not be real, and they were further biased by the fact that it came from TikTok.
Conclusion
We are not prepared to decipher AI. Not only do we struggle to judge the authenticity of images, but the stories do not seem to help us either. It appears that people are not helped by having more information. Computers don't really care if there are more data points, but we as humans often struggle with our own biases (see "Talking to Strangers"). As the use of AI is already growing rapidly, I firmly believe more research into this topic is important.
Application
This question is similar to many others being studied by experts around the world. AI is a real-life problem and a blessing at the same time. It is in the news and in our everyday lives; AI is everywhere. AI can be a tool that aids our society, or it can destroy us. If we can understand AI and accept the reality that we need to understand it, then we can reduce the problem it could become.
If I were to do this experiment again, the first thing I would change is the number of participants. With only a few people taking the survey, the results are less accurate than they would be with more people. Unfortunately, testing more people just wasn't possible with the limited time and resources I had. The second thing I would change is the question itself. The question I asked was an insufficient way to study the psychological aspect of detecting AI. What I really wanted to learn was how our brains are able to detect AI, why this is a problem, and how we can solve it. If we could accurately detect AI 100% of the time, fake news would no longer be a problem, and AI could be used to improve our lives instead of making them worse. Other projects people could do relating to this experiment include studying the human brain or looking further into the uses of AI. We need to understand this further if we are to use AI to its full potential.
Sources Of Error
I did not have nearly enough participants. Originally, I had 14 people turn in their permission forms, and the testing went well. Throughout the experiment, I was fully aware that 14 students wasn't close to the number I would need to get an accurate result, but at the time I had no other option. And then I lost everything: all the data I had collected disappeared. In the last week, I had to not only wrap up my slideshow and the other parts of this project, but also re-conduct the experiment with a whole new set of people. One other thing I would like to recognize is the error in the question/problem itself. During the interview, I realized that my experiment in fact had very little to do with AI at all. The way my testing was done made the discovery more about psychology than AI. I needed a clearer understanding of AI and of what questions to ask.
Citations
https://www.snopes.com/fact-check/picture-boy-ugly-drawing/
https://www.forbes.com/sites/mollybohannon/2024/09/15/vance-defends-false-claims-about-immigrants-eating-cats-wanted-to-create-media-attentio
https://theonion.com/jd-vance-forced-to-dress-as-elf-at-mar-a-lago-christmas-party/
https://www.theguardian.com/books/2024/apr/26/trump-kristi-noem-shot-dog-and-goat-book
https://www.snopes.com/fact-check/shaq-boy-diner-abuse/
https://www.snopes.com/fact-check/jim-thorpe-shoes-olympics/
https://www.snopes.com/fact-check/dominos-pizza-paving/
https://www.snopes.com/fact-check/trans-man-childrens-book/
https://www.snopes.com/fact-check/the-rescuers-topless/
https://www.snopes.com/fact-check/trump-flies-sick-boy/
https://www.snopes.com/fact-check/do-farmers-feed-cows-skittles/
https://www.snopes.com/fact-check/snow-king-chairlift/
https://www.ftc.gov/news-events/news/press-releases/2022/06/ftc-report-warns-about-using-artificial-intelligence-combat-online-problems
https://www.cnet.com/news/misinformation/ai-misinformation-how-it-works-and-ways-to-spot-it/
https://akademie.dw.com/en/generative-ai-is-the-ultimate-disinformation-amplifier/a-68593890
https://www.cbc.ca/news/science/artificial-intelligence-misinformation-google-1.7217275
https://theconversation.com/how-close-are-we-to-an-accurate-ai-fake-news-detector-242309
https://news.vt.edu/articles/2024/02/AI-generated-fake-news-experts.html
https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/
https://designer.microsoft.com/image-creator
https://journalism.columbia.edu/news/tow-report-artificial-intelligence-news-and-how-ai-reshapes-journalism-and-public-arena
https://www.pushkin.fm/podcasts/against-the-rules
https://www.goodreads.com/book/show/43848929-talking-to-strangers
Acknowledgement
Thank you to all the people who contributed to this project. Thanks to the teachers who supported this experiment. Thank you to my parents: Jeremy Cheng, who served as my expert, and Andrea Cheng, who helped edit the writing and slideshow. I would also like to thank all the participants who took part in the experiment.