Human vs. AI: Can humans tell the difference between human-made and AI-generated images?

In our experiment, we tested whether people could recognize pictures made by AI. We asked an AI image generator to create visuals, each paired with a closely matching real photograph, and then asked volunteers from several age groups whether they could tell the difference.
Aresema Egnako and Fiona Muse
Grade 7

Hypothesis

PT.1

We think that we humans will be able to tell the difference between images created by artificial intelligence and real ones. We believe this is because AI is still in its early stages and can't yet do a wide range of things very well. Right now, AI is good at recognizing images and understanding language, but it struggles to make things that feel real or carry genuine emotion. The test subjects' ability to spot the AI-generated images may come down to these subtle shortcomings. However, there is a growing expectation that AI will get much better in the coming years. People are imagining a future where AI output not only looks real but also understands and conveys emotion, making it hard for us to tell the difference. This progress raises important questions about ethics and about how AI might affect our lives as it becomes more and more advanced.

 

PT.2

(Fiona)

I believe that teenagers aged 11-16 will be better at telling the difference between real and AI-created images than other age groups. This is because today's teens, especially Gen Z, are used to using AI and spend a lot of time on social media platforms like TikTok and Instagram. These platforms often discuss AI-created content, which might make teens more aware of the differences between real and AI images. Additionally, their frequent exposure to AI-generated media and their familiarity with digital technology could give them an edge in distinguishing between the two.

(Aresema)

I believe that young adults aged 17-24 will be better than any other age group at distinguishing between images produced by artificial intelligence and real ones. This belief arises from their frequent use of the internet through browsers and search tools such as Google Chrome, Google Search, and Internet Explorer. Additionally, this age group often creates online shops and uses AI-generated images to showcase their products and work. Moreover, because of their heavy presence on social media platforms such as Instagram and TikTok, where discussions about AI-generated content are common, young adults are likely to have heightened awareness and discernment when encountering such images.

Research

In our experiment, we looked at how well people could recognize images generated by artificial intelligence. We asked AI to generate visuals, each paired with a closely matching real photograph. Various people were then asked to examine the photographs and identify which was which. Surprisingly, the human-made and AI-generated photos were remarkably similar. However, a divide emerged among the participants: some were able to tell the human and AI-generated photos apart, while others found the task more difficult. This gap in recognition suggests that there may be subtle flaws in AI-generated images that particular people are attentive to, raising questions about the visual boundary between human and artificial creativity. Further investigation into these distinctions could provide useful insights into AI's creative capabilities and its place in artistic creation.

Variables

Independent Variable: Age group

Dependent Variable: Each age group's accuracy in identifying whether an image is AI-generated or real

Controlled Variables: The set of images shown to every participant.

Procedure

  1. Select a subject or idea for your images.
  2. Take six or more actual photos of the item or subject you have chosen, or use images from the internet.
  3. Locate or create at least six AI-generated pictures of the same subject or object.
  4. Get every picture ready for your volunteers to see.
  5. Prepare a data table.
  6. Show each picture to a volunteer in each age group one at a time. Ask whether they believe the image is AI-generated or real, and note their answer in your data table.
  7. For every volunteer, repeat the procedure.
  8. Add up the number of volunteers in each age group who said each image was real, then record this number in your data table.
  9. Add up the number of volunteers in each age group who said each image was created by artificial intelligence (AI), and record this number in your data table.
  10. Repeat these tallies for each age group, then calculate averages and percentages (see the sketch after this list).
  11. Calculate the percentage of volunteers who correctly identified whether each individual image was real or AI-generated. Enter the percentage in your data table.
  12. Examine your data.
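
The tallying in steps 8 to 11 can be done by hand in the data table. For anyone who prefers a script, here is a minimal Python sketch of the same calculation; the age groups, images, and counts in it are made-up placeholders rather than our actual results.

# Minimal sketch of steps 8-11: tally answers per age group and work out
# the percentage of correct identifications.
# All names and counts here are hypothetical examples, not our real data.

responses = {
    "17-24": [  # each entry is one image shown to this age group
        {"actual": "real", "said_real": 4, "said_ai": 1},
        {"actual": "ai", "said_real": 2, "said_ai": 3},
    ],
    "35+": [
        {"actual": "real", "said_real": 3, "said_ai": 2},
        {"actual": "ai", "said_real": 3, "said_ai": 2},
    ],
}

for group, images in responses.items():
    correct = 0
    total = 0
    for img in images:
        # An answer is correct when it matches what the image actually was.
        correct += img["said_real"] if img["actual"] == "real" else img["said_ai"]
        total += img["said_real"] + img["said_ai"]
    accuracy = correct / total * 100
    print(f"{group}: {correct}/{total} correct ({accuracy:.0f}%)")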

Observations

(Fiona's observations)

During our experiments, I noticed that older participants, aged 35 and above, took longer to make decisions about the images. Some even guessed because they found it hard to tell whether the images were made by AI or real. On the other hand, younger participants, especially those between 17 and 24, were quick to spot mistakes in the AI-generated images. They noticed details like a burger topped with caviar instead of ketchup, and sneakers with the wrong logo being held by a strangely drawn hand. This suggests that familiarity with technology might affect how well people can distinguish between real and AI-generated images, with younger individuals potentially having an advantage because of their greater exposure to AI technology and social media.

(Aresema's observations)

When I tested each participant, I noticed that the older people took the longest to choose, and some of them couldn't decide at all, so they just guessed. I also saw that participants had a very hard time telling the difference between the two sunflower photographs, since the AI image generator produced a highly realistic image that mimicked the real one. In addition, some people in the 17-24 age group found mistakes in some pictures that Fiona and I had completely missed.

Analysis

After conducting our testing, we found that volunteers in the 17-24 age group achieved the highest accuracy, with 70% of their responses correct and 30% incorrect. Surprisingly, the 35+ age group followed closely behind, achieving 69% accuracy with 31% incorrect answers. The 25-34 age group ranked third, with 65% of their answers correct and 35% incorrect. In fourth place were participants aged 5-10, with 63% accuracy and 37% incorrect responses. Unexpectedly, the 11-16 age group ranked last, with 61% accuracy and 39% incorrect answers.
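
Each accuracy figure here is simply the share of a group's answers that were correct, expressed as a percentage. The raw tallies are not reproduced in this report, so the numbers in this example are made up: a group that answered 84 of 120 prompts correctly would score 84 ÷ 120 × 100 = 70%.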

This outcome was particularly surprising given the assumption that the younger age groups, especially 11-16, would perform better due to their frequent use of social media. However, upon reflection, it's possible that factors such as cognitive development and exposure to AI technology may have influenced the results. The 17-24 age group, often characterized by high digital literacy and familiarity with AI-driven content, demonstrated a stronger ability to discern between real and AI-generated images. Conversely, the 11-16 age group, while active on social media platforms, may not have developed the same level of critical thinking skills necessary for accurate identification.

Conclusion

In conclusion, our investigation into the ability of individuals across different age groups to distinguish between real and AI-generated images has provided valuable insights into the complex interplay between age, technology usage, and cognitive development. Contrary to initial assumptions, our findings revealed that the 17-24 age group demonstrated the highest proficiency in this task, surpassing expectations based on their frequent use of social media. This unexpected outcome suggests that factors such as digital literacy and exposure to AI technology may play a more significant role than age alone in shaping individuals' abilities to discern between real and AI-generated content.

Moreover, our observations underscore the importance of considering complex factors, such as cognitive development and critical thinking skills, in understanding individuals' performance in image recognition tasks. While younger age groups may exhibit high levels of digital engagement, they may not necessarily possess the same level of analytical ability as older counterparts.

Moving forward, further research into the specific mechanisms underlying individuals' perception of AI-generated content could provide valuable insights for educational strategies, technology integration, and media literacy initiatives. By understanding how different age groups interact with and interpret AI-driven media, we can better equip individuals with the skills and knowledge necessary to navigate the increasingly digital landscape of the 21st century.

Overall, our project highlights the dynamic relationship between human cognition and technological advancement, paving the way for future exploration and innovation in the realm of artificial intelligence and digital media.

 

Application

The ability of humans to distinguish between AI-generated and real images demonstrates the fascinating intersection of technology and perception. While many of us may already be able to tell them apart, it is important to recognize that this ability may not be universal. With the rapid advancements in AI technology and the ever-increasing influence of the digital age, we are undoubtedly on the verge of a future in which AI-generated images will become more common in our daily lives. As we navigate this changing landscape, it will be fascinating to see how our relationship with these images evolves and how they influence our visual experiences.

 

Our research project, which investigates people's ability to distinguish between real and AI-generated images, holds significant potential for various applications:

  • Security: the findings could enhance security systems, aiding in the identification of individuals attempting to deceive with computer-generated content, particularly in areas like surveillance and access control.

  • User authentication: understanding the capacity to differentiate between genuine and computer-generated faces directly impacts authentication processes, with implications for online account access, secure facilities, and financial transactions.

  • Media and journalism: the research offers practical tools for detecting manipulated content in the digital age.

  • Education: identifying the age group most proficient at distinguishing real from AI-generated images opens doors for educational tools and games aimed at improving visual discernment skills.

  • Social media moderation: the insights gained from the project can help platforms detect and manage deceptive content.

  • Cognitive and psychological studies: the research offers valuable information on how different age groups perceive and interpret AI-generated content, advancing our understanding of human cognition.

Throughout this work, ethical considerations, especially concerning potentially deceptive content, remain a priority.

 

 

Sources of Error

What could have happened:

Task Complexity:

  • The nature of the task or the images used in the experiment may introduce complexity that affects participant responses. If the images are too similar or too different, this can affect the accuracy of the results, so the difficulty level needs to match the research goals.

Familiarity with AI-Generated Content:

  • Participants' prior exposure to AI-generated images may influence their ability to differentiate them. Those who are more familiar with AI technology might be more adept at spotting generated content, potentially skewing the results.

Temporal Factors:

  • External factors such as time of day, fatigue, or distractions during the experiment can introduce variability in participant performance. These factors need to be controlled as much as possible to ensure consistent conditions for all participants.

 

What did happen:

Response Biases:

  • Participants may provide responses they believe align with social expectations or with what they perceive as the "correct" answer. This response bias can affect the reliability of the data, especially if participants feel pressure to perform well.

Printing Errors:

  • Unfortunately, some pictures were printed incorrectly, introducing a significant source of error. Misprints or inaccuracies in the printing process can alter the images' visual characteristics, potentially affecting our participants' ability to make accurate distinctions.

Insufficient Time for Participants:

  • We noticed that participants might not have had enough time to carefully evaluate and differentiate between real and AI-generated images. Providing ample time for each participant is crucial to ensure thoughtful responses and accurate judgments.

Discrepancy Between Online and Printed Images:

  • We observed differences between the online presentation of images and their printed versions. This discrepancy could cause confusion and influence participants' judgments. Maintaining consistency in the appearance of images across both digital and printed formats is essential for the integrity of the experiment.

External Influence from Other Participants:

  • We are aware that participants discussing or sharing answers with each other could contaminate the results, as individuals might be influenced by the opinions or answers of others. In future runs, we would put measures in place to prevent communication between participants during the experiment.

 

To address these issues, we are considering the following strategies:

  • Standardized Time Limits: We will ensure that each participant is given a standardized and sufficient amount of time to complete the task, minimizing the impact of time constraints on their responses.

  • Quality Control in Printing: We will thoroughly check and control the printing process to avoid misprints or distortions in the images. Regular calibration of printing equipment will be a priority to maintain consistency.

  • Image Calibration: We plan to verify that the online presentation of images accurately represents their printed counterparts. Colors, resolution, and other visual elements will be calibrated to minimize discrepancies.

  • Isolation of Participants: We will conduct the experiment in a controlled environment to minimize external influences. Ensuring that participants cannot communicate with each other during the task will be crucial to prevent the sharing of answers.

 

Citations

https://www.edplace.com/blog/home_learning/identify-variables-in-a-scientific-investigation

https://www.imagine.art/dashboard/tool/from-text

https://docs.google.com/

https://classroom.google.com/u/1/c/

https://www.tidio.com/blog/ai-test/

https://uwaterloo.ca/news/media/can-you-tell-ai-generated-people-real-ones

https://www.nexcess.net/resources/ai-vs-human-study/

https://www.sciencedaily.com/releases/2024/03/240306003456.htm

 

Acknowledgement

A big thanks to Miss Kale for helping me a lot with my science fair project and cheering me on. She also always gave me ideas to make my project better. Aresema, my teammate and partner, also played a big part, and I appreciate her efforts. Thanks to Mountain View Academy for giving us what we needed, including some of the materials. My family supported me a ton, keeping me going when things got tough. Big thanks to my friends for being positive and excited. The work of scientists also inspired our project. Finally, I want to shout out and say thanks to the volunteers and test subjects who helped us out and let us test them! Everyone made this science fair project super cool and unforgettable.