Critical AI Literacy
Welcome to the introduction to Critical AI Literacy. This page offers a high-level understanding of why developing critical AI literacy matters for using GenAI tools responsibly. This entails evaluating these tools and our use of them, and understanding the existing and potential shortcomings and harms of GenAI. It will take time for educators and learners to develop in-depth critical AI literacy, but awareness of the following points is a good place to begin.
Privacy Concerns & Copyright Infringement
GenAI tools train, in part, on the data that users input into them, so it is important not to enter any personal data (your own or others') or any copyrighted material: this information is not secure and may appear in outputs shown to other users. Beyond this, ethical concerns have been raised over the methods used to train these tools, which include scraping internet data. There is a lack of transparency around the use of copyrighted material, a lack of consent, and the monetization of others' work without proper credit or payment. Several lawsuits have been brought against GenAI companies regarding copyright infringement, including The New York Times' lawsuit against OpenAI, which cites several ChatGPT outputs that mirror excerpts from the paper's articles nearly word for word.
Misinformation and Amplification of Biases
The content that GenAI produces is not always accurate (a phenomenon known as hallucination) and can contain bias. Because these tools are trained on large quantities of human-generated internet data, they often adopt the biases found in that data, further amplifying predominant viewpoints and stereotypes. Remember that these tools do not "think" for themselves and are not capable of critical analysis; it is important that we fill this role when using them and critically analyse their outputs. These tools may also be used, intentionally or not, to spread misinformation and cause harm.
Environmental Impact
GenAI has a large environmental impact. Training these tools is resource-intensive, requiring large amounts of electricity and water. A study conducted by the University of Massachusetts found that training a single AI model can emit about the same amount of carbon as five cars would over the course of their lifetimes (more than 626,000 pounds of carbon dioxide equivalent).
Exploitation of Workers
While we often think of AI as purely the work of technology, it requires a large amount of human labour during training and development. As we know, GenAI outputs contain biases and misinformation. In an effort to improve this, and the functioning of AI more generally, people are employed to label images and text. This work is auctioned off globally, creating a "race to the bottom" for wages and leaving workers, mostly in the Global South, vulnerable to exploitation and very low pay.