My university has undertaken a long-term initiative called “AI across the curriculum.” I recently saw a presentation that referred to this article: Conceptualizing AI literacy: An exploratory review (2021; open access). The authors analyzed 30 publications (all peer-reviewed; 22 conference papers and eight journal articles; 2016–2021). Based in part on their findings, my university proposes to tag each AI course as fitting into one or more of these categories:
- Know and understand AI
- Use and apply AI
- Evaluate and create AI
- AI ethics
“Most researchers advocated that instead of merely knowing how to use AI applications, learners should learn about the underlying AI concepts for their future careers and understand the ethical concerns in order to use AI responsibly.”
— Ng, Leung, Chu and Qiao (2021)
AI literacy was never explicitly defined in any of the articles, and only three of the 30 studies rigorously assessed the approaches they used. Nevertheless, the article raises a number of concerns about educating the general public, as well as K–12 students and non–computer science students at universities.
Not everyone is going to learn to code, and not everyone is going to build or customize AI systems for their own use. But just about everyone is already using Google Translate, automated captions on YouTube and Zoom, content recommendations and filters (Netflix, Spotify), and/or voice assistants such as Siri and Alexa. People are subject to face recognition in far more situations than they realize, and decisions about their loans, job applications, college admissions, health, and safety are increasingly affected (to some degree) by AI systems.
That’s why AI literacy matters. “AI becomes a fundamental skill for everyone” (Ng et al., 2021, p. 9). People ought to be able to raise questions about how AI is used, and knowing what to ask, or even how to ask, depends on understanding. I see a critical role for journalism in this, and a crying need for less “It uses AI!” cheerleading (*cough* Wall Street Journal) and more “It works like this” and “It has these worrisome attributes.”
In education (whether higher, secondary, or primary), courses and course modules that teach students to “know and understand AI” are probably even more important than the ones where students open up a Google Colab notebook, plug in some numbers, and get a result that might seem cool but is produced as if by sorcery.
Five big ideas about AI
This paper led me to another, Envisioning AI for K-12: What Should Every Child Know about AI? (2019, open access), which provides a list of five concise “big ideas” in AI:
- “Computers perceive the world using sensors.” (Perceive is misleading. I might say receive data about the world.)
- “Agents maintain models/representations of the world and use them for reasoning.” (I would quibble with the word reasoning here. Prediction should be specified. Also, agents is going to need explaining.)
- “Computers can learn from data.” (We need to differentiate between how humans/animals learn and how machines “learn.”)
- “Making agents interact comfortably with humans is a substantial challenge for AI developers.” (This is a very nice point!)
- “AI applications can impact society in both positive and negative ways.” (Also excellent.)
Each of those is explained further in the original paper.
The “big ideas” get closer to a general concept for AI literacy — what does one need to understand to be “literate” about AI? I would argue you don’t need to know how to code, but you do need to understand that code is written by humans to tell computer systems what to do and how to do it. From that, all kinds of concepts stem; for example, when “sensors” (cameras) send video into the computer system, how does the system read the image data? How different is that from the way the human brain processes visual information? Moreover, “what to do and how to do it” changes subtly for machine learning systems, and I think first understanding how explicit a non–AI program needs to be helps you understand how the so-called learning in machine learning works.
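To make that contrast concrete, here is a minimal sketch (my own illustration, not from either paper) of the difference between a rule a human writes explicitly and a rule a program derives from labeled examples. The spam-filter framing, the keyword lists, and the function names are all hypothetical, chosen only to keep the example small:

```python
# A hand-coded rule: the programmer states the condition explicitly.
def rule_based_is_spam(message: str) -> bool:
    # A human chose these keywords and this logic directly.
    keywords = {"winner", "free", "prize"}
    return any(word in keywords for word in message.lower().split())

# A "learned" rule: the program derives its keywords from labeled examples.
def train_keyword_model(examples):
    """Collect words seen in spam vs. non-spam messages, and keep
    the words that appeared only in spam. This is a toy stand-in for
    what machine learning does at far greater scale."""
    spam_words, ham_words = set(), set()
    for message, is_spam in examples:
        words = set(message.lower().split())
        (spam_words if is_spam else ham_words).update(words)
    return spam_words - ham_words  # words seen only in spam

training_data = [
    ("you are a winner claim your prize", True),
    ("free money click now", True),
    ("meeting moved to three", False),
    ("are you free for lunch", False),
]
learned_keywords = train_keyword_model(training_data)

def learned_is_spam(message: str) -> bool:
    return any(word in learned_keywords for word in message.lower().split())
```

Note that the learned version inherits the quirks of its training data: because "free" appeared in both a spam and a non-spam example, it is not a learned keyword at all, even though the human-written rule treats it as a sure sign of spam. That is the seed of understanding why training data shapes a system's behavior.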
A small practical case
A colleague who is a filmmaker recently asked me if the automated transcription software he and his students use is AI. I think this question opens a door to a low-stakes, non-threatening conversation about AI in everyday work and life. Two common terms used for this technology are automatic speech recognition (ASR) and speech-to-text (STT). One thing my colleague might not realize is that all voice assistants, such as Siri and Alexa, use a version of this technology, because they cannot “know” what a person has said until the sounds are transformed into text.
The serious AI work took place long before there was an app that filmmakers and journalists (and many other people) routinely use to transcribe interviews. The app or product they use is plug-and-play: it doesn’t require a powerful supercomputer to run. Just play the audio, and text is produced. The algorithms that make it work so well, however, were refined with an impressive amount of computational power, an immense quantity of voice data, and a number of computer scientists and engineers.
So if you ask whether these filmmakers and journalists “are using AI” when they use a software program to automatically transcribe the audio from their interviews, it’s not entirely wrong to say yes, they are. Yet they can go about their work without knowing anything at all about AI. As they use the software repeatedly, though, they will learn some things: transcription quality will be poorer for voices speaking English with an accent, and often for people with higher-pitched voices, such as women and children. They will learn that acronyms and abbreviations are often transcribed inaccurately.
The users of transcription apps will make adjustments and carry on — but I think it would be wonderful if they also understood something about why their software tool makes exactly those kinds of mistakes. For example, the kinds of voices (pitch, tone, accents, pronunciation) that the system was trained on will affect whose voices are transcribed most accurately and whose are not. Transcription by a human is still preferred in some cases.
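One way to see those mistakes concretely is word error rate (WER), the standard metric for comparing an automatic transcript against a human-made reference: the number of word substitutions, insertions, and deletions, divided by the length of the reference. The sketch below is my own illustration (the function name and the example sentences are invented), showing how a single mistranscribed acronym inflates the score:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance between the two word sequences,
    divided by the number of words in the reference transcript."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# The acronym "NGO" heard as three separate letters: one substitution
# plus two insertions against a five-word reference, so WER = 3/5.
reference = "the NGO filed its report"
hypothesis = "the n g o filed its report"
```

A transcript that is perfect except for a few such slips can still carry a surprisingly high error rate, which is one reason human transcription remains preferable for some uses.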
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.