How to educate the public about AI

Two new items related to educating the general public about artificial intelligence:

The A–Z guide comes from the Oxford Internet Institute and Google. It’s slick, pretty, and animated. It consists of exactly 26 short items, one for each letter of the alphabet: artificial intelligence, bias, climate, datasets, ethics, fakes, etc. The aim is to provide answers in a not-overwhelming way.

I love the idea, but I’m not in love with the execution. For example, the neural networks piece tells us that neural nets “attempt to mimic the structure of the brain,” but they “cannot ‘think’ like humans.” That’s great — clear and accurate. We could quibble about “attempt to mimic the structure,” but we can also let that slide. But then:

“AI design teams can assign each piece of a network to recognizing one of many characteristics. The sections of the network then work as one to build an understanding of the relationships and correlations between those elements — working out how they typically fit together and influence each other.”

To me, that seems misleading. It sounds as if the layers of the neural net are directed by specifically programmed instructions, but all my reading has indicated that the layers determine on their own which features they are detecting. (I’m thinking specifically about image recognition and supervised learning here.) This is important because it contributes to the “black box” problem of machine learning systems.
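
To make that concrete, here is a minimal sketch of a supervised image classifier, written in PyTorch (the guide names no framework, and the layer sizes here are my own illustrative choices). Notice what is absent: nowhere does a design team assign any layer or filter to a particular characteristic.

```python
# A minimal sketch of a supervised image classifier (PyTorch).
# All layer sizes are illustrative. The key point: we specify the
# architecture, but we never assign any filter to a characteristic --
# what each filter detects emerges from training on labeled examples.
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 16 filters; contents determined by training
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes 32x32 input images

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyClassifier()
# Training adjusts the filter weights to reduce classification error.
# What a given filter "detects" can only be probed after the fact --
# which is exactly the black-box problem.
```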

I also dislike phrases such as “build an understanding,” because that implies more intentionality than these networks actually have.

Giving people short, understandable explanations of specific aspects of AI is a wonderful idea, but the explanations need to be both straightforward and true.

The second education item I linked above comes from MIT’s news office. It describes a “new cross-disciplinary research initiative … to promote the understanding and use of AI across all segments of society.”

“People need to be AI-literate to understand the responsible use of AI and create things with it at individual, community, and societal levels.”

—Cynthia Breazeal, MIT professor, director of Responsible AI for Social Empowerment and Education (RAISE)

This sentiment is becoming more widely voiced as claims for the benefits of AI increase in the media. The idea behind RAISE is good and admirable — yes, people in all walks of life should have some understanding of AI, at least as much as they have an understanding of what makes airplanes fly and what makes computers able to store and retrieve our vacation photos.

Oh, wait.

In the United States, the average person’s understanding of any process involving physics or electronics might not be very good. Many students with stellar high-school grades don’t have a solid grasp of how their laptops or phones work at a basic level. I’m not talking about the students who attend MIT, but I am talking about those who can manage high SAT scores and gain admission to top public universities.

The RAISE initiative has identified four strategic areas for research, education, and outreach:

  • Diversity and inclusion in AI
  • AI literacy in pre-K–12 education
  • AI workforce training
  • AI-supported learning

But let’s go back to the A–Z guide and look at the segment about binary code, Zeros & Ones. It tells us that 0’s and 1’s are “the foundational language of computers.” It tells us that a particular long sequence of 0’s and 1’s means “Hello” to a computer. In one sense, that is true — but it really explains nothing to a layman. A computer system doesn’t know what “Hello” is (or means) any more than a rock does.
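
The guide’s claim is easy to check for yourself. Here is a quick Python illustration of “Hello” as the bits a computer stores under the common UTF-8/ASCII encoding; note that nothing in this process involves the machine knowing what a greeting is.

```python
# "Hello" as the bits a computer stores (UTF-8, which matches ASCII here).
# The mapping from letters to numbers is a human convention; the machine
# manipulates the bits without "knowing" that they spell a greeting.
text = "Hello"
bits = " ".join(format(byte, "08b") for byte in text.encode("utf-8"))
print(bits)  # 01001000 01100101 01101100 01101100 01101111
```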

To accomplish AI literacy, we need to accomplish computer literacy. We need to teach and explain — clearly and accurately — to students at all levels what computers can and cannot do, how they are programmed, and how AI is different from, say, writing a game program that plays tic-tac-toe as well as any human can. I can write and run a winning tic-tac-toe program on an average laptop if I know which algorithms to use in my code — but there’s nothing remotely like intelligence in that program.
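
For the curious, here is roughly what I mean: a minimal sketch of perfect tic-tac-toe play using the classic minimax algorithm (one workable choice among several; this is not any particular published program). It never loses, yet it simply searches every possible future position by brute force.

```python
# Perfect tic-tac-toe via minimax. No learning, no intelligence:
# the program mechanically explores every possible continuation.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s perspective: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # full board, no winner: draw
    best_score, best_move = -2, None
    opponent = "O" if player == "X" else "X"
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)
        board[m] = " "
        score = -score  # the opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

board = list(" " * 9)
score, move = minimax(board, "X")
print(f"Best opening move for X: square {move} (expected outcome: {score})")
```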

We need to add caveats every time we say something like “the computer learns,” or “the system understands.”

It will be fantastic if RAISE (and other outreach programs) can raise the level of computer literacy among Americans. It’s an important goal in this era of AI hype and euphoric claims, because it will be so much easier for people to be duped, exploited, mistreated, sidelined, marginalized, and/or denied jobs, loans, mortgages, healthcare, or admission to universities if they don’t understand what AI is and how it works.


Pastries, cancer cells, and neural networks

The system described in this wonderful New Yorker article from March 2021 is NOT a neural network, and that’s one of the things that make it fascinating. I’ve written before about ImageNet and how neural networks, trained on humongous datasets of labeled digital images, are able to very accurately say what is in a photograph that the system has never “seen” before.

This is different.

This system, developed by a small company in Japan, does not require hundreds or thousands of images of each object it needs to identify, precisely because it doesn’t use a neural network. The technologies it uses can be called good old-fashioned AI (GOFAI). Essentially, it consists of a collection of manually constructed algorithms.

Above: BakeryScan at work: Screen capture from video (2017)

The system also “learns,” but not in the typical black-box sense of today’s machine learning systems. It is widely used in the checkout systems of Japanese bakeries, which offer a bewilderingly large assortment of pastries and small bread items, many of which look quite similar to one another. BakeryScan was released in 2013; it was 15 years in development.
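
To make “manually constructed algorithms” concrete, here is a toy sketch in the GOFAI style: hand-written rules over hand-chosen features. (The rules and features below are entirely hypothetical; this is not BakeryScan’s actual code.) A human expert, not training data, decides which features matter and where the thresholds sit, which helps explain why such a system took years to tune.

```python
# A hypothetical GOFAI-style classifier: hand-chosen features,
# hand-written rules, human-set thresholds. Nothing is learned from data.

def classify_pastry(area_cm2: float, roundness: float, mean_brightness: float) -> str:
    """Classify one segmented item on the checkout tray from measured features."""
    if roundness > 0.9 and area_cm2 < 40:
        return "dinner roll"
    if roundness > 0.9 and mean_brightness < 0.4:
        return "chocolate bun"
    if roundness < 0.5 and area_cm2 > 80:
        return "baguette"
    return "unknown -- ask the cashier"

print(classify_pastry(area_cm2=35, roundness=0.95, mean_brightness=0.7))  # dinner roll
```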

More recently, the bakery system has been adapted to recognize specific types of cancer cells. The new system is able to “look at an entire microscope slide and identify the cells that might be cancerous” (source: The New Yorker article).

Rather than summarizing the article further, I’m just going to urge you to read it. It’s very much worth your time.


The companies that are making AI a hot investment

Sometimes I read something that is like a voice out of my own head:

“Artificial intelligence is a buzzword increasingly being used by companies around the world that seek to project themselves at the forefront of cutting-edge research … As the word loses its meaning, it is important for investors to understand what artificial intelligence is and what companies stand to gain from breakthroughs in the new technology.”

Yahoo! Finance, April 12, 2021

That comes from an article titled “10 Best Artificial Intelligence Stocks to Buy for 2021” (link above), but it’s more than just a list of stock tips. It points out that “technology firms with social media services” (e.g. Facebook) are hot because they have the massive datasets that power machine learning about consumers. Companies that make super-fast computer hardware — particularly graphics processing units (GPUs) that crunch through that data — are also good bets (although I’ve heard about growing hardware shortages due to the pandemic).

The article’s author refers to hedge-fund investments as an indicator, which might make me leery about investing my own hard-earned cash, but the list of companies still interested me. Along with hardware manufacturers such as Micron Technology and Nvidia; Amazon, which is valuable for more than just its growing AI expertise; and Alphabet Inc., the parent of Google and DeepMind — the list also includes:

  • Adobe, which is “integrating data-based learning into most of its software through Adobe Sensei, a tool that uses artificial intelligence to improve user experiences across a wide range of Adobe products.”
  • Facebook — this is Yahoo! Finance’s No. 1 pick, and with its deep pockets, Facebook is certainly able to acquire some of the best research minds in AI today. Its efforts are grouped under the Facebook AI label, and the breadth of its work is visible on this page.
  • IBM — this is a recommendation I would argue with. IBM talks a big game in AI, but its failures with IBM Watson Health make me skeptical about its strategies overall.
  • Microsoft, which “has a separate artificial intelligence unit called Microsoft AI that helps users, organizations, and governments across the world with machine learning, data analytics, robotics, and internet of things products.” Just this week, Microsoft announced a $16 billion cash deal to buy Nuance, which develops AI software including speech-recognition products (Dragon is one). Microsoft pointed to Nuance’s position in the healthcare market as a primary reason for the acquisition.
  • Pinterest, because it is using AI to sort and categorize the millions of images shared by its users and also to “tailor the experiences” of users. Note that news organizations such as The New York Times are also using AI to determine how content is presented to users.
  • Salesforce.com, which “provides customer relationship management services and other enterprise solutions on market automation, data analytics, and application development.” The company markets its AI products under the Einstein brand — see AI use cases from the company. Salesforce acquired Slack Technologies last year.

Notably absent from the list is Apple (although maybe not a great investment, due to its high valuation), which is no newcomer to incorporating AI into its products. Critics might pooh-pooh Apple’s AI clout, but machine learning has been integral to the iPhone, iPad, and Apple Watch for years. Ars Technica published an excellent article about this in mid-2020.

Also absent are the assorted promising startups — particularly those in the climate arena and those founded by alumni of DeepMind, which to me is the most fantastic incubator of AI talent (see AlphaFold) outside the top universities. Just this week, Google put money into one of those startups — founded by a former research engineer at DeepMind, and “focused on reducing greenhouse gas emissions.”


Image recognition in medicine: MS subtypes

Machine learning systems for image recognition aren’t always perfect — and neither are AI systems marketed for medical use, whether they use image recognition or not. But here’s an example of image recognition used in a medical context where the system appears to have succeeded at something significant — and it’s something humans can’t do, or at least can’t do well.

“Researchers used the AI tool Subtype and Stage Inference (SuStaIn) to scan the MRI brain scans of 6,322 patients with MS, letting SuStaIn train itself unsupervised. The AI identified 3 previously unknown patterns …” (Pharmacy Times). The model was then tested on MRIs from “a separate independent cohort of 3,068 patients” and successfully identified the three new MS subtypes in them.

Subtype and Stage Inference (SuStaIn) was introduced in this 2018 paper. It is an “unsupervised machine-learning technique that identifies population subgroups with common patterns of disease progression” using MRI images. The original researchers were studying dementia.
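
SuStaIn itself is a purpose-built model that jointly infers a patient’s subtype and disease stage, and I won’t pretend to reproduce it here. As a much simpler analogy for what “unsupervised” means, here is a sketch that clusters synthetic patients by made-up regional brain measurements: no subtype labels are supplied, and the algorithm finds the subgroups on its own.

```python
# A loose analogy for unsupervised subtyping (NOT SuStaIn): cluster
# synthetic patients by four hypothetical brain-region measurements.
# No labels are given; k-means discovers the subgroups itself.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 300 synthetic patients drawn from three distinct (unlabeled) patterns.
centers = np.array([[0.9, 0.2, 0.1, 0.1],
                    [0.1, 0.8, 0.7, 0.1],
                    [0.1, 0.1, 0.2, 0.9]])
X = np.vstack([c + 0.1 * rng.standard_normal((100, 4)) for c in centers])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(np.bincount(kmeans.labels_))  # roughly 100 patients per discovered subgroup
```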

Why does it matter? Identifying which subtype of multiple sclerosis (MS) a patient has enables doctors to pursue the treatment best suited to that subtype, which might lead to better outcomes for patients.

“While further clinical studies are needed, there was a clear difference, by subtype, in patients’ response to different treatments and in accumulation of disability over time. This is an important step towards predicting individual responses to therapies,” said Dr. Arman Eshaghi, the lead researcher (EurekAlert).

Sources: Artificial Intelligence Weekly newsletter, from The Wall Street Journal; Pharmacy Times; EurekAlert.


New AI strategy from U.S. Department of Health and Human Services

The Biden Administration is working hard in a wide range of areas, so maybe it’s no surprise that HHS released this report, titled Artificial Intelligence (AI) Strategy (PDF), this month.

“HHS recognizes that Artificial Intelligence (AI) will be a critical enabler of its mission in the future,” it says on the first page of the 7-page document. “HHS will leverage AI to solve previously unsolvable problems,” in part by “scaling trustworthy AI adoption across the Department.”

So HHS is going to be buying some AI products. I wonder what they are (will be), and who makes (or will make) them.

“HHS will leverage AI capabilities to solve complex mission challenges and generate AI-enabled insights to inform efficient programmatic and business decisions” — while to some extent this is typical current business jargon, I’d like to know:

  • Which complex mission challenges? What AI capabilities will be applied, and how?
  • Which programmatic and business decisions? How will AI-enabled insights be applied?

These are the kinds of questions journalists will need to ask when these AI claims are bandied about. Name the system(s), name the supplier(s), give us the science. Link to the relevant research papers.

I think a major concern would be use of any technologies coming from Amazon, Facebook, or Google — but I am no less concerned about government using so-called solutions peddled by business-serving firms such as Deloitte.

Two executive orders (both from the previous administration) are cited in the HHS document.

The department will set up a new HHS AI Council to identify priorities and “identify and foster relationships with public and private entities aligned to priority AI initiatives.” The council will also establish a Community of Practice consisting of AI practitioners (page 5).

Four key focus areas:

  1. An AI-ready workforce and AI culture (includes “broad, department-wide awareness of the potential of AI”)
  2. AI research and development in health and human services (includes grants)
  3. “Democratize foundational AI tools and resources” — I like that, although implementation is where the rubber meets the road. This sentence indicates good aspirations: “Readily accessible tools, data assets, resources, and best practices will be critical to minimizing duplicative AI efforts, increasing reproducibility, and ensuring successful enterprise-wide AI adoption.”
  4. “Promote ethical, trustworthy AI use and development.” Again, a fine statement, but let’s see how they manage to put this into practice.

The four focus areas are summarized in a compact chart (image file).


Would you let AI create a recipe for you?

On Fridays I try to find something to write about that’s a little less heavy than explanations of neural networks and examinations of embedded biases in AI systems. I call it Friday AI Fun.

The BBC recently wrote about a mobile app that uses AI to help you concoct a meal from the ingredients you already have at home. Plant Jammer is available for both iOS and Android, and it doesn’t merely take your ingredients and find an existing recipe for you — it actually creates a new recipe.

According to BBC journalist Nell Mackenzie, the results are not always delicious. She made some veggie burgers that came out tasting like oatmeal.

I was interested in how the app uses AI, and this is what I found: The team behind Plant Jammer consists of 15 chefs and data scientists, based in Copenhagen, Denmark. They admit that “AI is only a fraction” of what powers the app, framing that as a positive because the app incorporates “gastronomical learnings from chefs.”

Image from Plant Jammer

The app includes multiple databases, including one of complete recipes. An aspect of the AI is a recommender system, which they compare to Netflix’s. As Plant Jammer learns more about you, it will improve at creating recipes you like, based on “people like you.”
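
Plant Jammer hasn’t published how its recommender works, but the “people like you” idea is classic user-based collaborative filtering. A minimal sketch with made-up data:

```python
# User-based collaborative filtering in miniature. Hypothetical data;
# Plant Jammer's actual system is not public.
import numpy as np

# Rows = users, columns = ingredients; 1 = the user liked recipes with it.
ratings = np.array([
    [1, 1, 0, 0, 1],   # you
    [1, 1, 0, 1, 1],   # similar taste
    [0, 0, 1, 1, 0],   # different taste
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

you = ratings[0]
similarities = np.array([cosine(you, other) for other in ratings[1:]])
# Weight other users' preferences by how similar they are to you,
# then suggest ingredients you haven't tried yet.
scores = similarities @ ratings[1:]
suggestions = np.where((you == 0) & (scores > 0))[0]
print("Suggest ingredient columns:", suggestions)  # [3], from the similar user
```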

“We asked the chefs which ingredients are umami, and how umami they are. This part reflects the ‘human intelligence’ we used to build our system, a great ‘engine’ that has led to very interesting findings.”

—Michael Haase, CEO, Plant Jammer

My searches led me to an interview with Michael Haase, Plant Jammer’s CEO, in which he described the “gastro-wheel” feature in the app. The wheel encourages you to find balance in your ingredients among a base, something fresh, umami, crunch, sweet-spicy-bitter, and something that ties the ingredients together in harmony.

I’ve downloaded the app but, unlike Mackenzie, I haven’t been brave enough yet to let it create a recipe for me. Exploring some of the recommended recipes in the app, I did find the ability to select any ingredient and instantly see substitutions for it — that could come in handy!

Mackenzie’s article for the BBC also describes other AI-powered food and beverage successes, such as media agency Tiny Giant using AI to help clients “find new combinations of flavors for cupcakes and cocktails.”


What’s the use of machine learning?

I’m interested in applications of machine learning in journalism. This is natural, as my field is journalism. In the field of computer science, however, accolades and honors tend to favor research on new algorithms or procedures, or new network architectures. Applications are practical uses of algorithms, networks, etc., to solve real-world problems — and developing them often doesn’t garner the acclaim that researchers need to advance their careers.

Hannah Kerner, a professor and machine learning researcher at the University of Maryland, wrote about this in the MIT Technology Review. Her essay is aptly titled “Too many AI researchers think real-world problems are not relevant.”

“The first image of a black hole was produced using machine learning. The most accurate predictions of protein structures, an important step for drug discovery, are made using machine learning.”

—Hannah Kerner

Noting that applications of machine learning are making real contributions to science in fields outside computer science, Kerner (who works on machine learning solutions for NASA’s food security and agriculture program) asks how much is lost because of the priorities set by the journals and conferences in the machine learning field.

She also ties this focus on ML research for the sake of advancing ML to the seepage of bias out from widely used datasets into the mainstream — the most famous cases being in face recognition, with systems (machine learning models) built on flawed datasets that disproportionately skew toward white and male faces.

“When studies on real-world applications of machine learning are excluded from the mainstream, it’s difficult for researchers to see the impact of their biased models, making it far less likely that they will work to solve these problems.”

—Hannah Kerner

Machine learning is rarely plug-and-play. In creating an application that will be used to perform useful work — to make new discoveries, perhaps, or to make medical diagnoses more accurate — the machine learning researchers will do substantial new work, even when they use existing models. Just think, for a moment, about the data needed to produce an image of a black hole. Then think about the data needed to make predictions of protein structures. You’re not going to handle those in exactly the same way.

I imagine the work is quite demanding when a number of non-ML experts (say, the biologists who work on protein structures) get together with a bunch of ML experts. But either group working separately from the other is unlikely to come up with a robust new ML application. Kerner linked to this 2018 news report about a flawed cancer-detection system — leaked documents said that “instead of feeding real patient data into the software,” the system was trained on data about hypothetical patients. (OMG, I thought — you can’t train a system on fake data and then use it on real people!)

Judging from what Kerner has written, machine learning researchers might be caught in a loop, where they work on pristine and long-used datasets (instead of dirty, chaotic real-world data) to perfect speed and efficiency of algorithms that perhaps become less adaptable in the process.

It’s not that applications aren’t getting made — they are. The difficulty lies in the priorities for research, which might dissuade early-career ML researchers in particular from work on solving interesting and even vital real-world problems — and wrestling with the problems posed by messy real-world data.

I was reminded of something I’ve often heard from data journalists: If you’re taught by a statistics professor, you’ll be given pre-cleaned datasets to work with. (The reason being: She just wants you to learn statistics.) If you’re taught by a journalist, you’ll be given real dirty data, and the first step will be learning how to clean it properly — because that’s what you have to do with real data and a real problem.
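
For anyone who hasn’t had that pleasure, here is a tiny taste of the first step, using pandas; the records and columns below are invented:

```python
# The unglamorous first step with real-world data: cleaning.
# Hypothetical records of the kind a journalist might actually receive.
import pandas as pd

raw = pd.DataFrame({
    "county": [" Alachua", "alachua ", "Marion", None],
    "cases":  ["12", "twelve", "8", "5"],
})

clean = raw.copy()
clean["county"] = clean["county"].str.strip().str.title()        # fix stray spaces and casing
clean["cases"] = pd.to_numeric(clean["cases"], errors="coerce")  # "twelve" becomes NaN
clean = clean.dropna()  # decide (and document!) how to handle bad rows
print(clean)
```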

So the next time you read about some breakthrough in machine learning, consider whether it is part of a practical application, or instead, more of a laboratory experiment performed in isolation, using a tried-and-true dataset instead of wild data.

Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.
