Symbolic AI: Good old-fashioned AI

The distinction between symbolic (explicit, rule-based) artificial intelligence and subsymbolic (e.g. neural networks that learn) artificial intelligence was somewhat challenging to convey to non–computer science students. At first I wasn’t sure how much we needed to dwell on it, but as the semester went on and we got deeper into the differences among types of neural networks, it was very useful to keep reminding the students that many of the things neural nets are doing today would simply be impossible with symbolic AI.

The difficulty lies in the shallow math/science background of many communications students. They might have studied logic problems/puzzles, but their memory of how those problems work might be very dim. Most of my students have not learned anything about computer programming, so they don’t come to me with an understanding of how instructions are written in a program.

This post by Ben Dickson at his TechTalks blog offers a very nice summary of symbolic AI, which is sometimes referred to as good old-fashioned AI (or GOFAI, pronounced GO-fie). This was the dominant approach in the early decades of AI, and early attempts to explore subsymbolic approaches were ridiculed by the stalwart champions of the old school.

Symbolic AI requires that someone — or several someones — be able to specify all the rules necessary to solve the problem. This isn’t always possible, and even when it is, the result might be too verbose to be practical. As many people have said, things that are easy for humans are hard for computers — like recognizing an oddly shaped chair as a chair, or distinguishing a large upholstered chair from a small couch. Things we do almost without thinking are very hard to encode into rules a computer can follow.
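To make that concrete, here is a minimal sketch of what hand-written rules look like in code. It is my own toy example (the attributes and thresholds are invented), not anything from Dickson’s post, but it shows how quickly the rule set grows once real-world variety shows up.

```python
# A toy rule-based (symbolic) classifier: every rule is written by hand.
# The attributes and thresholds are invented purely for illustration.

def classify_furniture(item):
    """Return 'chair', 'couch', or 'unknown' using explicit, human-written rules."""
    if item["seats"] == 1 and item["seat_width_cm"] < 70:
        return "chair"
    if item["seats"] >= 2 and item["has_upholstery"]:
        return "couch"
    # Edge cases pile up fast: a wide upholstered armchair, a loveseat,
    # an oddly shaped designer chair. Each one needs yet another rule.
    return "unknown"

print(classify_furniture({"seats": 1, "seat_width_cm": 60, "has_upholstery": False}))
# -> chair
print(classify_furniture({"seats": 1, "seat_width_cm": 95, "has_upholstery": True}))
# -> unknown (a large upholstered armchair falls through the rules)
```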

“Symbolic artificial intelligence is very convenient for settings where the rules are very clear cut, and you can easily obtain input and transform it into symbols.”

—Ben Dickson

Subsymbolic AI does not use symbols, or rules that need symbols. It stems from attempts to write software operations that mimic the human brain. Not copy the way the brain works — we still don’t know enough about how the brain works to do that. Mimic is the word usually used because a subsymbolic AI system is going to take in data and form connections on its own, and that’s what our brains do as we live and grow and have experiences.

Dickson uses an image-recognition example: How would you program specific rules to tell a symbolic system to recognize a cat in a photo? You can’t write rules like “Has four legs,” or “Has pointy ears,” because it’s a photo. Your rules would need to be about pixels and edges and clusters of contrasting shades. Your rules would also need to account for infinite variations in photos of cats.

“You can’t define rules for the messy data that exists in the real world.”

—Ben Dickson

Thus “messy” problems such as image recognition are ideally handled by neural networks — subsymbolic AI.
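To show the contrast in code, here is a minimal sketch using scikit-learn’s small neural network class, MLPClassifier. The “images” are random numbers invented purely for illustration; the point is only that we hand the system labeled examples and it adjusts its own internal weights, with no rules about ears or legs anywhere.

```python
# A minimal subsymbolic sketch: labeled examples in, learned weights out.
# The data is random and meaningless; it stands in for real pixel values.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Pretend these are 200 tiny grayscale photos (8x8 pixels, flattened),
# half labeled "cat" (1) and half "not cat" (0).
X = rng.random((200, 64))
y = np.array([1] * 100 + [0] * 100)

# A small neural network: we never tell it what a cat looks like.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X, y)

# Whatever it learned lives in numeric weights, not readable rules,
# which is part of why neural nets are often called opaque.
print(model.predict(X[:5]))
```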

Problems that can be drawn as a flow chart, with every variable accounted for, are well suited to symbolic AI. But scale is always an issue. Dickson mentions expert systems, a classic application of symbolic AI, and notes that “they require a huge amount of effort by domain experts and software engineers and only work in very narrow use cases.” On top of that, the knowledge base is likely to require continual updating.
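For readers who have never seen one, here is a toy sketch of the general shape of an expert system: a hand-written knowledge base of if-then rules plus a loop that keeps applying them until nothing new can be concluded (forward chaining). The rules here are invented for illustration and are not taken from any real system.

```python
# A toy expert system: a tiny knowledge base plus forward chaining.
# The rules are invented for illustration, not from any real system.

RULES = [
    ({"fever", "stiff_neck"}, "possible_meningitis"),
    ({"possible_meningitis"}, "recommend_specialist_referral"),
    ({"fever", "cough"}, "possible_respiratory_infection"),
]

def forward_chain(facts):
    """Fire any rule whose conditions are met; repeat until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "stiff_neck"}))
# Every new condition or guideline means a human edits RULES by hand,
# which is the continual-updating problem described above.
```

Every fact and rule has to be typed in by a person who understands the domain, which is where the years of expert labor go.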

An early, much-praised expert system (called MYCIN) was designed to help doctors choose treatments for patients with serious bacterial infections of the blood. In spite of years of investment, it remained a research project — an experimental system. It was not sold to hospitals or clinics. It was not used in day-to-day practice by any doctors diagnosing patients in a clinical setting.

“I have never done a calculation of the number of man-years of labor that went into the project, so I can’t tell you for sure how much time was involved … it is such a major chore to build up a real-world expert system.”

—Edward H. Shortliffe, principal developer of the MYCIN expert system (source)

Even though expert systems are impractical for the most part, there are other useful applications for symbolic AI. Dickson mentions “efforts to combine neural networks and symbolic AI” near the end of his post. He points out that symbolic systems are not “opaque” the way neural nets are — you can backtrack through a decision or prediction and see how it was made.


Summary of the challenges facing algorithms, AI

Hayden Field, a technology journalist at Morning Brew, published a series of articles about algorithms and AI earlier this year, and they’ve been on my TBR list.

First up was Nine Experts on the Single Biggest Obstacle Facing AI and Algorithms in the Next Five Years. Experts: Drago Anguelov (Waymo); Kathy Baxter (Salesforce); David Cox (IBM Watson); Natasha Crampton (Microsoft); Mark Diaz (Ethical AI at Google); Charles Isbell (professor and dean, College of Computing, Georgia Institute of Technology); Peter Lofgren (Stripe); Andrew Ng (co-founder and former head, Google Brain); Cathy O’Neil (author, Weapons of Math Destruction).

Predictably, ethics was noted as a big challenge — O’Neil asked what we will do about unfairness in decisions made by algorithms. Diaz pointed to the need for involving “experts from a wide range of disciplines, including non-technical disciplines,” in the development process, long before an end product emerges. This intersects with ethics and fairness, as the absence of experts and stakeholders opens the door wide to omissions and errors. Baxter was explicit about systemic racism that is embedded in both training data and models. She listed “medical care decisions, hiring recommendations, access to housing and social programs, visa application approvals, school exam results, hate speech detection, dynamic pricing algorithms for ride hailing services, and even dating apps” — as well as face recognition and predictive policing.

“In essence, problems that are not purely technical require solutions that are not purely technical.”

—Mark Diaz, Ethical AI at Google

Isbell spoke of systematic solutions that can be widely applied. “We cannot treat minority groups as exceptions and edge cases,” he said. Cox highlighted transparency and explainability, as well as ethics and bias. He also alluded to adversarial attacks as well as the non-adversarial errors that surprise researchers (possibly due to overfitting). He grouped all this under trust. Crampton also focused on fairness and referred to diversity in teams, similar to Diaz’s and Isbell’s concerns.

Anguelov explained the need for reliable simulations so that systems can scale up to real-world use. He’s talking about the Long Tail problem: the real world throws up too many unexpected situations. Simulations allow testing in ways that don’t risk human lives (think self-driving cars). Lofgren also talked about scale, but in terms of personalization — his example is detecting credit card fraud in real time by first flagging abusive IP addresses at the Big Data level and then drilling down to the individual cards being used at those addresses. Ng talked about the difficulty in making dependable commercial AI products — basically off-the-shelf solutions.
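A rough sketch of the two-step pattern Lofgren describes might look like the following. The thresholds, field names, and transactions are my own inventions, not Stripe’s actual system.

```python
# A rough sketch of "flag the abusive IP, then drill down to the cards."
# Thresholds, field names, and data are invented; not any company's real system.
from collections import defaultdict

transactions = [
    {"ip": "203.0.113.7", "card": "card_A", "amount": 25.00},
    {"ip": "203.0.113.7", "card": "card_B", "amount": 25.00},
    {"ip": "203.0.113.7", "card": "card_C", "amount": 25.00},
    {"ip": "198.51.100.4", "card": "card_D", "amount": 90.00},
]

# Step 1: aggregate across all traffic and flag IPs using many different cards.
cards_by_ip = defaultdict(set)
for t in transactions:
    cards_by_ip[t["ip"]].add(t["card"])

SUSPICIOUS_CARD_COUNT = 3  # invented threshold
flagged_ips = {ip for ip, cards in cards_by_ip.items()
               if len(cards) >= SUSPICIOUS_CARD_COUNT}

# Step 2: drill down to the individual cards seen at the flagged IPs.
cards_to_review = {t["card"] for t in transactions if t["ip"] in flagged_ips}

print(flagged_ips)       # {'203.0.113.7'}
print(cards_to_review)   # {'card_A', 'card_B', 'card_C'}
```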

“We will often need to make hard decisions based on competing priorities, including decisions to not build or deploy a system for certain purposes.”

—Natasha Crampton, Microsoft

Second in the series is titled Amex’s Fraud Detection AI Was Ready to Go Live. Then Covid Hit. This article starts with the idea that large AI models in the field will still need adjustments as unforeseen problems crop up. This echoes the concerns about scale raised by Anguelov and Lofgren in the first article in the series.

The challenge thrown by COVID-19 was that all existing models had been developed and adjusted in a non-pandemic world. Then the world changed.

Amex’s fraud-detecting systems are a blend of old-school rule-based systems and newer machine learning techniques. A team of about 30 decision scientists monitors the system around the clock and updates it when necessary, at least once a year. The pandemic came at a bad time for Amex, just as they were rolling out a new model.

“Since each generation of a gradient-boosting ML model is typically developed on data from earlier that same year, many of the model’s assumptions no longer made sense” in 2020.

—”Amex’s Fraud Detection AI Was Ready to Go Live. Then Covid Hit”
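To see why a model “developed on data from earlier that same year” can break, here is a minimal sketch with synthetic data and scikit-learn’s GradientBoostingClassifier (not Amex’s actual model). When spending patterns shift, the cutoff the model learned from the old data no longer separates fraud from normal purchases.

```python
# A minimal sketch of concept drift with a gradient-boosting model.
# The data is synthetic and the fraud rule is invented; this only
# illustrates why a model trained on pre-shift data can degrade.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

def make_data(n, typical_amount, fraud_threshold):
    """Fake transactions: one feature (amount); fraud = unusually large purchase."""
    amount = rng.exponential(scale=typical_amount, size=n)
    fraud = (amount > fraud_threshold).astype(int)
    return amount.reshape(-1, 1), fraud

# "Earlier that same year": modest everyday spending.
X_before, y_before = make_data(5000, typical_amount=80, fraud_threshold=300)
# After the world changed: much larger routine purchases, so the old
# notion of "unusually large" no longer applies.
X_after, y_after = make_data(5000, typical_amount=250, fraud_threshold=900)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_before, y_before)

print(f"accuracy on pre-shift data:  {model.score(X_before, y_before):.2f}")
print(f"accuracy on post-shift data: {model.score(X_after, y_after):.2f}")
# The second number drops sharply: the model keeps flagging purchases
# that are now perfectly normal.
```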

This is a really interesting article — although I’d read other pieces about problems the pandemic caused for AI models, most of those had to do with either healthcare or travel.

Because of increased online traffic in 2020 — more people online, every day, as the pandemic drove work-from-home and stay-at-home schooling — demands on Amazon Web Services (providing servers and processing power to millions of commercial clients such as Amex) grew enormously. This “dwindling cloud capacity” meant testing new solutions for Amex’s model took much longer than usual. The team had to run new simulations that took our new way of life into account, and those simulations required lots of processor juice.

In the end, Amex’s rollout was successful — but it came months later than originally planned. This was a really neat case study and could be discussed in a lot of different contexts.

I’m going to look at the other articles in the series in tomorrow’s post.

.

Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.
