Deep learning: Why it’s time for AI to get philosophical

SPECIAL TO THE GLOBE AND MAIL
ILLUSTRATIONS BY RAYMOND BIESINGER

Catherine Stinson is a postdoctoral scholar at the Rotman Institute of Philosophy at the University of Western Ontario, and a former machine-learning researcher.

I wrote my first lines of code in 1992, in a high school computer science class. When the words “Hello world” appeared in acid green on the tiny screen of a boxy Macintosh computer, I was hooked. I remember thinking with exhilaration, “This thing will do exactly what I tell it to do!” and, only half-ironically, “Finally, someone understands me!” For a kid in the throes of puberty, used to being told what to do by adults of dubious authority, it was freeing to interact with something that hung on my every word – and let me be completely in charge.

For a lot of coders, the feeling of empowerment you get from knowing exactly how a thing works – and having complete control over it – is what attracts them to the job. Artificial intelligence (AI) is producing some pretty nifty gadgets, from self-driving cars (in space!) to automated medical diagnoses. The product I’m most looking forward to is real-time translation of spoken language, so I’ll never again make gaffes such as telling a child I’ve just met that I’m their parent or announcing to a room full of people that I’m going to change my clothes in December.

But it’s starting to feel as though we’re losing control.

These days, most of my interactions with AI consist of shouting, “No, Siri! I said Paris, not bratwurst!” And when my computer does completely understand me, it no longer feels empowering. The targeted ads about early menopause and career counselling hit just a little too close to home, and my Fitbit seems like a creepy Santa Claus who knows when I am sleeping, knows when I’m awake and knows if I’ve been bad or good at sticking to my exercise regimen.

Algorithms tracking our every step and keystroke expose us to dangers much more serious than impulsively buying wrinkle cream. Increasingly polarized and radicalized political movements, leaked health data and the manipulation of elections using harvested Facebook profiles are among the documented outcomes of the mass deployment of AI. Something as seemingly innocent as sharing your jogging routes online can reveal military secrets. These cases are just the tip of the iceberg. Even our beloved Canadian Tire money is being repurposed as a surveillance tool for a machine-learning team.

The continuum of creepiness between a Fitbit, above, and 2001: A Space Odyssey’s mad computer HAL 9000 may not be as wide as we imagine.
MGM/PHOTOFEST

For years, science-fiction writers have spelled out both the technological marvels and the doomsday scenarios that might result from intelligent technology that understands us perfectly and does exactly what we tell it to do. But only recently has the inevitability of tricorders, robocops and constant surveillance become obvious to the non-fan general public. Stories about AI now appear in the daily news, and these stories seem to be evenly split between hyperbolically self-congratulatory pieces by people in the AI world, about how deep learning is poised to solve every problem from the housing crisis to the flu, and doom-and-gloom predictions of cultural commentators who say robots will soon enslave us all. Alexa’s creepy midnight cackling is just the latest warning sign.

Which side should we believe? As someone who has played for both teams, my answer is that there are indeed genuine risks we ought to be very worried about, but the most serious risks are not the ones getting the most attention. There are also things we can do to make sure AI is developed in ways that benefit society, but many of the current efforts to address the risk that AI will run amok are misdirected.

One scenario I don’t think we need to worry about is the one in which robots become conscious (whatever that means) and decide to kill us. For one thing, conscious robots are still far off. For another, consciousness doesn’t lead directly to murder – it’s a non sequitur. Another scenario I find far-fetched is the one in which a computer tasked with maximizing profits for a paperclip company ends up turning all the resources on the planet (and beyond) into paperclips. Sure, it’s possible, but not very probable. We should be able to switch off that malfunctioning HAL 9000. One thing we should be very concerned about, however, is weaponized drones. Automated killing machines are not going to make our lives better, period.

The other serious risk is something I call nerd-sightedness: the inability to see value beyond one’s own inner circle. There’s a tendency in the computer-science world to build first, fix later, while avoiding outside guidance during the design and production of new technology. Both the people working in AI and the people holding the purse strings need to start taking the social and ethical implications of their work much more seriously.

In medicine and engineering, there are codes of conduct that professionals are expected to follow. The idea that scientists bear some responsibility for the technologies their work makes possible is also well established in nuclear physics and genetics, even though scientists don’t make the final decision to push the red button or genetically engineer red-headed babies. In the behavioural sciences, there are research-ethics boards that weigh the possible harms to participants in proposed experiments against the benefits to the population. Studies whose results are expected to cause societal harm don’t get approved. In computer science, ethics is optional.

Mary Shelley’s Frankenstein, which turned 200 this year, is perhaps the most famous warning to scientists not to abdicate responsibility for their creations. Victor Frankenstein literally runs away after seeing the ugliness of his creation, and it is this act of abandonment that leads to the creature’s vengeful, murderous rampage. Frankenstein begins with the same lofty goal as AI researchers currently applying their methods to medicine: “What glory would attend the discovery if I could banish disease from the human frame and render man invulnerable to any but a violent death!” In a line dripping with dramatic irony, Frankenstein’s mentor assures him that “the labours of men of genius, however erroneously directed, scarcely ever fail in ultimately turning to the solid advantage of mankind.” Shelley knew how dangerous this egotistical attitude could be.

But the nerd-sighted geniuses of our day make the same mistake. If you ask a coder what should be done to make sure AI does no evil, you’re likely to get one of two answers, neither of which is reassuring. Answer No. 1: “That’s not my problem. I just build it,” as exemplified recently by a Harvard computer scientist who said, “I’m just an engineer” when asked how a predictive policing tool he developed could be misused. Answer No. 2: “Trust me. I’m smart enough to get it right.” AI researchers are a smart bunch, but they have a terrible track record of avoiding ethical blunders. Some of the better-known goof-ups include Google images tagging black people as gorillas, chatbots that turn into Nazis and racist soap dispensers. The consequences can be much more serious when biased algorithms are in charge of deciding who should be approved for a bank loan, whom to hire or admit to university, or whether to kill a suspect in a police chase.

Tay, a chatbot designed by Microsoft, was introduced in 2016 as an experiment to teach an AI how to hold conversations with people. Within 24 hours, Twitter users had taught it to say racist and misogynist things. Microsoft quickly closed off the bot to the public.
TWITTER

There is a growing movement within AI to do better. For the first time, this past December, one of the top AI conferences, NIPS, featured a keynote about bias in AI. The New York Times recently reported that top computer-science schools such as MIT and Stanford are rushing to roll out ethics courses in response to a newfound awareness that the build-it-first, fix-it-later ethos isn’t cutting it.

In fact, such courses have existed for a long time. The University of Toronto’s computer-science program has a computers and society course, which includes a few weeks of ethics. Philosophy departments and, where they exist, interdisciplinary departments such as science and technology studies, history and philosophy of science and media studies also routinely offer courses covering the social and ethical implications of technology. Unfortunately, these courses are rarely required for computer-science degrees, so most graduates don’t get any ethics training at all. I’m currently teaching a course about the social and ethical implications of AI in health care – to about a dozen students.

That may be changing. When I contacted François Pitt, the undergraduate co-ordinator of U of T’s computer-science department, he said the issue of ethics training had come up four or five times just that week. Steve Easterbrook, the professor in charge of the computers and society course, agrees that there needs to be more ethics training, noting that moral reasoning in computer-science students has been shown to be “much less mature than students from most other disciplines.” Instead of one course on ethics, he says “it ought to be infused across the curriculum, so that all students are continually exposed to it.” Prof. Zeynep Tufekci of the University of North Carolina at Chapel Hill agrees, commenting on Twitter, “You have to integrate ethics into the computer science curriculum from the first moment you teach students here’s a variable, here’s an array, and look, we can sort things. When you run your first qsort, [a standard sorting algorithm encountered in computer-science 101 courses], you’ve encountered ethical and ontological questions.”
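
Prof. Tufekci’s point is easier to see in code than in prose. Here is a minimal, purely hypothetical sketch – the applicant records and the choice of sort key are invented for illustration, not drawn from any real lending system – showing that the moment a student sorts people “best first,” they have already taken a position on what should count as best.

```python
# A hypothetical first sorting exercise: the records and the sort key
# are invented for illustration, not taken from any real system.
applicants = [
    {"name": "A", "credit_score": 710, "years_at_address": 1},
    {"name": "B", "credit_score": 640, "years_at_address": 12},
    {"name": "C", "credit_score": 700, "years_at_address": 4},
]

# Sorting "best first" requires deciding what counts as best. Ranking by
# credit score alone quietly answers an ethical question about whom a
# lender should favour -- a question the student may not notice asking.
ranked = sorted(applicants, key=lambda a: a["credit_score"], reverse=True)
print([a["name"] for a in ranked])  # ['A', 'C', 'B']
```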

It remains to be seen how computer-science departments can pull off that sort of sweeping curriculum change, given that many of their faculty have just as little ethics training as their students. Philosophy departments regularly face the frustration of having the expertise to teach applied ethics courses, only to have those courses get poached by law, medicine and engineering departments, which require their students to take in-house ethics courses that are often taught by non-experts. Hopefully, computer-science departments expanding their ethics offerings will see the need to hire trained ethicists to design and teach the courses. Frankly, opportunities for philosophers to make themselves useful are rare enough that we shouldn’t pass them over in cases where their training has real value.

Another kind of effort at fixing AI’s ethics problem is the proliferation of crowdsourced ethics projects, which have the commendable goal of a more democratic approach to science. One example is DJ Patil’s Code of Ethics for Data Science, which invites the data-science community to contribute ideas but doesn’t build up from the decades of work already done by philosophers, historians and sociologists of science. Then there’s MIT’s Moral Machine project, which asks the public to vote on questions such as whether a self-driving car with brake failure ought to run over five homeless people rather than one female doctor. Philosophers call these “trolley problems” and have published thousands of books and papers on the topic over the past half-century. Comparing the views of professional philosophers with those of the general public can be eye-opening, as experimental philosophy has repeatedly shown, but simply ignoring the experts and taking a vote instead is irresponsible.

The point of making AI more ethical is so it won’t reproduce the prejudices of random jerks on the internet. Community participation throughout the design process of new AI tools is a good idea, but let’s not do it by having trolls decide ethical questions. Instead, representatives from the populations affected by technological change should be consulted about what outcomes they value most, what needs the technology should address and whether proposed designs would be usable given the resources available. Input from residents of heavily policed neighbourhoods would have revealed that a predictive policing system trained on historical data would exacerbate racial profiling. Having a person of colour on the design team for that soap dispenser should have made it obvious that a peachy skin tone detector wouldn’t work for everyone. Anyone who has had a stalker is sure to notice the potential abuses of selfie drones. Diversifying the pool of talent in AI is part of the solution, but AI also needs outside help from experts in other fields, more public consultation and stronger government oversight.

Elderly people play with a robot named NAO, manufactured by Softbank Robotics, in their retirement home in Bordeaux, France.
REGIS DUVIGNAU/REUTERS

Another recent article in The New York Times claimed that “academics have been asleep at the wheel,” leaving policy makers who are struggling to figure out how to regulate AI at the mercy of industry lobbyists. The article set off a Twitter storm of replies from philosophers, historians and sociologists of science, angry that their decades of underfunded work are again being ignored and erased. Like the Whos down in Whoville, they cried out in fear, “We are here! We are here! We are here! We are here!” If policy makers and funding sources listen closely to those voices, they will hear solutions being offered. The article concludes that we “urgently need an academic institute focused on algorithmic accountability.” On Twitter, the article’s author, Cathy O’Neil, insisted, “There should be many many more tenure lines devoted to it.” Those both sound like solid ideas.

How does this play out in the Canadian context? The federal government recently earmarked $125-million for a Pan-Canadian Artificial Intelligence Strategy – with the Canadian Institute for Advanced Research (CIFAR) in charge – and three new research institutes in Edmonton, Toronto and Montreal. One of the goals is to “develop global thought leadership on the economic, ethical, policy and legal implications of advances in artificial intelligence.”

There are promising signs already. An International Scientific Advisory Committee on AI Strategy was recently announced and includes a co-founder of the Future of Life Institute, which has a mission to safeguard life against technological challenges. CIFAR just launched an AI and society program featuring workshops and publications designed to stimulate discussion and guide policy makers on the ethical, legal, economic and policy issues AI presents for society. The Alberta Machine Intelligence Institute so far focuses on industry partnerships, while Toronto’s Vector Institute has particular strengths in deep learning and medical applications of AI. Vector Institute research director Richard Zemel has a growing side project in algorithmic fairness and has been making connections with legal scholars and researchers from the University of Toronto’s Centre for Ethics. Another group is studying AI and safety. Prof. Zemel promises that bringing in social scientists, ethicists and policy analysts is something he is “determined to do.”

The Montreal Institute for Learning Algorithms (MILA) has taken several concrete steps toward addressing AI’s ethical challenges, including offering diversity scholarships, hosting an interdisciplinary forum on the socially responsible development of AI and announcing the Montreal Declaration for a Responsible Development of AI. Although this too is a crowdsourced ethical code, there’s a key difference: Philosophers, social scientists and policy analysts were involved in its inception and will oversee the final document.

What’s still lacking is investment in jobs dedicated to the social, ethical and policy implications of AI. The Vector Institute is hiring research scientists and postdocs, and MILA is hiring interns, postdocs and professors, but in each case, expertise in machine learning is the main job qualification. CIFAR’s AI and society workshops propose to bring together “interdisciplinary teams to explore emerging questions about how AI could affect the world,” but there are precious few jobs in Canada (or elsewhere) for people who study AI’s effects from the perspective of the humanities and social sciences. Alongside the creation of new tenure lines and graduate degrees in AI, we need to invest in training and employing experts in the social, ethical and policy implications of AI if we’re to have a hope of predicting and preventing dangerous outcomes.

Sophia, a robot with Saudi Arabian citizenship, interacts at an innovation fair in Kathmandu. As the technical achievements of robotics and AI advance, there are few jobs to study artificial intelligence from a humanities or social-science perspective.
NAVESH CHITRAKAR

A recent CIFAR event illustrates what’s still missing in the strategy. The workshop brought together AI and health researchers for a joint project to develop systems to do things such as predict flu outbreaks. Sounds good, but work by philosophers of science has shown that putting people in the same boardroom is not enough to make interdisciplinary projects work. Researchers in different fields use different vocabularies, concepts and methods – and may have conflicting goals. Words such as “bias,” “naive,” “ontology,” “regression” and “supervised” have technical meanings in AI that are very different from their usual meanings. Having a translator of sorts to integrate across disciplines is essential to avoid potentially costly misunderstandings. That means inviting philosophers, sociologists and historians of science to the table.

Fahad Razak, an internist at St. Michael’s Hospital and an assistant professor of health policy, management and evaluation at the University of Toronto, describes facing just this kind of challenge applying AI methods in his research. The Li Ka Shing Knowledge Institute’s General Medicine Inpatient Initiative at St. Mike’s applies machine learning to health-care data with the aim of improving treatment, especially for patients with multiple concurrent diagnoses. One of the main challenges with health-care data sets, according to Prof. Razak, is “a data-quality problem, both with data being inaccurate or poorly coded, but also often because of the lack of lots of relevant data.” Socioeconomic status and the ability to function without assistance at home are two examples of important data that is often missing. But, he says, computer scientists don’t realize how messy and gappy the data is, and because of the hype around AI, health researchers “believe that AI methods are so miraculous, they just work despite data quality problems!”
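
A small, hypothetical example shows how a seemingly routine data-cleaning choice can quietly distort a study. Nothing below comes from the St. Mike’s data; the records and the “drop the incomplete rows” step are invented to illustrate what “messy and gappy” means in practice.

```python
# Hypothetical patient records: income is exactly the kind of relevant
# data that Prof. Razak notes is often missing.
records = [
    {"income": 20000, "readmitted": True},
    {"income": None,  "readmitted": True},
    {"income": 90000, "readmitted": False},
    {"income": None,  "readmitted": True},
]

# A common quick fix is to drop incomplete rows before modelling -- but
# that silently discards the patients whose data is hardest to collect,
# and any model trained afterward never sees them at all.
complete = [r for r in records if r["income"] is not None]
print(f"{len(complete)} of {len(records)} records survive the naive drop")
```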

If every computer-science student were taught to look for social and ethical problems as soon as they learn about variables, arrays and sorting, as Prof. Tufekci suggests, they would realize that the way health data is coded matters very much. For example, recording age as a range, such as under 19, 19-24, 25-35, may seem harmless, but whether a patient is 1 or 17 can make a big difference to their health-care needs. An algorithm that doesn’t take that into account could make fatal mistakes – for example, by suggesting the wrong dosage. Likewise, since income is usually missing from health data sets, an algorithm could draw false conclusions about how to prevent diabetes or premature births.
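
Here is what that looks like in miniature. The bins follow the ranges above, but the dosing table is entirely made up for illustration; the point is only that once age has been collapsed into a range, no downstream rule can tell a 1-year-old from a 17-year-old.

```python
# Hypothetical sketch: coarse age bins and an invented dosing table,
# not a real clinical guideline.
def age_bin(age_years):
    """Record age the way a coarsely coded data set might: as a range."""
    if age_years < 19:
        return "under 19"
    if age_years <= 24:
        return "19-24"
    if age_years <= 35:
        return "25-35"
    return "over 35"

DOSE_MG = {"under 19": 250, "19-24": 400, "25-35": 500, "over 35": 500}

def suggested_dose_mg(bin_label):
    """A naive rule that sees only the bin, never the actual age."""
    return DOSE_MG[bin_label]

# An infant and a teenager land in the same bin and get the same
# suggestion -- the kind of mistake the coding choice makes invisible.
for age in (1, 17):
    print(age, age_bin(age), suggested_dose_mg(age_bin(age)))
```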

These are just a small sample of the mistakes we need to prevent. The current generation of AI researchers (with a few exceptions) do not have the training necessary to deal with the implications of the AI they are building. So far, the experts who do have that training are not being hired to help. That needs to change – or the darkest of science fiction will become reality.
