Episode Notes
As Australia reels from catastrophic bushfires and grapples with COVID-19, these crises have revealed the fragility of our infrastructure, including supply chains and telecommunications. This episode of the Seriously Social podcast explores artificial intelligence and what it means for humanity. Join host Ginger Gorman and cultural anthropologist, technologist and futurist Professor Genevieve Bell as they dissect the fears and realities of technology.
Transcript
Ginger Gorman: With me now is Professor Genevieve Bell, live and in person, in fact. She is Director of the 3A Institute, also known as 3AI at the Australian National University in Canberra. Genevieve is an expert in artificial intelligence and what it means for humanity. Genevieve, thank you so much for joining me.
Professor Genevieve Bell: It’s my pleasure, Ginger.
Ginger: And lovely to see you socially distanced, but in person.
Professor Genevieve Bell: It’s nice to be back in our space at the Institute at the Australian National University. It’s nice to get to have you here.
Ginger: And it’s actually a very fancy recording studio, isn’t it?
Professor Genevieve Bell: Well, it has sound dampening walls, which helps.
Ginger: One of the things I realised when I was chatting to you on the phone, Genevieve, before this interview is, I actually didn’t understand really what artificial intelligence is. And I think this is the case for a lot of people. So, let’s start out with what it is.
Professor Genevieve Bell: Listen, I always think it’s a very good starting position. And as a well-trained anthropologist, I often think it’s really good to start by attempting to get to a shared understanding of what we’re talking about. For me, that usually means you’ve got to give it a bit of context and a bit of history.
And I think the question you’re asking is the right one too, because actually, I imagine when you say AI, most people kind of nod sagely and go, “Hm-mm. I know what that is.” And the reality is, were you to poll everyone now listening, they’d each have a slightly different take on it. So, for me at least, here are the couple of pieces that matter.
First of all, you need to understand that AI was a term coined in a grant proposal in 1955 to fund a conference in 1956. That conference was co-funded by a couple of big American tech companies, IBM, Bell Telephone Laboratories and RAND, as well as some academics from Harvard, MIT and Dartmouth. They put in a grant proposal asking for money to spend eight weeks together contemplating the proposition that you could describe intelligence so precisely, in manageable pieces, that a machine could be made to simulate it.
Further to that, they said there would be four things they’d be really interested in simulating in that sense. One was the ability for the machine to understand language. The second was the ability for the machine to understand complex symbols and abstractions. The third was that the machine would be able to do tasks currently, circa 1955, reserved for human beings. And the fourth was that the machine would learn for itself.
The reason you’ve got to go back to 1955 and 1956 is that it turns out that set of preoccupations, breaking intelligence down into manageable pieces a machine could replicate, language, visual abstraction, learning for itself and doing things that humans do, has basically been what we have been pursuing in the AI research and commercialisation space ever since. Nothing, in some ways, has changed.
What you also need to know, going back to ’55/’56, is that this was always a pursuit about how to apply computational technology, one that twinned commercial interests, military interests and academic ones. This was never something that unfolded just inside a university.
And of course, for that group of people who gathered in 1956, they had already been engaged in conversations about what computation might do for the preceding 10 years, and those conversations had unfolded with social scientists, biologists, psychologists and a bunch of other people. So, this was really a distillation of a much longer set of conversations.
Ginger: How do you explain it to people though at a dinner party, say, who might have their eyes glazing over when you talk about artificial intelligence? How do you relate it to their everyday lives so they think that it matters to them?
Professor Genevieve Bell: The last time I got asked what AI was, I was actually at a dinner table, more like a dinner bench, in a mine site in the Pilbara. So, I was asked, “What is AI and why do you care?”
And I told two stories. One, I talked about what it was that people were concerned about in 1955, because I actually think you do need to know where these things come from and where those people came from, because it really shapes what got built into the questions we ask.
And then I talked about, as I often do, more banal pieces of technology, things that are around us all the time that we don’t think of as exhibiting characteristics of AI, but they really are. At that mine site, that happened to be autonomous dump trucks.
When I’m having these conversations in Sydney and Melbourne, I often talk about smart lifts, or elevators. If you’ve been in a high-rise building recently, it may have been the kind where you pushed a button before you got into the lift, and then when you got into the carriage, there were no buttons. And you had that moment of going, “Ah! Where did the buttons go?” The chances are you were encountering a lightweight AI system at that point, because what has happened in those lifts is that we have moved from you as the human pushing a button and the lift responding to your button call, to you as a human pushing a button, which triggers a set of data activities.
And it actually doesn’t directly call the lift, because it turns out that lift knew you were going to press that button before you pressed it. Not because it is telepathic or psychic or anything else, but because that lift is tracking the behaviours of everyone in that building over time, and it has started to learn that people like Ginger leave the building at 11:30 looking for coffee. And so, the lifts are waiting for you before you’ve even pressed the button.
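To make the lift example concrete, here is a minimal illustrative sketch in Python. It is a toy assumption about how such a system might work, not any real lift controller’s code; the PredictiveDispatcher class and its methods are hypothetical. The idea is simply to count historical hall calls by floor and hour of day, then pre-position a car at the most likely floor:

```python
from collections import defaultdict
from datetime import datetime
from typing import Optional

class PredictiveDispatcher:
    """Toy model of a 'smart lift': count past hall calls by
    (floor, hour-of-day) and pre-position a car at the floor with
    the most historical demand for the current hour."""

    def __init__(self) -> None:
        # (floor, hour) -> number of hall calls observed so far
        self.call_counts = defaultdict(int)

    def record_call(self, floor: int, when: datetime) -> None:
        # Every button press becomes a data point the model learns from.
        self.call_counts[(floor, when.hour)] += 1

    def predict_floor(self, when: datetime) -> Optional[int]:
        # Return the floor most likely to call a lift at this hour,
        # or None if there is no history for this hour yet.
        demand = {floor: n for (floor, hour), n in self.call_counts.items()
                  if hour == when.hour}
        if not demand:
            return None
        return max(demand, key=demand.get)

# After enough 11:00-hour calls from level 4 (the coffee run),
# the dispatcher parks a car there before anyone presses a button.
dispatcher = PredictiveDispatcher()
for _ in range(5):
    dispatcher.record_call(floor=4, when=datetime(2020, 6, 1, 11, 30))
dispatcher.record_call(floor=2, when=datetime(2020, 6, 1, 11, 45))
print(dispatcher.predict_floor(datetime(2020, 6, 2, 11, 15)))  # -> 4
```

A production system would use far richer features and an actual learning model, but the principle is the one described above: every button press becomes data the system learns from.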
Ginger: I find that really creepy, and this jumps to something else that I want to talk to you about, which is that often our experiences of AI in everyday life are creepy. I was on stage at the Melbourne Writers Festival and Siri turned on and started talking to me on stage, and I kind of couldn’t turn her off.
And the other thing that happens is I’ve not long separated from my husband and Facebook and my photo memories on my computer will throw up all these photos of him all the time at the worst possible moment. So, often, our experiences of AI are quite disconcerting. And I wonder if you think that feeds into our sense of AI as a threat.
Professor Genevieve Bell: Something I think is really interesting is that we have always understood AI, from day one, as being part and parcel of a complicated relationship between humans and technology, and humans and computation. And even in the earliest documents, there’s a debate unfolding about whether computers can really be like humans. Will they be able to write poetry? Will they have belief systems? Will they be able to do, quote, unquote, creative things?
And what’s lurking inside all of that is a deep set of cultural preoccupations, particularly Western cultural preoccupations, about what makes us human and what makes us distinctive, and about some very deep-seated things we would call the sociotechnical imagination: basically, how society shapes the way we think about things, physical things, technical things and cultural things alike.
And we would say that in our cultures now, particularly in the West, we have had longstanding stories about what happens when humans make things come to life. Spoiler alert: none of those stories end well. So, The Terminator, Frankenstein, the Golem. And so, part of why there is unease about technology is that we have hundreds of years of stories telling us there should be unease.
And the second part is that it is also the case that those fears are not irrational, if you look at the arc of the 20th Century in particular and what it is that humans did when they built technology. Spoiler alert: that also didn’t end well. So, the most famous roboticist of the 20th Century, in some ways, is a man named Mori from Japan. And he always says that our anxieties about robots in particular are really just a projection of our anxieties about what humans can do to other humans, and that the fact that we are the ones building the machinery is actually the thing we should fear, not the technology itself.
Ginger: And we have had huge catastrophes, like the atom bomb, that make us afraid, rightly, of technology.
Professor Genevieve Bell: And that’s what Mori was referencing. Mori basically said, “Listen, the robots didn’t bomb Hiroshima and Nagasaki. Humans did.” And so, he says our anxiety about the machines is tied up with that. I think that’s one way of looking at it.
I think the other one is to say, “Listen, there is an extraordinary complexity in what happens when computation and machines become sufficiently dense that it is very hard both to see them and then to explain them.” And that constellation, the complexity plus the fact that we then can’t easily and readily access it, makes it difficult. And it also takes multiple generations to get comfortable with new technical systems.
Ginger: In our previous season, Professor Anthony Elliott, who’s also an AI expert, proclaimed that people think of AI either as a menace or a saviour. Why do you think that kind of dichotomy is actually unhelpful?
Professor Genevieve Bell: Listen, I think that dichotomy is one that is fed... Again, this is where the phrase the ‘sociotechnical imagination’ comes in, right. We imagine it is one of those two things because those are the stories that we are primed with. And unsurprisingly, dystopic images of technology have functioned in a more interesting way than utopic visions of it, at least over the last 30‑plus years. It’s also really interesting which things we imagine are going to be bad as robots.
Part of the reason I started to talk about lifts a lot was I realised that we didn’t actually have a deep science fiction trope about killer lifts. There are no fantasies about the lift crawling out of the stack and kind of wandering around the room threatening to stamp on you. Holding in abeyance Douglas Adams, because I know someone will want to talk about that.
I think the problem with those narratives is that they’re incredibly reductive. And they tend to do two things, right. One is they make it easy to dismiss people’s fears as just fed by Hollywood, when the reality is there are reasons we should be cautious and careful. These are new technical systems with inordinate capacity and an ability to scale very quickly. So, I think there’s always a reason to be cautious.
Likewise, the moments when we have been overly optimistic about technical systems mean we haven’t really thought about them either. So, for me, that dichotomy does two problematic pieces of work: it makes it easy to dismiss people’s anxieties, and it makes it equally easy, in some ways, not to pay attention to why the utopian fantasy is there.
Ginger: It’s interesting as humans, isn’t it? We always want things in black and white. We’re very rarely willing to see things in the grey, in the middle of that.
Professor Genevieve Bell: Listen, I’m not convinced that’s all humans. I think that’s a particular moment in a particular set of societies. I know lots of places that are willing to tarry or haggle with the negative, or to spend time where things are ambiguous. But the desire to reduce things to simple constructs is one that we are all guilty of, and one where I actually think you have to do the work to take a step back and ask not only why we are doing that, but what, in doing that, we are erasing.
Ginger: Genevieve, we’re in this strange moment in history because we’ve had these catastrophic summer fires and a lot of us here in Canberra as well couldn’t breathe through that. And now, we’re in the middle of this global pandemic that’s causing catastrophic loss of life. What has that made you think about in terms of technology and our relationship with it?
Professor Genevieve Bell: I think one of the things that’s really interesting is if you unfold the last, now going on eight, months, starting in November of last year: first a series of bushfires which, as you say, ended up impacting about 80% of Australians one way or another, either directly by fire, or by the smoke that followed, or by the underlying droughts that were in some ways part of the cause of all of that. And then you roll into a global pandemic, which has impacted everyone in this country and every other country.
Ginger: And it’s impacting us right now because we’re sitting far apart.
Professor Genevieve Bell: Of course, it is. Well, we’d probably be sitting on the opposite sides of a desk anyway, but yes, and I think it has a set of consequences that are by no means finished. And I don’t think the bushfires are either. It’s easy to kind of imagine that that’s done, but we know it isn’t.
And what was startling about the bushfires, and what continues to be interesting at an intellectual level and complicated at a human level, is that all of those moments revealed the fragility of our infrastructure. So, the bushfires made it clear that our electrical grid was fragile. They made it clear that our telecommunications and information networks were fragile; suddenly we were back to landline phone boxes. And, on the other hand, they made it clear that our social institutions were actually more resilient than we had given them credit for.
And then, I think, if you roll forward into the pandemic that we are still in, what’s also suddenly clear is that our supply chains are neither invisible nor magic. In fact, they are surprisingly fragile, both within this country and in how this country connects to others. And so is our reliance on, and need for, technical systems to function.
I think, for any one of us who has been on the Zoom call with the inevitable glitch, where you freeze-frame in a manner that is never aesthetically pleasing, or where you just think, “I don’t want to look at another screen,” these moments have made it really clear that a lot of the stories we tell ourselves about the 21st Century were predicated on the 20th Century being kind of stable and resolved.
And I think, for me at least, what the last nine months have suggested is that a lot of that stuff that looked like it was done and dusted, or tidied up, or fixed, or finished just isn’t. And a whole lot of those pieces feel much more brittle, or fragile, or unstable. And I think there’s something really interesting that happens when you have to see things whose usual way of functioning is to be invisible.
I mean, you encounter infrastructures most when they’re broken; you don’t really think about electricity until it isn’t there, or the Internet until it’s not working and you can’t use those pieces.
Ginger: In those fires in Mallacoota, people were uncontactable because the mobile phone network went down altogether. And it was a moment, as you say, of thinking this could all go wrong; there would be no way to contact these people if somebody didn’t fix this. So, it really did almost bring us face-to-face with ourselves and our humanity, because the technology wasn’t working.
Professor Genevieve Bell: And I really do believe that one of the things about technology, when it is functioning well, is that it is invisible. But the cost of that invisibility is that we tend not to think about it or ask critical questions about it, like: is it sustainable? How is it being monetised? Who has an interest in it? Where is it being deployed? And then, of course, what happens when it breaks, who’s responsible, and what are the backups?
Ginger: They’re really interesting questions, both from a technology point of view, as you say, and, given your anthropology background, from the human side as well. What did you come up with as solutions, or what did you think about in terms of the future?
Professor Genevieve Bell: One of the things that I came home to do – I spent 30 years living in the United States and came back three years ago now – was to respond to the fact that we don’t have the people, or the skills, or the questions to ask to help us navigate successfully into the future.
And so, I decided, with the kind of hubris you can have when you really don’t know what you’re doing, that we needed to establish a new branch of engineering to take AI safely, sustainably and responsibly to scale. And I figured I could keep talking about what was going on, or I could try building an alternative. Because, for me, I often get asked, “Are you optimistic or pessimistic about the future?” and the only answer I can come up with that feels useful is, “I’m just going to build a better one.”
And so, for us what we’ve been doing here is trying to work out how do you build a new branch of engineering. So, teach it into existence, study it into existence, theorise it into existence.
So, we have an experimental education programme, and we have had a series of research projects going on. We’re trying to work out the big questions you need to be able to repeatedly ask of the systems you are building, regulating and designing, in order to end up with systems that feel like things we can live with.
Ginger: There are so many questions I want to ask you about that, but some of the words you used were really interesting, like ‘sustainably’ and ‘responsibly’. These are not necessarily words that people would associate with technologies like AI. So, why are you putting those into the scene immediately?
Professor Genevieve Bell: Because the other place we could have started having this conversation rather than reflecting on my impressive sound batts was to say that we’re meeting on the lands of the Ngunnawal and Ngambri people and that we should acknowledge those who came before and remember that this land is never ceded and always sacred. And also to remember that this is a place where technical systems have been built for a really long time.
Eighteen months ago now, I took myself up to Brewarrina, up near the New South Wales–Queensland border. It’s a town on the Barwon River. It has many claims to fame, but the one that is most important as far as I’m concerned is that it is home to the oldest known archaeological site representing human technology at scale on the planet.
There is a set of fish weirs or fish traps there that were built 40,000 years ago. They exhibit an exquisite understanding of hydrology, the local environment, the fish that inhabit that river as well as what it means to gather people there and feed them over time.
That system was first built 40,000 years ago; its last documented use in the historical record was in 1915. And I think about a system that endured for 40,000 years, and I think about what its characteristics were. Well, it was sustainable. It was built into, and with, and in sympathy with, and with an understanding of, that environment. It was built in a manner that was safe: safe for humans, safe for fish, safe for the environment. And it was built in a way that was responsible. It wasn’t about taking all the fish. It was about taking and keeping enough fish to feed the people that gathered there.
And I think: there is a technical system, and it is a system, a technical system at scale, that lasted millennia. And if those were its key attributes, I think to myself, “Well, those seem like pretty good ones. Maybe we could pull those forward into the 21st Century.”
Ginger: It’s interesting; what I’m hearing you say is that you always put humans at the forefront, but also history. So, why is it so important, do you think, to put humans in front of the technology?
Professor Genevieve Bell: Well, because humans make technology, right. Technology does not, despite the stories from some of our movies, spring fully formed from Zeus’s forehead or anywhere else. Technical systems are built by and for humans. And in any one of those moments, the humans building those systems have an explicit or implicit imagination of the world they are making. And I think we need to be much better at connecting the dots and asking the questions about what that world is that they, or we, imagine.
And then, for me, the reason to put history in it is not that history ever gives you an answer to what the future might look like, but it does give you an instructive set of questions you might want to ask.
Ginger: So, how are we going to use technology in a way that’s safe and productive and supportive of humanity, going into the future, as opposed to the many instances where our own technologies actually damage us?
Professor Genevieve Bell: I think part of it comes, again, by asking yet another set of questions, which is: what do we think the values are that technology ought to embody? You’ve just said one of them, ‘productive’, and I suspect not everyone would agree. So much of the way we have thought about technology in the last 200 years imagines that the best way to measure an effective technology is whether it makes us more efficient or more productive. And I think those are not universal human truths. They’re not even technical truths. They are values we’ve imposed on our systems.
And so, for me, it’s about being willing to go back, not to first principles, but to a set of principles that say, “As we imagine building technical systems, what is the world we are bringing into existence?” And let’s be really critical about the things that we think are its attributes. We’ve been willing in multiple places to trade away sustainability for efficacy.
So, I’m sure, if you were to go back to Newcomen and Watt and say to them about their steam engine, “Oi! This configuration is great if you’re sitting atop a coalmine in Cornwall, or if you’re surrounded by trees you can feed into it. It’s not very efficient otherwise and, oh, by the way, it actually created a lot of pollution.”
So, what are the questions we would ask now of those systems to say, “Actually, we need to build things that are sustainable.”
Ginger: And it’s not just about building them, but also regulating them. As you well know, Genevieve, because you won my book in the Authors for Firies auction, I’m a cyberhate expert. And one of the things I have been horrified by is the way that big platforms, Facebook, Twitter and so forth, have been able to ignore human rights in a lot of instances and actually do us great damage. And there’s a space in here for regulation.
Professor Genevieve Bell: Oh, absolutely.
Ginger: And I wonder what you think that is.
Professor Genevieve Bell: Listen, I think one of the interesting things we ought to be considering with most technical systems – and I don’t think it’s just AI ones, I think it’s technical systems more broadly – is what are the regulatory, policy and standards frameworks within which we are operating? How do we make sure that we are attending to those? That also means making sure that we are educating our regulators and our policymakers about the dimensions of the technology. It means paying a lot more attention to how we make sense of and explain things. It also means being attentive to where the laws already apply.
So, we can talk about autonomous vehicles, but they’re not absolved from the rules that already exist about safety in cars. Similarly, there are places where our laws need to move forward, right. We have a whole series of notions about privacy that don’t necessarily contend with the way data is being used in the 21st Century. We have ideas about privacy that are complicated in a world of things like contact-tracing apps, where you can inadvertently violate someone else’s privacy.
And so, how we start to think about all of that feels like a place where it’s not just about building a system. It’s about designing the system. It’s about regulating a system, and it’s even about decommissioning it at the end because these things are not built forever.
Ginger: And having ethics and humanity at the core of this seems to be crucial to everything I hear you say whenever you speak, and to the fabulous things you write. I came across this quote in one of your articles when I was doing my homework and I wanted to read it back to you, because I’d like you to comment on it. You said, “I believe that the world needs a new way of thinking to tackle these coming challenges.”
Professor Genevieve Bell: Yep. I wrote that in 2017. I usually follow it with two things. One is I quote one of the early voices in AI, who back in 1950 said that, “We’re either going to need the poets to become engineers or the engineers to become poets,” which is a lovely line, but I think wrong. I hate to disagree with Norbert.
But I think actually, it’s not an ‘either/or’; it’s an ‘as well’ and an ‘all of’. And when I think about what I hoped I would do then and what I am still doing now, building a new way of thinking isn’t just about me asking difficult questions. It’s also about creating space for other people to not just ask, but answer, questions in different kinds of ways.
So, the programme we’ve been building here really is kind of focused on that, how do we create a new generation of practitioners, makers and doers, who know how to ask questions of these systems as well as to think about how you might regulate, design, build, secure and ultimately decommission them.
Ginger: And explain them to the public, because I feel that one of the major challenges we have here is actually capturing the public’s attention, dragging them away from the kids, and soccer practice, and how to pay the mortgage, and getting us all thinking about these bigger-picture things.
Professor Genevieve Bell: Again, it’s not an ‘either/or’, right. Sometimes you come to those moments because that’s the place you are in your life. I think we need to work, as technologists and commentators, on how we make these concepts easier to grasp, and there’s an imperative to do so. And I think we have to imagine that people are going to find different reasons to come to them.
Ginger: Genevieve, thank you so much for talking to me today.
Professor Genevieve Bell: It’s my pleasure.
Useful Links