At the Harvard Faculty Club in late April, a small group of scientists, professors, and political leaders gathered at the request of former Governor Michael Dukakis. The subject of the day was artificial intelligence – both the political and ethical ramifications of the rapidly growing technology.
“I don’t think it’s any secret to many of you that have been a part of these discussions for a long time that I’m a very strong believer in international action, through the United Nations if at all possible, to deal with some of these problems,” says Dukakis. “One of them, quite honestly, is technology which is racing ahead of us so rapidly that we can’t cope with its consequences – politically, ethically, or otherwise.”
The event was the Boston Global Forum’s G7 Summit Conference 2018 on the Ethical Development of Artificial Intelligence. The G7 Summit is an annual event where G7 leaders come together to discuss global issues. These discussions lead to decisions that each member must agree on, and those decisions inform policies and strategies enacted by each country. The G7, or Group of 7, is a group of seven nations with advanced economies: the United States, Canada, the United Kingdom, France, Germany, Japan, and Italy.
The event in Boston was a smaller offshoot of the annual conference, held this year in Charlevoix, Quebec, Canada, under Prime Minister Justin Trudeau. Artificial intelligence is a single item on a large plate of initiatives the G7 considers important – but one close to the heart of those in attendance.
I was able to attend the event as a press guest and I must admit I wasn’t sure what I was getting into. As the managing editor of a publication that helps technology managers implement new technology, artificial intelligence is high on my radar. However, the political leanings of the subject aren’t something I normally consider – my audience and I are more concerned about the practical use, one example of which you can read about here.
Still, the event was close by and I was intrigued. What are the politics behind AI? After attending, I believe they are similar to the politics behind most subjects – muddled, varied, constantly shifting. In attendance were the consuls general of Canada, Mexico, and Greece; former mayors, governors, and business leaders; and a handful of scientists and professors.
With all due respect, I didn’t take much away from what the political leaders had to say. Much of it was the same – we need to do something, we need to learn more, we need to create rules. Much less of it concerned what we should do and how we should do it. The political posturing is not what this article is about, other than to say that politicians were there posturing.
Two Opposing Views of the Future of AI
I want to focus on two speakers at the summit – Professor Max Tegmark of MIT, and Professor John Savage of Brown University. Tegmark is one version of a typical professor – enthusiastic, engaging, clothes slightly ill-fitting, hair unkempt, more concerned with his thoughts than his appearance. Savage is at the other end of the spectrum – buttoned up, analytical, but no less intelligent than Tegmark.
I’ve included videos of each presentation, courtesy of the Boston Global Forum, below. Before you watch them, I want to point out why these two individuals and their discussions intrigued me so.
In the world of Artificial Intelligence there are two schools of thought:
- Artificial intelligence is software. It cannot grow past the parameters we set for it, it won’t develop consciousness unless we design it to, and it will never gain abilities we have not programmed it to gain.
- Artificial intelligence is an intelligence. It will inevitably evolve to the point where it is indistinguishable from human intelligence, then surpasses it. It may even grow to rule over us one day as a superior form of life.
There are many shades of grey in between, but ask someone to speculate on artificial intelligence hundreds of years from now and they will land in one of the two camps.
AI Bounded by Parameters
John Savage is in the former camp.
“I am here to say that I am not a utopian, and I am not a dystopian, and I believe that our technology should serve us, rather than the reverse,” says Savage. “I am also here to say that AI, in my view, offers us many challenges, but that we can meet these challenges. We are not the first to address challenges of this kind.”
Savage focuses on the practical problems of artificial intelligence – “realism,” as he calls it. He brings up the first attempt at building an artificial computer mimicking human memory: voltage values supplied to neuron-like devices, each of which sums those values and produces a result. You could train it by adjusting the weights, with impressive results, but it had limited capabilities – functions it could never learn. The system works and looks powerful, but a theorem shows that for a circuit of fixed depth, realizing certain functions requires a number of neurons that grows exponentially with the size of the input.
In layman’s terms, that means such functions are practically out of reach. Savage posits that neural nets have a limit on what they can do – that artificial intelligence in general has a limit on what it can do. Essentially, while the technology has grown rapidly, it will not reach anywhere near the potential that fanatics seem to believe it will. His stance is that we need to decide what types of AI to create – should we put AI in charge of things like security and weapons? As for global domination – unless we develop an AI to take over humanity, it won’t do so on its own.
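To make the device Savage describes a little more concrete, here is a minimal sketch of a single neuron of that era – a perceptron that sums weighted inputs and fires past a threshold. The AND and XOR training data, the learning rate, and the function names are my own illustrative assumptions, not anything presented at the summit; XOR is the textbook example of a function a single neuron can never learn.

```python
# A minimal sketch of the neuron-like device Savage describes: it sums
# weighted inputs and fires if the total clears a threshold (a perceptron).
# The AND/XOR data, learning rate, and epoch count are illustrative assumptions.

def fire(weights, bias, inputs):
    """Weighted sum of the inputs, passed through a hard threshold."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(samples, epochs=25, rate=0.1):
    """Nudge the weights toward correct answers (the perceptron rule)."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - fire(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# A single neuron can learn AND...
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(and_gate)
print([fire(w, b, x) for x, _ in and_gate])  # [0, 0, 0, 1]

# ...but no single neuron can ever learn XOR, however long it trains.
xor_gate = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b = train(xor_gate)
print([fire(w, b, x) for x, _ in xor_gate])  # never matches [0, 1, 1, 0]
```

You can fix XOR by adding more neurons, and Savage’s deeper point is the scaling version of that fix: for some functions, a circuit of fixed depth needs exponentially many neurons as the input grows.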
Artificial General Intelligence (AGI)
Max Tegmark takes a different stance.
“How far will this go?” asks Tegmark. “Will AI match human intelligence at all tasks? This is the definition of Artificial General Intelligence.” Tegmark explains the idea of power, steering, and destination. First, the power.
Artificial General Intelligence (AGI) is the holy grail of Artificial Intelligence. It’s the point where AI can match human intelligence in every category – where AI is indistinguishable from humanity. Some – including John Savage – don’t believe this is possible. Others disagree.
“AGI will transform life as we know it, with humans no longer being the most intelligent. If we reach AGI, then further AI progress will be driven not by human researchers but by AI itself. Which means that AI progress could be much faster than the typical human research and development time scale, raising the very controversial possibility of an intelligence explosion – where self-improving AI rapidly leaves human intelligence far behind, producing what’s known as superintelligence.”
Some scientists think AGI won’t happen for hundreds of years. Some say it will never happen. Others believe it’s a matter of decades. Regardless, the question is raised of what happens next, if machines can do everything better and cheaper than we can. This is where steering comes in.
Tegmark postulates on where we go from there. Do we get complacent, and allow machines to handle everything while we sort of lounge about on the evolutionary ladder? Or do we choose another path?
It’s not the technology that is inherently good or bad, it’s how humanity uses it. That’s true of all technology. Safety engineering – understanding what can go wrong – is where we come in. When we sent humans to the moon we didn’t just strap them to a rocket, we considered everything that could go wrong and tailored the technology to avoid those pitfalls.
The destination comes in the form of many possible futures in a world of AGI. Tegmark speaks of the enslaved god (humans control AI, disconnected from the internet, and use it for their bidding), the benevolent dictator (AI takes over with our best interest in mind), the protector god (AI works alongside humanity to find best case scenarios for both), and more.
Things got a little trippy there, I know, but the fact is that if AGI is reached, these are real possible problems we’ll need to face.
The Unification of Opposing Ideas
You might think Tegmark is a lunatic optimist with his head in the clouds, postulating on technology that can never exist. You might believe Savage is a short-sighted pessimist with his head in the sand, ignorant to just how far artificial intelligence can grow. Where they both agree is where the technology manager needs to pay attention.
Artificial intelligence will outperform humans at a number of tasks – at some, it already does. When that happens, organizations will be able to employ AI solutions at a far cheaper cost than human counterparts. There’s no 401k, health insurance, PTO, or otherwise for AI. A single AI program could do the job of hundreds of humans, for little more than a monthly subscription fee.
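As a back-of-envelope illustration of that cost gap – with every figure below a hypothetical assumption I’ve chosen for the sketch, not data from the summit, a study, or any vendor – the arithmetic looks something like this:

```python
# Back-of-envelope only: every figure below is a hypothetical assumption,
# not data from the summit, a study, or any vendor.
headcount = 100          # employees whose tasks the software absorbs
annual_cost = 50_000     # assumed fully loaded cost per employee, per year
subscription = 10_000    # assumed monthly fee for the AI service

human_spend = headcount * annual_cost   # $5,000,000 per year
ai_spend = subscription * 12            # $120,000 per year

print(f"Human labor: ${human_spend:,}/yr")   # Human labor: $5,000,000/yr
print(f"AI service:  ${ai_spend:,}/yr")      # AI service:  $120,000/yr
print(f"About {human_spend // ai_spend}x cheaper under these assumptions")
```

The exact numbers are beside the point; it’s the order-of-magnitude gap that drives the ethical questions that follow.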
Where the ethical framework is concerned, we need to decide how far we can go. Is it a better world if a computer program displaces thousands of employees? How will those people get food and shelter? Will we provide it? Will they need to do more menial jobs? More advanced jobs? What if they can’t?
These are the questions that sit at the center of both Tegmark’s and Savage’s views of the future. How does humanity continue to exist in a world where much of the labor is performed by machines? What happens when much of the labor force becomes obsolete – an overpriced expenditure cutting into the bottom line of corporations perpetually concerned with that bottom line? That is where the discussion needs to be held.
It’s not a problem for today, but it’s a discussion we should be having. Even you, the technology manager, will need to consider whether people will lose jobs due to a new technology you implement. Where do you draw the line on human value versus production costs? It’s not an easy question, and answering it will require discussion in both the private and public sectors.
As Dukakis says, “I hope in our own way that the Boston Global Forum can stimulate international discussion and agreement-making around some of these problems, because we’re not doing a very good job of doing that. And I include my own government in that, which seems to like to go off on its own and generally gets into trouble when it does so.”