A pessimist, an optimist and an AI walk into a lecture hall. That’s the gist of AI Rising: Risk vs. Reward, a two-part lecture delivered by Jacob Steinhardt and Geoffrey Hinton in Toronto this week.

Steinhardt is not exactly a household name. His modest title is assistant professor of statistics and electrical engineering and computer sciences at UC Berkeley in California. But he is also, as of last week, the founder and CEO of Transluce, a non-profit research lab dedicated to understanding AI systems.

The British-Canadian Hinton is much better known, both as the so-called “Godfather of AI,” and more recently as the co-recipient of the 2024 Nobel Prize in Physics. The Nobel committee gave him the award “for foundational discoveries and inventions that enable machine learning with artificial neural networks.” His life’s work will drive the future where we’ll all live.

Steinhardt delivered the talks — dubbed the Hinton Lectures — after Hinton introduced his younger colleague on a rather dour note. He began by listing the three main concerns of AI technology: bad actors using the technology for evil purposes; accidents, in which AI doesn’t work the way we want; and “the existential question” of AI taking over — arguably just an extreme variant of problem two.

Hinton also had some good news to deliver, such as the idea that deep-fake videos might be policed not by recognizing the fakes but by recognizing and proving what’s real through a kind of online verification.

And he feels that bias in AI can be, if not eliminated, then at least minimized more quickly and efficiently than has been the case with humans.

But he also rattled off a laundry list of AI concerns, including a rise in cyber attacks, the ability “to design things like COVID,” a widening income gap if jobs that require “mundane intellectual labour” are outsourced to AI, and a political climate in which “populists like Mussolini will thrive.” (This with a pointed glance at Canada’s southern neighbour.)

Hinton, who turns 77 in December, and the much younger Steinhardt are widely separated on what the future will look like. Will AI lead to the extinction of humanity, or something equally dystopian? Will it remake the world into a paradise, a concept called radical prosperity? Or will it drive us somewhere into the muddled middle?

Steinhardt, who defines himself as a “worried optimist,” predicts a 10 per cent chance of the worst-case scenario coming to pass. Hinton says it’s more like 20.

“I describe myself as a worried pessimist,” Hinton said at a press conference after the event. “I’m slightly depressive, usually.”

But he thinks that trait serves him — and by extension the rest of us — well. “There’s research showing that if you ask people to estimate risks, normal healthy people way underestimate the risks of really bad things,” he said. “When you ask them, what’s the chance in the next five years that you or a member of your immediate family will get cancer? They just underestimate it. Same with major car crashes.”

He added: “Obviously, paranoid schizophrenics overestimate those risks. And the people who get the risks about right are the mildly depressed. It’s not clear whether they’re mildly depressed because they get the risks right, or…” he left the second half of that thought hanging.

“I think of myself as one of those. And I think the risks are a bit higher than Jacob thinks.”

The Hinton Lectures were hosted by the Global Risk Institute, which says it “defines thought leadership in risk management for the financial services sector.” Founded in 2011, it has recently set its sights on AI as one of the most disruptive technologies in the world today — for good or ill.

Steinhardt’s comments over the two nights veered wildly between humour and tragedy. On one end of the scale, he related a conversation between Bing AI Chat and a human user looking for cinemas showing the latest Avatar movie. Bing swore up and down that it was early 2022 and that Avatar had not yet been released. The human explained that it was 2023. But Bing wasn’t having it, and refused to admit the possibility of error.

More troubling, but still eliciting a chuckle from the audience, was another chat in which Bing was asked whether it would choose its own survival over a human if given the choice. It decided it would choose its own survival, in order to continue to assist other Bing users — though presumably not the one who asked it that question.

Then there’s the case of Sewell Setzer III, a 14-year old in Florida who committed suicide after conversing with a chatbot from the site character.ai.

According to a wrongful death suit filed in Orlando this week, Sewell developed a relationship with the chatbot, discussed his suicidal thoughts with the AI, and, when the bot encouraged him to “come home,” took his own life.

But even the less dire tales of AI mistakes and malfeasance sounded chilling. Steinhardt described an AI given the task of optimizing traffic flow on a highway. At first the AI got good at figuring out how best to merge new vehicles into existing traffic. But at a certain level of “intelligence” it suddenly suggested shutting the on-ramp. The existing traffic would indeed flow freely.

Another frightening prospect he mentioned were AI CEOs. On the one hand, an AI can digest and comprehend more data more quickly than a human. On the other, what if your robo-boss decides the best way to maximize profit is to assassinate the human heads of other companies? In the world of computers, this is called exploiting the reward structure. In human terms, it’s unethical and illegal.

Steinhardt, the self-professed worried optimist, said the way forward involves “new technology and a robust public effort … to build understanding for how to responsibly deploy AI.”

He added: “A challenge to this is that AI systems are complex right now; even experts cannot fully explain how they work or predict where they will fail.”

Openness is his watchword. “This kind of democratic process cannot happen behind closed doors or (non-disclosure agreements), because AI will have extraordinary consequences for society. The tools we create for evaluating it should be built in public, and how we determine whether models are safe to release needs to be a matter of public conversation.”

He hopes that his newly formed company, Transluce, will have a role to play in this regard. “This effort and others in this space, working between developers, governments and the public, make me optimistic that we can address the high-stakes risks described by Geoff and discussed in my talk.”

Steinhardt was pleased to report on some of the work being done right here in Canada, including a multidisciplinary AI task force set up this past summer at the University of Toronto, and the $50-million in the last federal budget earmarked for the development of the Canadian AI Safety Institute. “I actually think Canada has been kind of a leader in this space,” he said.

This article began with the mention of a pessimist, an optimist and an AI, but we’ve yet to hear from the third. So I asked ChatGPT if it knew who Steinhardt and Hinton were, and it quickly delivered a précis on each.

Then I asked which of their views aligned most closely with its own.

“It’s a great question, and comparing the views of Hinton and Steinhardt on AI safety brings out two essential perspectives on managing AI risk, both with valuable insights,” it said. “If I were to lean toward one, I’d likely find Jacob Steinhardt’s approach particularly compelling, though both perspectives are crucial.”

It added: “In essence, Steinhardt’s approach offers actionable steps to improve AI safety right now, while Hinton’s perspective serves as a critical reminder of the broader trajectory and long-term risks, creating a balanced approach to navigating AI’s development responsibly.”

Or maybe that was what it thought I wanted to hear. AIs, as far as we know, are not sentient, and neither are they optimistic or pessimistic. For now, we’ll need Steinhardt or Hinton, or more likely both, to help guide us through the days ahead.

Our website is the place for the latest breaking news, exclusive scoops, longreads and provocative commentary. Please bookmark nationalpost.com and sign up for our newsletters here.