March 13, 2024

AI is a subject on everyone’s lips these days. But AI-driven innovations and the resulting implications seem to raise more questions than they solve. How can we continue to be responsible in our practices and avoid fraud and other pitfalls? A single conversation won’t provide a definitive answer, but it’s a good place to start.

To discuss this very timely topic, Executive Education HEC Montréal hosted a panel discussion on February 8, moderated by lawyer and corporate trainer Ginette Depelteau. She was joined by:

  • Catherine Régis, Professor, Faculty of Law, Université de Montréal, and Associate Academic Member, Mila – Quebec Artificial Intelligence Institute
  • Stéphane Eljarrat, Senior Partner, Canadian Head of White-Collar Crime, Norton Rose Fulbright Canada

Now more than ever

Catherine Régis was categorical on the subject: things have changed since the Montreal Declaration on Responsible AI in 2018. Six years ago, the idea of a “moral compass” to guide how AI would be used was only beginning to take shape.

Experts in law, philosophy, AI and other disciplines were called upon to weigh in on this topic, but so was the general public. The result was a fully rounded view of various concerns in the fields of health care, justice and culture. And from where things stand today, it would seem that these concerns were well founded. In early February, the Conseil de l’innovation du Québec issued a report with a series of recommendations on AI development and use.

Today’s experts agree: legal and ethical frameworks, like the Declaration, are absolutely vital. During the discussion, Stéphane Eljarrat stressed that some potential applications of AI are quite alarming, drawing on situations he has encountered in his own legal practice. One of the most telling examples involves the creation of fake official documents (property deeds), which shook up the market. Then there are the phony emails, messages and videos that steal people’s identities to extract confidential information.

AI-enabled crimes go unpunished

Eljarrat indicated that the principles of criminology remain unchanged in the age of AI. He pointed specifically to what motivates people to commit crimes: money, ideology, ego and compromise continue to be the four essential factors underlying AI-related crimes, no matter how sophisticated those crimes may be.

He stressed that very few of these scammers are ever apprehended, mainly because they are so hard to track down. The lack of physical presence makes it difficult to identify the culprits and recover stolen funds.

Protecting companies

Régis feels that robust legislative solutions are required to ensure the trustworthiness of AI, be that through Bill C-27 or a future legal framework, as is currently being discussed in Quebec. But she does not believe that a law is enough to address all the issues at hand. There are simply too many of them and they keep changing. It will take a multifaceted strategy, with a combination of tools in the legal, technical (e.g., watermarking) and educational (development of critical thinking with regard to the benefits and risks of AI in our lives) spheres.

How does this affect innovation?

Providing a structure for AI without hampering innovation is quite the balancing act. Both panelists agreed that if we want our ecosystems to remain innovative, the approach will have to be a risk-aware one. In other words, it will have to weigh the risks in a coherent way so that progress is tempered by prudence.

One of the examples given concerned messages in connection with a given transaction. As a rule of thumb, two or even three separate channels of communication should be used to protect against AI-generated interference. “We’re no longer in a world where AI might be used to break the law. It will happen. It’s just a matter of when,” cautioned Eljarrat. Hence the importance of ensuring that the processes in place are effective and in the hands of people who are qualified to do the appropriate checks before something goes awry.

Keeping humans in the loop

The last point brought up during the discussion was the sometimes-uneasy relationship between people and AI.

Both speakers emphasized that AI is potentially useful in any area where human intelligence operates, especially where there are sizeable challenges and insufficient resources. In health care, for example, the potential from a research perspective is tremendous. AI can also play a decisive role in improving the quality of medical diagnoses.

Régis brought up several examples where people in specific professions could delegate time-consuming tasks (medical forms and histories, appointment scheduling, etc.) to AI in order to focus on more value-added aspects of caregiver–patient relationships. Eljarrat added that he has been impressed by the remarkable advances in scientific research driven by AI in recent years.

In terms of research breakthroughs and essential support, AI should be thought of as a way of augmenting human intelligence. But the necessary checks must be in place to make this added value worthwhile. Humans cannot be replaced in this part of the loop. Both of the experts were adamant that specialized teams must be created, with the critical thinking capacity to understand both the benefits and the vulnerabilities of AI in such applications.

In other words, the AI-driven future is far from bleak. But it will require caution and scrutiny. We need to continue seeing these technologies as a way to improve quality and support human activity. And that will keep us moving forward in the right direction.

➲ If you are looking for ways to improve the ethical practices in your organization, discover our Certification in Ethics and Compliance program and sign up for our waiting list to learn about the next cohorts.