Data privacy risks to consider when using AI

February 25, 2020

By Sarah Ovaska for Financial Management, January 31, 2020

Artificial intelligence (AI) has the potential to solve many routine business challenges — from quickly spotting a few questionable charges in thousands of invoices to predicting consumers' needs and wants.

But there may be a flipside to these advances. Privacy concerns are cropping up as companies feed more and more consumer and vendor data into advanced, AI-fuelled algorithms to create new bits of sensitive information, unbeknownst to affected consumers and employees.

This means that AI may create personal data. When it does, "it's data that has not been provided with [an individual's] consent or even with knowledge", said Chantal Bernier, who served as assistant and then interim privacy commissioner in the Office of the Privacy Commissioner of Canada from 2008 until 2014 and now consults in the privacy and cybersecurity practice of global law firm Dentons.

AI is an umbrella term used to describe advanced technologies such as machine learning and predictive analytics that essentially shift decisions once solely made by humans to computers.

While AI is still in its early stages — we may have robotic vacuums but nothing like the futuristic cartoon character Rosey, the robot maid from The Jetsons — industries are using the technology to expand revenue streams and reduce workforce costs by linking disparate bits of information.

Few corporate executives are focused on the privacy risks associated with the use of AI. Discussions in boardrooms and in C-suites are "more focused on the possibilities and the benefits of AI than the potential risks", said Imran Ahmad, a Toronto-based lawyer with Blake, Cassels and Graydon who specialises in technology and cybersecurity issues.

Continue reading at Financial Management