ChatGPT maker OpenAI has been sued by Canadian media organizations this week.

The lawsuit, filed in the Ontario Superior Court of Justice, argues that the company is illegally using news articles to train its ChatGPT software.

The lawsuit claims, in part, “To obtain the significant quantities of text data needed to develop their GPT models, OpenAI deliberately ‘scrapes’ (i.e. accesses and copies) content from the news media companies’ websites…”

According to a Toronto Star article, after the suit was filed, Torstar CEO Neil Oliver wrote in an employee memo, “We will not stand by while tech companies steal our content.”

The claims made in this lawsuit should raise red flags for any organization with a website or online presence. Perhaps your company’s data has also been scraped, without your express permission, to train AI software like ChatGPT. Not only that, ChatGPT could have access to more data than your company is aware of.

I have written extensively about the clandestine use of ChatGPT in many workplaces. A meaningful number of employees use ChatGPT to complete work tasks, and in many cases they do not disclose to their employers that ChatGPT is being used. In other cases, employers are rushing to introduce AI tools into the workplace without weighing the potential privacy implications.

One of the largest concerns over the use of ChatGPT in an organization is that there is no real guarantee of privacy when employees are using it. This is particularly problematic if employees are feeding client data or other confidential company information into the ChatGPT software, as can happen when workers use ChatGPT to help write letters, draft reports, or organize and analyze company and client data. Once this information is fed into the software, a company has no way of controlling how OpenAI may later disseminate it.

The Canadian lawsuit is not the only one OpenAI is responding to. Several lawsuits around the world similarly allege that OpenAI is training ChatGPT with data it has not been permitted to access or use.

This may slow the implementation of AI in workplaces. Organizations should be asking more questions about the security parameters built into AI tools. At a minimum, companies must seek commitments from tech companies not to use, disseminate or sell the data they collect from paying users.

Let’s also not forget that AI tools are powerful but not perfect. As an experiment for this column, I asked ChatGPT to find me a Canadian court decision in which a software employee with three to five years of service was terminated and awarded 12 months of wrongful dismissal damages by a court in Ontario. ChatGPT spat out a case called “Grieg” that purportedly met every criterion I set, and provided a concise summary of the case.

When I prompted ChatGPT for the legal citation for the case, it produced one. When I looked the citation up on CanLII, the Canadian legal database, no results were found. ChatGPT had given me a fake citation to an Ontario case that did not exist.

The lesson? Despite having been available to the public for some time, ChatGPT still gets basic requests wrong. If nothing else, employers must warn employees against using it blindly and trusting it implicitly.

At least one Canadian court has now been asked to rein in the power of AI and curb its access to data. OpenAI’s legal troubles may change the way ChatGPT operates and uses data in 2025. Every company would be wise to watch how this case develops, as the future of AI in our country could soon be decided by the courts.

Have a question? Maybe I can help! Email me at [email protected] and your question may be featured in a future column.

The content of this article is general information only and is not legal advice.