The importance of checking your citations – the non-case of Hennes & Mauritz AB v M & S Meat Shops Inc, 2012 TMOB 7; or am I seeing things?
If you repeat a lie often enough, people will believe it. Twice in the past four months, the Trademarks Opposition Board has identified instances where an applicant’s agent cited a case that does not exist in arguments filed in support of a request for an interlocutory ruling under section 38(6) of the Trademarks Act to strike all or part of a statement of opposition. The suspected culprit behind these false citations: generative artificial intelligence.
First, in Industria de Diseño Textil, S.A. v Sara Ghassai, 2024 TMOB 150, and more recently in Monster Energy Company v Pacific Smoke International Inc., 2024 TMOB 211, the applicants cited “Hennes & Mauritz AB v M & S Meat Shops Inc, 2012 TMOB 7” for the same proposition that “It is well established that a Statement of Opposition must contain the [actual] material facts upon which the Opponent relies and that it is insufficient for the Opponent to merely recite the ground of opposition as stated in the Act without supporting facts”.
Hennes & Mauritz AB v M & S Meat Shops Inc, 2012 TMOB 7 does not stand for this proposition. In fact, it does not exist at all.
In both instances, the Board caught this error:
Industria de Diseño Textil, S.A. v Sara Ghassai, 2024 TMOB 150 — [6] Whether accidental or deliberate, reliance on false citations is a serious matter [see Zhang v Chen, 2024 BCSC 285]. In the event the submissions resulted in whole or in part from reliance on some form of generative artificial intelligence, the Applicant is reminded of the importance of verifying the final work product prior to its submission to the Registrar.
--
Monster Energy Company v Pacific Smoke International Inc., 2024 TMOB 211 — [16] The Applicant relies on a case inaccurately identified as “Hennes & Mauritz AB v M & S Meat Shops Inc, 2012 TMOB 7” in support of its position that this ground of opposition has not been sufficiently pleaded. There is no such case. This citation appears to be an AI “hallucination,” as discussed in paragraph 5 of Diseño Textil. I will, therefore, disregard this portion of the submission and remind the Applicant that even if accidental, reliance on a false citation, AI hallucination or otherwise, is a serious matter [see Zhang v Chen, 2024 BCSC 285].
Generative AI tools have the potential to assist lawyers and agents with drafting materials and synthesizing large volumes of documents. They may well revolutionize the profession much as the personal computer and word processor did in the 1990s; however, the technology remains in its infancy. The phenomenon of AI “hallucinations” (nonsensical or inaccurate outputs from AI tools such as ChatGPT) is well known, and over-reliance (and uncritical reliance) on AI-generated outputs carries significant risks. Since the public release of large language model tools two years ago, courts have increasingly observed hallucinated citations in legal briefs and other court submissions. No doubt, at some point (if it has not happened already), a hallucinated citation will go uncaught and wind its way into precedent.
Lawyers and agents need to be mindful of these pitfalls, and of their professional, ethical, and practice obligations, when using AI to assist in carrying out their work.
For example, the rules of professional conduct of law societies across Canada, and the Code of Professional Conduct of the College of Patent Agents and Trademark Agents (CPATA), impose overarching professional and ethical obligations to act with integrity and competence.
Law societies have also published practice notices and resources that specifically address the use of generative AI in practice. For example, the Law Society of Ontario’s practice notice “Generative AI: Your professional obligations” identifies the integration of generative AI into practice as engaging numerous ethical duties, including the duties of competence, honesty and candour, and supervision and delegation, as well as the duty not to mislead the tribunal. The notice includes the “practice tip” to “thoroughly validate any content generated by AI systems before presenting it to the tribunal to ensure that AI-generated evidence, cases, or arguments are accurate and reliable”. The Law Society of British Columbia’s “Practice Resource: Guidance on Professional Responsibility and Generative AI” similarly reminds practitioners of their responsibility to “review the content carefully and ensure its accuracy”.
In December 2023, the Federal Court of Canada released a notice to the profession on “The Use of Artificial Intelligence in Court Proceedings”. It requires, among other things, that any document prepared for the purpose of litigation and submitted to the Court by or on behalf of a party or intervener that contains content created or generated by AI include a declaration that “Artificial intelligence (AI) was used to generate content in this document”. Other provincial and territorial courts have issued their own practice directions on the use of AI.
CPATA has not yet issued a specific directive or practice notice on the use of AI by agents. However, like law society rules, the CPATA Code of Professional Conduct imposes general duties on agents that would require responsible use of generative AI. For example, Rule 1(3) requires agents to “assume complete professional responsibility for all agency services that they provide and maintain direct supervision over staff and assistants such as agents in training, students, clerks, and legal assistants to whom they may delegate particular tasks and functions”.
In short: whether arguments to be submitted to the Board or the courts are drafted by a human or by AI, always check your citations. Make sure they exist. Confirm they actually stand for the proposition advanced. Personal professional credibility, and the profession’s credibility, may depend on it.
Also, don’t cite Hennes & Mauritz AB v M & S Meat Shops Inc, 2012 TMOB 7. It’s not a real case.