Probate and Will, Trust & Estate Disputes

Publish date: 19 December 2023

One of the first domestic judicial decisions highlighting the risks of relying on Artificial Intelligence (AI) in case preparation has been handed down by the First-Tier Tribunal (FTT) Tax Chamber.

Harber v Commissioners for His Majesty’s Revenue and Customs [2023] UKFTT 1007 (TC)

Background

Mrs Harber disposed of a property and failed to notify HMRC of her liability to capital gains tax. HMRC subsequently issued a failure to notify penalty. Mrs Harber appealed the penalty on the basis that she had a reasonable excuse, because of her mental health condition and/or because it was reasonable for her to be ignorant of the law.

The response

Mrs Harber was not legally represented for this appeal. In preparation for the hearing she produced a Response document which referred to nine cases she had identified in which the FTT had “sided with the taxpayer”. The Response dealt with cases in two categories: those relating to ignorance of the law and those relating to mental health conditions.

During the hearing HMRC’s legal representative asserted that the cases in Mrs Harber’s Response were not identifiable as genuine cases. It was suspected, and the FTT went on to consider, whether the cases Mrs Harber had referred to had been generated by an AI system, such as ChatGPT, which Mrs Harber agreed was “possible”.

Tribunal’s view on AI

The FTT concluded that the cases in the Response were not genuine FTT judgments and had in fact been generated by an AI system. In coming to this conclusion the FTT identified several features of the cases cited, including:

1. None of the cases was available on the FTT’s website or via other legal websites

2. The cases were outwardly plausible, but incorrect. Several of the fabricated cases had the same surnames as, and similar fact patterns to, real FTT cases, but certain elements differed, such as:

a. First names

b. Date ranges

c. The Tribunal which heard the case.

Importantly, the FTT noted that the appellant succeeded in all of the fabricated cases, whereas in the real cases to which the facts related, the appellant had either lost the appeal or the penalty had been charged for a different failure to notify than a failure to notify a liability to capital gains tax (the failure in issue in this case).

3. There were stylistic and reasoning flaws in the fabricated case summaries, for example:

a. All but one of the cases related to penalties for late filing, not to failures to notify a liability

b. Six of the nine cases used the American spelling of “favor”

c. Identical phrases were frequently repeated in summaries across the cases.

In its judgment the FTT also considered two important pieces of commentary on the use of AI systems. The first was made by the Solicitors Regulation Authority, which states that:

“All computers can make mistakes. AI language models such as ChatGPT, however, can be more prone to this. That is because they work by anticipating the text that should follow the input they are given, but do not have a concept of ‘reality’. The result is known as ‘hallucination’, where a system produces highly plausible but incorrect results”.

The judgment also referred to the US case of Mata v Avianca 22-cv-1461 (PKC), in which two legal representatives sought to rely on fake cases generated by ChatGPT. Their original submission was similar to Mrs Harber’s, using only summaries of cases to support the argument. The judge in Mata requested full judgments of the cases relied on; the legal representatives went back to ChatGPT to try to source these, and ChatGPT provided a much longer text. The legal representatives filed that text on the basis that it amounted to “copies… of the cases previously cited”. The judge in Mata identified that the judgments had stylistic and reasoning flaws that would not generally appear in decisions issued by United States Courts of Appeal.

Although Mrs Harber sought to rely only on the summaries, so the flaws were less evident, there were still identifiable flaws, as highlighted above, which pointed to the use of an AI system and drew parallels with the US case, where similar issues arose.

Although the FTT accepted that Mrs Harber was unaware that the cases were fabricated and did not know how to locate or check authorities on the FTT website or other legal websites, the FTT highlighted the harm that seeking to rely on fictitious cases can cause. Among other things, it results in opponents wasting time and incurring unnecessary costs in trying to verify the cases or argue that they are fictitious. It also wastes court time in dealing with unnecessary argument and submissions, taking time away from other matters. The FTT also noted that the practice has the potential to damage the reputations of the judges and courts to whom the fabricated authorities are attributed, which in turn risks undermining the trust and confidence that the public has in the legal profession and the judicial system.

As a result, the fabricated cases were disregarded and Mrs Harber lost her appeal. The judgment therefore stands as a warning to litigants in person, and to lawyers, of the risks of placing reliance on information obtained from AI systems in case preparation. Mrs Harber was not a legal professional and was therefore not expected to know how to identify real cases. However, the court is likely to view the situation differently if reliance is placed on inaccurate AI-generated material by a lawyer, or another professional involved in the litigation process, which could lead to serious costs consequences.

For further information on this topic, please do get in touch at info@ts-p.co.uk.
