
Part 3: Professional risk - AI in professional practice

  • Writer: The Just Audit team
  • Dec 1, 2025
  • 3 min read

Our risk partner Karen Eckstein Ltd gives a summary of the topical issue of AI in a professional context


AI remains a hot topic with regulators. In its recent webinar on the new code of ethics, which came into force on 1 July, ICAEW referred to the need to update terms of business or engagement letters in relation to AI, and stated that all engagement letters should include consent to use AI for benchmarking, technology development and research. Other regulators are similarly focussed on AI issues but have not, to date, given detailed guidance on what to include in the engagement letter.

 

Karen Eckstein Ltd frequently advises professional services firms on the contractual issues relating to AI, as well as on commercial, operational and procedural issues (see below). It also includes the changes proposed by ICAEW in the terms it proposes for all professionals, as it considers them to represent best practice generally.


 

So, what are the main AI issues?

 

(1) AI contractual issues

Firms often fail to ensure that their contractual documents adequately provide for the firm's appropriate use of AI and for obtaining the client's consent to that use. Failure to include appropriate terms in the engagement documentation can cause issues if the firm uses AI without the client's prior agreement.

 

Firms that do work outside the scope of the agreed retainer might also struggle to evidence consent to the use of AI, and might want to ensure that their scope-creep policies are robust.

 

(2) AI commercial issues

Many firms are not using AI or technology at all, believing it to be too risky or too expensive. That caution can itself cause commercial issues: appropriate use of AI and technology can deliver cost savings and efficiencies, making firms more profitable and lowering costs for clients.

 

Further, failing to use appropriate AI/technology correctly might mean that a firm does not identify opportunities for clients, or might give rise to incomplete advice being given to a client, leading to claims.

 

(3) AI operational issues

Many firms do not have a handle on who is using AI in their firm, or what it is being used for – the processing of data, or research? Firms need to consider how to manage client confidentiality and how to benchmark the outcomes of any processing of data for correctness.

 

Issues can arise in relation to research undertaken using AI. The case of Zzaman v HMRC [2025] UKFTT 9520 (TC) highlights what can go wrong when AI is used for research or drafting without proper oversight.

 

In this tax tribunal case, Mr Zzaman, a litigant in person, used AI to help draft his statement of case. While his intentions were honest and his arguments well-meaning, the AI-generated cases and citations were misapplied and did not support the propositions advanced, resulting in his case being dismissed.

 

Although no fictitious cases were cited in this instance, the tribunal warned that AI tools can still produce plausible but incorrect results, especially if outputs are not carefully reviewed. In a professional context, using AI without proper systems and checks in place could lead to client complaints, reputational damage, regulatory scrutiny, or even legal claims.

 

Be prepared for the risk of regulatory matters arising where research conducted using AI is relied upon without being checked by senior staff.

 

(4) AI procedural issues

Karen Eckstein Ltd says that it sees many firms that do not have adequate AI policies and processes in place, including an ongoing programme for training staff not only on the firm's policies and processes, but also on the reasons why they are important.

 

Issues to consider include:

  • Whether there is consent from the client to use AI: is there a signed engagement letter which provides that consent and is the work undertaken within the scope of that engagement letter?

  • What are you using AI for?

  • How are you using AI?

  • Do you have a process and guidance on how to process data?

  • Do you have a process and guidance on how to use internal AI?

  • Do you have a process and guidance on how to use external AI?

  • Is your cyber security strong?


 

What are the main AI issues to address now?

  1. If you haven’t done so recently, consider if your engagement letters and terms of business cover the use of AI.

  2. Consider your internal policies, processes and guidance to ensure that they define when and how AI may (or may not) be used in your firm.

  3. Put a training programme in place for all staff on your AI policies, processes and guidance.

  4. Check any AI/tech that you intend to use (or are using) to ensure that it complies with appropriate legislation (for example, checking where data is stored - a common error!).

© 2026 Just Audit
