Ciarb Guideline on the Use of AI in Arbitration (2025)
Apr 03, 2025

Summary
Ask a trained lawyer what an “LLM” is, and the first thing that comes to mind may be a “Master of Laws”; ask a tech-savvy teenager what an “LLM” is, and they will most probably answer “large language model”. The former may not be a prerequisite to becoming a qualified lawyer, but knowledge of the latter will likely be crucial for legal practitioners in the coming years.
LLMs / Generative AI are on the rise. OpenAI ChatGPT. Microsoft Copilot. Anthropic Claude. Meta AI Llama. Perplexity. Google Gemini. xAI Grok. Quora Poe. DeepSeek.
And bespoke legal AI tools are, of course, also on the rise. Harvey. Thomson Reuters CoCounsel. LexisNexis Lexis+ AI. To name just a few.
It is timely that the Chartered Institute of Arbitrators (Ciarb) has launched the Ciarb Guideline on the Use of AI in Arbitration (2025) (the “Ciarb AI Guideline”). Issued by the leading arbitration certification organisation in the world, the Ciarb AI Guideline provides welcome guidance to arbitrators, parties, representatives and other arbitration participants alike, while at the same time supporting the arbitration sector’s practical efforts to “… mitigate some of the risk to the integrity of the process, any party’s procedural rights, and the enforceability of any ensuing award or settlement agreement”.
Overview
It is no secret that large language models have a documented susceptibility to “hallucinate”, or make up false information. A quick online search will reveal a number of examples where lawyers have cited and relied upon what turned out to be fictitious cases in Court, “hallucinated” by generative AI programmes. A Stanford study in January 2024 suggested that general-purpose chatbots “hallucinated” between 58% and 82% of the time on legal queries, and a further Stanford study in May 2024 suggested that even bespoke legal AI tools may “hallucinate” what some might regard as a concerning amount of the time: the study stated that the Lexis+ AI and Ask Practical Law AI systems produced incorrect information allegedly “more than 17% of the time”, and that Westlaw’s AI-Assisted Research “hallucinated” allegedly “more than 34% of the time”.
It is imperative that industry bodies, such as the Ciarb, engage with arbitration practitioners, and release guidelines such as the Ciarb AI Guideline. The Ciarb AI Guideline is split into four parts, with two appendices:-
- Part I: Benefits and risks of the use of AI in arbitration
- Part II: Recommendations on the use of AI in arbitration
- Part III: Arbitrators’ powers to give directions / make rulings on the use of AI by parties in arbitration
- Part IV: Use of AI in arbitration by arbitrators
- Appendix A: Template Agreement on the Use of AI in Arbitration
- Appendix B: Template Procedural Order on the Use of AI in Arbitration
We will discuss each of these sections in more detail below.
Part I: Benefits and risks of the use of AI in arbitration
The benefits and risks of the use of AI in the legal sector have been widely discussed. Benefits (articles 1.1 to 1.10) include efficiency and quality, data analysis, and translation and interpretation. Risks (articles 2.1 to 2.9) include data security, confidentiality, and impartiality and independence.
Regarding benefit 1.8 (Detecting the use of AI), which is stated in the Ciarb AI Guideline to be:-
1.8. Detecting the use of AI: AI detection tools can detect deep fakes and assess the authenticity of evidence by ascertaining that it has not been fabricated by other AI technology.
It remains to be seen whether there will be a battle among bespoke AI programmes to become the “authority” in detecting deep fakes. What if two AI programmes disagree on the authenticity / reliability of the evidence in question? It will be interesting to see whether the use of AI to determine the authenticity / reliability of evidence will evolve into a distinct expert discipline altogether, requiring a (human) expert witness to give evidence, and be cross-examined by opposing counsel.
Regarding risk 2.5 (Due process), which is stated in the Ciarb AI Guideline to be:-
2.5. Due process: The use of AI Tools in arbitration may give rise to due process issues including inequality of arms and significant discrepancies in parties’ respective ability to understand the case advanced against them. The use of an AI Tool as an aid in factual case analysis might impinge upon the right to present a party’s case and the implicit obligation of arbitrators to consider and address all of the parties’ arguments and exhibits, rather than a limited selection determined by an AI Tool. Use of an AI Tool as an aid to identify the relevant legal framework may also carry such risk. Also in this case, paternity of the decision and assumption of responsibility are crucial aspects to limit the risk of due process violations.
AI may assist in addressing “inequality of arms”, particularly if the right tools are available to an under-resourced party at reasonable cost. However, this may be a double-edged sword – will there be a risk of further deepening of the “inequality of arms” if a well-resourced party is able to access cutting-edge AI tools, which an under-resourced party may not be able to afford and/or access? Reliance on AI – whether it be by the arbitrator or the parties – must entail the necessary assumption of responsibility, to ensure due process has been followed, and has been seen to be followed.
Part II: Recommendations on the use of AI in arbitration
This section (articles 3.1 to 3.4 of the Ciarb AI Guideline) sets out some general, high-level recommendations to parties and arbitrators. An important recommendation (which ties in to the due process discussion above) is at article 3.4 in relation to responsibility and accountability:-
3.4. Unless the Tribunal and the parties expressly agree to the contrary in writing (subject to any applicable Mandatory Rule), the use of an AI Tool by any participant in the arbitration, shall not diminish their responsibility and accountability that would otherwise apply to them without the use of an AI Tool.
Part III: Arbitrators’ powers to give directions / make rulings on the use of AI by parties in arbitration
Articles 4.1 to 4.7 relate to arbitrators’ powers to give directions and make procedural rulings on the use of AI. Article 4.2 in particular ties in with the discussion above regarding bespoke AI programmes to determine authenticity / reliability of evidence:-
4.2. To the extent relevant in proceedings which it conducts, the Tribunal may appoint AI experts if they require their assistance with understanding any AI Tool or aspects of AI and/or potential implications of adopting the relevant tool or technology in the circumstances of the case.
Articles 5.1 to 5.4 relate to party autonomy for the use of AI. Articles 6.1 to 6.8 relate to arbitrators ruling on the use of AI and admissibility of AI-generated material. Article 7 relates to disclosure of the use of AI. These are all useful guidelines which remind the parties to carry out necessary checks, e.g. mandatory applicable laws, regulations, policies, institutional rules, ethical rules, and other relevant legal and regulatory frameworks.
Part IV: Use of AI in arbitration by arbitrators
Articles 8.1 to 8.4 relate to discretion over use of AI by arbitrators. Articles 9.1 to 9.2 relate to transparency over use of AI by arbitrators. An important reminder to arbitrators who intend to rely on AI lies in article 8.3:-
8.3. Arbitrators should also independently verify the accuracy and correctness of information obtained through AI, ensuring their judgment is free from confirmation bias and other distortions. They should conduct their own research, using AI-generated information as a supportive tool, while maintaining a critical perspective to prevent undue influence on their decisions, including through appropriate supervision.
Appendix A: Template Agreement on the Use of AI in Arbitration
This four-page template agreement serves as a useful starting point for parties to document clearly, in black and white, the scope of any AI use in the arbitration, if indeed AI is to be used at all. The template may be a stand-alone document, or incorporated into the arbitration agreement. The defined terms “Permitted AI Tools” and “Permitted Use” delineate the AI tools which can or cannot be used, and the purposes for which the parties agree AI can or cannot be used. For consistency, the template agreement also provides that the Ciarb AI Guideline shall serve as guiding principles for all participants in the proceeding, and that in case of conflict between the terms of the (template) agreement and the Ciarb AI Guideline, the latter shall prevail.
Appendix B: Template Procedural Order on the Use of AI in Arbitration
The template Procedural Order includes a one-page short-form option, and a three-page long-form option. The short-form simply orders that the tribunal and other arbitration participants will be guided by the Ciarb AI Guideline in the conduct of the proceedings. The long-form adds a mechanism by which specific “High Risk AI Use” – which includes risks such as potential to materially undermine the procedural integrity of the arbitration – must be disclosed.
Conclusion
In light of the launch of the Ciarb AI Guideline, it is fitting to revisit the results of our BCLP Arbitration Survey 2023 on the use of AI in international arbitration. The respondents to the survey were involved in disputes across a wide range of sectors, including construction and engineering (62%), energy and natural resources (53%), international trade and commodities (28%), and technology (38%). 54% were lawyers in law firms; 33% arbitrators; and 12% in-house counsel. 57% were from a common law background and 23% were from a combined common and civil law background. Among the survey results:-
- 37% of respondents had used AI tools for the translation of documents; 30% for document review and production; 30% for text formatting and editing; and 24% for document analysis (extracting and organising data from documents).
- 88% of respondents were very concerned or somewhat concerned about AI Hallucination – the risk of the AI tool conjuring up fictitious information.
- 60% of respondents agreed or strongly agreed that there is a need for greater transparency over the use of AI tools by parties in arbitration.
- 74% of respondents agreed or strongly agreed that arbitrators should not use AI tools to formulate or draft adjudicatory elements of an award.
- 63% of respondents were in favour of the regulation of the use of AI tools in arbitration.
More than a year has passed since the results of the BCLP Arbitration Survey 2023 were released. It is suggested that the number of lawyers who have used AI tools in arbitration surely will have increased since BCLP’s survey, which makes it all the more important for robust industry guidelines to be available.
The Ciarb AI Guideline provides a solid framework for arbitration practitioners to structure the use of AI in arbitration. Understandably, its authors acknowledge that “due to the rapid development of new technologies, there may be a need to revisit and update the Ciarb AI Guideline periodically”. Nonetheless, it is a welcome development in the exciting and rapidly evolving space at the intersection of technology and the law. This is indeed a space to watch for all arbitration practitioners.
Reference Guidelines
For reference, below is a list of additional industry guidelines for legal practitioners:-
- Hong Kong Judiciary – Guidelines on the Use of Generative Artificial Intelligence for Judges and Judicial Officers and Support Staff (July 2024)
- The Law Society of Hong Kong – Position Paper on the Impact of Artificial Intelligence on the Legal Profession (20 January 2024)
- The Law Society of Hong Kong – AI Transformation in Legal Practice Survey 2024 Whitepaper (survey conducted from 26 June to 30 September 2024)
- Lexis Nexis – Generative AI and the Hong Kong Legal Profession (20 August 2024; survey conducted in May and June 2024)
- Courts and Tribunals Judiciary (England & Wales): Artificial Intelligence Guidance for Judicial Office Holders (12 December 2023)
- The Law Society (England & Wales) guide on the use of generative AI tools: “Generative AI: the essentials” (7 August 2024)
- Bar Standards Board (England & Wales) guide on the use of generative AI tools: “ChatGPT in the Courts: Safely and Effectively Navigating AI in Legal Practice” (7 October 2023)
Related Practice Areas
- International Arbitration
- Data Privacy & Security