Artificial intelligence (AI) will no doubt have a huge influence on mediation in England and Wales, but it will not make mediators redundant. Technology will change how parties prepare for mediation, how mediators manage information and how outcomes are implemented. Human skills such as empathy, judgement and ethical decision making will remain central to mediation’s future. This blog examines likely roles for AI, regulatory and ethical implications under English and Welsh law, practical uses, risks and whether mediators should worry about being replaced.
How mediation works today
Mediation in England and Wales relies on confidentiality, voluntary participation and a neutral facilitator who helps parties find a negotiated settlement. Mediators manage dynamics, encourage disclosure and explore settlement options. Courts and the Civil Mediation Council promote mediation as a cost effective route to dispute resolution. Legal professionals like us usually recommend mediation to clients because it preserves relationships, reduces costs and shortens timescales.
AI as a support tool for mediation
AI is already enhancing preparation, case analysis and post‑settlement implementation. Natural language processing can summarise large bundles of evidence, extract key issues and highlight inconsistencies. Predictive analytics can model likely outcomes based on comparable cases and judicial tendencies. These tools will help mediators and parties set realistic expectations and focus on negotiable matters. Firms and mediation providers are using AI to triage disputes and suggest suitable mediation pathways.
Online dispute resolution and hybrid models
Online dispute resolution (ODR) platforms already use automation for low value claims and consumer disputes. AI will improve ODR by enabling smarter triage, adaptive negotiation pathways and automated drafting of settlement agreements. Mediations may adopt hybrid models, with parts conducted online and key sessions held in person. It is likely that the courts in England and Wales will support such models where they increase access to justice and maintain procedural fairness.
Will AI replace mediators?
In my view, AI will not replace mediators because mediation depends on human qualities that machines cannot replicate. Mediators read body language, manage emotions and deploy creative caucusing to build trust and explore underlying interests. AI can suggest negotiation strategies and generate settlement options, but it will not provide the moral authority, ethical judgement or nuanced empathy that parties expect from a neutral human. Rather than replacing mediators, AI will augment their capabilities and allow them to add value in different ways.
Enhancing mediator decision making
AI is already giving mediators better information and scenario analysis. For example, a tool might analyse thousands of case outcomes to estimate settlement ranges, then present options that balance legal risk with commercial realities. Mediators can use these insights to challenge unrealistic positions and focus negotiations on achievable outcomes. AI outputs should never substitute for mediator judgement, but they will speed up fact finding and broaden the evidence base for settlement planning.
Impact on skills and training
Mediators will no doubt need to adapt and learn a new skill set in technology literacy and data governance. Training programmes are available to teach mediators how to assess AI outputs, verify data provenance and explain algorithmic suggestions to parties. Mediators will also require stronger safeguards around confidentiality when using cloud based AI services. I consider it likely that professional bodies in England and Wales will update competency frameworks to include AI awareness, digital ethics and vendor due diligence.
Ethical and confidentiality concerns
Using AI in mediation raises significant ethical issues. Mediators must preserve confidentiality and privilege, so they should avoid uploading sensitive material to third party systems without robust contractual safeguards. The UK GDPR and the Data Protection Act 2018 impose strict requirements on processing personal data, and the Information Commissioner’s Office expects organisations to use AI responsibly. Mediators must obtain informed consent for any meaningful automated processing and explain how tools will affect the mediation process.
Bias, fairness and transparency
AI systems can bake in bias through skewed training data or opaque modelling choices. In mediation, biased suggestions could disadvantage particular groups or entrench unequal bargaining power. Mediators should favour transparent algorithms, audit tools for disparate impact and escalate concerns where outputs appear discriminatory. Parties deserve to know when AI shapes proposals and to challenge those outputs as part of the mediation process.
Regulatory landscape and professional duties
Regulators in England and Wales will expect mediators to remain accountable for outcomes influenced by AI. The Civil Mediation Council and other bodies will likely issue guidance on the responsible use of technology. Mediators must maintain competence, avoid conflicts of interest and ensure informed consent under professional codes. Where mediators use AI providers, they should conduct vendor due diligence, secure appropriate data protection clauses and retain control over substantive decisions.
Access to justice benefits
AI can widen access to mediation by reducing cost and complexity. Automated intake, guided disclosure and standardised agreement drafting can lower barriers for unrepresented parties. Smart triage can direct disputes to mediation earlier in the lifecycle, relieving court backlogs in England and Wales. When designers prioritise accessibility and plain English interfaces, AI powered tools can empower users and strengthen settlement rates.
Risks to party autonomy and voluntariness
Overreliance on automation risks undermining voluntariness and party autonomy. Parties might feel pressured by algorithmic settlement ranges or accept proposals without fully understanding implications. Mediators must guard against undue influence by ensuring parties retain control, offering independent explanations of AI suggestions and confirming voluntary consent to any settlement reached with AI assistance.
Practical steps for mediators and providers
– Assess AI readiness and identify safe use cases such as document summarisation or scheduling.
– Choose vendors with transparent models, strong security and UK based data processing where possible.
– Obtain informed consent from parties before using AI in any substantive way.
– Keep logs of AI inputs and outputs and preserve audit trails for accountability.
– Train mediators on AI literacy, data protection and bias mitigation.
– Update engagement letters to disclose AI use and its limits.
Conclusion
AI will become an important tool in the mediator’s toolkit in England and Wales, but it will not render mediators obsolete. Technology will streamline preparation, improve information quality and expand access to mediation. Human skills such as empathy, judgement and process management will remain indispensable. Mediators who embrace AI responsibly, maintain ethical safeguards and invest in new competencies will enhance their practice and secure mediation’s central role in dispute resolution for years to come.
At Alexander JLO we have many years of experience of dealing with all aspects of law and will be happy to discuss your case in a free no obligation consultation. Why not call us on +44 (0)20 7537 7000, email us at info@london-law.co.uk or get in touch via the contact us button and see what we can do for you?
This blog was prepared by Alexander JLO’s senior partner, Peter Johnson, on 7th November 2025 and is correct at the time of publication. With decades of experience in almost all areas of law, Peter is happy to assist with any legal issue that you have. He is widely regarded as one of London’s leading lawyers. His profile on the independent Review Solicitor website can be found here.