- AI in legal practice boosts efficiency but raises ethical issues like false citations, bias, and data security.
- Legal expert Lucinda Kok urges safeguards like transparency, audits, and AI literacy.
- Bridging the digital divide is crucial to ensure fair access to AI-powered legal tools.
In a rapidly digitising world, artificial intelligence (AI) is being hailed as a transformative force in legal systems, automating case reviews, accelerating legal research, and expanding access to justice. But as the legal fraternity embraces this cutting-edge innovation, a recent court case has sounded a sobering note of caution.
In Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others, a startling revelation emerged: AI-generated legal submissions contained false citations, a reminder that even the most advanced tools are only as reliable as their programming and oversight.
This incident has reignited debate about the future of AI in legal practice, highlighting the need to strike a careful balance between technological progress and ethical responsibility.
For legal scholars like Lucinda Kok, a lecturer in the Department of Private Law at the University of Pretoria (UP), this is not just a technological hiccup; it's a red flag. “The adoption of AI in legal settings introduces a range of ethical concerns,” Kok warns. “From data privacy to algorithmic bias, the risks are real.”
Ethical tripwires in AI-enhanced law
AI tools in law often process sensitive personal data. Without ironclad cybersecurity and strict compliance with data protection laws, users may face breaches that threaten their rights and safety.
But the danger doesn’t stop at data leaks. AI models trained on historical case law may unintentionally reproduce biases embedded in the justice system itself.
“If left unchecked, these biases could perpetuate systemic inequities,” says Kok. “Legal AI must be built on representative data, with transparent development and regular audits.”
She adds that transparency in how AI decisions are reached is critical to maintaining trust in the justice system: “Practitioners and the public must understand how legal outcomes are generated.”
AI in the hands of lawyers and the public
A practical solution? AI literacy. Legal education must evolve to include training in digital tools, so lawyers can critically evaluate AI-generated advice rather than relying on it blindly.
Still, not everyone has equal access to these tools. Many people in rural or low-income communities lack reliable internet, digital skills, or the infrastructure needed to benefit from AI in legal practice.
“We cannot afford to let the AI revolution deepen inequalities in access to justice,” Kok insists. “We need inclusive design, community training, and accessible legal platforms.”
Augmentation, not automation
While AI can accelerate document drafting, research, and case sorting, it shouldn't replace human judgment. The law is not a mechanical process; it requires ethics, empathy, and deep contextual understanding.
“AI must support, not supplant, the legal professional,” Kok emphasises. “Real justice demands more than speed – it demands soul.”
With careful planning, collaborative regulation, and a human-first mindset, AI in legal practice could democratise legal services. It could help a single mother understand her rights, or a rural community draft a land agreement, without needing expensive lawyers.
But Kok is clear: “The success of AI in law depends on balance. We must innovate responsibly, protecting the constitutional values that underpin our legal system.”
#Conviction
This article was originally published in Re.Search Magazine (Issue 11).