A Dubai-based startup says it has created the world’s first free AI-powered legal assistant — a mobile application that claims to make legal knowledge universally accessible, regardless of geography, language, or financial means.
It’s a noble idea: democratizing access to justice through technology. But as with many sweeping tech promises, the reality may be more complicated than the press release suggests.
Created by entrepreneur Elena Stef, the app bills itself as a tool offering instant legal information across multiple jurisdictions and in multiple languages — all for free. “We envisioned an application that would transcend borders, cultures, and languages, a digital companion for justice and education accessible to anyone, at any time, entirely free of charge,” Stef said. The project, she added, was inspired by a family discussion about social responsibility.
The premise taps into a genuine need. Around the world, millions cannot afford lawyers or even basic legal consultations. In countries where legal aid is limited or court systems are overloaded, the idea of a free, multilingual assistant sounds almost utopian. Yet legal systems, unlike general knowledge domains, are fragmented, deeply contextual, and fraught with regulatory nuance.
The core problem with AI and law, though, isn’t accessibility; it’s accuracy and accountability. Legal information is not like weather data — it can’t be generalized across borders without running into ethical, procedural, and cultural issues.
Even the app’s creators acknowledge the fine line between “legal information” and “legal advice.” In its disclaimer, the platform stresses that it does not provide professional counsel or establish an attorney-client relationship. That distinction, however, may be lost on many users, particularly those seeking urgent or personal legal solutions.
The tool’s promise of “global jurisdiction coverage” and “real-time legal education” also raises questions about how it sources, verifies, and updates its data. Law is dynamic — statutes evolve, precedents shift, and interpretations differ widely from one country to another. Without a transparent mechanism for review or oversight, AI-generated responses risk spreading misinformation with potentially serious consequences.
Stef’s rhetoric — “a gift from our family to humanity” — underscores the philanthropic ambition behind the project. Yet critics say the framing feels overly idealistic, if not naïve, about the risks of applying generalized AI models to complex legal systems. “It’s one thing to want to help people learn about their rights,” said a Dubai-based attorney familiar with AI regulation. “It’s another to deploy a tool that might inadvertently mislead users into making poor legal decisions.”
The app’s open-source foundation and multilingual support — in English, Arabic, French, Russian, and more — could make it a valuable educational tool, especially for students and researchers studying comparative law. But as a functional legal companion, its role remains uncertain.
While Stef emphasizes values like “kindness, happiness, dedication, and discipline,” experts argue that what users need most is accountability — clarity about how the AI operates, who trains it, and how its legal interpretations are validated.