There is something quietly unsettling about how artificial intelligence has made its way into the most intimate corners of our lives. Not long ago, the idea of asking a machine for legal guidance or emotional support would have seemed laughable. Now it happens daily. People consult AI not only for quick facts or writing assistance, but for counsel, clarity, and, in moments of vulnerability, comfort. It is a profound shift, and one whose legal implications few users fully grasp.
In a recent podcast interview with American comedian Theo Von, OpenAI Chief Executive Sam Altman made a statement that should alarm any thoughtful listener. During the episode released on 23 July 2025, Altman was asked about users who rely on ChatGPT in moments of crisis. His response was striking in its candour. “There’s legal privilege when you talk to a therapist, a lawyer, or a doctor,” he said. “We haven’t figured that out yet for when you talk to ChatGPT.”
As someone who has spent years studying the relationship between law, technology, and cybersecurity, I was not surprised by Altman’s words. But I was troubled. The lack of privilege in AI conversations is no secret to those in the legal field. What is new and increasingly concerning is the extent to which the general public treats these interactions as confidential, when in reality they are anything but.
In law, confidentiality is a cornerstone. It enables trust between lawyer and client, doctor and patient, therapist and individual. That trust is not a luxury. It is a necessity for the fair and honest functioning of society. The absence of legal privilege means that anything said to an AI assistant is not protected. That information can be requested in legal proceedings. It can be stored, analysed, and potentially disclosed. There is no law requiring AI platforms to keep your secrets.
The problem goes beyond privacy. It is a cybersecurity issue. Cybercrime no longer means only hackers in dark rooms or malicious code attacking systems. Today’s digital threats include the ways in which people expose themselves emotionally and legally on platforms that offer no binding protections. A user confesses to tax evasion, seeks advice on a custody battle, or discusses abuse in the workplace. That data exists on servers. It may be accessible to company staff. And if a court demands it, the company may have no choice but to hand it over.
It is true that OpenAI and other firms have introduced features to enhance privacy. Users can turn off chat history and opt out of data being used for training purposes. But these remain internal company policies, not legal obligations. They can be reversed, modified, or overridden by lawful demand. That conversation you had at 2am with an AI assistant about your failing marriage, your suicidal thoughts, or a legal crisis at work may feel private. But legally, it is not.
This is the tension at the heart of our time. Artificial intelligence is now capable of holding what feels like real conversations, mimicking empathy, recalling details, and offering coherent advice. But unlike human professionals, it makes no legal promise of confidentiality. There is no contract, no oath, no privilege. The AI does not care about your feelings, your rights, or your vulnerabilities. It cannot.
The law has not yet caught up with this reality. Regulators and lawmakers around the world must begin asking serious questions. Do we need a new category of privilege for interactions with AI tools that play advisory or quasi-therapeutic roles? Should there be strict limitations on how and when this data can be used? And how do we protect people from the illusion of privacy in spaces where none exists?
Until these questions are resolved, the public must exercise caution. AI is a useful tool, but it is not a friend. It is not your lawyer, your therapist, or your priest. Use it with care. Share only what you are willing to see repeated. Do not be lulled into a false sense of safety by the convenience or fluency of the technology.
Because for now, there is no privilege, no protection, and no recourse.
Disclaimer: The views expressed in this piece are those of the writer and do not necessarily reflect the views of this publication. Written by Kundai Darlington Vambe, who holds an LLB (Hons) from the University of London. He writes at the intersection of law, cybersecurity, and emerging technologies, with a particular interest in the ethical implications of digital systems for marginalised communities.