The founder of Zed Law explains what OpenAI’s legal advice policy means for your business

OpenAI has not banned legal advice from ChatGPT. Zed Law founder Ryan Zahrai explains what has actually changed and why the AI legal vacuum matters for Australian businesses.

There has been a lot of fuss online about ChatGPT being “banned” from providing legal and health advice. That is not the case.

What OpenAI actually did was tighten the rules so that the platform cannot provide personalized legal or medical advice unless a qualified professional is involved. No custom legal documents, no specific recommendations, no acting as a lawyer or doctor. The system can still explain concepts, unpack jargon, or help you prepare for meetings; it just doesn’t tell you whether you should sue someone, fire someone, or sign anything.

And that’s exactly how it should be.

Why this is more important than most people think

If you run a business, you’ve probably used AI to create something: a policy, a clause, a contract template. It’s fast, it’s confident, and it sounds good. But sounding good and being right are two very different things.

The reality is that general-purpose AI has a hallucination rate of roughly 15 to 30 percent. That’s fine if you’re writing marketing copy or brainstorming ideas. It’s a serious problem if you’re relying on it to interpret workplace legislation or corporate governance obligations.

I have seen the consequences with my own eyes. One client came to me convinced he could file a general protections claim for what was effectively a breach of contract. Another believed they could seek compensation from a company for failing to tick certain governance boxes. ChatGPT told them both they could. They were wrong. And the problem wasn’t that AI gave them advice, but that they believed it.

OpenAI’s policy change is not a crackdown; it is a correction. The company basically said, “If the job requires a license, make sure someone with a license is involved.” That’s not anti-innovation; it’s common sense.

The social media firestorm missed the point

Most of the online commentary missed the nuance. People saw headlines about “bans” and “restrictions” and assumed that OpenAI had reversed its mission. In reality, this update keeps AI useful and protects users from themselves.

You can still ask ChatGPT to explain a legal principle or summarize a regulation in plain English, and it will do that. What it doesn’t do is pretend to know your company, your employees or your contracts. That boundary is healthy because it reduces risk, helps users stay compliant, and reinforces the idea that AI is a tool, not a replacement.

The legal AI vacuum

Of course, where there is a gap, there is opportunity.

OpenAI’s move created what I call a legal AI vacuum. Businesses still want the speed and efficiency of AI, but they also need the reliability and protection that comes from human oversight. The DIY model no longer fits the bill, and that’s where responsible legal AI platforms come into the picture.

At Zed Law we have worked closely with Veraty to design a model that fills this space responsibly. The technology handles the volume (generate, compare, summarize), and when the stakes are high, a lawyer validates the output before it is used. Clients produce contracts faster, save significant money and stay compliant, while law firms turn work around faster and begin to move from time-based billing to value pricing while improving margins.

Across the board, Veraty’s law firm users see an increase in recurring revenue, and business users save money year after year by eliminating rework and preventing disputes. Those results come from connecting AI to the right advice at the right time, not from using it to replace legal advice.

The human in the loop is the blueprint

While the early phase of AI was about automation, the next phase is about integration. The real value lies in systems that combine automation with expertise.

Having a “qualified person” is not an administrative burden. It is what keeps innovation sustainable. It ensures that AI not only moves quickly, but also moves correctly. And it’s what turns tools like Veraty from smart software into real business infrastructure.

Because when you are dealing with contracts, employment law, governance or compliance, speed without accuracy isn’t an advantage; it’s just risk in running shoes.

Australia’s lead in compliant AI

Regulators worldwide are catching up. The EU AI Act, the UK’s AI White Paper and the draft frameworks here in Australia are all moving in the same direction: high-risk AI domains such as legal, medical and government services should be supervised by qualified people.

That coordination is good news. It sets the clear expectation that responsible innovation means designing smarter systems around people – not removing them.

The Australian legal technology scene has quietly emerged as a leader here. We built compliance-first platforms before it was fashionable. The advantage is flexibility: we’re smaller, we can adapt faster, and we aren’t weighed down by legacy models built for volume billing rather than verified results.

What this means for entrepreneurs

If you are a founder, HR manager or business leader, this change is not something to panic about. It is an opportunity to get your processes in order.

Let AI do the groundwork: summarize, compare, prepare. But don’t let it decide strategy, interpret rights, or make calls that could end up in court. Engage professionals early and make verification part of your workflow.

AI is at its best when used as a co-pilot, not a captain.

And if you’re thinking “we don’t have time for that,” I’d go so far as to say that you probably don’t have time for the consequences of doing it wrong either.

The bottom line

OpenAI’s policy is a guardrail, not an obstacle. It reminds us that the power of AI doesn’t come from pretending to be human; it comes from working with people who know where the legal boundaries are drawn.

So use AI responsibly: let it draft, compare and brainstorm, then have a lawyer confirm the rest. That’s not slowing down; that’s how you build trust at scale.

AI is powerful when it knows its limits – and when you know yours. The smartest operators don’t drop their lawyers; they build them into their AI workflows.
