AI IS PLAYING DOCTOR & LAWYER – And It's SCARY Accurate!

A wave of concern swept across social media this week, fueled by claims that ChatGPT would no longer offer guidance on legal or health matters. The reports, initially sparked by a now-deleted post from a betting platform, ignited a flurry of anxious questions and attempts to verify the information.

OpenAI has swiftly moved to quell the rising panic, issuing a clear statement: model behavior remains unchanged, and there has been no alteration to their terms of service. The confusion appears to stem from a recent update to their Usage policies, a change that has been widely misinterpreted.

The updated policy addresses the “provision of tailored advice” that requires professional licensing – such as legal or medical counsel – and emphasizes the need for qualified professional involvement. This isn’t a new restriction, however: similar guidelines were already in place, albeit buried in documentation aimed at developers using the OpenAI API.

*Image: Advice on colostrum brands offered by ChatGPT*

Previously, the rule was tucked away, largely unseen by everyday users. The recent update consolidates these guidelines into a more prominent, unified list, increasing visibility without actually changing the core principle. It clarifies the rule applies to all users, not just those building applications with the API.

The key lies in the wording: “provision” and “providing.” The policy doesn’t prohibit individuals from *seeking* information from ChatGPT on these topics. Instead, it discourages businesses – hospitals, law firms – from relying on the chatbot to deliver specific advice to clients without professional oversight.

For the average person conducting research, the change is unlikely to be noticeable. There’s no indication of altered chatbot functionality, and OpenAI maintains that ChatGPT was never intended to replace the expertise of a qualified professional. It remains a valuable tool for understanding complex information.

Karan Singhal, OpenAI’s head of health AI, reinforced this point, stating that ChatGPT continues to be a helpful resource for understanding legal and health information, though not a substitute for professional advice.

Despite this reassurance, some users have reported difficulty accessing information on certain topics. However, OpenAI’s official release notes show no recent model updates coinciding with the policy change, and anecdotal testing shows that ChatGPT will still offer guidance on sensitive issues – such as contesting a traffic ticket or suggesting supplement brands – even after the policy update.

Ultimately, the situation is straightforward. If you’re using ChatGPT or the OpenAI API to dispense tailored legal or health advice to others without professional review, the existing rules apply. If you’re simply using it for personal research, you’re unlikely to experience any difference.