OpenAI has stated that ChatGPT's behavior remains unchanged amid recent social media claims that updates to its usage policy prohibit the chatbot from providing legal and medical advice. Karan Singhal, OpenAI's head of health AI, publicly refuted these claims on X, asserting they are "not true."
Singhal emphasized that while ChatGPT is not intended to replace professional advice, it continues to serve as a valuable resource for understanding legal and medical information. This response came after a post from the betting platform Kalshi claimed that ChatGPT would stop offering such advice—a post that has since been deleted.
According to Singhal, the inclusion of policies related to legal and medical advice is not a new development in OpenAI’s terms. An update released on October 29 lists specific prohibited uses for ChatGPT, including the provision of tailored advice requiring a license, such as legal or medical advice, unless properly reviewed by a licensed professional.
This prohibition aligns with OpenAI’s prior usage policy, which advised users against engaging in activities that might significantly impair the safety and rights of others, including the provision of tailored legal, medical, or financial advice without appropriate professional oversight.
OpenAI has transitioned from three separate policies to a unified set of guidelines that applies across all OpenAI products and services. According to the company's changelog, the update reflects a comprehensive and consistent approach to its policies; the substance of the rules, however, remains unchanged.
Source: https://www.theverge.com/news/812848/chatgpt-legal-medical-advice-rumor