- Make no mistake, the advent of generative artificial intelligence has been extraordinarily advantageous for cutting-edge information technology applications. The invention of a technology that mirrors, and in some tasks exceeds, human capability is being welcomed with open arms the world over. The remarkable aspect of AI is the depth to which it can be put to use for the benefit of humanity in virtually every sector we know. Little wonder that governments are pumping in investments to ensure their citizens are not left behind in reaping its benefits. However, AI cannot be a panacea for everything.

- For one, AI is not human intelligence. It cannot think and act like humans; emotions, sensitivity, sensibilities, and consciousness cannot be replicated or replaced. All of us would agree on this in unison. As reported extensively in recent times, chatbots are running amok playing doctor, even as AI companies delete their health disclaimers. This situation is extremely dangerous, to say the least. For all we know, doctors may be wishing they could return to the good old days of Dr Google. Why? Simply because Dr AI is a far more formidable competitor and foe. What is dangerous is how a chatbot hyper-personalizes vast amounts of health data and then, through conversational intimacy, seduces the patient into taking its advice.

- Mind you, what is needed is seeing a real medical professional. In a case reported in the Annals of Internal Medicine this month, a man developed bromide toxicity, a rare condition, after consulting ChatGPT about reducing salt in his diet. It advised him to replace sodium chloride with sodium bromide. The thing is, had he not landed in hospital, this would never have gone on record. Last month, another US study found that fewer than 1% of chatbot answers to health queries in 2025 carried some kind of warning that the LLM is not a doctor, down from more than 26% in 2022. The finding is especially troubling because the researchers, from Stanford and elsewhere in the US, even screened answers to medical images such as mammograms and chest X-rays for disclaimer phrases.

- All this indicates how actively AI companies are encouraging people to use their models for health advice. At the GPT-5 launch event, for instance, a person spoke about uploading her biopsy results and taking the model's help to decide whether to pursue radiation. Actual medical advice is not plain pattern matching; it engages with the complexity of individual medical histories and circumstances. That said, AI can be useful to the health sector in many ways. But Indians already pop enough pills prescribed only by their neighbourhood chemist. Substituting chatbot advice for professional medical judgement would only worsen their health. GenAI models must carry proper disclaimers and safeguards against the danger of offering a diagnosis in place of a doctor.
