Using Artificial Intelligence to perform differential diagnosis

AI-based symptom checkers

A differential diagnosis is a list of the possible conditions or diseases that could be causing a patient’s symptoms.

Numerous start-ups have developed AI-enabled chatbot-based symptom checkers (CSC) that enable differential diagnosis from home.

These applications, such as WebMD Demo, can take a patient’s history, perform a physical examination, assess symptoms, give an initial diagnosis, order further tests, perform the tests and analyze their results, provide a final diagnosis, and finally refer the patient to other services or follow-up treatments.
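The consultation steps above can be pictured as a simple pipeline. This is an illustrative sketch only: all function names, conditions, and scores below are invented, not WebMD’s actual logic or API.

```python
# Hypothetical sketch of a symptom-checker consultation pipeline.
# Conditions and symptom lists are invented for illustration.

def take_history(answers):
    """Collect basic patient history from questionnaire answers."""
    return {"age": answers.get("age"), "conditions": answers.get("conditions", [])}

def assess_symptoms(symptoms):
    """Toy scoring: count how many reported symptoms match each condition."""
    knowledge = {
        "flu": {"fever", "cough", "fatigue"},
        "migraine": {"headache", "nausea", "light sensitivity"},
    }
    scores = {cond: len(set(symptoms) & s) for cond, s in knowledge.items()}
    return sorted(scores, key=scores.get, reverse=True)

def triage(patient_answers, symptoms):
    """History -> symptom assessment -> ranked differential diagnosis."""
    history = take_history(patient_answers)
    ranked = assess_symptoms(symptoms)
    return {"history": history, "initial_diagnosis": ranked[0], "differential": ranked}

result = triage({"age": 34}, ["fever", "cough"])
print(result["initial_diagnosis"])  # → flu
```

A real system would replace the toy scoring step with a trained model and iterate, asking follow-up questions to narrow the differential.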

WebMD Demo simulation

There is room for improvement in these applications. Two student researchers from the University of Pennsylvania conducted an analysis of 11 of these apps: Ada, K Health, Ask NHS, Your.MD, Mediktor, HealthTap, Apothēka Patient, Sensely, Health Buddy, Babylon, and NHS online.

They identified several shortcomings, such as the failure to take the patient’s health history into account when making a diagnosis.

How do chatbot-based symptom checkers work?

A paper published in February 2020 by researchers from the National Institute for Health Innovation (NIHI) in Auckland provides an understanding of how chatbots that perform differential diagnoses work.

  • The architecture of their chatbot, named HHH, is based on a knowledge graph and an NLP model of semantic similarity.
  • The knowledge graph is built from medical data collected on the Internet. The NLP model is built with a BiLSTM (bidirectional LSTM).

The knowledge graph provides access to a structured storage of information that optimizes the retrieval of pathology-specific knowledge.
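To make this concrete, a knowledge graph of this kind can be stored as a set of (subject, relation, object) triples, so retrieving the diseases associated with a set of symptoms is a lookup rather than a free-text search. The triples and ranking below are invented for illustration; a real system would build them from curated medical sources.

```python
# Minimal sketch of a symptom -> disease knowledge graph as a triple store.
# All medical triples here are invented for illustration.

edges = [
    ("chest pain", "symptom_of", "angina"),
    ("chest pain", "symptom_of", "myocardial infarction"),
    ("shortness of breath", "symptom_of", "myocardial infarction"),
    ("shortness of breath", "symptom_of", "asthma"),
]

def diseases_for(symptoms):
    """Retrieve candidate diseases, ranked by number of matching symptoms."""
    counts = {}
    for subject, relation, disease in edges:
        if relation == "symptom_of" and subject in symptoms:
            counts[disease] = counts.get(disease, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

print(diseases_for({"chest pain", "shortness of breath"}))
# "myocardial infarction" matches both symptoms, so it ranks first
```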

The attention-based BiLSTM model makes it possible to better represent and understand natural-language questions.
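The role of this component is to match a user’s free-text question against known medical questions. As a lightweight stand-in for the paper’s BiLSTM encoder, the sketch below measures semantic closeness with bag-of-words cosine similarity; the example sentences are invented.

```python
# Simplified stand-in for semantic-similarity matching: the real HHH chatbot
# encodes questions with a BiLSTM, while this sketch uses bag-of-words cosine
# similarity. Known questions below are invented for illustration.
import math
from collections import Counter

known_questions = [
    "what are the symptoms of the flu",
    "how is a migraine treated",
]

def cosine(a, b):
    """Cosine similarity between two sentences as word-count vectors."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * \
           math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def best_match(query):
    """Return the known question most similar to the user's query."""
    return max(known_questions, key=lambda q: cosine(query, q))

print(best_match("what symptoms does the flu cause"))
```

A learned encoder such as a BiLSTM improves on this by matching paraphrases that share meaning but few surface words.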

According to the University of Auckland researchers, combining the two models yields a more efficient AI-enabled chatbot-based symptom checker.

Cardiac arrest detection via an ASR

Artificial Intelligence can also be used in differential diagnoses performed by emergency departments. The Danish start-up Corti has developed an Automatic Speech Recognition (ASR) system to help identify signs of cardiac arrest over the phone.

Well-trained dispatchers in Copenhagen can recognize cardiac arrest from descriptions over the phone in about 73% of cases. But Corti’s Artificial Intelligence can do better and correctly identifies cardiac arrest 95% of the time.

In May 2020, researchers proposed MultiQT (Multimodal Learning for Real-Time Question Tracking in Speech), a model that treats speech and its textual representation as two distinct modalities, learning jointly from streamed audio and its transcription into text. According to the authors, this multimodal learning improves performance over using the ASR transcript alone.
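As a very simplified illustration of the multimodal idea, the sketch below fuses an audio feature vector and a text feature vector by concatenation before a single linear scoring step. This is not the MultiQT architecture itself; all feature values and weights are invented.

```python
# Toy illustration of multimodal fusion: audio and text features are
# concatenated ("late fusion") and scored together. All numbers are invented.

audio_features = [0.2, 0.7]   # e.g. acoustic cues from the caller's speech
text_features = [0.9, 0.1]    # e.g. embedding of the ASR transcript

fused = audio_features + text_features  # fuse the two modalities

weights = [0.5, 1.0, 1.5, 0.25]         # a toy "learned" classifier
score = sum(w * x for w, x in zip(weights, fused))
print(round(score, 3))                  # one score informed by both modalities
```

The intuition is that acoustic cues (agitation, background sounds) carry signal the transcript misses, so a classifier seeing both modalities can outperform one seeing text alone.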



Diplodocus, interested in the applications of artificial intelligence to healthcare. Twitter: @
