Facebook AI helps predict the care needed for COVID-19 patients
In January 2021, Facebook AI researchers, in collaboration with NYU Langone Health’s Predictive Analytics Unit and Department of Radiology, open-sourced the code for three models that predict two types of deterioration in COVID-19 patients from chest X-rays: deterioration from adverse events (transfer to the ICU, intubation, or mortality) and increased oxygen requirements beyond 6 L per day.
- A model for predicting patient deterioration based on a single radiograph.
- A model for predicting patient deterioration based on a sequence of radiographs.
- A model for predicting the amount of supplemental oxygen a patient may require based on a single X-ray.
The model, which uses sequential chest X-rays, can predict up to four days (96 hours) in advance whether a patient may need more intensive care options, typically outperforming the predictions of human experts.
These predictions could help doctors avoid sending at-risk patients home too soon, and help hospitals better predict demand for supplemental oxygen and other limited resources.
Further reading: New AI research to help predict COVID-19 resource needs from a series of X-rays
Momentum Contrast (MoCo)
Facebook AI researchers pre-trained a model called Momentum Contrast (MoCo) on two large public chest X-ray datasets, MIMIC-CXR-JPG and CheXpert.
They then fine-tuned the MoCo model on the NYU COVID-19 dataset (26,838 X-rays from 4,914 COVID-19 patients). Only this small dataset carried labels: patient deterioration within 24 hours, 48 hours, 72 hours, and 96 hours.
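The two-stage pipeline (self-supervised pre-training, then supervised fine-tuning on the small labeled set) can be sketched as follows. This is a toy NumPy stand-in, not the authors' code: the "pretrained encoder" is a frozen random projection, and the data, labels, and all dimensions are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 stand-in: a frozen "pretrained" encoder. In the real pipeline
# this is a deep network pre-trained with MoCo on MIMIC-CXR-JPG and
# CheXpert; here it is just a fixed random projection.
W_enc = rng.normal(size=(128, 32)) / np.sqrt(128)

def encode(x):
    """Frozen feature extractor: 128-dim 'X-ray' -> 32-dim embedding."""
    return np.tanh(x @ W_enc)

# Stage 2 stand-in: a tiny synthetic labeled set (features plus a binary
# "deterioration within 96 hours" label). Purely illustrative data.
X = rng.normal(size=(200, 128))
y = (X[:, 0] > 0).astype(float)

# Fine-tuning here = fitting a logistic-regression head on the frozen
# embeddings by gradient descent on the log-loss.
Z = encode(X)
w, b, lr = np.zeros(32), 0.0, 0.5

def log_loss(w, b):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

loss_before = log_loss(w, b)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    grad = p - y                      # d(log-loss)/d(logit)
    w -= lr * Z.T @ grad / len(y)
    b -= lr * grad.mean()
loss_after = log_loss(w, b)
print(loss_before, "->", loss_after)  # the head learns from frozen features
```

In the real setup the encoder can also be unfrozen and trained end-to-end at a small learning rate; freezing it here just keeps the sketch short.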
MoCo, the model Facebook AI used to anticipate the care of COVID-19 patients, is a self-supervised learning model trained with a contrastive loss function.
Further reading: Papers with Code — MoCo Explained
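MoCo's two defining ingredients, a momentum-updated key encoder and a FIFO queue of negative keys, can be sketched in a few lines. All shapes and values here are illustrative stand-ins (the MoCo paper uses a momentum of m = 0.999, which I reuse):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two encoders share an architecture: a query encoder updated by
# gradients, and a key encoder updated only by a momentum (EMA) rule.
dim, queue_size, m = 8, 16, 0.999

theta_q = rng.normal(size=(dim, dim))       # query-encoder weights (trained)
theta_k = theta_q.copy()                    # key-encoder weights (momentum copy)
queue = rng.normal(size=(queue_size, dim))  # dictionary of negative keys

def momentum_update(theta_q, theta_k, m):
    """MoCo's key-encoder update: theta_k <- m*theta_k + (1-m)*theta_q."""
    return m * theta_k + (1.0 - m) * theta_q

# One "training step": pretend the query encoder took a gradient step,
# then slowly drag the key encoder toward it.
theta_q += 0.1 * rng.normal(size=theta_q.shape)   # stand-in gradient step
theta_k = momentum_update(theta_q, theta_k, m)

# Enqueue the newest batch of keys, dequeue the oldest (FIFO dictionary).
batch = 4
new_keys = rng.normal(size=(batch, dim))
queue = np.concatenate([queue[batch:], new_keys])

print(queue.shape)   # the queue keeps a fixed size
```

The slow-moving key encoder keeps the queued keys roughly consistent with each other, which is what lets MoCo use a large dictionary of negatives without a huge batch size.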
There are three main forms of deep learning. The best known are reinforcement learning and supervised learning, but a third is gaining momentum: self-supervised learning. According to Yann LeCun, Facebook's Chief AI Scientist, it gives the machine the ability to build predictive models of the world from observation, as humans do.
Further reading: Self-supervised learning: The dark matter of intelligence
It is through this type of learning that machines could acquire common sense, because it lets them absorb enough information to fill in the blanks. For example, a machine can predict the missing intermediate frames of a video sequence by analyzing its beginning and end frames.
The success of the BERT and RoBERTa models shows that self-supervised learning models work very well in the NLP domain.
Contrastive methods for energy-based models
Self-supervised learning can be framed in terms of energy-based models (EBMs). An EBM is defined by an energy function F(x, y): if F(x, y) = 0, y is compatible with x; if F(x, y) > 0, y is not compatible with x.
For example, in a clustering algorithm, the energy is low when a point is close to its cluster and high when the point is far from it.
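A toy energy function for this clustering example (the centroids and the test point are made up for illustration): F(x, y) is the squared distance from point x to the centroid of cluster y, so compatible pairs get low energy.

```python
import numpy as np

# Two illustrative cluster centroids in 2D.
centroids = np.array([[0.0, 0.0],
                      [5.0, 5.0]])

def energy(x, y):
    """F(x, y): squared distance from point x to the centroid of cluster y."""
    return float(np.sum((x - centroids[y]) ** 2))

x = np.array([0.1, -0.2])
print(energy(x, 0))   # x is near cluster 0 -> low energy (compatible)
print(energy(x, 1))   # x is far from cluster 1 -> high energy (incompatible)
```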
Further reading: 🎙️ Yann LeCun's lecture on energy-based models
One way to train an energy-based model (EBM) is with contrastive methods. Concretely, if two images are similar (e.g., two chest X-rays with a white spot in the same place), they are encoded as two similar vectors and the energy between them is pushed down.
If two images are dissimilar (e.g., two chest X-rays, one with a white spot and the other without), they are encoded as two dissimilar vectors and the energy between them is pushed up.
Here is a contrastive cost function:
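(The original post showed the formula as an image. A standard InfoNCE-style contrastive loss matching the symbol definitions below, with τ a temperature hyperparameter, can be written as:)

```latex
\mathcal{L} = -\log
\frac{\exp(Z_i \cdot Z_j / \tau)}
     {\exp(Z_i \cdot Z_j / \tau) + \sum_{a} \exp(Z_i \cdot Z_a / \tau)}
```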
- Zi is the anchor; in my example, a chest X-ray with a white spot.
- Zj is the positive example; in my example, another X-ray similar to the anchor, with a white spot in the same place, or the same image as the anchor cropped differently.
- Za is a negative example; in my example, an X-ray with no spot.
- During training, the similarity Zi·Zj is pushed up and the similarity Zi·Za is pushed down. This maximizes the log term and therefore minimizes the contrastive loss.
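The push-up/push-down behavior above can be sketched in NumPy. The embeddings and the temperature value (τ = 0.07) are illustrative assumptions, not values from the paper's chest X-ray experiments:

```python
import numpy as np

def info_nce(z_i, z_j, negatives, tau=0.07):
    """Contrastive (InfoNCE-style) loss for one anchor.

    z_i: anchor embedding, z_j: positive embedding,
    negatives: one negative embedding per row.
    Embeddings are L2-normalized first, so dot products are cosines.
    """
    z_i = z_i / np.linalg.norm(z_i)
    z_j = z_j / np.linalg.norm(z_j)
    negatives = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)

    pos = np.exp(z_i @ z_j / tau)               # similarity to push up
    neg = np.exp(negatives @ z_i / tau).sum()   # similarities to push down
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
anchor = np.array([1.0, 0.0, 0.0])
positive = np.array([0.9, 0.1, 0.0])   # similar to the anchor
negs = rng.normal(size=(8, 3))         # random negatives

# A positive that matches the anchor well yields a lower loss than one
# that does not: pushing Zi·Zj up is exactly what minimizes this loss.
good = info_nce(anchor, positive, negs)
bad = info_nce(anchor, np.array([0.0, 1.0, 0.0]), negs)
print(good < bad)   # True
```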
To go further on Self-Supervised Learning.
To go further on Contrastive Learning, see Yannic Kilcher's video.