Today’s post is about a recently published scientific article on the use of Artificial Intelligence to fight the Covid-19 pandemic. You can find the paper at this link, under the title “Federated Learning of Electronic Health Records to Improve Mortality Prediction in Hospitalized Patients With COVID-19: Machine Learning Approach”. The authors are researchers at multiple institutions in New York City.

On AI technology

We have been hearing about how AI is changing the world in many areas, from social media and video games to autonomous cars and, of course, medicine. But what is actually happening in medicine, and is AI really a thing when it comes to our health?

The first thing we must know is that AI is not magic but an advanced form of mathematical modelling, enabled in recent years by technological advances that let us both run remarkably high-performing computers and store large amounts of data. In healthcare, the latter makes the significant difference, much more than any other technological advancement.

You can think of AI as a child learning to walk: you move one leg, you grow confident that you have legs while not yet being aware of what they are, and eventually you understand that if you put one leg after the other, with the correct muscular forces, you can stand up and move.

As newborns we don’t know anything about what is happening, and everything is a matter of instinct over which we have no control; AI as we know it today is remarkably similar. First, we need a lot of data for the AI to learn something; this data is the equivalent of the sensations the little kid perceives, albeit unconsciously. The data must be very precise and carefully stored, just as, to learn how to walk, our brain needs to process information from the legs, balance from the inner ear, the surrounding environment from the eyes, and so on. If we collect data that is not representative of the problem we want to solve, we will have a tough time finding a solution.

So, the more data, the better the AI system can learn, but there’s more. We also need a very precise task to learn because, differently from our brain, computer-based AI is extremely simple and cannot solve as many tasks as our brain does. We therefore need one, and only one, extremely specific clinical question for each AI system: we can build as many systems as we want, but each of them will answer only one question.

Finally, very much like a child, AI as we know it today is not aware of itself. There is no concept of consciousness in today’s AI; it is only a matter of mathematical relationships.

How we use AI in healthcare

Let’s say that we identify a relevant question, such as whether a patient will die or not. Regardless of the utility of such an answer, we can build an AI tool that predicts, with a certain level of accuracy, whether a patient will die.
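To make this concrete, here is a minimal sketch of what such a predictive tool can look like under the hood: a simple logistic-regression classifier trained with plain Python on invented toy data. The two features (age in decades and a low-oxygen flag) and all values are illustrative assumptions, not the variables or the model used in the paper.

```python
import math

# Invented toy dataset: each patient is ([age_decades, oxygen_low], label),
# where label 1 = died and 0 = survived. Purely illustrative numbers.
patients = [
    ([7.1, 1.0], 1), ([3.2, 0.0], 0), ([8.0, 1.0], 1),
    ([4.5, 0.0], 0), ([6.8, 1.0], 1), ([2.9, 0.0], 0),
    ([7.5, 0.0], 1), ([3.8, 1.0], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, features):
    """Probability of death for one patient, between 0 and 1."""
    return sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)

def train(data, epochs=500, lr=0.1):
    """Stochastic gradient descent on the logistic log-loss."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in data:
            error = predict(weights, bias, features) - label
            for i, x in enumerate(features):
                weights[i] -= lr * error * x
            bias -= lr * error
    return weights, bias

weights, bias = train(patients)
risk = predict(weights, bias, [7.0, 1.0])  # an elderly, hypoxic patient
print(f"predicted mortality risk: {risk:.2f}")
```

The model only learns a statistical association between the inputs and the outcome; how trustworthy it is depends entirely on how representative the training data is, which is exactly the problem discussed next.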

The biggest problem here is that, for a computer-based AI to be reliable and accurate enough to be used as a clinical tool, we need extremely high accuracy, and this can be achieved only when the data we provide to the AI is very representative and very abundant (which intrinsically also improves representativeness).

Collecting clinical data is overly complex and costly. It requires time, instrumentation, organization, clinicians, nurses and, most importantly, patients, who generate the data in the first place. AI tools are extremely data-hungry, and more often than not hospitals do not have enough data on their own to properly train an AI tool. This brings up the necessity of working together with other hospitals in a joint effort to tackle a widespread problem.

When sharing data is too much of a burden

The data sharing process is very complicated. Moving clinical data between hospitals not only carries a huge risk of the data being divulged or leaked during the transfer, but also the risk of losing track of where the data is and what it contains once it leaves the hospital that generated it and reaches another clinical centre where the AI tool is built.

So, in general, when we can avoid sharing data between hospitals, we should, both to limit as much as possible the security risks connected to data movement and to protect patients’ privacy. Another reason to avoid data sharing is that it is time consuming: when many hospitals are involved, the overall process becomes overwhelmingly complicated.

To solve this problem, scientists have proposed a technique called Federated Learning that allows different entities to create a shared AI tool built upon the data of all the participants. Instead of the data being shared for training, the AI itself travels between institutions, collecting the information it needs and learning the task along the way.
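The round trip described above can be sketched as federated averaging: each hospital trains the current model on its own records, and only the model parameters, never the patient data, are sent back to a coordinator that averages them. The hospitals, features and parameters below are invented for illustration and are not the paper’s actual setup.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def local_update(weights, bias, data, lr=0.1, epochs=50):
    """One hospital trains on its own records; raw data never leaves."""
    w, b = list(weights), bias
    for _ in range(epochs):
        for features, label in data:
            p = sigmoid(sum(wi * x for wi, x in zip(w, features)) + b)
            error = p - label
            for i, x in enumerate(features):
                w[i] -= lr * error * x
            b -= lr * error
    return w, b

def federated_average(updates):
    """The coordinator averages parameters; it never sees patient data."""
    n = len(updates)
    weights = [sum(u[0][i] for u in updates) / n
               for i in range(len(updates[0][0]))]
    bias = sum(u[1] for u in updates) / n
    return weights, bias

# Invented toy data for three hospitals: ([age_decades, oxygen_low], died)
hospitals = [
    [([7.0, 1.0], 1), ([3.0, 0.0], 0), ([8.0, 1.0], 1), ([4.0, 0.0], 0)],
    [([6.5, 1.0], 1), ([2.5, 0.0], 0), ([7.5, 0.0], 1), ([3.5, 1.0], 0)],
    [([7.8, 1.0], 1), ([4.2, 0.0], 0), ([6.9, 1.0], 1), ([3.1, 0.0], 0)],
]

weights, bias = [0.0, 0.0], 0.0
for _ in range(10):  # communication rounds between hospitals and coordinator
    updates = [local_update(weights, bias, data) for data in hospitals]
    weights, bias = federated_average(updates)

risk = sigmoid(sum(w * x for w, x in zip(weights, [7.2, 1.0])) + bias)
print(f"federated model risk for a high-risk patient: {risk:.2f}")
```

The key property is visible in the loop: the only things crossing institutional boundaries are the parameter lists, which is what makes the approach attractive when moving records is too risky or too slow.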

This technique has proven to be remarkably competitive with the more traditional approach of collecting all the data in one place. In the paper I am telling you about, the researchers applied it to the data of 5 hospitals from the Mount Sinai network in New York (USA) and successfully created an AI tool capable of predicting the death of patients with an accuracy comparable to that of an AI created by unifying the data in one single hospital.

My thoughts

The real point of this experiment is not so much the mortality prediction itself, which may or may not be useful to clinicians and to the scientific community. The final goal is rather to prove that such a distributed technique can really be used in healthcare settings to improve our capacity to generate AI tools for clinical support.

I will dedicate my time and energy to studying this Federated Learning technique during my PhD, as I believe it will determine how AI is created in the future, and it is probably our best chance to make decisive contributions to the field of medicine. Nonetheless, not every clever idea comes only with excitement and positivity, especially when rapidly evolving technologies arrive and “threaten” to disrupt entire fields like medicine.

Many times I happen to feel overwhelmed by the impression that AI will take over the world and destroy our jobs because of its potential.

The fact is that even if we had such advanced technology available, which we don’t yet, we would still be responsible for building a world in which we want to live. If we don’t want something to happen, we should simply not do it.

We can leverage AI to make our lives better, but at no point are we obliged to let things slip out of our hands and stop enjoying the beauty of new discovery and the excitement of the unknown.
