Cardiologists trained a large AI model to assess the structure and function of the heart
Last reviewed: 14.06.2024
Artificial intelligence experts from Cedars-Sinai and the Smidt Heart Institute created a dataset of more than 1 million echocardiograms (video ultrasounds of the heart) and their corresponding clinical interpretations. Using this database, they developed EchoCLIP, a powerful machine learning algorithm that can “interpret” echocardiogram images and evaluate key indicators.
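The pairing of echocardiogram videos with their clinical interpretations suggests a CLIP-style contrastive training objective, as the name EchoCLIP implies. A minimal sketch of such a symmetric contrastive (InfoNCE) loss, using NumPy and random vectors in place of real video and text encoder outputs (the temperature value and embedding sizes here are illustrative assumptions, not details from the paper):

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    # L2-normalize so dot products are cosine similarities
    img_emb = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt_emb = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)

    logits = img_emb @ txt_emb.T / temperature   # (batch, batch) similarity matrix
    labels = np.arange(len(logits))              # matching pairs lie on the diagonal

    def cross_entropy(lg, lb):
        # numerically stable log-softmax
        z = lg - lg.max(axis=1, keepdims=True)
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(lb)), lb].mean()

    # average of image-to-text and text-to-image cross-entropies
    return (cross_entropy(logits, labels) + cross_entropy(logits.T, labels)) / 2

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 64))   # stand-ins for echo video embeddings
txt = rng.normal(size=(8, 64))   # stand-ins for report embeddings
loss = clip_contrastive_loss(img, txt)
```

Training with this objective pulls each video's embedding toward the embedding of its own report and away from the other reports in the batch, which is what lets the model later "interpret" an image by finding or generating matching report text.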
The design and evaluation of EchoCLIP, described in a paper published in Nature Medicine, suggest that the model delivers specialist-level assessments of a patient's echocardiogram, including evaluation of heart function and identification of past surgeries and implanted devices, and can also help doctors identify patients in need of treatment.
The EchoCLIP foundation model can also identify the same patient across multiple videos, exams, and time points, and recognize clinically important changes in the patient's heart.
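The paper does not spell out its re-identification procedure here, but one common approach to matching the same patient across studies is nearest-neighbor search over embedding cosine similarity. A sketch under that assumption (the threshold and the synthetic embeddings are illustrative, not the authors' method):

```python
import numpy as np

def match_patient(query_emb, gallery_embs, threshold=0.8):
    """Return the index of the most similar prior study if its cosine
    similarity clears the threshold, else None (unmatched/new patient)."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity to each prior study
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None

rng = np.random.default_rng(1)
gallery = rng.normal(size=(5, 32))               # embeddings of prior exams
query = gallery[3] + 0.05 * rng.normal(size=32)  # noisy repeat of patient 3
match = match_patient(query, gallery)            # matches index 3
```

Once two studies are linked to the same patient, comparing their embeddings (or predicted measurements) over time is what enables the kind of change detection the article describes.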
“To our knowledge, this is the largest model trained on echocardiography images,” said lead study author David Ouyang, MD, a member of the Cardiology Division faculty at the Smidt Heart Institute and Department of Artificial Intelligence in Medicine.
"Many previous AI models for echocardiograms are trained on only tens of thousands of examples. In contrast, EchoCLIP's uniquely high performance in image interpretation is the result of training on nearly ten times more data than existing models."
“Our results show that large, peer-reviewed datasets of medical images and their interpretations can serve as the basis for training medical foundation models, which are a form of generative artificial intelligence,” Ouyang added.
EchoCLIP workflow. Source: Nature Medicine (2024). DOI: 10.1038/s41591-024-02959-y
He noted that this advanced foundation model could soon help cardiologists evaluate echocardiograms by generating preliminary estimates of cardiac measurements, identifying changes over time, and detecting common diseases.
The research team created a dataset of 1,032,975 cardiac ultrasound videos and associated expert interpretations to develop EchoCLIP. Key findings from the study include:
- EchoCLIP demonstrated high performance in assessing cardiac function from echocardiogram images.
- The foundation model was able to identify implanted intracardiac devices, such as pacemakers and implanted mitral and aortic valves, from echocardiogram images.
- EchoCLIP accurately identified unique patients across studies, recognized clinically important changes such as previous cardiac surgery, and enabled the generation of preliminary text interpretations of echocardiogram images.
"Baseline models are one of the newest areas in generative AI, but most models do not have enough medical data to be useful in healthcare," said Christina M. Albert, MD, MPH, chair of the Division of Cardiology at the Smidt Heart Institute.
Albert, who was not involved in the study, added: “This new foundation model integrates computer vision for echocardiogram image interpretation with natural language processing to enhance cardiologists' interpretations.”