Cardiologists trained a large AI model to assess the structure and function of the heart

 
,醫學編輯
最近審查:14.06.2024
 
Fact-checked
х


19 May 2024, 20:00

Artificial intelligence experts from Cedars-Sinai and the Smidt Heart Institute created a dataset of more than 1 million echocardiograms (video ultrasounds of the heart) and their corresponding clinical interpretations. Using this database, they developed EchoCLIP, a powerful machine learning algorithm that can “interpret” echocardiogram images and evaluate key indicators.
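EchoCLIP's name indicates a CLIP-style approach: image and text encoders trained jointly so that an echocardiogram and its clinical report map to nearby points in a shared embedding space. The sketch below shows the standard symmetric contrastive (InfoNCE) objective such training typically uses; it is an illustration under that assumption, not the authors' actual code, and the embedding sizes are arbitrary.

```python
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Row i of image_emb and row i of text_emb are assumed to come from
    the same study (echo video + its report); all other rows in the
    batch act as negatives.
    """
    img = l2_normalize(image_emb)
    txt = l2_normalize(text_emb)
    logits = img @ txt.T / temperature   # (B, B) cosine-similarity matrix
    idx = np.arange(len(logits))         # matching pairs lie on the diagonal

    def xent(lg):
        # cross-entropy of the diagonal (correct pairing) per row
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the image->text and text->image directions
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 32))
loss_mismatched = clip_contrastive_loss(img, rng.normal(size=(4, 32)))
loss_matched = clip_contrastive_loss(img, img)  # perfectly aligned pairs
```

Minimizing this loss pulls each video's embedding toward its own report and away from the other reports in the batch, which is what lets the trained model "interpret" a new image by comparing it against candidate text.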

The design and evaluation of EchoCLIP, described in a paper published in Nature Medicine, suggest that EchoCLIP provides specialist-level assessments of a patient's echocardiogram, including evaluation of heart function and recognition of past surgeries and implanted devices, and can help doctors identify patients in need of treatment.

The EchoCLIP foundation model can also identify the same patient across multiple videos, exams, and time points, and recognize clinically important changes in the patient's heart.

“To our knowledge, this is the largest model trained on echocardiography images,” said lead study author David Ouyang, MD, a member of the Cardiology Division faculty at the Smidt Heart Institute and Department of Artificial Intelligence in Medicine.

"Many previous AI models for echocardiograms are trained on only tens of thousands of examples. In contrast, EchoCLIP's uniquely high performance in image interpretation is the result of training on nearly ten times more data than existing models."

“Our results show that large, expert-reviewed medical imaging and interpretation datasets can serve as the basis for training medical foundation models, a form of generative artificial intelligence,” Ouyang added.

EchoCLIP workflow. Source: Nature Medicine (2024). DOI: 10.1038/s41591-024-02959-y

He noted that this foundation model could soon help cardiologists evaluate echocardiograms by generating preliminary estimates of cardiac measurements and by identifying changes over time and common diseases.

The research team created a dataset of 1,032,975 cardiac ultrasound videos and associated expert interpretations to develop EchoCLIP. Key findings from the study include:

  • EchoCLIP has demonstrated high performance in assessing cardiac function from cardiac images.
  • The foundation model was able to identify implanted intracardiac devices such as pacemakers and implanted mitral and aortic valves from echocardiogram images.
  • EchoCLIP accurately identified unique patients across studies, identified clinically important changes such as previous cardiac surgery, and allowed the development of preliminary text interpretations of echocardiogram images.
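The paper does not spell out in this article how patients are re-identified across studies, but a natural mechanism for a CLIP-style model is nearest-neighbour search in the shared embedding space: two exams from the same heart should embed close together. The following is a hypothetical sketch of that idea with toy random embeddings, not the authors' implementation.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def match_patients(query_embs, gallery_embs):
    """For each query study embedding, return the index of the most
    similar gallery study (nearest-neighbour re-identification)."""
    return cosine_sim(query_embs, gallery_embs).argmax(axis=1)

# Toy demo: a gallery of 3 "patients"; each query is a noisy copy of one,
# standing in for a later exam of the same heart.
rng = np.random.default_rng(1)
gallery = rng.normal(size=(3, 64))
queries = gallery + 0.1 * rng.normal(size=(3, 64))
matches = match_patients(queries, gallery)
```

Because the perturbation is small relative to the embedding signal, each query's nearest neighbour is its own source patient; a large gap between the best and second-best similarity would likewise flag "same patient", while a shift in embedding location over time could flag a clinically important change.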

"Foundation models are one of the newest areas in generative AI, but most models do not have enough medical data to be useful in healthcare," said Christina M. Albert, MD, MPH, chair of the Division of Cardiology at the Smidt Heart Institute.

Albert, who was not involved in the study, added: "This new foundation model integrates computer vision for echocardiogram image interpretation with natural language processing to enhance cardiologists' interpretations."
