Cedars-Sinai's AI outperforms specialist models in interpreting heart scans.
EchoPrime, published in Nature in February 2026, surpasses both specialized AI tools and earlier foundation models across 23 cardiac benchmarks; its code, weights, and a demo are publicly available.
An echocardiogram is one of the most frequently used diagnostic tools in cardiology: an ultrasound of the heart that shows how it functions, how its chambers fill and empty, and whether its structure is compromised. Interpreting one requires training, time, and a particular kind of spatial attention: the ability to read moving images of a beating heart and translate them into a clinical narrative.
Researchers at Cedars-Sinai Medical Center, together with colleagues at Kaiser Permanente Northern California, Stanford Health Care, Beth Israel Deaconess Medical Center in Boston, and Chang Gung Memorial Hospital in Taiwan, have built an AI system that can do much of this interpretive work.
EchoPrime is a video-based vision-language model that analyzes echocardiogram footage and generates a written report on cardiac form and function. Its results were published in Nature (volume 650, pages 970-977) in February 2026, under the title “Comprehensive echocardiogram evaluation with view primed vision language AI.”
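The paper's full architecture is more elaborate, but the general recipe behind vision-language models of this kind is to embed videos and their paired report text into a shared space and train the two encoders jointly. The sketch below is a minimal, illustrative version of that recipe in PyTorch; every class, dimension, and design choice here is an assumption for illustration, not EchoPrime's actual implementation.

```python
# Minimal sketch of CLIP-style contrastive video-text training.
# Illustrative only: names, sizes, and architecture are assumptions,
# not EchoPrime's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VideoEncoder(nn.Module):
    """Encodes a clip of grayscale ultrasound frames into one embedding."""
    def __init__(self, dim=256):
        super().__init__()
        # Per-frame 2D features, later mean-pooled over time.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, video):                         # video: (B, T, 1, H, W)
        b, t = video.shape[:2]
        feats = self.backbone(video.flatten(0, 1))    # (B*T, dim)
        return feats.view(b, t, -1).mean(dim=1)       # (B, dim)

class TextEncoder(nn.Module):
    """Encodes a tokenized report into one embedding (toy bag-of-tokens)."""
    def __init__(self, vocab=10000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)

    def forward(self, tokens):                        # tokens: (B, L)
        return self.embed(tokens).mean(dim=1)         # (B, dim)

def contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE: matched video/report pairs should score highest."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                    # (B, B) similarity matrix
    labels = torch.arange(len(v))                     # diagonal = true pairs
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2

# Toy batch: 4 clips of 16 frames paired with 4 tokenized reports.
videos = torch.randn(4, 16, 1, 64, 64)
tokens = torch.randint(0, 10000, (4, 32))
loss = contrastive_loss(VideoEncoder()(videos), TextEncoder()(tokens))
print(loss.item())
```

Once trained, the shared embedding space lets a model match a new video against candidate report text, which is how contrastive vision-language systems typically generate or retrieve clinical descriptions.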
What distinguishes EchoPrime is the scale of its training. The model was trained on over 12 million echocardiography videos alongside written interpretations from cardiologists, sourced from 275,442 studies involving 108,913 patients at Cedars-Sinai.
No previous echocardiography AI model has been trained on data at this scale.
What can it do?
Tested across five international health systems, EchoPrime achieved state-of-the-art performance on 23 benchmarks of cardiac structure and function, surpassing both task-specific AI methods (models built to handle a single task, such as measuring ejection fraction) and earlier, broader foundation models.
The model is meant to assist clinicians rather than replace them: it produces a written summary for cardiologists to review and act on, rather than making diagnoses autonomously.
The research team has released the model's code, weights, and a working demo to the public, in line with a broader trend toward open publication in AI research. The release also lets other institutions evaluate EchoPrime on their own patient populations.
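External evaluation of that kind is conceptually simple: run the released model over local studies, then score its outputs against locally adjudicated labels. Below is a minimal sketch assuming the model's predictions have already been extracted, using scikit-learn's roc_auc_score; the numbers and the reduced-ejection-fraction task are purely illustrative.

```python
# Sketch of how an outside institution might validate a released model
# on its own data. All values here are made up for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-study outputs: model probability that the study shows
# reduced ejection fraction, alongside the local cardiologist's label.
model_probs = np.array([0.91, 0.12, 0.67, 0.08, 0.85, 0.40])
local_labels = np.array([1, 0, 1, 0, 1, 0])

auc = roc_auc_score(local_labels, model_probs)
print(f"External-validation AUC: {auc:.3f}")
```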
Contextual background
EchoPrime arrives in a year when ECRI, the healthcare safety organization, named AI misdiagnosis one of the leading threats to patient safety. That context does not diminish the model's promise; it sets the standard it must meet.
The aspiration is not for an AI that occasionally interprets echocardiograms correctly, but for one that does so reliably enough to alleviate the burden on cardiologists without creating new types of errors.
Cardiology has proven fertile ground for AI-assisted diagnostics because its data are relatively structured and abundant: ultrasound videos, electrocardiograms, and other imaging.
The Cedars-Sinai effort is arguably the most comprehensive attempt yet to turn that wealth of data into a general-purpose tool. Whether EchoPrime moves from published model to widespread clinical use will depend on regulatory approval, institutional adoption, and liability questions that the Nature paper does not address.
However, as a showcase of what is now feasible in cardiac AI, it establishes a new benchmark.