This project advances early breast cancer detection by combining physics-aware ultrasound imaging with multimodal deep learning. We developed a task-oriented deep learning beamforming framework that enhances clinically relevant features in ultrasound images, improving lesion visibility compared to conventional methods. To address limited in-vivo data, we introduced raw channel data augmentations that simulate realistic acquisition conditions, enabling more robust and accurate imaging. In parallel, we built a multimodal classification pipeline integrating ultrasound and mammography data from approximately 2,100 patients, achieving promising initial performance despite class imbalance. Together, these efforts establish a unified framework that jointly optimizes image formation and diagnosis, with strong potential to improve reliability, reduce operator dependency, and enable earlier detection.
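The raw channel-data augmentations mentioned above operate on the per-element RF signals before image formation. The sketch below is an illustration only: the function name, the 10% gain spread, and the 1% noise level are assumptions for demonstration, not the augmentations actually used in this work.

```python
import numpy as np

def augment_channel_data(channel_data, rng):
    """Hypothetical raw channel-data augmentation sketch: per-element gain
    jitter plus additive noise, applied to the RF data before beamforming
    to mimic variability in acquisition conditions.
    channel_data: (n_elements, n_samples) array of raw RF samples.
    """
    n_el = channel_data.shape[0]
    # Random per-element gain (stand-in for element sensitivity spread).
    gains = rng.normal(1.0, 0.1, size=(n_el, 1))
    # Additive noise scaled to the signal amplitude (thermal-noise stand-in).
    noise = rng.normal(0.0, 0.01 * np.abs(channel_data).max(),
                       size=channel_data.shape)
    return gains * channel_data + noise
```

Because such transforms act on the channel data rather than on formed images, the augmented samples remain physically plausible inputs for a learned beamformer.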
Figure: Comparison of ultrasound beamforming methods: delay-and-sum (DAS), the proposed method, and minimum variance (MV). The proposed approach yields improved image quality and lesion feature visibility.
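For context, delay-and-sum (DAS), the conventional baseline in the figure, forms each pixel by delaying every element's received signal by its round-trip travel time and summing across elements. A minimal single-transmit (plane-wave) sketch, with nearest-sample interpolation and no apodization for simplicity:

```python
import numpy as np

def das_beamform(channel_data, element_x, fs, c, pixels):
    """Minimal delay-and-sum (DAS) beamformer sketch.
    channel_data: (n_elements, n_samples) RF data from one plane-wave transmit
    element_x: (n_elements,) lateral element positions [m]
    fs: sampling rate [Hz]; c: speed of sound [m/s]
    pixels: (n_pixels, 2) array of (x, z) image points [m]
    """
    n_el, n_samp = channel_data.shape
    image = np.zeros(len(pixels))
    for i, (x, z) in enumerate(pixels):
        # Two-way delay: plane-wave transmit reaches depth z at time z/c,
        # then each element receives the echo after its return path rx/c.
        rx = np.sqrt((element_x - x) ** 2 + z ** 2)
        delays = (z + rx) / c                      # seconds
        idx = np.clip(np.round(delays * fs).astype(int), 0, n_samp - 1)
        # Sum the delay-aligned samples across all elements.
        image[i] = channel_data[np.arange(n_el), idx].sum()
    return image
```

The task-oriented deep beamformer replaces this fixed summation with a learned mapping optimized for downstream diagnostic features, which is what the figure contrasts against DAS and MV.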
- A manuscript is currently under review: A. Amar, A. Grubstein, E. Atar, K. Peri Hanania, N. Glazer, R. Rosen, S. Savariego, and Y. C. Eldar, “Deep Task-Based Beamforming and Channel Data Augmentations for Enhanced Ultrasound Imaging,” submitted to IEEE Transactions on Medical Imaging, February 2025.
- Another manuscript, “Multi-Modal Learning for Automatic Breast Cancer Diagnostics from Mammography and Ultrasound,” is currently under review at Biomedical Signal Processing and Control.
