Michael Krauthammer


2023

Disfluent Cues for Enhanced Speech Understanding in Large Language Models
Morteza Rohanian | Farhad Nooralahzadeh | Omid Rohanian | David Clifton | Michael Krauthammer
Findings of the Association for Computational Linguistics: EMNLP 2023

In computational linguistics, the common practice is to “clean” disfluent content from spontaneous speech. However, we hypothesize that these disfluencies might serve as more than mere noise, potentially acting as informative cues. We use a range of pre-trained models for a reading comprehension task involving disfluent queries, specifically featuring different types of speech repairs. The findings indicate that certain disfluencies can indeed improve model performance, particularly those stemming from context-based adjustments. However, large-scale language models struggle to handle repairs involving decision-making or the correction of lexical or syntactic errors, suggesting a crucial area for potential improvement. This paper thus highlights the importance of a nuanced approach to disfluencies, advocating for their potential utility in enhancing model performance rather than their removal.
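
As a rough illustration of the setup described above, the sketch below runs an off-the-shelf extractive question-answering model on a fluent query and on a disfluent variant containing a speech repair. This is a minimal sketch assuming the Hugging Face `transformers` library; the checkpoint, context passage, and queries are illustrative and are not the models or data used in the paper.

```python
# Minimal sketch: compare an extractive QA model's behavior on a fluent query
# versus a disfluent query containing a speech repair.
# Assumptions: `transformers` is installed; the checkpoint and texts are illustrative.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "The report describes a left lower lobe opacity that has increased in size "
    "since the prior examination."
)

fluent = "Where is the opacity located?"
# Disfluent variant with a repair ("the, uh, I mean in which lobe")
disfluent = "Where is the, uh, I mean in which lobe is the opacity located?"

for query in (fluent, disfluent):
    result = qa(question=query, context=context)
    print(f"{query!r} -> {result['answer']} (score={result['score']:.3f})")
```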

Boosting Radiology Report Generation by Infusing Comparison Prior
Sanghwan Kim | Farhad Nooralahzadeh | Morteza Rohanian | Koji Fujimoto | Mizuho Nishio | Ryo Sakamoto | Fabio Rinaldi | Michael Krauthammer
The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks

Recent transformer-based models have made significant strides in generating radiology reports from chest X-ray images. However, a prominent challenge remains: these models often lack prior knowledge, resulting in the generation of synthetic reports that mistakenly reference non-existent prior exams. This discrepancy can be attributed to a knowledge gap between radiologists and the generation models. While radiologists possess patient-specific prior information, the models solely receive X-ray images at a specific time point. To tackle this issue, we propose a novel approach that leverages a rule-based labeler to extract comparison prior information from radiology reports. This extracted comparison prior is then seamlessly integrated into state-of-the-art transformer-based models, enabling them to produce more realistic and comprehensive reports. Our method is evaluated on English report datasets, such as IU X-ray and MIMIC-CXR. The results demonstrate that our approach surpasses baseline models in terms of natural language generation metrics. Notably, our model generates reports that are free from false references to non-existent prior exams, setting it apart from previous models. By addressing this limitation, our approach represents a significant step towards bridging the gap between radiologists and generation models in the domain of medical report generation.
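
To make the rule-based labeling idea concrete, the sketch below flags whether a report references a prior exam using a few regular-expression rules. This is a minimal sketch; the actual labeler, rule set, and label inventory used in the paper are not reproduced here, and the patterns below are illustrative only.

```python
# Minimal sketch of a rule-based comparison-prior labeler.
# Assumption: the real labeler is more extensive; these regex rules are illustrative.
import re

COMPARISON_PATTERNS = [
    r"\bcompared (?:to|with) (?:the )?prior\b",
    r"\bsince (?:the )?(?:prior|previous) (?:exam|examination|study|radiograph)\b",
    r"\bunchanged from (?:the )?(?:prior|previous)\b",
    r"\binterval (?:increase|decrease|change|improvement|worsening)\b",
]
COMPARISON_RE = re.compile("|".join(COMPARISON_PATTERNS), re.IGNORECASE)


def label_comparison_prior(report: str) -> str:
    """Return a coarse label indicating whether the report references a prior exam."""
    return "has_prior" if COMPARISON_RE.search(report) else "no_prior"


report = "Lungs are clear. Cardiac silhouette is unchanged from the previous study."
print(label_comparison_prior(report))  # -> has_prior
```

A label of this kind could then be fed to the generation model alongside the image, so that references to prior exams are only produced when supported.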

2021

Progressive Transformer-Based Generation of Radiology Reports
Farhad Nooralahzadeh | Nicolas Perez Gonzalez | Thomas Frauenfelder | Koji Fujimoto | Michael Krauthammer
Findings of the Association for Computational Linguistics: EMNLP 2021

Inspired by Curriculum Learning, we propose a consecutive (i.e., image-to-text-to-text) generation framework in which we divide the problem of radiology report generation into two steps. Rather than generating the full radiology report from the image at once, the model first generates global concepts from the image and then refines them into finer, coherent text using a transformer-based architecture. We follow the transformer-based sequence-to-sequence paradigm at each step. We improve upon the state of the art on two benchmark datasets.
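
A minimal sketch of the two-step (image-to-concepts, then concepts-to-text) idea, assuming generic Hugging Face checkpoints: an image-captioning model stands in for the concept generator and BART stands in for the refinement step. The actual architectures, training procedure, and checkpoints in the paper differ from this illustration, and the image path is a placeholder.

```python
# Minimal sketch of an image-to-text-to-text pipeline with generic checkpoints.
# Assumptions: `transformers`, `torch`, and `Pillow` are installed; "chest_xray.png"
# is a placeholder path; the checkpoints are stand-ins, not the paper's models.
from PIL import Image
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    ViTImageProcessor,
    VisionEncoderDecoderModel,
)

# Step 1: image -> high-level concepts (generic captioning model as a stand-in)
cap_name = "nlpconnect/vit-gpt2-image-captioning"
captioner = VisionEncoderDecoderModel.from_pretrained(cap_name)
processor = ViTImageProcessor.from_pretrained(cap_name)
cap_tok = AutoTokenizer.from_pretrained(cap_name)

image = Image.open("chest_xray.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
concept_ids = captioner.generate(pixel_values, max_length=32)
concepts = cap_tok.batch_decode(concept_ids, skip_special_tokens=True)[0]

# Step 2: concepts -> finer report text (generic seq2seq model as a stand-in)
refiner = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
ref_tok = AutoTokenizer.from_pretrained("facebook/bart-base")
inputs = ref_tok(concepts, return_tensors="pt")
report_ids = refiner.generate(**inputs, max_length=128)
print(ref_tok.batch_decode(report_ids, skip_special_tokens=True)[0])
```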