๐Ÿง‘๐Ÿผโ€๐Ÿ’ป Research - March 13, 2025

Feature-Based Audiogram Value Estimator (FAVE): Estimating Numerical Thresholds from Scanned Images of Handwritten Audiograms.

🌟 Stay Updated!
Join Dr. Ailexa’s channels to receive the latest insights in health and AI.

⚡ Quick Summary

The study introduces the Feature-Based Audiogram Value Estimator (FAVE), a machine-learning model designed to estimate numerical thresholds from scanned images of handwritten audiograms. FAVE achieved an impressive 87.0% recall and 96.2% precision for symbol locations, demonstrating its potential to enhance accessibility of hearing loss data for public health research.

๐Ÿ” Key Details

  • ๐Ÿ“Š Dataset: 556 handwritten audiograms from a longitudinal cohort study
  • ๐Ÿงฉ Features used: Sliding-window, single-class object detectors based on Aggregate Channel Features
  • โš™๏ธ Technology: Machine learning models for estimating numerical threshold values
  • ๐Ÿ† Performance: 87.0% recall and 96.2% precision for symbol locations

🔑 Key Takeaways

  • 📊 FAVE provides a novel approach to digitizing handwritten audiograms.
  • 💡 Machine learning enhances the accessibility of hearing loss data for researchers.
  • 👩‍🔬 Study utilized a unique dataset of 556 audiograms from a longitudinal study.
  • 🏆 78.3% of threshold estimations matched the true values exactly, with no error.
  • 🤖 FAVE outperformed pre-trained deep learning approaches in threshold estimation.
  • 🌍 Implications for public health research in understanding hearing loss trends.
  • 🔍 Future work should address limitations in symbol and axis tick label detection.

📚 Background

Hearing loss is a significant public health concern, impacting millions worldwide. Traditionally, hearing sensitivity is assessed through pure-tone audiometry, resulting in a visual representation known as a pure-tone audiogram. While digital equipment allows for the storage of audiograms as numerical values, many practices still rely on handwritten methods, making these values difficult to access for researchers. This study aims to bridge that gap using advanced machine learning techniques.
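To make the digitization task concrete: an audiogram plots hearing level in dB HL (linear, increasing downward) against frequency in Hz (logarithmic, octave-spaced), so recovering numerical thresholds means mapping a detected symbol's pixel position back through the axes. The sketch below illustrates this mapping; all calibration values (pixel positions of the gridlines, axis ranges) are assumptions for illustration, not values from the study.

```python
# Illustrative only: converting a detected symbol's pixel position into an
# audiogram threshold. Axis calibration values below are assumed defaults,
# not taken from the paper.
import math

def pixel_to_threshold(x_px, y_px,
                       x_left=100, x_right=700,        # pixel x of first/last frequency tick (assumed)
                       f_left=125.0, f_right=8000.0,   # audiogram frequency range in Hz
                       y_top=50, y_bottom=650,         # pixel y of the -10 dB and 110 dB gridlines (assumed)
                       db_top=-10.0, db_bottom=110.0):
    """Map a symbol's pixel coordinates to (frequency_hz, hearing_level_db)."""
    # The x-axis is logarithmic in frequency, so interpolate in log2 space.
    frac_x = (x_px - x_left) / (x_right - x_left)
    log_f = math.log2(f_left) + frac_x * (math.log2(f_right) - math.log2(f_left))
    freq_hz = 2 ** log_f
    # The y-axis is linear in dB HL, increasing downward.
    frac_y = (y_px - y_top) / (y_bottom - y_top)
    level_db = db_top + frac_y * (db_bottom - db_top)
    # Clinical thresholds are recorded in 5 dB steps.
    return freq_hz, round(level_db / 5) * 5
```

With the assumed calibration, a symbol at the center of the plot area maps to 1000 Hz at 50 dB HL, which is why even small symbol-localization errors translate directly into threshold errors.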

🗒️ Study

Conducted by Mayo et al., the study focused on developing the Feature-Based Audiogram Value Estimator (FAVE) to interpret handwritten audiograms. The training data comprised 556 audiograms collected from a longitudinal cohort study examining age-related hearing loss. The researchers employed sliding-window, single-class object detectors based on Aggregate Channel Features to create the model.
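The detection approach can be pictured as follows: compute per-pixel feature channels (Aggregate Channel Features use gradient magnitude, gradient-orientation histograms, and color channels), then slide a fixed-size window across the image and score each position with a learned classifier. The sketch below is a minimal stand-in, not the authors' implementation: it uses a single gradient-magnitude channel and a fixed score threshold in place of the boosted classifiers an ACF detector would train.

```python
# Minimal sliding-window, single-class detection sketch in the spirit of
# Aggregate Channel Features (ACF). The fixed score threshold is a
# placeholder for a learned classifier.

def gradient_magnitude(img):
    """Approximate per-pixel gradient magnitude of a grayscale image (2-D list)."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag[y][x] = (gx * gx + gy * gy) ** 0.5
    return mag

def detect(img, win=8, stride=4, score_thresh=40.0):
    """Slide a win x win window over the image; report (x, y, score) wherever
    pooled gradient energy exceeds score_thresh."""
    chan = gradient_magnitude(img)
    hits = []
    for y in range(0, len(img) - win + 1, stride):
        for x in range(0, len(img[0]) - win + 1, stride):
            score = sum(chan[y + dy][x + dx] for dy in range(win) for dx in range(win))
            if score >= score_thresh:
                hits.append((x, y, score))
    return hits
```

A real ACF detector would pool several channels into cells, train boosted decision trees on labeled symbol crops, and apply non-maximum suppression to the hits; the sliding-window structure above is the part the paper's description makes explicit.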

📈 Results

The FAVE model demonstrated strong performance, achieving an average of 87.0% recall and 96.2% precision for symbol locations. The numerical threshold estimations were less accurate: 78.3% of estimations matched the true thresholds exactly, and overall the estimates were not significantly different from the true values. Notably, FAVE outperformed pre-trained deep learning models on the current test set.
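For readers unfamiliar with the headline metrics: recall is the fraction of true symbols that were detected, and precision is the fraction of detections that correspond to a true symbol. A minimal sketch of how such numbers are computed from symbol locations, assuming a simple distance-tolerance matching rule (the paper's exact matching criterion may differ):

```python
# Illustrative precision/recall computation for detected symbol locations.

def precision_recall(detected, truth, tol=5.0):
    """Greedily match detected (x, y) points one-to-one to ground-truth points
    within a distance tolerance, then score the matching."""
    unmatched = list(truth)
    tp = 0
    for dx, dy in detected:
        for i, (tx, ty) in enumerate(unmatched):
            if ((dx - tx) ** 2 + (dy - ty) ** 2) ** 0.5 <= tol:
                tp += 1
                unmatched.pop(i)  # each true symbol can match at most once
                break
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall
```

High precision with lower recall, as reported here, means the detector rarely fires on non-symbols but misses some true symbols.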

๐ŸŒ Impact and Implications

The implications of this study are profound for public health research. By enabling the extraction of numerical data from handwritten audiograms, FAVE can facilitate a better understanding of hearing loss trends and improve accessibility to critical health information. This advancement could lead to more informed public health strategies and interventions aimed at addressing hearing loss on a larger scale.

🔮 Conclusion

The development of the Feature-Based Audiogram Value Estimator (FAVE) marks a significant step forward in the intersection of machine learning and audiology. By transforming handwritten audiograms into accessible numerical data, this technology holds promise for enhancing research capabilities in hearing loss. Continued exploration and refinement of such models could pave the way for improved public health outcomes in the future.

💬 Your comments

What are your thoughts on the potential of machine learning in audiology? We would love to hear your insights! 💬 Join the conversation in the comments below or connect with us on social media.

Feature-Based Audiogram Value Estimator (FAVE): Estimating Numerical Thresholds from Scanned Images of Handwritten Audiograms.

Abstract

Hearing loss is a public health concern that affects millions of people globally. Clinically, a person’s hearing sensitivity is often measured using pure-tone audiometry and visualized as a pure-tone audiogram, a plot of hearing sensitivity as a function of frequency. Digital test equipment allows clinicians to store audiograms as numerical values, though some practices write audiograms by hand and store them as digital images in electronic health records systems. This leaves the numerical values inaccessible to public-health researchers unless manually interpreted. Therefore, this study developed machine-learning models for estimating numerical threshold values from scanned images of handwritten audiograms. Training data were a novel set of 556 handwritten audiograms from a longitudinal cohort study of age-related hearing loss. The models were sliding-window, single-class object detectors based on Aggregate Channel Features, altogether called Feature-based Audiogram Value Estimator or “FAVE”. Model accuracy was determined using symbol location accuracy and comparing estimated numerical threshold values to known values from an electronic database. FAVE resulted in an average of 87.0% recall and 96.2% precision for symbol locations. The numerical threshold values were less accurate, with 78.3% of estimations having no error, though threshold estimates were not significantly different from true thresholds. Threshold estimation was more accurate than pre-trained deep learning approaches for the current test set. Future work should consider implementing detectors with similar image channels and identify limitations on symbol and axis tick label detection.

Authors: Mayo PG, Vaden KI, Matthews LJ, Dubno JR

Journal: J Med Syst

Citation: Mayo PG, Vaden KI, Matthews LJ, Dubno JR. Feature-Based Audiogram Value Estimator (FAVE): Estimating Numerical Thresholds from Scanned Images of Handwritten Audiograms. J Med Syst. 2025; 49:32. doi: 10.1007/s10916-025-02146-7

