🧑🏼‍💻 Research - September 30, 2024

Using Artificial Intelligence for Assessment of Velopharyngeal Competence in Children Born With Cleft Palate With or Without Cleft Lip.

🌟 Stay Updated!
Join Dr. Ailexa's channels to receive the latest insights in health and AI.

⚡ Quick Summary

This study developed an AI tool to assess velopharyngeal competence (VPC) in children with cleft palate, using audio recordings from two distinct datasets. Although the pre-trained VGGish model outperformed the purpose-built convolutional neural network (CNN), reaching 57.1% accuracy versus 39.8%, the overall performance was deemed insufficient for clinical application.

🔍 Key Details

  • 📊 Datasets: SR dataset (153 recordings from 162 children) and SC + IC dataset (308 recordings from 399 children)
  • 🧩 Age of Participants: Recordings from children aged 5 or 10 years
  • ⚙️ Technology: Two neural networks developed: a CNN and a pre-trained CNN (VGGish); see the sketch after this list
  • 🏆 Performance: VGGish: 57.1% accuracy; CNN: 39.8% accuracy
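The abstract does not describe how the pre-trained VGGish network was wired into a VPC classifier, so the sketch below shows only one plausible setup: frozen VGGish embeddings from TensorFlow Hub, mean-pooled per recording, feeding a small dense head that predicts the three-point VPC scale. The hub handle, pooling, layer sizes, and 0-2 label encoding are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch only: frozen VGGish embeddings + a small classification head.
# The TF Hub handle, pooling, layer sizes, and label encoding are assumptions.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Pre-trained VGGish: takes a mono 16 kHz float32 waveform and returns one
# 128-dimensional embedding per ~0.96 s frame of audio.
vggish = hub.load("https://tfhub.dev/google/vggish/1")

def clip_embedding(waveform_16khz: np.ndarray) -> np.ndarray:
    """Mean-pool VGGish frame embeddings into one 128-dim vector per recording."""
    frames = vggish(waveform_16khz.astype(np.float32))  # shape: [num_frames, 128]
    return tf.reduce_mean(frames, axis=0).numpy()

# Small head mapping a clip embedding to the three-point VPC scale,
# encoded here as 0, 1, 2 (hypothetical encoding of the SLP ratings).
head = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),
])
head.compile(optimizer="adam",
             loss="sparse_categorical_crossentropy",
             metrics=["accuracy"])

# X_train: stacked clip embeddings, y_train: SLP ratings in {0, 1, 2} (not shown).
# head.fit(X_train, y_train, validation_split=0.2, epochs=30)
```

Freezing the audio front-end and training only a small head is a common way to apply a pre-trained model to a few hundred recordings; the authors' own adjustments to pre-processing and network characteristics are not detailed in the abstract.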

🔑 Key Takeaways

  • 🤖 AI technology was innovatively applied to assess VPC in children with cleft palate.
  • 📊 Two datasets were utilized, enhancing the robustness of the study.
  • 🏆 VGGish demonstrated superior performance compared to the CNN.
  • 🔍 Minor adjustments in data pre-processing and network characteristics improved accuracy.
  • 🚫 Current accuracy levels are insufficient for clinical use.
  • 💡 Future research is encouraged to optimize study design and datasets.

📚 Background

Assessing velopharyngeal competence (VPC) is crucial for children born with cleft palate, as it significantly impacts their speech and quality of life. Traditional assessment methods often rely on subjective evaluations by speech and language pathologists, which can lead to inconsistencies. The integration of artificial intelligence offers a promising avenue for enhancing the accuracy and efficiency of these assessments.

🗒️ Study

Conducted using retrospective audio recordings, this study aimed to develop an AI tool for assessing VPC in children with cleft palate. The researchers utilized two datasets: the SR dataset, from follow-up visits to Skåne University Hospital, and the SC + IC dataset, combining data from the Scandcleft randomized trials across five countries with an intercenter study at six Swedish CL/P centers. The study involved 153 recordings from the SR dataset and 308 from the SC + IC dataset, all from children aged 5 or 10.
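As a concrete illustration of how recordings and their SLP ratings might be paired for training, here is a minimal loader sketch. The metadata columns, file layout, and librosa-based resampling are assumptions for illustration, not the study's actual pipeline.

```python
# Illustrative data organization only; column names and loader are assumptions.
from dataclasses import dataclass

import librosa  # any loader that can resample to a fixed rate would do
import numpy as np
import pandas as pd

@dataclass
class VPCSample:
    audio: np.ndarray   # mono waveform resampled to 16 kHz
    vpc_score: int      # SLP rating on the three-point scale, encoded 0-2
    age: int            # 5 or 10 years at the time of recording
    dataset: str        # "SR" or "SC+IC"

def load_samples(metadata_csv: str) -> list[VPCSample]:
    """Read a metadata table (path, vpc_score, age, dataset) and load each recording."""
    rows = pd.read_csv(metadata_csv)
    samples = []
    for _, row in rows.iterrows():
        waveform, _ = librosa.load(row["path"], sr=16000, mono=True)
        samples.append(
            VPCSample(waveform, int(row["vpc_score"]), int(row["age"]), row["dataset"])
        )
    return samples
```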

📈 Results

The results indicated that the VGGish model achieved an accuracy of 57.1%, clearly outperforming the CNN, which reached 39.8%. Even so, the accuracy of both models was considered too low for practical clinical application, highlighting the need for further refinement and optimization.
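For readers unfamiliar with the headline metric, it is plain agreement between the network's predicted category and the SLP rating treated as ground truth. A minimal sketch, with made-up ratings purely for illustration:

```python
# Illustration of the accuracy metric only; the ratings below are made up.
from sklearn.metrics import accuracy_score

slp_scores   = [0, 1, 2, 1, 0, 2, 1]   # SLP ratings on the three-point scale (ground truth)
model_scores = [0, 1, 1, 1, 0, 2, 2]   # network predictions for the same recordings

print(f"accuracy = {accuracy_score(slp_scores, model_scores):.1%}")  # 71.4% here
```

On a three-class task with roughly balanced classes, guessing would land near 33%, which puts the reported 57.1% and 39.8% in context: better than chance, but well short of the agreement needed to stand in for an SLP rating.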

🌍 Impact and Implications

The implications of this study are significant for the field of speech pathology and the treatment of children with cleft palate. While the current AI models did not meet the necessary accuracy for clinical use, the research underscores the potential of AI in enhancing diagnostic tools. Future advancements could lead to more reliable assessments, ultimately improving patient outcomes and care strategies.

🔮 Conclusion

This study illustrates the potential of artificial intelligence in the assessment of velopharyngeal competence in children with cleft palate. Although the current models require further development to achieve clinically relevant accuracy, the findings pave the way for future research aimed at optimizing AI tools in healthcare. Continued exploration in this area could lead to transformative changes in how we assess and treat speech-related challenges in children.

💬 Your comments

What are your thoughts on the use of AI in assessing speech competence? We would love to hear your insights! 💬 Leave your comments below or connect with us on social media.

Using Artificial Intelligence for Assessment of Velopharyngeal Competence in Children Born With Cleft Palate With or Without Cleft Lip.

Abstract

OBJECTIVE: Development of an AI tool to assess velopharyngeal competence (VPC) in children with cleft palate, with/without cleft lip.
DESIGN: Innovation of an AI tool using retrospective audio recordings and assessments of VPC.
SETTING: Two datasets were used. The first, named the SR dataset, included data from follow-up visits to Skåne University Hospital, Sweden. The second, named the SC + IC dataset, was a combined dataset with data from the Scandcleft randomized trials across five countries and an intercenter study performed at six Swedish CL/P centers.
PARTICIPANTS: SR dataset included 153 recordings from 162 children, and SC + IC dataset included 308 recordings from 399 children. All recordings were from ages 5 or 10, with corresponding VPC assessments.
INTERVENTIONS: Development of two networks, a convolutional neural network (CNN) and a pre-trained CNN (VGGish). After initial testing using the SR dataset, the networks were re-tested using the SC + IC dataset and modified to improve performance.
MAIN OUTCOME MEASURES: Accuracy of the networks' VPC scores, with speech and language pathologists' scores seen as the true values. A three-point scale was used for VPC assessments.
RESULTS: VGGish outperformed CNN, achieving 57.1% accuracy compared to 39.8%. Minor adjustments in data pre-processing and network characteristics improved accuracies.
CONCLUSIONS: Network accuracies were too low for the networks to be useful alternatives for VPC assessment in clinical practice. Suggestions for future research with regards to study design and dataset optimization were discussed.

Authors: Cornefjord M, Bluhme J, Jakobsson A, Klintö K, Lohmander A, Mamedov T, Stiernman M, Svensson R, Becker M

Journal: Cleft Palate Craniofac J

Citation: Cornefjord M, et al. Using Artificial Intelligence for Assessment of Velopharyngeal Competence in Children Born With Cleft Palate With or Without Cleft Lip. Cleft Palate Craniofac J. 2024:10556656241271646. doi: 10.1177/10556656241271646
