⚡ Quick Summary
This systematic literature review explores the integration of explainable artificial intelligence (XAI) in wearable technologies, highlighting its potential to enhance the interpretability of data generated by devices like Fitbit and Empatica E4. The findings emphasize the need for improved user evaluation and involvement in the development process to foster trust and transparency in healthcare applications.
🔍 Key Details
- 📊 Dataset: 25 peer-reviewed articles from 7 databases (ACM Digital Library, IEEE Xplore, PubMed, SpringerLink, JMIR, Nature, and Scopus)
- 🧩 Focus: Explainability in wearable devices and quantified self data
- ⚙️ Technologies: Machine learning and deep learning algorithms
- 🏆 Key Method: Shapley Additive Explanations (SHAP) for post hoc analysis
🔑 Key Takeaways
- 📈 Wearable technologies are increasingly used in healthcare settings.
- 💡 XAI is essential for making complex algorithms understandable.
- 🕒 Wrist-worn devices like Fitbit are prevalent but need better explainability.
- 📊 Post hoc methods like Shapley Additive Explanations are gaining traction.
- 📉 Visual outputs of explainability methods enhance user comprehension.
- 👥 User involvement in development is crucial for effective implementation.
- 🔍 Research gap identified in user evaluation of explainability methods.
- 🌐 Future research is needed to improve transparency in AI models.
📚 Background
The rise of wearable technologies in healthcare has transformed how we monitor health metrics. However, the complexity of the underlying machine learning and deep learning algorithms often results in “black box” models that are difficult for medical professionals and users to interpret. This lack of transparency can hinder the responsible use of these technologies, making the integration of XAI a vital area of exploration.
🗒️ Study
This review systematically analyzed literature published between 2018 and 2022, focusing on the application of explainability in wearable devices. The research included studies from reputable databases such as ACM Digital Library, IEEE Xplore, and PubMed, ensuring a comprehensive overview of the current state of explainability in health-related wearable technologies.
📈 Results
The analysis revealed that while wrist-worn wearables like Fitbit and Empatica E4 are widely used, there is a significant gap in making the data they generate explainable. Among the various methods of explainability, post hoc approaches, particularly Shapley Additive Explanations, emerged as a preferred choice due to their adaptability and effectiveness in visualizing complex algorithm outputs.
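To make the Shapley-value idea behind SHAP concrete, here is a minimal sketch that computes exact Shapley values for a toy "black box" stress-score model over hypothetical wearable features (heart rate, steps, sleep hours). The model, feature names, and baseline values are illustrative assumptions, not from the reviewed studies; the optimized SHAP library would be used in practice, since exact enumeration is only feasible for a handful of features.

```python
from itertools import combinations
from math import factorial

# Toy "black box" model over hypothetical wearable features:
# predicts a stress score from heart rate, step count, and sleep hours.
# Missing features fall back to baseline values (the "background" data).
def model(features):
    hr = features.get("heart_rate", 70)
    steps = features.get("steps", 5000)
    sleep = features.get("sleep_hours", 7)
    return 0.5 * (hr - 70) - 0.001 * (steps - 5000) - 2.0 * (sleep - 7)

def shapley_values(model, instance):
    """Exact Shapley values: each feature's marginal contribution,
    averaged over all subsets of the remaining features with the
    standard combinatorial weights |S|! (n-|S|-1)! / n!."""
    names = list(instance)
    n = len(names)
    phi = {}
    for f in names:
        others = [x for x in names if x != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                without = {x: instance[x] for x in subset}
                with_f = dict(without, **{f: instance[f]})
                total += weight * (model(with_f) - model(without))
        phi[f] = total
    return phi

reading = {"heart_rate": 90, "steps": 2000, "sleep_hours": 5}
phi = shapley_values(model, reading)
print(phi)
# Efficiency property: contributions sum to prediction minus baseline.
print(sum(phi.values()), model(reading) - model({}))
```

Because the toy model is additive, each feature's Shapley value equals its individual effect (here 10.0 for heart rate, 3.0 for steps, 4.0 for sleep); with interacting features the same weighted average fairly splits the interaction terms, which is what makes the method adaptable across model types. These per-feature attributions are what tools then render as the bar charts and force plots the review describes.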
🌍 Impact and Implications
The findings of this review underscore the importance of integrating XAI into wearable health technologies. By enhancing the transparency and trustworthiness of AI models, stakeholders can make more informed decisions regarding health data. This could lead to improved patient outcomes and greater acceptance of wearable technologies in clinical settings. The call for further research in this area is crucial to bridge the existing gaps in user evaluation and involvement.
🔮 Conclusion
The integration of XAI into wearable health technologies is not just beneficial but essential for addressing the challenges posed by black box models. As the use of wrist-worn devices continues to grow, ensuring that the data they produce is explainable will empower users and healthcare professionals alike. Continued research and development in this field will pave the way for more transparent and reliable AI applications in healthcare.
💬 Your comments
What are your thoughts on the role of explainability in wearable technologies? Share your insights and join the discussion in the comments below! 💬
Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review.
Abstract
BACKGROUND: Wearable technologies have become increasingly prominent in health care. However, intricate machine learning and deep learning algorithms often lead to the development of “black box” models, which lack transparency and comprehensibility for medical professionals and end users. In this context, the integration of explainable artificial intelligence (XAI) has emerged as a crucial solution. By providing insights into the inner workings of complex algorithms, XAI aims to foster trust and empower stakeholders to use wearable technologies responsibly.
OBJECTIVE: This paper aims to review the recent literature and explore the application of explainability in wearables. By examining how XAI can enhance the interpretability of generated data and models, this review seeks to shed light on the possibilities that arise at the intersection of wearable technologies and XAI.
METHODS: We collected publications from ACM Digital Library, IEEE Xplore, PubMed, SpringerLink, JMIR, Nature, and Scopus. The eligible studies included technology-based research involving wearable devices, sensors, or mobile phones focused on explainability, machine learning, or deep learning and that used quantified self data in medical contexts. Only peer-reviewed articles, proceedings, or book chapters published in English between 2018 and 2022 were considered. We excluded duplicates, reviews, books, workshops, courses, tutorials, and talks. We analyzed 25 research papers to gain insights into the current state of explainability in wearables in the health care context.
RESULTS: Our findings revealed that wrist-worn wearables such as Fitbit and Empatica E4 are prevalent in health care applications. However, more emphasis must be placed on making the data generated by these devices explainable. Among various explainability methods, post hoc approaches stand out, with Shapley Additive Explanations as a prominent choice due to its adaptability. The outputs of explainability methods are commonly presented visually, often in the form of graphs or user-friendly reports. Nevertheless, our review highlights a limitation in user evaluation and underscores the importance of involving users in the development process.
CONCLUSIONS: The integration of XAI into wearable health care technologies is crucial to address the issue of black box models. While wrist-worn wearables are widespread, there is a notable gap in making the data they generate explainable. Post hoc methods such as Shapley Additive Explanations have gained traction for their adaptability in explaining complex algorithms visually. However, user evaluation remains an area in which improvement is needed, and involving users in the development process can contribute to more transparent and reliable artificial intelligence models in health care applications. Further research in this area is essential to enhance the transparency and trustworthiness of artificial intelligence models used in wearable health care technology.
Authors: Abdelaal Y, Aupetit M, Baggag A, Al-Thani D
Journal: J Med Internet Res
Citation: Abdelaal Y, et al. Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review. J Med Internet Res. 2024;26:e53863. doi: 10.2196/53863