๐Ÿง‘๐Ÿผโ€๐Ÿ’ป Research - March 11, 2026

Public risk perception, institutional AI adoption, and diagnostic safety: An exploratory cross-level analysis using a tracer condition approach.


⚡ Quick Summary

This study investigates the relationship between public risk perception of medical AI, institutional adoption patterns, and diagnostic safety outcomes. It reveals a significant negative correlation between public risk perception and AI application frequency, highlighting the potential for automation bias in acute medical settings.

๐Ÿ” Key Details

  • 📊 Dataset: 4,764 residents surveyed; 1,728 hospital-month observations
  • 🧩 Methodology: Structural Equation Modeling with a tracer condition approach
  • ⚙️ Key Variables: Public trust, hospital geographical remoteness
  • 🏆 Findings: Negative association between public risk perception and AI usage (β = -0.34, p < 0.001)

🔑 Key Takeaways

  • 📉 Public risk perception negatively impacts hospital AI adoption.
  • 🤖 Higher AI usage intensity correlates with increased misdiagnosis rates (β = 0.28, p < 0.001).
  • 🌍 Social pressure can limit the deployment of advanced technologies in healthcare.
  • 🔍 Algorithmic aversion is evident at the institutional level.
  • 🏥 High public trust mitigates the negative effects of AI reliance.
  • 📍 Geographical remoteness moderates the relationship between AI usage and diagnostic errors.
  • 💡 Governance must balance social legitimacy with clinical oversight.
  • 🧪 Tracer conditions are essential for evaluating digital health safety.

📚 Background

The integration of artificial intelligence (AI) in healthcare has the potential to enhance diagnostic accuracy and operational efficiency. However, societal perceptions of risk associated with AI technologies can significantly influence their adoption in clinical settings. Understanding these dynamics is crucial for fostering a safe and effective healthcare environment.

๐Ÿ—’๏ธ Study

This exploratory study utilized a cross-level analytical framework, combining data from a stratified survey of residents in the Federal District of Brazil, longitudinal hospital operational panels, and social media sentiment analysis. The aim was to rigorously assess how public perceptions shape hospital technology strategies and the implications for diagnostic safety.
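The paper's "Catchment Area Ecological Linkage" step boils down to aggregating individual survey responses to the catchment-area level and attaching those aggregates to hospital-level panel rows. A minimal sketch of that idea is below; it is not the authors' pipeline, and the field names and values are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' code): aggregate micro-level survey
# scores by hospital catchment area, then join the ecological aggregate onto
# the meso-level hospital-month panel. All names and values are assumptions.
from statistics import mean

survey = [  # micro-level: one record per surveyed resident
    {"catchment": "A", "risk_perception": 3.8},
    {"catchment": "A", "risk_perception": 4.2},
    {"catchment": "B", "risk_perception": 2.1},
]
panel = [  # meso-level: one record per hospital-month
    {"hospital": "H1", "catchment": "A", "month": "2025-01", "ai_uses": 120},
    {"hospital": "H2", "catchment": "B", "month": "2025-01", "ai_uses": 310},
]

# Step 1: collapse resident responses to a catchment-level mean.
by_area: dict[str, list[float]] = {}
for r in survey:
    by_area.setdefault(r["catchment"], []).append(r["risk_perception"])
area_risk = {area: mean(vals) for area, vals in by_area.items()}

# Step 2: attach the ecological aggregate to each hospital-month row.
linked = [{**row, "area_risk": area_risk[row["catchment"]]} for row in panel]
print(linked[0]["area_risk"])  # mean risk perception in hospital H1's area
```

The design choice worth noting is that the micro-to-meso join is purely ecological: hospitals inherit the average perception of their catchment, which is what makes cross-level inference (and its ecological-fallacy caveats) possible.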

📈 Results

The findings from the structural equation modeling indicated a significant negative association between aggregated public risk perception and the frequency of AI applications in hospitals. Additionally, within the context of the tracer condition, increased AI usage was linked to higher rates of misdiagnosis, suggesting a concerning trend of automation bias in critical care scenarios.

๐ŸŒ Impact and Implications

This study underscores the need for healthcare institutions to actively address public concerns about AI technologies. By fostering public trust and maintaining rigorous clinical oversight, hospitals can better navigate societal perceptions while improving the safety and efficacy of AI applications in care delivery.

🔮 Conclusion

This research highlights the intricate relationship between public risk perception and the adoption of AI in healthcare. As we move forward, it is essential to balance the benefits of AI technologies with the need for diagnostic vigilance and public confidence. Continued exploration of these dynamics will be vital for the successful integration of AI in clinical practice.


Public risk perception, institutional AI adoption, and diagnostic safety: An exploratory cross-level analysis using a tracer condition approach.

Abstract

OBJECTIVE: To examine the cross-level sociotechnical linkages between societal risk perception of medical artificial intelligence, institutional adoption patterns, and clinical safety outcomes. Specifically, this study aims to explore how social pressure shapes hospital technology strategies and to rigorously assess the association between AI usage intensity and diagnostic errors using an acute imaging-dependent condition as a specific tracer.
METHODS: A cross-level analytical framework was constructed based on the Technology Acceptance Model and Institutional Theory. We integrated three heterogeneous data streams from the Federal District of Brazil: a stratified probability survey of residents (N = 4764), longitudinal hospital operational panels (1728 hospital-month observations), and a validating index of social media sentiment. A "Catchment Area Ecological Linkage" protocol was employed to merge micro-level psychometric data with meso-level organizational metrics. Structural Equation Modeling was used to test the direct, mediating, and moderating effects among variables, with robustness and endogeneity checks conducted via time-lag analysis and double-validation. Moderators included public trust and hospital geographical remoteness.
RESULTS: Structural equation modeling revealed a significant negative association between aggregated public risk perception and hospital AI application frequency (β = -0.34, p < 0.001), consistent with the theory of "algorithmic aversion" at the institutional level. Within the specific context of the tracer condition, higher AI usage intensity was positively associated with misdiagnosis rates (β = 0.28, p < 0.001), suggesting a pattern of "automation bias" in time-sensitive acute triage. These inhibitory effects were attenuated by high public trust and geographical remoteness.
CONCLUSION: Public risk perception functions as an institutional constraint that throttles technology deployment. While social pressure limits adoption, uncritical reliance on AI in high-stakes acute settings may compromise diagnostic vigilance. This study highlights the necessity of using precise tracer conditions to evaluate digital health safety and suggests that governance must balance social legitimacy with rigorous clinical oversight.
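The moderation analysis the abstract describes (trust attenuating the risk-perception effect on AI usage) is typically estimated with an interaction term. The sketch below fits such a moderated regression by ordinary least squares on simulated data; it is illustrative only, and the variable names, effect sizes, and data-generating process are assumptions, not the paper's data or model.

```python
# Illustrative sketch (not the authors' SEM): a moderated regression with a
# risk x trust interaction, fit by OLS on simulated data. The true effect
# of -0.34 is borrowed from the abstract; everything else is assumed.
import numpy as np

rng = np.random.default_rng(42)
n = 1728  # matches the paper's count of hospital-month observations

risk = rng.normal(size=n)   # aggregated public risk perception (z-scored)
trust = rng.normal(size=n)  # public trust moderator (z-scored)
noise = rng.normal(scale=0.5, size=n)

# Simulated process: risk suppresses AI usage (-0.34), and higher trust
# attenuates that suppression (positive interaction coefficient).
ai_usage = -0.34 * risk + 0.15 * trust + 0.10 * risk * trust + noise

# Design matrix: intercept, risk, trust, risk x trust interaction.
X = np.column_stack([np.ones(n), risk, trust, risk * trust])
beta, *_ = np.linalg.lstsq(X, ai_usage, rcond=None)

names = ["intercept", "risk", "trust", "risk_x_trust"]
print(dict(zip(names, np.round(beta, 3))))
```

With n = 1728 the estimated risk coefficient recovers the simulated -0.34 closely, and the positive interaction term is the algebraic signature of "attenuation by trust": the marginal effect of risk on usage is beta_risk + beta_interaction * trust, which shrinks toward zero as trust rises.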

Authors: Li Z, Lyu M, Huang J

Journal: Digit Health

Citation: Li Z, et al. Public risk perception, institutional AI adoption, and diagnostic safety: An exploratory cross-level analysis using a tracer condition approach. Digit Health. 2026; 12:20552076261425375. doi: 10.1177/20552076261425375

