⚡ Quick Summary
This study investigates the relationship between public risk perception of medical AI, institutional adoption patterns, and diagnostic safety outcomes. It reveals a significant negative correlation between public risk perception and AI application frequency, highlighting the potential for automation bias in acute medical settings.
📌 Key Details
- 📊 Dataset: 4,764 residents surveyed, 1,728 hospital-month observations
- 🧩 Methodology: Structural Equation Modeling with a tracer condition approach
- ⚙️ Key Variables: Public trust, hospital geographical remoteness
- 🔍 Findings: Negative association between public risk perception and AI usage (β = -0.34, p < 0.001)
🔑 Key Takeaways
- 📉 Public risk perception negatively impacts hospital AI adoption.
- 🤖 Higher AI usage intensity correlates with increased misdiagnosis rates (β = 0.28, p < 0.001).
- 🚧 Social pressure can limit the deployment of advanced technologies in healthcare.
- 🏢 Algorithmic aversion is evident at the institutional level.
- 🏥 High public trust mitigates negative effects of AI reliance.
- 🌍 Geographical remoteness influences the relationship between AI usage and diagnostic errors.
- 💡 Governance must balance social legitimacy with clinical oversight.
- 🧪 Tracer conditions are essential for evaluating digital health safety.

📖 Background
The integration of artificial intelligence (AI) in healthcare has the potential to enhance diagnostic accuracy and operational efficiency. However, societal perceptions of risk associated with AI technologies can significantly influence their adoption in clinical settings. Understanding these dynamics is crucial for fostering a safe and effective healthcare environment.
🏛️ Study
This exploratory study utilized a cross-level analytical framework, combining data from a stratified survey of residents in the Federal District of Brazil, longitudinal hospital operational panels, and social media sentiment analysis. The aim was to rigorously assess how public perceptions shape hospital technology strategies and the implications for diagnostic safety.
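The "Catchment Area Ecological Linkage" described here can be pictured as a two-step aggregate-then-merge operation: individual survey responses are averaged within each hospital's catchment area, and the area-level scores are joined onto the hospital-month panel. The following is a minimal, hypothetical sketch of that linkage in pandas; all column names and values are illustrative, not the authors' actual variables or data.

```python
import pandas as pd

# Micro-level survey responses (illustrative rows, not real data)
survey = pd.DataFrame({
    "catchment_area": ["A", "A", "B", "B", "B"],
    "risk_perception": [3.2, 4.1, 2.5, 3.0, 2.8],
    "public_trust": [4.0, 3.5, 4.4, 4.1, 3.9],
})

# Meso-level hospital-month operational panel (illustrative)
panel = pd.DataFrame({
    "hospital_id": [1, 1, 2],
    "month": ["2024-01", "2024-02", "2024-01"],
    "catchment_area": ["A", "A", "B"],
    "ai_usage_freq": [12, 15, 7],
})

# Step 1: aggregate individual responses to catchment-area means
area_scores = survey.groupby("catchment_area", as_index=False).mean(numeric_only=True)

# Step 2: link the aggregated perception scores to each hospital-month row
linked = panel.merge(area_scores, on="catchment_area", how="left")
print(linked)
```

Each hospital-month observation thereby carries the aggregated risk-perception and trust scores of its surrounding population, which is what allows a cross-level model to relate societal attitudes to institutional behavior.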
📊 Results
The findings from the structural equation modeling indicated a significant negative association between aggregated public risk perception and the frequency of AI applications in hospitals. Additionally, within the context of the tracer condition, increased AI usage was linked to higher rates of misdiagnosis, suggesting a concerning trend of automation bias in critical care scenarios.
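The moderation result reported here (public trust attenuating the negative link between risk perception and AI usage) is, in regression terms, an interaction effect. The sketch below simulates data with that structure and recovers it via an OLS interaction model in statsmodels; it is an illustrative simulation under assumed effect sizes, not the authors' SEM or their data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1728  # matches the study's hospital-month observation count

risk = rng.normal(0, 1, n)   # aggregated public risk perception (z-scored)
trust = rng.normal(0, 1, n)  # public trust moderator (z-scored)

# Simulated AI usage: a negative main effect of risk perception (set to the
# reported β = -0.34) that is attenuated when trust is high (positive
# interaction, magnitude assumed for illustration)
ai_usage = -0.34 * risk + 0.15 * risk * trust + rng.normal(0, 1, n)

df = pd.DataFrame({"risk": risk, "trust": trust, "ai_usage": ai_usage})

# "risk * trust" expands to risk + trust + risk:trust; the risk:trust
# coefficient tests whether trust moderates the risk -> usage association
fit = smf.ols("ai_usage ~ risk * trust", data=df).fit()
print(fit.params[["risk", "risk:trust"]])
```

A negative coefficient on `risk` alongside a positive coefficient on `risk:trust` is the signature of the attenuation pattern the study describes.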
🌍 Impact and Implications
The implications of this study are significant. It underscores the need for healthcare institutions to actively address public concerns regarding AI technologies. By fostering public trust and ensuring rigorous clinical oversight, hospitals can better navigate the challenges posed by societal perceptions and enhance the safety and efficacy of AI applications in healthcare.
🔮 Conclusion
This research highlights the intricate relationship between public risk perception and the adoption of AI in healthcare. As we move forward, it is essential to balance the benefits of AI technologies with the need for diagnostic vigilance and public confidence. Continued exploration of these dynamics will be vital for the successful integration of AI in clinical practice.
💬 Your comments
What are your thoughts on the impact of public perception on AI adoption in healthcare? We invite you to share your insights! 💬 Leave your comments below.
Public risk perception, institutional AI adoption, and diagnostic safety: An exploratory cross-level analysis using a tracer condition approach.
Abstract
OBJECTIVE: To examine the cross-level sociotechnical linkages between societal risk perception of medical artificial intelligence, institutional adoption patterns, and clinical safety outcomes. Specifically, this study aims to explore how social pressure shapes hospital technology strategies and to rigorously assess the association between AI usage intensity and diagnostic errors using an acute imaging-dependent condition as a specific tracer.
METHODS: A cross-level analytical framework was constructed based on the Technology Acceptance Model and Institutional Theory. We integrated three heterogeneous data streams from the Federal District of Brazil: a stratified probability survey of residents (N = 4764), longitudinal hospital operational panels (1728 hospital-month observations), and a validating index of social media sentiment. A "Catchment Area Ecological Linkage" protocol was employed to merge micro-level psychometric data with meso-level organizational metrics. Structural Equation Modeling was employed to test the direct, mediating, and moderating effects among variables, with robustness and endogeneity checks conducted via time-lag analysis and double-validation. Moderators included public trust and hospital geographical remoteness.
RESULTS: Structural equation modeling revealed a significant negative association between aggregated public risk perception and hospital AI application frequency (β = -0.34, p < 0.001), consistent with the theory of "algorithmic aversion" at the institutional level. Within the specific context of the tracer condition, higher AI usage intensity was positively associated with misdiagnosis rates (β = 0.28, p < 0.001), suggesting a pattern of "automation bias" in time-sensitive acute triage. These inhibitory effects are attenuated by high public trust and geographical remoteness.
CONCLUSION: Public risk perception functions as an institutional constraint that throttles technology deployment. While social pressure limits adoption, the uncritical reliance on AI in high-stakes acute settings may compromise diagnostic vigilance. This study highlights the necessity of using precise tracer conditions to evaluate digital health safety and suggests that governance must balance social legitimacy with rigorous clinical oversight.
Authors: Li Z, Lyu M, Huang J
Journal: Digit Health
Citation: Li Z, Lyu M, Huang J. Public risk perception, institutional AI adoption, and diagnostic safety: An exploratory cross-level analysis using a tracer condition approach. Digit Health. 2026;12:20552076261425375. doi: 10.1177/20552076261425375