Open Access | Algorithmic Fairness, Cognitive Computing, and Mobile Sensor Integration in Clinical Trials: Toward an Ethical and Evidence-Based Framework for AI-Enabled Health Research
Abstract
The integration of artificial intelligence (AI), machine learning (ML), and mobile sensor technologies into clinical research has transformed clinical outcome assessments, patient monitoring, and regulatory decision-making. However, persistent concerns regarding algorithmic bias, interpretability, equity, and evidentiary rigor challenge the ethical deployment of AI-driven tools in clinical trials and healthcare systems.
This study synthesizes interdisciplinary scholarship on algorithmic fairness, cognitive computing, evidence generation, and equity-driven sensing technologies to develop a comprehensive theoretical framework for responsible AI integration into randomized clinical trials (RCTs) and real-world evidence ecosystems.
A qualitative, theory-driven integrative analysis was conducted using foundational and contemporary literature on algorithmic bias, fairness toolkits, interpretability debates, regulatory science, and equity-oriented AI methodologies. The study employs conceptual modeling to articulate relationships among problem formulation, system design, sensor calibration, evidence dossier construction, and regulatory evaluation.
Findings demonstrate that inequity in AI-enabled clinical research emerges at multiple stages: problem formulation, data acquisition, algorithmic optimization, deployment context, and post-market evaluation. Evidence indicates that mobile sensor technologies, when inadequately calibrated across diverse populations, risk exacerbating disparities. Moreover, algorithmic proxies, particularly cost-based healthcare allocation models, can reproduce structural inequities. Regulatory frameworks increasingly recognize real-world evidence but lack standardized fairness validation mechanisms. The study proposes a multi-layered framework combining fairness-by-design, interpretability governance, equity-driven sensing calibration, publication transparency, and global ethical alignment.
Responsible AI integration in clinical trials requires rethinking not only technical design but also the epistemic foundations of evidence generation. Equity must be embedded from conceptualization through regulatory submission. Without structural reforms in algorithm development, validation, and reporting, AI risks entrenching existing disparities under the guise of innovation.
Keywords
Algorithmic fairness, Clinical trials, Mobile sensor technology, Health equity
References
Abbidi, S.R., & Sinha, D. (2026). AI/ML-based strategies for enhancing equity, diversity, and inclusion in randomized clinical trials. Trials. https://doi.org/10.1186/s13063-026-09537-2
Adams, A.T., Mandel, I., Gao, Y., Heckman, B.W., Nandakumar, R., & Choudhury, T. (2022). Equity-driven sensing system for measuring skin tone–calibrated peripheral blood oxygen saturation (OptoBeat): development, design, and evaluation study. JMIR Biomedical Engineering, 7, e34934.
Ahmed, M.N., Toor, A.S., O'Neil, K., & Friedland, D. (2017). Cognitive computing and the future of health care. IEEE Pulse, 8, 4–9.
Burns, L., et al. (2022). Real-world evidence for regulatory decision-making: guidance from around the world. Clinical Therapeutics, 44, 420–437.
Cormen, T.H., Leiserson, C.E., Rivest, R.L., & Stein, C. (2022). Introduction to Algorithms (4th ed.). MIT Press.
Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14, 330–347.
Hagerty, A., Williams, M., Aupetit, J., & Knox, J. (2021). Global health and AI ethics: Bridging the gap. Frontiers in Digital Health, 3, 702160.
Lee, M.S., & Singh, J. (2021). The landscape and gaps in open source fairness toolkits. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 699, 1–13.
Lipton, Z.C. (2018). The mythos of model interpretability. Communications of the ACM, 61, 36–43.
Lu, M., Shi, H., Xu, M., & Cao, J. (2020). Addressing publication bias in artificial intelligence research. Journal of the American Medical Informatics Association, 27, 1755–1761.
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: an overview of AI ethics tools, methods and research to translate principles into practices. Ethics and Information Technology, 22, 1–21.
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366, 447–453.
Saria, S., Butte, A., & Sheikh, A. (2021). Better medicine through machine learning: What's real, and what's artificial? PLoS Medicine, 18, e1003679.
Walton, M.K., et al. (2020). Considerations for development of an evidence dossier to support the use of mobile sensor technology for clinical outcome assessments in clinical trials. Contemporary Clinical Trials, 91, 105962.
Wiens, J., Saria, S., Sendak, M., Ghassemi, M., Liu, V.X., Doshi-Velez, F., et al. (2019). Do no harm: a roadmap for responsible machine learning for health care. Nature Medicine, 25, 1337–1340.
Copyright License
Copyright (c) 2026 Dr. Eleanor Whitmore (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.