Neha Deo

Conference 2022 Hot-Doc Presentation


Video title

HUMANE: Harmonious Understanding of Machine learning Analytics NEtwork – International consensus for research on artificial intelligence in medicine


Authors and Affiliations

Faisal A. Nawaz1, Neha Deo2, Sandosh Padmanabhan3, Chaitanya Mamillapalli4, Piyush Mathur5, Sandeep Reddy6, Shyam Visweswaran7, Thanga Prabhu8, Khalid Moidu9, Rahul Kashyap10

1. College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, UAE
2. Mayo Clinic Alix School of Medicine, Rochester, MN, USA
3. Institute of Cardiovascular and Medical Science, University of Glasgow, Glasgow, Scotland, UK
4. Department of Endocrinology, Springfield Clinic, Springfield, IL, USA
5. Department of Anesthesiology, Cleveland Clinic, Cleveland, OH, USA
6. Chair, Healthcare Operations, Deakin University, Geelong, Victoria, AUS
7. Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA, USA
8. Chief Medical Information Officer, Apollo Hospitals, Chennai, TN, IND
9. Chief Information Officer, Consultant, Orlando, FL, USA
10. Department of Anesthesiology and Critical Care Medicine, Mayo Clinic, Rochester, MN, USA




Artificial Intelligence (AI) is a rapidly expanding facet of the healthcare landscape. With the rise in interdisciplinary collaborations, applications, and investment in health technologies, greater focus has shifted toward advancing clinical research and developing a deeper understanding of this topic. This paradigm shift in healthcare research has increased demand for clinical outcomes research while exposing a significant gap in AI literacy within the field. This gap has, in turn, translated into a lack of standardization in the quality and framework of literature in the AI in medicine (AI-Med) domain. We propose HUMANE (Harmonious Understanding of Machine learning Analytics NEtwork), a checklist for establishing an international consensus for authors and reviewers involved in research focused on AI or Machine Learning (ML) in Medicine.


This study was conducted using the Delphi method via a survey built on the Google Forms platform. The survey was structured as a checklist containing 8 sections and 57 questions rated on a 5-point Likert scale, with additional sections for open-ended feedback on the scope of the checklist and suggestions for refinement. The checklist was shared with a panel of 33 AI experts for feedback and revision of the proposed guidelines.


A total of 33 survey respondents took part in the initial Delphi process, with the majority (45%) in the 36-45 year age group. The top three respondent countries were the USA (58%), UK (21%), and Australia (12%). Most respondents had a healthcare background (42.4%) and were early-career professionals with 3-10 years of experience (42%). Checklist feedback collected on the Likert scale showed an overall agreeable consensus, with cumulative mean scores ranging from 4.1 to 4.8 out of 5 across all sections. Agreement was highest for the Discussion (Other) section of the checklist (median 4.8, IQR 4.8-4.8), and lowest for the Ground Truth (Expert(s) Review) section (median 4.1, IQR 3.9-4.2) and the Methods (Outcomes) section (median 4.1, IQR 4.1-4.1). Panel comments on modifying, removing, or retaining various checklist items were also incorporated based on the feedback provided.
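The per-section consensus statistics above (mean, median, and IQR of 5-point Likert ratings) can be computed straightforwardly. The following is a minimal illustrative sketch, not the study's actual analysis code, using hypothetical panel ratings for a single checklist section:

```python
# Illustrative sketch: summarizing 5-point Likert ratings for one checklist
# section with mean, median, and interquartile range (IQR), as reported in
# the results. The example ratings below are hypothetical.
from statistics import mean, median, quantiles

def summarize_likert(scores):
    """Return (mean, median, (Q1, Q3)) for a list of 1-5 Likert ratings."""
    q1, _, q3 = quantiles(scores, n=4, method="inclusive")
    return round(mean(scores), 1), median(scores), (round(q1, 1), round(q3, 1))

# Hypothetical ratings from a nine-member panel for one section
section_scores = [5, 4, 5, 4, 5, 5, 4, 5, 4]
section_mean, section_median, section_iqr = summarize_likert(section_scores)
```

In a full analysis, the same summary would be computed per section and the section means compared to identify the most and least agreed-upon parts of the checklist.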


The HUMANE international consensus reflects the need for standardization in the literature on AI in Medicine. Further research on the potential of this checklist as an established consensus could help improve the reliability and quality of AI-Med research on a global scale.