Universitas Hasanuddin
Research output: Contribution to journal › Article › peer-review

A Hybrid Explainable AI Framework (HXAI) for Accurate and Interpretable Diagnosis of Alzheimer’s Disease

Al-bakri F.H.

Diagnostics

Q2
Published: 2025 · Citations: 1

Abstract

<b>Background/Objectives</b>: In clinical practice, Explainable AI (XAI) can help non-specialists and general practitioners reach precise diagnoses. Current XAI approaches are limited: many explain only clinical data or only MRI, or present their explanations in unclear ways, reducing clinical utility. <b>Methods</b>: We propose a novel Hybrid Explainable AI (HXAI) framework that integrates both model-agnostic (SHAP) and model-specific (Grad-CAM) explanation methods within a unified structure for the diagnosis of Alzheimer's disease. This dual-layer explainability is the main originality of the study: it allows quantitative (feature-level) and spatial (region-level) information to be interpreted within a single diagnostic framework. Clinical features (e.g., Mini-Mental State Examination (MMSE), normalized Whole Brain Volume (nWBV), Socioeconomic Status (SES), and age) are combined with MRI-derived features extracted via ResNet50, and the two are fused using ensemble learning with a logistic regression meta-model. <b>Results</b>: Removal-based tests validated the feature-level explanations, achieving 83.61% explainability accuracy and confirming the importance of these features. Model-specific (Grad-CAM) information was used to explain the MRI predictions, with the visual explanations reaching 58.16% explainability accuracy. <b>Conclusions</b>: The HXAI framework integrates model-agnostic and model-specific approaches in a structured manner, supported by quantitative metrics. This dual-layer interpretability enhances transparency, improves explainability accuracy, and yields an accurate, interpretable framework for AD diagnosis, bridging the gap between model accuracy and clinical trust.
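The fusion and validation steps described above can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the authors' pipeline: the clinical columns (MMSE, nWBV, SES, age) and the ResNet50 embedding are replaced by random arrays, the meta-model is scikit-learn's `LogisticRegression`, and the removal-based check simply ablates the features the meta-model ranks as most important and measures the accuracy drop.

```python
# Sketch of late fusion of clinical + MRI-derived features with a logistic
# regression meta-model, plus a simplified removal-based explainability check.
# All data below are synthetic stand-ins for the paper's real inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
clinical = rng.normal(size=(n, 4))      # stand-ins for MMSE, nWBV, SES, age
mri_feats = rng.normal(size=(n, 16))    # stand-in for a ResNet50 embedding
y = (clinical[:, 0] + mri_feats[:, 0] > 0).astype(int)  # synthetic label

X = np.hstack([clinical, mri_feats])    # late fusion of both feature views
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
meta = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
acc = meta.score(Xte, yte)

# Removal-based check (the paper's "explainability accuracy" idea, greatly
# simplified): ablating the features the model deems most important should
# degrade accuracy, confirming that the explanation points at real signal.
order = np.argsort(-np.abs(meta.coef_[0]))  # features ranked by |coefficient|
X_ablate = Xte.copy()
X_ablate[:, order[:4]] = 0.0                # zero out the top-4 features
acc_ablate = meta.score(X_ablate, yte)
```

Because the synthetic label depends only on two features, ablating the top-ranked features collapses accuracy toward chance, which is the behaviour a removal-based fidelity test looks for.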

Fingerprint

Interpretability · Artificial intelligence · Bridging (networking) · Computer science · Machine learning · Feature (linguistics) · Pattern recognition (psychology) · Logistic regression · Clinical practice · Disease · Relation (database) · Classifier (UML) · Clinical diagnosis · Feature extraction · Supervised learning · Data mining · Ensemble learning