Publications

This page lists all the publications produced by the PRE-ACT project so far.

Research Challenges in Trustworthy Artificial Intelligence and Computing for Health: The Case of the PRE-ACT project

Charalampakos, F., Tsouparopoulos, T., Papageorgiou, Y., Bologna, G., Panisson, A., Perotti, A., & Koutsopoulos, I.

The PRE-ACT project is a newly launched Horizon Europe project that aims to use Artificial Intelligence (AI) to predict the risk of side effects of radiotherapy treatment for breast cancer patients. In this paper, we outline four main threads pertaining to AI and computing that are part of the project's research agenda, namely: (i) Explainable AI techniques to make the risk prediction interpretable for the patient and the clinician; (ii) Fair AI techniques to identify and explain potential biases in clinical decision support systems; (iii) Training of AI models from distributed data through Federated Learning algorithms to ensure data privacy; (iv) Mobile applications to provide the patients and clinicians with an interface for the side effect risk prediction. For each of these directions, we provide an overview of the state of the art, with emphasis on the techniques most relevant to the project. Collectively, these four threads can be seen as enforcing Trustworthy AI; they pave the way towards transparent and responsible AI systems that are adopted by end-users and may thus unleash the full potential of AI.

Link to publication

PRE-ACT: Prediction of Radiotherapy side Effects using explainable AI for patient Communication and Treatment modification

Rattay, T., Bajardi, P., Bologna, G., Bonchi, F., Cortellessa, G., Dekker, A., Fracasso, F., Joore, M., Paragios, N., Rivera, S., Roumen, C., van Soest, J., Traverso, A., Verhoeven, K., Koutsopoulos, I., & Talbot, C. J.

PRE-ACT is a recently opened multi-centre European study with the goal of using Artificial Intelligence (AI) to predict side effects from radiotherapy in breast cancer patients. In breast cancer, radiotherapy side effects include skin ulceration, breast atrophy, arm lymphedema, and heart damage. Some factors that increase the risk of side effects are already known, but current approaches to risk prediction mainly use low-dimensional statistical approaches and do not use all available complex imaging and genomics data. AI is already used in some aspects of radiotherapy delivery, but PRE-ACT will leverage its potential for predicting side effects and will provide an easily understood explanation to support shared decision-making between the patient and physician regarding radiation treatment options.

The outcomes of the project will advance the field of personalised radiotherapy and bring it closer to clinical implementation.

Link to publication

On improving accuracy in Federated Learning using GANs-based pre-training and Ensemble Learning

Tsouparopoulos, T., & Koutsopoulos, I.

This paper presents a novel training pipeline for Federated Learning (FL), enriched in two aspects, with the goal of improving accuracy. First, we exploit the generative ability of Generative Adversarial Networks (GANs) to augment the clients’ local datasets with synthetic data and, second, we incorporate them into the FL training procedure with the help of Ensemble Learning. Drawing inspiration from their demonstrated potential in Deep Learning (DL), we modify these techniques to address the privacy concerns and distributed nature inherent in FL. Our proposed FL pipeline leads to a 3% and 2.5% improvement in the accuracy of the global model on the MNIST and CIFAR-10 test sets, respectively, compared to the baseline and modified versions of FedAvg. This paves the way for exploring the potential of our method in achieving similar or larger improvements in other FL algorithms.
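
As a rough, self-contained sketch of the general idea (not the paper's actual pipeline), the Python snippet below augments each client's local data with synthetic samples before running FedAvg. The GAN is replaced by a simple noise-based placeholder, the Ensemble Learning step is omitted, and all data, labels and hyperparameters are invented for illustration.

    # Minimal FedAvg sketch with synthetic-data augmentation (illustrative only).
    # `synthesize` is a stand-in for sampling from a trained (conditional) GAN;
    # here it simply jitters real samples and relabels them with the toy rule.
    import numpy as np

    rng = np.random.default_rng(0)

    def synthesize(x_local, n_new):
        idx = rng.integers(0, len(x_local), n_new)
        return x_local[idx] + 0.05 * rng.standard_normal((n_new, x_local.shape[1]))

    def local_train(w, x, y, lr=0.1, epochs=20):
        # Logistic-regression gradient descent standing in for a client's local update.
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-x @ w))
            w = w - lr * x.T @ (p - y) / len(y)
        return w

    # Toy federated setting: 3 clients, 2 features, binary labels.
    clients = []
    for _ in range(3):
        x = rng.standard_normal((50, 2))
        y = (x[:, 0] + x[:, 1] > 0).astype(float)
        x_syn = synthesize(x, 25)                        # GAN-style augmentation
        y_syn = (x_syn[:, 0] + x_syn[:, 1] > 0).astype(float)
        clients.append((np.vstack([x, x_syn]), np.concatenate([y, y_syn])))

    w_global = np.zeros(2)
    for _ in range(10):                                  # FedAvg communication rounds
        local_models = [local_train(w_global.copy(), x, y) for x, y in clients]
        w_global = np.mean(local_models, axis=0)         # server averages client models

    print("global weights after FedAvg:", w_global)

In the paper, the synthetic data come from GANs trained under FL's privacy constraints and the client models are additionally combined through Ensemble Learning; neither of these steps is reproduced in the sketch.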

Link to publication

Exploring Multi-Task Learning for Explainability

Charalampakos, F., & Koutsopoulos, I.

Machine Learning (ML) model understanding and interpretation is an essential component of several applications in different domains. Several explanation techniques have been developed to provide insights into the decisions of complex ML models. One of the most common explainability methods, Feature Attribution, assigns an importance score to each input feature that denotes its contribution (relative significance) to the complex (black-box) ML model’s decision. Such scores can be obtained through another model that acts as a surrogate, e.g., a linear one, which is trained after the black-box model so as to approximate its predictions. In this paper, we propose a training procedure based on Multi-Task Learning (MTL), where we concurrently train a black-box neural network and a surrogate linear model whose coefficients can then be used as feature significance scores. The two models exchange information through their predictions via the optimization objective, which is a convex combination of a predictive loss function for the black-box model and of an explainability metric that aims to keep the predictions of the two models close together. Our method makes the surrogate model a more accurate approximation of the black-box one, compared to the baseline of separately training the black-box and surrogate models, and therefore improves the quality of the produced explanations, both global and local. We also achieve a good trade-off between predictive performance and explainability with minimal to negligible accuracy decrease. This enables black-box models acquired from the MTL training procedure to be used instead of normally trained models whilst being more interpretable.
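
As a rough illustration of the joint objective described above (not the authors' implementation), the sketch below trains a small black-box network and a linear surrogate together under a convex combination of a predictive loss and a fidelity term that keeps their predictions close; the weight alpha, the squared-error fidelity term and the toy data are all assumptions.

    # Joint black-box / surrogate training sketch (illustrative assumptions throughout).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    x = torch.randn(256, 10)
    y = (x.sum(dim=1, keepdim=True) > 0).float()           # toy binary labels

    black_box = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    surrogate = nn.Linear(10, 1)                            # coefficients act as feature scores

    params = list(black_box.parameters()) + list(surrogate.parameters())
    opt = torch.optim.Adam(params, lr=1e-2)
    bce = nn.BCEWithLogitsLoss()
    alpha = 0.7                                             # convex-combination weight (assumed)

    for _ in range(200):
        f = black_box(x)                                    # black-box predictions (logits)
        g = surrogate(x)                                    # surrogate predictions (logits)
        predictive = bce(f, y)                              # predictive loss for the black box
        fidelity = ((torch.sigmoid(f) - torch.sigmoid(g)) ** 2).mean()  # keep predictions close
        loss = alpha * predictive + (1 - alpha) * fidelity
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("feature significance scores:", surrogate.weight.detach().squeeze())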

Link to publication

Transferring CNN Features Maps to Ensembles of Explainable Neural Networks

Bologna, G.

The explainability of connectionist models remains an ongoing research issue. Before the advent of deep learning, propositional rules were generated from Multi-Layer Perceptrons (MLPs) to explain how they classify data. This type of explanation technique is much less prevalent with ensembles of MLPs and deep models, such as Convolutional Neural Networks (CNNs). Our main contribution is the transfer of CNN feature maps to ensembles of DIMLP networks, which are translatable into propositional rules. We carried out three series of experiments. In the first, we applied DIMLP ensembles to a COVID-19 dataset related to diagnosis from symptoms to show that the generated propositional rules provided intuitive explanations of DIMLP classifications. Then, our purpose was to compare rule extraction from DIMLP ensembles to other techniques using cross-validation. On four classification problems with over 10,000 samples, the rules we extracted provided the highest average predictive accuracy and fidelity. Finally, for the melanoma diagnostic problem, the average predictive accuracy of CNNs was 84.5% and the average fidelity of the top-level generated rules was 95.5%. The propositional rules generated from the CNNs were mapped at the input layer by squares in which the relevant data for the classifications resided. These squares represented regions of attention determining the final classification, with the rules providing logical reasoning.
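
The DIMLP ensembles and their rule-extraction procedure are specific to the paper; as a loose stand-in for the transfer idea, the sketch below feeds frozen feature maps from a tiny, untrained CNN into a decision tree, whose root-to-leaf paths play the role of propositional rules. Every component and all data are placeholders.

    # Illustrative stand-in only: an untrained CNN replaces the trained feature extractor
    # and a decision tree replaces the DIMLP ensemble / rule-extraction step.
    import torch
    import torch.nn as nn
    from sklearn.tree import DecisionTreeClassifier, export_text

    torch.manual_seed(0)
    images = torch.randn(200, 1, 28, 28)                    # synthetic "images"
    labels = (images.mean(dim=(1, 2, 3)) > 0).long().numpy()

    # Frozen convolutional front end producing flattened feature maps.
    cnn = nn.Sequential(nn.Conv2d(1, 4, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(4), nn.Flatten())
    with torch.no_grad():
        feats = cnn(images).numpy()

    # Rule-like surrogate trained on the transferred feature maps.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(feats, labels)
    print(export_text(tree))                                # each path reads as an if-then rule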

Link to publication

Computational approaches to Explainable Artificial Intelligence: Advances in theory, applications and trends

Górriz, J. M., Álvarez-Illán, I., Álvarez-Marquina, A., Arco, J. E., Atzmueller, M., Ballarini, F., Barakova, E., Bologna, G., Bonomini, P., Castellanos-Dominguez, G., Castillo-Barnes, D., et al.

Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, and has unlocked solutions to previously intractable problems in virtual agent design, robotics, genomics, neuroimaging, computer vision, and industrial automation. In this paper, the most relevant advances of the last few years in AI and several applications to neuroscience, neuroimaging, computer vision, and robotics are presented, reviewed and discussed. In this way, we summarize the state of the art in AI methods, models and applications within a collection of works presented at the 9th International Conference on the Interplay between Natural and Artificial Computation (IWINAC). The works presented in this paper are excellent examples of new scientific discoveries made in laboratories that have successfully transitioned to real-life applications.

Link to publication