# Multimodal learning for prognosis prediction in oncology by combined deep and radiomics features

![RADLER pipeline](images/RADLER_pipeline.png)

**Authors**: Andrea Bizzego, Nicole Bussola, Alessia Marcolini, Valerio Maggio, Marco Chierici, Damiana Salvalai, Giuseppe Jurman, Cesare Furlanello


This repository contains the source code and trained models for RADLER, a pipeline integrating deep and radiomics features, applied to the [Head-Neck-PET-CT](https://wiki.cancerimagingarchive.net/display/Public/Head-Neck-PET-CT) multimodal dataset.
It accompanies the paper "Multimodal learning for prognosis prediction in oncology by combined deep and radiomics features".

In this study, we introduce a machine learning framework for medical imaging that combines the current pattern recognition approach ("radiomics") with Deep Learning (DL). We apply the framework in cancer bioimaging for prognosis of locoregional recurrence in head and neck squamous cell carcinoma (N=298) from Computed Tomography (CT) and Positron Emission Tomography (PET) imaging. The DL architecture is composed of two parallel cascades of Convolutional Neural Network (CNN) layers merging into a softmax classification layer. The network is first pretrained on head and neck tumor stage diagnosis, then fine-tuned on the prognostic task by internal transfer learning. In parallel, radiomics features (e.g., shape of the tumor mass, texture, and pixel intensity statistics) are derived by predefined feature extractors on the PET-CT pairs. We first compare and then mix deep learning and radiomics features into a unifying classification pipeline (RADLER), where model selection and evaluation are based on a data analysis plan developed in the MAQC initiative for reproducible biomarkers. On the multimodal PET-CT cancer dataset, the mixed deep learning/radiomics approach is more accurate than using a single feature type or imaging modality. Further, RADLER significantly improves over published results on the same data.
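The two-branch design described above (parallel CNN cascades over CT and PET, merged into a single classification head) can be sketched as follows. This is a minimal, hypothetical illustration in PyTorch, not the repository's actual architecture: the layer counts, channel widths, input sizes, and class count are placeholders.

```python
import torch
import torch.nn as nn

class DualBranchCNN(nn.Module):
    """Illustrative two-branch CNN: one cascade per modality (CT, PET),
    features concatenated and fed to a shared classification layer."""

    def __init__(self, n_classes: int = 2):
        super().__init__()

        def make_branch() -> nn.Sequential:
            # Small convolutional cascade; depths/channels are placeholders.
            return nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B, 16)
            )

        self.ct_branch = make_branch()
        self.pet_branch = make_branch()
        # Softmax is applied implicitly by the loss (e.g., CrossEntropyLoss).
        self.classifier = nn.Linear(16 + 16, n_classes)

    def forward(self, ct: torch.Tensor, pet: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.ct_branch(ct), self.pet_branch(pet)], dim=1)
        return self.classifier(z)

model = DualBranchCNN(n_classes=2)
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```

In this scheme, pretraining on tumor stage diagnosis and fine-tuning on prognosis would amount to training the full network on the first task, then reusing the branch weights while retraining (or replacing) the classification head for the second.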

## Setup
```bash
git clone https://gitlab.fbk.eu/mpba-radiomics/RADLER
cd RADLER
conda env create -f env.yml -n radler
conda activate radler
```

### Additional dependencies
#### MLPY
The `mlpy` package is required for some operations included in the DAP procedure.

The `mlpy` package available on PyPI is outdated and does not work on macOS. Install it as follows instead.

Let `<ANACONDA>` be your Anaconda installation path (e.g., `/home/user/anaconda3`) and `<ENV>` the name of your conda environment (e.g., `radler`).

Set these environment variables:
```bash
export LD_LIBRARY_PATH=<ANACONDA>/envs/<ENV>/lib:${LD_LIBRARY_PATH}
export CPATH=<ANACONDA>/envs/<ENV>/include:${CPATH}
```

and then install `mlpy` from GitLab:
```bash
pip install git+https://gitlab.fbk.eu/MPBA/mlpy.git
```

#### OCTAVE
`octave` is needed because some first-order features are extracted using functions not yet implemented in Python.

To install `octave`, run in a terminal (Debian/Ubuntu; prepend `sudo` if needed):
```bash
apt install octave
apt install liboctave-dev
```

Open the `octave` prompt:
```bash
octave
```

and run:
```octave
pkg install --forge image
```

and then in a Terminal:
```bash
conda install -c conda-forge oct2py
```

## Pretrained models
We share the weights of the trained models described in the manuscript; see the [pretrained_weights](pretrained_weights) folder.