# Integrative Network Fusion (INF)
![INF pipeline](figs/INF_pipeline.jpeg)

Repository attached to the article "Integrative Network Fusion: a multi-omics approach in molecular profiling".

**Authors**: Marco Chierici*, Nicole Bussola*, Alessia Marcolini*, Margherita Francescatto, Alessandro Zandonà, Lucia Trastulla, Claudio Agostinelli, Giuseppe Jurman, Cesare Furlanello.

## Setup
```bash
git clone https://gitlab.fbk.eu/MPBA/INF
cd INF
conda env create -f env.yml -n inf
conda activate inf
```

### Additional dependencies

#### R dependencies
To install the R dependencies that are not available on conda channels, run the following command at the R prompt:
```r
install.packages("TunePareto")
```

#### MLPY
The `mlpy` package is required for some operations included in the DAP procedure. However, the `mlpy` version available on PyPI is outdated and does not work on OSX platforms, so it must be installed from source as follows.

Let `<ANACONDA>` be your Anaconda path (e.g., `/home/user/anaconda3`) and `<ENV>` the name of your conda environment (e.g., `inf`).

Set these environment variables:
```bash
export LD_LIBRARY_PATH=<ANACONDA>/envs/<ENV>/lib:${LD_LIBRARY_PATH}
export CPATH=<ANACONDA>/envs/<ENV>/include:${CPATH}
```

and then install `mlpy` from GitLab:
```bash
pip install git+https://gitlab.fbk.eu/MPBA/mlpy.git
```

## Usage

**Input files**

* omics layer 1 data: samples x features, tab-separated, with row & column names
* omics layer 2 data: same as above (**samples must be in the same order as the first file**)
* omics layers 1+2 data: the juxtaposition of the above two files
* labels file: one column, just the labels, no header (**same order as the data files**)
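
As a hypothetical toy example, input files in the expected format could be built like this (file names and values are illustrative only, not prescribed by the pipeline):

```shell
# Two tiny tab-separated omics layers with matching sample order
# (toy values; real files hold one omics layer each, samples x features)
printf 'sample\tgeneA\tgeneB\nS1\t0.5\t1.2\nS2\t0.7\t0.9\n' > layer1_tr.txt
printf 'sample\tcnv1\tcnv2\nS1\t-1\t0\nS2\t2\t1\n' > layer2_tr.txt
# Labels: one per line, no header, same sample order as the data files
printf '0\n1\n' > labels_tr.txt
# Juxtaposition of the two layers: drop the row names from the second file
cut -f2- layer2_tr.txt > _layer2_features.txt
paste layer1_tr.txt _layer2_features.txt > layer1layer2_tr.txt
```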

**Example run**

The INF pipeline is implemented with a [Snakefile](https://snakemake.readthedocs.io/en/stable/index.html).

The following directory tree is required:

* `{datafolder}/{dataset}/{target}/{split_id}/{layer}_{tr,ts,ts2}.txt`
* `{datafolder}/{dataset}/{split_id}/labels_{target}_{tr,ts,ts2}.txt`
* `{outfolder}/{dataset}/{target}/{model}/{split_id}/{juxt,rSNF,rSNFi,single}` _(these will be created if not present)_
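
For instance, the data directories for a single split could be created like this (dataset, target, and split values are taken from the example run and are purely illustrative):

```shell
# Illustrative layout for one split of a hypothetical dataset
datafolder=data dataset=tcga_brca target=ER split_id=0
mkdir -p "${datafolder}/${dataset}/${target}/${split_id}"   # omics layer files go here
mkdir -p "${datafolder}/${dataset}/${split_id}"             # labels files go here
```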

All the `{variables}` can be specified either in a `config.yaml` file or on the command line.
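
For instance, a `config.yaml` matching the command-line example could look like this (all values illustrative):

```yaml
# Illustrative config.yaml; keys mirror the --config options
datafolder: data
outfolder: results
dataset: tcga_brca
target: ER
layer1: gene
layer2: cnv
layer3: prot
model: randomForest
random: false
split_id: 0
```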

Example:

```bash
snakemake --config datafolder=data outfolder=results dataset=tcga_brca target=ER layer1=gene layer2=cnv layer3=prot model=randomForest random=false split_id=0 -p
```

This example runs the pipeline on three omics layers of the BRCA-ER dataset. An arbitrary number of omics layers can be used by adding or removing `layer` arguments accordingly.

A maximum number of cores can also be set (default is 1):

```bash
snakemake [--config etc.] --cores 12
```

The pipeline can be "dry-run" using the `-n` flag:

```bash
snakemake --cores 12 -n
```

For convenience, a bash script (`runner.sh`) is provided to run the pipeline for each split, compute the Borda of Bordas, and average the metrics over all splits.
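
The per-split loop of such a runner might look like the following sketch (hypothetical; the actual `runner.sh` in the repository may differ). The `echo` keeps it a dry run:

```shell
# Hypothetical per-split loop; remove "echo" to actually invoke snakemake
for split_id in 0 1 2 3 4; do
    echo "snakemake --config datafolder=data outfolder=results split_id=${split_id} --cores 12"
done | tee runner_commands.txt
```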