# Integrative Network Fusion (INF)

![INF pipeline](figs/INF_pipeline.jpeg)

Repository attached to the article "Integrative Network Fusion: a multi-omics approach in molecular profiling".

**Authors**: Marco Chierici\*, Nicole Bussola\*, Alessia Marcolini\*, Margherita Francescatto, Alessandro Zandonà, Lucia Trastulla, Claudio Agostinelli, Giuseppe Jurman, Cesare Furlanello.

## Setup

```bash
git clone https://gitlab.fbk.eu/MPBA/INF
cd INF
conda env create -f env.yml -n inf
conda activate inf
```

### Additional dependencies

#### R dependencies

To install the R dependencies not available in conda channels, run the following command at the R prompt:

```R
install.packages("TunePareto")
```

#### MLPY

The `mlpy` package is required for some operations included in the DAP procedure. The `mlpy` package available on PyPI is outdated and does not work on OSX platforms, so follow these steps instead.

Let `<anaconda>` be your Anaconda path (e.g., `/home/user/anaconda3`). Adjust these environment variables, using the `inf` environment created above:

```bash
export LD_LIBRARY_PATH=<anaconda>/envs/inf/lib:${LD_LIBRARY_PATH}
export CPATH=<anaconda>/envs/inf/include:${CPATH}
```

and then install `mlpy` from GitLab:

```bash
pip install git+https://gitlab.fbk.eu/MPBA/mlpy.git
```

## Usage

**Input files**

* omics layer 1 data: samples x features, tab-separated, with row & column names
* omics layer 2 data: same as above (**samples must be in the same order as in the first file**)
* omics layers 1+2 data: the juxtaposition of the two files above
* labels file: one column with the labels only, no header (**same sample order as the data files**)

**Example run**

The INF pipeline is implemented with a [Snakefile](https://snakemake.readthedocs.io/en/stable/index.html).
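The sample-order requirement on the input files can be checked before launching the pipeline. The sketch below is illustrative, not part of the repository: it writes two tiny stand-in layer files (`gene_tr.txt`, `cnv_tr.txt`) and compares their sample columns.

```bash
# A minimal sanity check for the sample-order requirement, using two
# hypothetical stand-in layer files written here for demonstration.
set -euo pipefail

# Stand-in data files: header row, then one sample per row, tab-separated.
printf 'sample\tg1\tg2\nS1\t0.1\t0.2\nS2\t0.3\t0.4\n' > gene_tr.txt
printf 'sample\tc1\nS1\t1\nS2\t2\n' > cnv_tr.txt

# Compare the first column (sample names) of both files, skipping the header.
if diff <(tail -n +2 gene_tr.txt | cut -f1) <(tail -n +2 cnv_tr.txt | cut -f1) >/dev/null; then
    echo "sample order OK"
else
    echo "sample order mismatch" >&2
    exit 1
fi
```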
The following directory tree is required:

* `{datafolder}/{dataset}/{target}/{split_id}/{layer}_{tr,ts,ts2}.txt`
* `{datafolder}/{dataset}/{split_id}/labels_{target}_{tr,ts,ts2}.txt`
* `{outfolder}/{dataset}/{target}/{model}/{split_id}/{juxt,rSNF,rSNFi,single}` _(these will be created if not present)_

All the `{variables}` can be specified either in a `config.yaml` file or on the command line. Example:

```bash
snakemake --config datafolder=data outfolder=results dataset=tcga_brca target=ER layer1=gene layer2=cnv layer3=prot model=randomForest random=false split_id=0 -p
```

This example runs the pipeline using three omics layers of the BRCA-ER dataset. You can use an arbitrary number of omics layers by adding or removing `layer` arguments accordingly.

A maximum number of cores can also be set (default is 1):

```bash
snakemake [--config etc.] --cores 12
```

The pipeline can be dry-run with the `-n` flag:

```bash
snakemake --cores 12 -n
```

A bash script (`runner.sh`) is provided for convenience: it runs the pipeline for each split, computes the Borda of Bordas, and averages the metrics over all splits.
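The required tree for the data files can be scaffolded ahead of time. The sketch below uses the variable values from the example run above (`data`, `tcga_brca`, `ER`, layer `gene`, split `0`) and only creates empty placeholder files; real runs need actual data in their place.

```bash
# Sketch: lay out the directory tree expected by the Snakefile for one split.
set -euo pipefail

datafolder=data
dataset=tcga_brca
target=ER
layer=gene
split_id=0

# Data files live under {datafolder}/{dataset}/{target}/{split_id}/
mkdir -p "${datafolder}/${dataset}/${target}/${split_id}"
# Label files live under {datafolder}/{dataset}/{split_id}/
mkdir -p "${datafolder}/${dataset}/${split_id}"

# Empty placeholders for the train/test/validation splits.
for part in tr ts ts2; do
    touch "${datafolder}/${dataset}/${target}/${split_id}/${layer}_${part}.txt"
    touch "${datafolder}/${dataset}/${split_id}/labels_${target}_${part}.txt"
done
```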