01_datasets/05_plot_nimads.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Using NIMADS with NiMARE\n\nHow to use the NeuroImaging Meta-Analysis Data Structure\n[(NIMADS)](https://neurostuff.github.io/NIMADS/) with NiMARE.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from requests import request\n\nfrom nimare.io import convert_nimads_to_dataset\nfrom nimare.nimads import Studyset"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download Data from NeuroStore\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def download_file(url):\n \"\"\"Download a file from NeuroStore.\"\"\"\n response = request(\"GET\", url)\n return response.json()\n\n\nnimads_studyset = download_file(\"https://neurostore.org/api/studysets/Cv2LLUqG76W9?nested=true\")\nnimads_annotation = download_file(\"https://neurostore.org/api/annotations/76PyNqoTNEsE\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Data\nLoad the json files into a NiMADS Studyset object.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"studyset = Studyset(nimads_studyset, nimads_annotation)"
]
},
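{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check, we can inspect the loaded Studyset before converting it.\nThis sketch assumes the NiMADS ``Studyset`` object exposes a ``studies`` list,\nas in recent NiMARE releases.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Inspect the loaded Studyset.\n# Assumption: `Studyset` exposes a `studies` attribute (a list of Study objects).\nprint(studyset)\nprint(f\"Number of studies: {len(studyset.studies)}\")"
]
},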
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Convert to NiMARE Dataset\nConvert the NiMADS Studyset object to a NiMARE Dataset object.\nThen you can run NiMARE analyses on the Dataset object.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"nimare_dset = studyset.to_dataset()\nnimare_dset.coordinates.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Directly to NiMARE Dataset\nAlternatively, you can convert the NiMADS json files directly to a NiMARE Dataset object\nif you wish to skip using the nimads studyset object directly.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"nimare_dset_2 = convert_nimads_to_dataset(nimads_studyset, nimads_annotation)\nnimare_dset_2.coordinates.head()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

01_datasets/02_download_neurosynth.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Neurosynth and NeuroQuery\n\nNeurosynth and NeuroQuery are the two largest publicly-available coordinate-based databases.\nNiMARE includes functions for downloading releases of each database and converting the databases\nto NiMARE Datasets.\n\nIn this example, we download and convert the Neurosynth and NeuroQuery databases for analysis with\nNiMARE.\n\n
Warning
In August 2021, the Neurosynth database was reorganized according to a new file format.\n As such, the ``fetch_neurosynth`` function for NiMARE versions before 0.0.10 will not work\n with its default parameters.\n In order to download the Neurosynth database in its older format using NiMARE <= 0.0.9,\n do the following::\n\n nimare.extract.fetch_neurosynth(\n url=(\n \"https://github.com/neurosynth/neurosynth-data/blob/\"\n \"e8f27c4a9a44dbfbc0750366166ad2ba34ac72d6/current_data.tar.gz?raw=true\"\n ),\n )
\n\nFor information about where these files will be downloaded to on your machine,\nsee :doc:`../../fetching`.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Start with the necessary imports\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\nfrom pprint import pprint\n\nfrom nimare.extract import download_abstracts, fetch_neuroquery, fetch_neurosynth\nfrom nimare.io import convert_neurosynth_to_dataset\n\n# biopython is unnecessary here, but is required by download_abstracts.\n# We import it here only to document the dependency and cause an early failure if it's missing.\nimport Bio # pip install biopython"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download Neurosynth\nNeurosynth's data files are stored at https://github.com/neurosynth/neurosynth-data.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"out_dir = os.path.abspath(\"../example_data/\")\nos.makedirs(out_dir, exist_ok=True)\n\nfiles = fetch_neurosynth(\n data_dir=out_dir,\n version=\"7\",\n overwrite=False,\n source=\"abstract\",\n vocab=\"terms\",\n)\n# Note that the files are saved to a new folder within \"out_dir\" named \"neurosynth\".\npprint(files)\nneurosynth_db = files[0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Convert Neurosynth database to NiMARE dataset file\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"neurosynth_dset = convert_neurosynth_to_dataset(\n coordinates_file=neurosynth_db[\"coordinates\"],\n metadata_file=neurosynth_db[\"metadata\"],\n annotations_files=neurosynth_db[\"features\"],\n)\nneurosynth_dset.save(os.path.join(out_dir, \"neurosynth_dataset.pkl.gz\"))\nprint(neurosynth_dset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Add article abstracts to dataset\nThis is only possible because Neurosynth uses PMIDs as study IDs.\n\nMake sure you replace the example email address with your own.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"neurosynth_dset = download_abstracts(neurosynth_dset, \"example@example.edu\")\nneurosynth_dset.save(os.path.join(out_dir, \"neurosynth_dataset_with_abstracts.pkl.gz\"))"
]
},
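{
"cell_type": "markdown",
"metadata": {},
"source": [
"The downloaded abstracts are stored in the Dataset's ``texts`` attribute.\nAs a sketch (assuming ``download_abstracts`` stores them in an ``abstract`` column),\nwe can peek at them:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Abstracts appear as a text type (column) in the `texts` DataFrame\nneurosynth_dset.texts[[\"id\", \"abstract\"]].head()"
]
},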
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Do the same with NeuroQuery\nNeuroQuery's data files are stored at https://github.com/neuroquery/neuroquery_data.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"files = fetch_neuroquery(\n data_dir=out_dir,\n version=\"1\",\n overwrite=False,\n source=\"combined\",\n vocab=\"neuroquery6308\",\n type=\"tfidf\",\n)\n# Note that the files are saved to a new folder within \"out_dir\" named \"neuroquery\".\npprint(files)\nneuroquery_db = files[0]\n\n# Note that the conversion function says \"neurosynth\".\n# This is just for backwards compatibility.\nneuroquery_dset = convert_neurosynth_to_dataset(\n coordinates_file=neuroquery_db[\"coordinates\"],\n metadata_file=neuroquery_db[\"metadata\"],\n annotations_files=neuroquery_db[\"features\"],\n)\nneuroquery_dset.save(os.path.join(out_dir, \"neuroquery_dataset.pkl.gz\"))\nprint(neuroquery_dset)\n\n# NeuroQuery also uses PMIDs as study IDs.\nneuroquery_dset = download_abstracts(neuroquery_dset, \"example@example.edu\")\nneuroquery_dset.save(os.path.join(out_dir, \"neuroquery_dataset_with_abstracts.pkl.gz\"))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

01_datasets/04_transform_images_to_coordinates.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Transform images into coordinates\n\nCreate a dataset with coordinates derived from peak statistic identification in images.\n\nWhy would you want to do this?\n\n* Compare CBMA and IBMA\n* Add more studies to your existing CBMA dataset\n* Normalize how coordinates were derived (provided the image data is available)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\n\nimport matplotlib.pyplot as plt\nfrom nilearn.plotting import plot_stat_map\n\nfrom nimare.dataset import Dataset\nfrom nimare.extract import download_nidm_pain\nfrom nimare.meta.cbma import ALE\nfrom nimare.transforms import ImagesToCoordinates, ImageTransformer\nfrom nimare.utils import get_resource_path"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download data\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset_dir = download_nidm_pain()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Dataset\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset_file = os.path.join(get_resource_path(), \"nidm_pain_dset.json\")\ndset = Dataset(dset_file)\ndset.update_path(dset_dir)\n\n# ImagesToCoordinates uses z or p statistical maps\nz_transformer = ImageTransformer(target=\"z\")\ndset = z_transformer.transform(dset)\n\nstudy_no_images = \"pain_02.nidm-1\"\n# delete images for study\ndset.images = dset.images.query(f\"id != '{study_no_images}'\")\n\nstudy_no_coordinates = \"pain_03.nidm-1\"\n\n# delete coordinates for study\ndset.coordinates = dset.coordinates.query(f\"id != '{study_no_coordinates}'\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Inspect Dataset\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# There is only one study contrast with coordinates, but no images\nprint(f\"studies with only coordinates: {set(dset.coordinates['id']) - set(dset.images['id'])}\\n\")\n\nprint(f\"studies with only images: {set(dset.images['id']) - set(dset.coordinates['id'])}\\n\")\n\n# the images dataframe has z maps as one of the columns\nprint(f\"columns in images dataframe: {dset.images.columns}\\n\")\n\n# there is no z_stat column in the coordinates dataframe\nprint(f\"columns in coordinates dataframe: {dset.coordinates.columns}\\n\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use different strategies to overwrite existing coordinate data\nThere are three choices for how to treat existing coordinate\ndata in the dataset which are named: 'fill', 'replace', and 'demolish'.\n\n* 'fill' will only create coordinates for study contrasts with images, but\n no coordinates. With 'fill' you trust and want to keep all\n existing coordinate data and the transformer will help \"fill\" in\n the blanks for study contrasts with no coordinates\n* 'replace' will create coordinates for study contrasts with images.\n In addition to filling in the blanks, 'replace' will overwrite existing\n coordinate data if images are available.\n However, if image data is not available and only coordinates exist\n for a particular study contrast, those coordinates will be retained\n in the resulting dataset.\n With 'replace', you prefer to have coordinates generated consistently\n by NiMARE, but you will keep other coordinate data if that particular\n study contrast does not have images.\n* 'demolish' will create coordinates for study contrasts with images\n and remove any coordinates from the dataset it cannot overwrite.\n With 'demolish', you only trust coordinates generated by NiMARE.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# create coordinates from statistical maps\ncoord_fill = ImagesToCoordinates(merge_strategy=\"fill\")\ncoord_replace = ImagesToCoordinates(merge_strategy=\"replace\")\ncoord_demolish = ImagesToCoordinates(merge_strategy=\"demolish\")\n\ndset_fill = coord_fill.transform(dset)\ndset_replace = coord_replace.transform(dset)\ndset_demolish = coord_demolish.transform(dset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Inspect generated datasets\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"example_study = \"pain_01.nidm-1\"\n\nprint(f\"no coordinate data for {study_no_coordinates}\")\nassert study_no_coordinates not in dset.coordinates[\"id\"]\n\n# 'fill' will add coordinates for study without coordinates\nprint(f\"'fill' strategy for study {study_no_coordinates}\")\nprint(dset_fill.coordinates.query(f\"id == '{study_no_coordinates}'\"))\nprint(\"\\n\\n\")\n\n\n# 'replace' will change the data for studies with images\nprint(f\"original data for study {example_study}\")\nprint(dset.coordinates.query(f\"id == '{example_study}'\"))\nprint(f\"'replace' strategy for study {example_study}\")\nprint(dset_replace.coordinates.query(f\"id == '{example_study}'\"))\n\n# 'demolish' will remove studies that do not have images\nprint(f\"'demolish' strategy for study {study_no_images}\")\nassert study_no_images not in dset.coordinates[\"id\"]\n\n# while studies with only coordinates (no images) are in 'replace',\n# they are removed from 'demolish'.\nprint(\n \"studies in 'replace', but not 'demolish': \"\n f\"{set(dset_replace.coordinates['id']) - set(dset_demolish.coordinates['id'])}\"\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ALE (CBMA)\nRun a meta analysis using each of the strategies.\nThe biggest difference is between 'fill' and the other two strategies.\nThe difference is because in 'fill' most of the original coordinates\nin the dataset are used, whereas with 'replace' and 'demolish' the\nmajority/all of the coordinates are generated by NiMARE.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"ale = ALE()\nres_fill = ale.fit(dset_fill)\nres_replace = ale.fit(dset_replace)\nres_demolish = ale.fit(dset_demolish)\nfig, axs = plt.subplots(3, figsize=(6, 8))\nfor ax, strat, res in zip(\n axs, [\"fill\", \"replace\", \"demolist\"], [res_fill, res_replace, res_demolish]\n):\n plot_stat_map(\n res.get_map(\"z\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n axes=ax,\n title=f\"'{strat}' strategy\",\n )\n\nfig.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tracking positive and negative z-scores\nThere is a new column in the transformed coordinates, ``z_stat``.\nThis column contains the z-score of the individual peak.\nCurrently, no CBMA algorithm implemented in NiMARE takes advantage\nof z-scores, but we can still take advantage of whether the peak was positive\nor negative by running a CBMA on positive and negative z-scores separately,\ntesting the convergence of positive and negative z-scores separately.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"coord_two_sided = ImagesToCoordinates(merge_strategy=\"demolish\", two_sided=True)\n\ndset_two_sided = coord_two_sided.transform(dset)\n\ndset_positive = dset_two_sided.copy()\ndset_negative = dset_two_sided.copy()\ndset_positive.coordinates = dset_two_sided.coordinates.query(\"z_stat >= 0.0\")\ndset_negative.coordinates = dset_two_sided.coordinates.query(\"z_stat < 0.0\")\n\n# plot the results\nale = ALE()\nres_positive = ale.fit(dset_positive)\nres_negative = ale.fit(dset_negative)\nfig, axs = plt.subplots(2, figsize=(6, 6))\nfor ax, sign, res in zip(axs, [\"positive\", \"negative\"], [res_positive, res_negative]):\n plot_stat_map(\n res.get_map(\"z\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n axes=ax,\n title=f\"'{sign}' z-scores\",\n )\n\nfig.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

01_datasets/03_plot_neurovault_io.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Use NeuroVault statistical maps in NiMARE\n\nDownload statistical maps from NeuroVault, then use them in a meta-analysis,\nwith NiMARE.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\nfrom nilearn.plotting import plot_stat_map"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Neurovault + NiMARE: Load freely shared statistical maps for Meta-Analysis\n[Neurovault](https://neurovault.org/) is an online platform that hosts\nunthresholded statistical maps, including group statistical maps.\nNiMARE can read these statistical maps when given a list of collection_ids.\nI search \"working memory\" on neurovault, and find these relevant collections:\n\n* [2884](https://neurovault.org/collections/2884/)\n* [2621](https://neurovault.org/collections/2621/)\n* [3085](https://neurovault.org/collections/3085/)\n* [5623](https://neurovault.org/collections/5623/)\n* [3264](https://neurovault.org/collections/3264/)\n* [3192](https://neurovault.org/collections/3192/)\n* [457](https://neurovault.org/collections/457/)\n\nI can load specific statistical maps from these collections\ninto a NiMARE dataset:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.io import convert_neurovault_to_dataset\n\n# The specific collections I would like to download group level\n# statistical maps from\ncollection_ids = (2884, 2621, 3085, 5623, 3264, 3192, 457)\n\n# A mapping between what I want the contrast(s) to be\n# named in the dataset and what their respective group\n# statistical maps are named on neurovault\ncontrasts = {\n \"working_memory\": (\n \"Working memory load of 2 faces versus 1 face - NT2_Tstat|\"\n \"t-value contrast 2-back minus 0-back|\"\n \"Searchlight multivariate Decoding 2: visual working memory|\"\n \"Context-dependent group-specific WM information|\"\n \"WM working memory zstat1|\"\n \"WM task over CRT task map|\"\n \"tfMRI WM 2BK PLACE zstat1\"\n )\n}\n\n# Convert how the statistical maps on neurovault are represented\n# in a NiMARE dataset.\nmap_type_conversion = {\"Z map\": \"z\", \"T map\": \"t\"}\n\ndset = convert_neurovault_to_dataset(\n collection_ids,\n contrasts,\n img_dir=None,\n map_type_conversion=map_type_conversion,\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conversion of Statistical Maps\nSome of the statistical maps are T statistics and others are Z statistics.\nTo perform a Fisher's meta analysis, we need all Z maps.\nThoughtfully, NiMARE has a class named ``ImageTransformer`` that will\nhelp us.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.transforms import ImageTransformer\n\n# Not all studies have Z maps!\ndset.images[[\"z\"]]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"z_transformer = ImageTransformer(target=\"z\")\ndset = z_transformer.transform(dset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"All studies now have Z maps!\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset.images[[\"z\"]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run a Meta-Analysis\nWith the missing Z maps filled in, we can run a Meta-Analysis\nand plot our results\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.ibma import Fishers\n\n# The default template has a slightly different, but completely compatible,\n# affine than the NeuroVault images, so we allow the Estimator to resample\n# images during the fitting process.\nmeta = Fishers(resample=True)\n\nmeta_res = meta.fit(dset)\n\nfig, ax = plt.subplots()\ndisplay = plot_stat_map(meta_res.get_map(\"z\"), threshold=3.3, axes=ax, figure=fig)\nfig.show()\n# The result may look questionable, but this code provides\n# a template on how to use neurovault in your meta analysis."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

01_datasets/01_plot_dataset_io.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# The NiMARE Dataset object\n\nThis is a brief walkthrough of the :class:`~nimare.dataset.Dataset` class and its methods.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Start with the necessary imports\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\n\nfrom nimare.dataset import Dataset\nfrom nimare.extract import download_nidm_pain\nfrom nimare.transforms import ImageTransformer\nfrom nimare.utils import get_resource_path"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Datasets are stored as json or pkl[.gz] files\nJson files are used to create Datasets, while generated Datasets are saved\nto, and loaded from, pkl[.gz] files.\nWe use jsons because they are easy to edit, and thus build by hand, if\nnecessary.\nWe then store the generated Datasets as pkl.gz files because an initialized\nDataset is no longer a dictionary.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Let's start by downloading a dataset\ndset_dir = download_nidm_pain()\n\n# Now we can load and save the Dataset object\ndset_file = os.path.join(get_resource_path(), \"nidm_pain_dset.json\")\ndset = Dataset(dset_file, target=\"mni152_2mm\", mask=None)\ndset.save(\"pain_dset.pkl\")\ndset = Dataset.load(\"pain_dset.pkl\")\nos.remove(\"pain_dset.pkl\") # cleanup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Much of the data in Datasets is stored as DataFrames\nThe five DataFrames in Dataset are \"coordinates\" (reported peaks),\n\"images\" (statistical maps), \"metadata\", \"texts\", and \"annotations\" (labels).\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ``Dataset.annotations`` contains labels describing studies\nColumns include the standard identifiers and any labels.\nThe labels may be grouped together based on label source, in which case they\nshould be prefixed with some string followed by two underscores.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset.annotations.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ``Dataset.coordinates`` contains reported peaks\nColumns include the standard identifiers, as well as mm coordinates (x, y, z)\nand voxel indices (i, j, k) specific to the Dataset's masker's space.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset.coordinates.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ``Dataset.images`` contains images from studies\nColumns include the standard identifiers, as well as paths to images grouped\nby image type (e.g., z, beta, t).\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Here we'll only show a subset of these image types to fit in the window.\ncolumns_to_show = [\"id\", \"study_id\", \"contrast_id\", \"beta__relative\", \"z__relative\"]\ndset.images[columns_to_show].head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ``Dataset.metadata`` contains metadata describing studies\nColumns include the standard identifiers, as well as one column for each\nmetadata field.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset.metadata.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ``Dataset.texts`` contains texts associated with studies\nColumns include the standard identifiers, as well as one for each text type.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset.texts.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## There are a handful of other important Dataset attributes\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"``Dataset.ids`` contains study identifiers\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset.ids"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"``Dataset.masker`` is a nilearn Masker object\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset.masker"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"``Dataset.space`` is a string\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(f\"Template space: {dset.space}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Statistical images are not stored internally\nImages are not stored within the Dataset.\nInstead, relative paths to image files are retained in the Dataset.images\nattribute.\nWhen loading a Dataset, you will likely need to specify the path to the images.\nTo do this, you can use :func:`~nimare.dataset.Dataset.update_path`.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset.update_path(dset_dir)\ncolumns_to_show = [\"id\", \"study_id\", \"contrast_id\", \"beta\", \"beta__relative\"]\ndset.images[columns_to_show].head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Images can also be calculated based on available files\nWhen some images are available, but others are not, sometimes required images\ncan be calculated from the available ones.\n\nFor example, ``varcope = t / beta``, so if you have t-statistic images and\nbeta images, you can also calculate varcope (variance) images.\n\nWe use the :mod:`~nimare.transforms` module to perform these transformations\n(especially :class:`~nimare.transforms.ImageTransformer`)\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"varcope_transformer = ImageTransformer(target=\"varcope\")\ndset = varcope_transformer.transform(dset)\ndset.images[[\"id\", \"varcope\"]].head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Datasets support many search methods\nThere are ``get_[X]`` and ``get_studies_by_[X]`` methods for a range of\npossible search criteria.\nThe ``get_[X]`` methods allow you to search for specific metadata, while the\n``get_studies_by_[X]`` methods let you search for study identifiers within\nthe Dataset matching criteria.\n\nNote that the ``get_[X]`` methods return a value for every study in the Dataset\nby default, and for every requested study if the ``ids`` argument is provided.\nIf a study does not have the data requested, the returned list will have\n``None`` for that study.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"z_images = dset.get_images(imtype=\"z\")\nz_images = [str(z) for z in z_images]\nprint(\"\\n\".join(z_images))"
]
},
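{
"cell_type": "markdown",
"metadata": {},
"source": [
"The other ``get_[X]`` methods follow the same pattern.\nAs a sketch (assuming the standard ``get_metadata`` and ``get_labels`` signatures),\nwe can request per-study metadata fields and annotation labels:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Sample sizes for each study (None where the field is missing)\nsample_sizes = dset.get_metadata(field=\"sample_sizes\")\nprint(sample_sizes)\n\n# All labels present in the Dataset's annotations\nprint(dset.get_labels())"
]
},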
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Let's try to fill in missing z images\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"z_transformer = ImageTransformer(target=\"z\")\ndset = z_transformer.transform(dset)\nz_images = dset.get_images(imtype=\"z\")\nz_images = [str(z) for z in z_images]\nprint(\"\\n\".join(z_images))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Datasets can also search for studies matching criteria\n``get_studies_by_[X]`` methods return a list of study identifiers matching\nthe criteria, such as reporting a peak coordinate near a search coordinate.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"sel_studies = dset.get_studies_by_coordinate(xyz=[[0, 0, 0]], r=20)\nprint(\"\\n\".join(sel_studies))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Datasets are meant to be mostly immutable\nWhile some elements of Datasets are designed to be changeable, like the paths\nto image files, most elements are not.\nNiMARE Estimators operate on Datasets and return *new*, updated Datasets.\nIf you want to reduce a Dataset based on a subset of the studies in the\nDataset, you need to use :meth:`~nimare.dataset.Dataset.slice`.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"sub_dset = dset.slice(ids=sel_studies)\nprint(\"\\n\".join(sub_dset.ids))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

02_meta-analyses/09_plot_simulated_data.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Simulate data for coordinate based meta-analysis\n\nSimulating data before you run your meta-analysis is a great way to test your assumptions and see\nhow the meta-analysis would perform with simplified data\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\nfrom nilearn.plotting import plot_stat_map\n\nfrom nimare.correct import FDRCorrector\nfrom nimare.generate import create_coordinate_dataset\nfrom nimare.meta import ALE"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create function to perform a meta-analysis and plot results\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def analyze_and_plot(dset, ground_truth_foci=None, correct=True, return_cres=False):\n meta = ALE(kernel__fwhm=10)\n results = meta.fit(dset)\n if correct:\n corr = FDRCorrector()\n cres = corr.transform(results)\n else:\n cres = results\n\n # get the z coordinates\n if ground_truth_foci:\n stat_map_kwargs = {\"cut_coords\": [c[2] for c in ground_truth_foci]}\n else:\n stat_map_kwargs = {}\n\n fig, ax = plt.subplots()\n display = plot_stat_map(\n cres.get_map(\"z\"),\n display_mode=\"z\",\n draw_cross=False,\n cmap=\"Purples\",\n threshold=2.3,\n symmetric_cbar=False,\n figure=fig,\n axes=ax,\n **stat_map_kwargs,\n )\n\n if ground_truth_foci:\n # place red dots indicating the ground truth foci\n display.add_markers(ground_truth_foci)\n\n if return_cres:\n return fig, cres\n\n return fig"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Dataset\nIn this example, each of the 30 generated fake studies\nselect 4 coordinates from a probability map representing the probability\nthat particular coordinate will be chosen.\nThere are 4 \"hot\" spots centered on 3D gaussian distributions,\nmeaning each study will likely select 4 foci that are close\nto those hot spots, but there is still random jittering.\nEach study has a ``sample_size`` sampled from a uniform distribution from 20 to 40.\nso some studies may have fewer than 30 participants and some\nmore.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"ground_truth_foci, dset = create_coordinate_dataset(foci=4, sample_size=(20, 40), n_studies=30)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Analyze and plot simple dataset\nThe red dots in this plot and subsequent plots represent the\nsimulated ground truth foci, and the clouds represent the statistical\nmaps of the simulated data.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"analyze_and_plot(dset, ground_truth_foci)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Fine-tune dataset creation\nPerhaps you want more control over the studies being generated.\nyou can set:\n\n - the specific peak coordinates (i.e., ``foci``)\n - the percentage of studies that contain the foci of interest (``foci_percentage``)\n - how tightly the study specific foci are selected around the ground truth (i.e., ``fwhm``)\n - the sample size for each study (i.e., ``sample_size``)\n - the number of noise foci in each study (i.e., ``n_noise_foci``)\n - the number of studies (i.e., ``n_studies``)\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"foci = [(0, 0, 0)]\nfoci_percentage = 1.0\nfwhm = 10.0\nn_studies = 30\nsample_sizes = [30] * n_studies\nsample_sizes[0] = 300\nn_noise_foci = 10\n\n_, manual_dset = create_coordinate_dataset(\n foci=foci, fwhm=fwhm, sample_size=sample_sizes, n_studies=n_studies, n_noise_foci=n_noise_foci\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Analyze and plot manual dataset\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"fig = analyze_and_plot(manual_dset, ground_truth_foci)\nfig.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Control percentage of studies with the foci of interest\nOften times a converging peak is not found in all studies within\nthe meta-analysis, but only a portion.\nWe can select a percentage of studies where a coordinate\nis selected around the ground truth foci.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"_, perc_foci_dset = create_coordinate_dataset(\n foci=ground_truth_foci[0:2], foci_percentage=\"50%\", fwhm=10.0, sample_size=30, n_studies=30\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Analyze and plot the 50% foci dataset\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"fig = analyze_and_plot(perc_foci_dset, ground_truth_foci[0:2])\nfig.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a null dataset\nPerhaps you are interested in the number of false positives your favorite\nmeta-analysis algorithm typically gives.\nAt an alpha of 0.05 we would expect no more than 5% of results to be false positives.\nTo test this, we can create a dataset with no foci that converge, but have many\ndistributed foci.\n\n"
]
},
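{
"cell_type": "markdown",
"metadata": {},
"source": [
"This expectation is easy to check with a quick simulation, independent of NiMARE.\nUnder the null hypothesis, p-values are uniformly distributed on [0, 1], so\nroughly 5% of them fall below an alpha of 0.05. This is only an illustrative\nsketch, not part of the NiMARE API.\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(0)\n# Stand-in for per-voxel p-values under the null: uniform on [0, 1].\np_values = rng.uniform(0, 1, size=100_000)\nfalse_positive_rate = (p_values < 0.05).mean()\nprint(f'Empirical false positive rate: {false_positive_rate:.3f}')\n```\n"
]
},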
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"_, no_foci_dset = create_coordinate_dataset(\n foci=0, sample_size=(20, 30), n_studies=30, n_noise_foci=100\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Analyze and plot no foci dataset\nWhen not performing a multiple comparisons correction,\nthere is a false positive rate of approximately 5%.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"fig, cres = analyze_and_plot(no_foci_dset, correct=False, return_cres=True)\nfig.show()\n\np_values = cres.get_map(\"p\", return_type=\"array\")\n# what percentage of voxels are not significant?\nnon_significant_percent = ((p_values > 0.05).sum() / p_values.size) * 100\nprint(f\"{non_significant_percent}% of voxels are not significant\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}PK NQXT" , 02_meta-analyses/10_plot_cbma_workflow.ipynb{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Run a coordinate-based meta-analysis (CBMA) workflow\n\nNiMARE provides a plethora of tools for performing meta-analyses on neuroimaging data.\nSometimes it's difficult to know where to start, especially if you're new to meta-analysis.\nThis tutorial will walk you through using a CBMA workflow function which puts together\nthe fundamental steps of a CBMA meta-analysis.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\nfrom pathlib import Path\n\nimport matplotlib.pyplot as plt\nfrom nilearn.plotting import plot_stat_map\n\nfrom nimare.dataset import Dataset\nfrom nimare.reports.base import run_reports\nfrom nimare.utils import get_resource_path\nfrom nimare.workflows.cbma import CBMAWorkflow"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Dataset\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset_file = os.path.join(get_resource_path(), \"nidm_pain_dset.json\")\ndset = Dataset(dset_file)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run CBMA Workflow\nThe fit method of a CBMA workflow class runs the following steps:\n\n1. Runs a meta-analysis using the specified method (default: ALE)\n2. Applies a corrector to the meta-analysis results (default: FWECorrector, montecarlo)\n3. Generates cluster tables and runs diagnostics on the corrected results (default: Jackknife)\n\nAll in one call!\n\nresult = CBMAWorkflow().fit(dset)\n\nFor this example, we use an FDR correction because the default corrector (FWE correction with\nMonte Carlo simulation) takes a long time to run due to the high number of iterations that\nare required\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"workflow = CBMAWorkflow(corrector=\"fdr\")\nresult = workflow.fit(dset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plot Results\nThe fit method of the CBMA workflow class returns a :class:`~nimare.results.MetaResult` object,\nwhere you can access the corrected results of the meta-analysis and diagnostics tables.\n\nCorrected map:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"img = result.get_map(\"z_corr-FDR_method-indep\")\nplot_stat_map(\n img,\n cut_coords=4,\n display_mode=\"z\",\n threshold=1.65, # voxel_thresh p < .05, one-tailed\n cmap=\"RdBu_r\",\n vmax=4,\n)\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Clusters table\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"result.tables[\"z_corr-FDR_method-indep_tab-clust\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Contribution table\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"result.tables[\"z_corr-FDR_method-indep_diag-Jackknife_tab-counts_tail-positive\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Report\nFinally, a NiMARE report is generated from the MetaResult.\nroot_dir = Path(os.getcwd()).parents[1] / \"docs\" / \"_build\"\nUse the previous root to run the documentation locally.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"root_dir = Path(os.getcwd()).parents[1] / \"_readthedocs\"\nhtml_dir = root_dir / \"html\" / \"auto_examples\" / \"02_meta-analyses\" / \"10_plot_cbma_workflow\"\nhtml_dir.mkdir(parents=True, exist_ok=True)\n\nrun_reports(result, html_dir)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}PK QXx 0 0 # 02_meta-analyses/01_plot_cbma.ipynb{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Coordinate-based meta-analysis algorithms\n\nA tour of CBMA algorithms in NiMARE.\n\nThis tutorial is intended to provide a brief description and example of each of\nthe CBMA algorithms implemented in NiMARE.\nFor a more detailed introduction to the elements of a coordinate-based\nmeta-analysis, see other stuff.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Dataset\nNote
The data used in this example come from a collection of NIDM-Results packs\n downloaded from Neurovault collection 1425, uploaded by Dr. Camille Maumet.\n\n Creation of the Dataset from the NIDM-Results packs was done with custom\n code. The Results packs for collection 1425 are not completely\n NIDM-Results-compliant, so the nidmresults library could not be used to\n facilitate data extraction.
\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\nfrom pprint import pprint\n\nfrom nilearn.plotting import plot_stat_map\n\nfrom nimare.correct import FWECorrector\nfrom nimare.dataset import Dataset\nfrom nimare.utils import get_resource_path\n\ndset_file = os.path.join(get_resource_path(), \"nidm_pain_dset.json\")\ndset = Dataset(dset_file)\n\n# Some of the CBMA algorithms compare two Datasets,\n# so we'll split this example Dataset in half.\ndset1 = dset.slice(dset.ids[:10])\ndset2 = dset.slice(dset.ids[10:])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Multilevel Kernel Density Analysis\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.cbma.mkda import MKDADensity\n\nmeta = MKDADensity()\nresults = meta.fit(dset)\n\ncorr = FWECorrector(method=\"montecarlo\", n_iters=10, n_cores=1)\ncres = corr.transform(results)\n\nplot_stat_map(\n results.get_map(\"z\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n threshold=0.1,\n)\nplot_stat_map(\n cres.get_map(\"z_level-voxel_corr-FWE_method-montecarlo\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n threshold=0.1,\n)\n\nprint(\"Description:\")\npprint(results.description_)\nprint(\"References:\")\npprint(results.bibtex_)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## MKDA Chi-Squared\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.cbma.mkda import MKDAChi2\n\nmeta = MKDAChi2(kernel__r=10)\nresults = meta.fit(dset1, dset2)\n\ncorr = FWECorrector(method=\"montecarlo\", n_iters=10, n_cores=1)\ncres = corr.transform(results)\n\nplot_stat_map(\n results.get_map(\"z_desc-uniformity\"),\n draw_cross=False,\n cmap=\"RdBu_r\",\n threshold=0.1,\n)\nplot_stat_map(\n cres.get_map(\"z_desc-uniformitySize_level-cluster_corr-FWE_method-montecarlo\"),\n draw_cross=False,\n cmap=\"RdBu_r\",\n threshold=0.1,\n)\n\nprint(\"Description:\")\npprint(results.description_)\nprint(\"References:\")\npprint(results.bibtex_)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Kernel Density Analysis\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.cbma.mkda import KDA\n\nmeta = KDA()\nresults = meta.fit(dset)\n\ncorr = FWECorrector(method=\"montecarlo\", n_iters=10, n_cores=1)\ncres = corr.transform(results)\n\nplot_stat_map(\n results.get_map(\"z\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n threshold=0.1,\n)\nplot_stat_map(\n cres.get_map(\"z_desc-size_level-cluster_corr-FWE_method-montecarlo\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n threshold=0.1,\n)\n\nprint(\"Description:\")\npprint(results.description_)\nprint(\"References:\")\npprint(results.bibtex_)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Activation Likelihood Estimation\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.cbma.ale import ALE\n\nmeta = ALE()\nresults = meta.fit(dset)\n\ncorr = FWECorrector(method=\"montecarlo\", n_iters=10, n_cores=1)\ncres = corr.transform(results)\n\nplot_stat_map(\n results.get_map(\"z\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n threshold=0.1,\n)\nplot_stat_map(\n cres.get_map(\"z_desc-size_level-cluster_corr-FWE_method-montecarlo\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n threshold=0.1,\n)\n\nprint(\"Description:\")\npprint(results.description_)\nprint(\"References:\")\npprint(results.bibtex_)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Specific Co-Activation Likelihood Estimation\n\n.. important::\n\n The SCALE algorithm is very memory intensive, so we don't run it within the\n documentation.\n\n```python\nimport numpy as np\n\nfrom nimare.meta.cbma.ale import SCALE\nfrom nimare.utils import vox2mm\n\nxyz = vox2mm(\n np.vstack(np.where(dset.masker.mask_img.get_fdata())).T,\n dset.masker.mask_img.affine,\n)\n\nmeta = SCALE(xyz=xyz, n_iters=10)\nresults = meta.fit(dset)\n```\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ALE-Based Subtraction Analysis\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.cbma.ale import ALESubtraction\n\nmeta = ALESubtraction(n_iters=10, n_cores=1)\nresults = meta.fit(dset1, dset2)\n\nplot_stat_map(\n results.get_map(\"z_desc-group1MinusGroup2\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n threshold=0.1,\n)\n\nprint(\"Description:\")\npprint(results.description_)\nprint(\"References:\")\npprint(results.bibtex_)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}PK {QX< ) 02_meta-analyses/05_plot_correctors.ipynb{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# The Corrector class\n\nHere we take a look at multiple comparisons correction in meta-analyses.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from pprint import pprint\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom nilearn.plotting import plot_stat_map"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download data\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.extract import download_nidm_pain\n\ndset_dir = download_nidm_pain()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Dataset\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\n\nfrom nimare.dataset import Dataset\nfrom nimare.utils import get_resource_path\n\ndset_file = os.path.join(get_resource_path(), \"nidm_pain_dset.json\")\ndset = Dataset(dset_file)\ndset.update_path(dset_dir)\n\nmask_img = dset.masker.mask_img"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n## Multiple comparisons correction in coordinate-based meta-analyses\n.. tip::\n For more information multiple comparisons correction and CBMA in NiMARE,\n see `multiple comparisons correction`.\n\n"
]
},
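{
"cell_type": "markdown",
"metadata": {},
"source": [
"The \"indep\" FDR method used later in this example corresponds to the\nBenjamini-Hochberg step-up procedure. As a minimal sketch of that procedure\n(an illustration only, not NiMARE's implementation):\n\n```python\nimport numpy as np\n\n\ndef bh_fdr_reject(p_values, alpha=0.05):\n    \"\"\"Benjamini-Hochberg step-up: reject the k smallest p-values, where k is\n    the largest rank with p_(k) <= (k / m) * alpha.\n    \"\"\"\n    p = np.asarray(p_values)\n    order = np.argsort(p)\n    m = p.size\n    below = p[order] <= (np.arange(1, m + 1) / m) * alpha\n    reject = np.zeros(m, dtype=bool)\n    if below.any():\n        k = np.nonzero(below)[0].max()\n        reject[order[: k + 1]] = True\n    return reject\n\n\np_vals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9])\nprint(bh_fdr_reject(p_vals))  # rejects only the two smallest p-values\n```\n"
]
},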
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.cbma.ale import ALE\n\n# First, we need to fit the Estimator to the Dataset.\nmeta = ALE(null_method=\"approximate\")\nresults = meta.fit(dset)\n\n# We can check which FWE correction methods are available for the ALE Estimator\n# with the ``inspect`` class method.\nfrom nimare.correct import FWECorrector\n\nprint(FWECorrector.inspect(results))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Apply the Corrector to the MetaResult\nNow that we know what FWE correction methods are available, we can use one.\n\nThe \"montecarlo\" method is a special one that is implemented within the\nEstimator, rather than in the Corrector.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"corr = FWECorrector(method=\"montecarlo\", n_iters=50, n_cores=2)\ncres = corr.transform(results)\n\nDISTS_TO_PLOT = [\n \"values_desc-size_level-cluster_corr-fwe_method-montecarlo\",\n \"values_desc-mass_level-cluster_corr-fwe_method-montecarlo\",\n \"values_level-voxel_corr-fwe_method-montecarlo\",\n]\nXLABELS = [\n \"Maximum Cluster Size (Voxels)\",\n \"Maximum Cluster Mass\",\n \"Maximum Summary Statistic (ALE Value)\",\n]\n\nfig, axes = plt.subplots(figsize=(8, 8), nrows=3)\nnull_dists = cres.estimator.null_distributions_\n\nfor i_ax, dist_name in enumerate(DISTS_TO_PLOT):\n xlabel = XLABELS[i_ax]\n sns.histplot(x=null_dists[dist_name], bins=40, ax=axes[i_ax])\n axes[i_ax].set_title(dist_name)\n axes[i_ax].set_xlabel(xlabel)\n axes[i_ax].set_xlim(0, None)\n\nfig.tight_layout()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also look at the description of the Corrector.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(\"Description:\")\npprint(cres.description_)\nprint(\"References:\")\npprint(cres.bibtex_)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Show corrected results\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"MAPS_TO_PLOT = [\n \"z\",\n \"z_desc-size_level-cluster_corr-FWE_method-montecarlo\",\n \"z_desc-mass_level-cluster_corr-FWE_method-montecarlo\",\n \"z_level-voxel_corr-FWE_method-montecarlo\",\n]\nTITLES = [\n \"Uncorrected z-statistics\",\n \"Cluster-size FWE-corrected z-statistics\",\n \"Cluster-mass FWE-corrected z-statistics\",\n \"Voxel-level FWE-corrected z-statistics\",\n]\n\nfig, axes = plt.subplots(figsize=(8, 10), nrows=4)\n\nfor i_ax, map_name in enumerate(MAPS_TO_PLOT):\n title = TITLES[i_ax]\n plot_stat_map(\n cres.get_map(map_name),\n draw_cross=False,\n cmap=\"RdBu_r\",\n threshold=0.5,\n cut_coords=[0, 0, -8],\n figure=fig,\n axes=axes[i_ax],\n )\n axes[i_ax].set_title(title)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Multiple comparisons correction in image-based meta-analyses\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.correct import FDRCorrector\nfrom nimare.meta.ibma import Stouffers\n\nmeta = Stouffers(resample=True)\nresults = meta.fit(dset)\nprint(f\"FWECorrector options: {FWECorrector.inspect(results)}\")\nprint(f\"FDRCorrector options: {FDRCorrector.inspect(results)}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the FWECorrector does not support a \"montecarlo\" method for the\nStouffers Estimator.\nThis is because NiMARE does not have a Monte Carlo-based method implemented\nfor most IBMA algorithms.\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Apply the Corrector to the MetaResult\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"corr = FDRCorrector(method=\"indep\", alpha=0.05)\ncres = corr.transform(results)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Show corrected results\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"fig, axes = plt.subplots(figsize=(8, 6), nrows=2)\nplot_stat_map(\n cres.get_map(\"z\"),\n draw_cross=False,\n cmap=\"RdBu_r\",\n threshold=0.5,\n cut_coords=[0, 0, -8],\n figure=fig,\n axes=axes[0],\n)\naxes[0].set_title(\"Uncorrected z-statistics\")\nplot_stat_map(\n cres.get_map(\"z_corr-FDR_method-indep\"),\n draw_cross=False,\n cmap=\"RdBu_r\",\n threshold=0.5,\n cut_coords=[0, 0, -8],\n figure=fig,\n axes=axes[1],\n)\naxes[1].set_title(\"FDR-corrected z-statistics\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}PK QXd#+! ! 02_meta-analyses/07_macm.ipynb{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Meta-analytic coactivation modeling analysis\n\nPerform a MACM analysis with Neurosynth data.\n\nMeta-analytic coactivation modeling (MACM) is a common coordinate-based\nanalysis in which task-independent \"connectivity\" is assessed by selecting\nstudies within a larger database based on locations of report coordinates.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import nibabel as nib\nimport numpy as np\nfrom nilearn import datasets, image, plotting\n\nfrom nimare.correct import FWECorrector\nfrom nimare.dataset import Dataset\nfrom nimare.meta.cbma.ale import SCALE\nfrom nimare.meta.cbma.mkda import MKDAChi2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Dataset\nWe will assume that the Neurosynth database has already been downloaded and\nconverted to a NiMARE dataset.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset_file = \"neurosynth_nimare_with_abstracts.pkl.gz\"\ndset = Dataset.load(dset_file)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define a region of interest\nWe'll use the right amygdala from the Harvard-Oxford atlas\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"atlas = datasets.fetch_atlas_harvard_oxford(\"sub-maxprob-thr50-2mm\")\nimg = atlas[\"maps\"]\n\nroi_idx = atlas[\"labels\"].index(\"Right Amygdala\")\nimg_vals = np.unique(img.get_fdata())\nroi_val = img_vals[roi_idx]\nroi_img = image.math_img(f\"img1 == {roi_val}\", img1=img)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Select studies with a reported coordinate in the ROI\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"roi_ids = dset.get_studies_by_mask(roi_img)\ndset_sel = dset.slice(roi_ids)\nprint(f\"{len(roi_ids)}/{len(dset.ids)} studies report at least one coordinate in the ROI\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Select studies with *no* reported coordinates in the ROI\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"no_roi_ids = list(set(dset.ids).difference(roi_ids))\ndset_unsel = dset.slice(no_roi_ids)\nprint(f\"{len(no_roi_ids)}/{len(dset.ids)} studies report zero coordinates in the ROI\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## MKDA Chi2 with FWE correction\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"mkda = MKDAChi2(kernel__r=10)\nresults = mkda.fit(dset_sel, dset_unsel)\n\ncorr = FWECorrector(method=\"montecarlo\", n_iters=10000)\ncres = corr.transform(results)\n\n# We want the \"association\" map (2-way chi-square between sel and unsel)\nplotting.plot_stat_map(\n cres.get_map(\"z_desc-uniformity_level-voxel_corr-FWE_method-montecarlo\"),\n threshold=3.09,\n draw_cross=False,\n cmap=\"RdBu_r\",\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## SCALE\nAnother good option for a MACM analysis is the SCALE algorithm, which was\ndesigned specifically for MACM. Unfortunately, SCALE does not support\nmultiple-comparisons correction.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# First, we must define our null model of reported coordinates in the literature.\n# We will use the coordinates in Neurosynth\nxyz = dset.coordinates[[\"x\", \"y\", \"z\"]].values\nscale = SCALE(xyz=xyz, n_iters=10000, n_cores=1, kernel__n=20)\nresults = scale.fit(dset_sel)\nplotting.plot_stat_map(results.get_map(\"z\"), draw_cross=False, cmap=\"RdBu_r\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}PK QXoGGO O # 02_meta-analyses/11_plot_cbmr.ipynb{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Coordinate-based meta-regression algorithms\n\nA tour of Coordinate-based meta-regression (CBMR) algorithms in NiMARE\n\nCBMR is a generative framework to approximate smooth activation intensity function\nand investigate the effect of study-level moderators (e.g., year of pubilication,\nsample size, subtype of stimuli). CBMR considers three stochastic models (Poisson,\nNegative Binomial (NB) and Clustered NB) for modeling the random variation in foci,\nand allows flexible statistical inference for either spatial homogeneity tests or\ngroup comparison tests. It is a computationally efficient approach with\ngood statistical interpretability to model the locations of activation foci.\n\nThis tutorial is intended to provide a brief description and example of the CBMR\nalgorithm implemented in NiMARE.\n\nFor a more detailed introduction to the elements of a coordinate-based meta-regression,\nsee the\n[online course](https://www.coursera.org/lecture/functional-mri-2/module-3-meta-analysis-Vd4zz)\nor a [brief overview](https://libguides.princeton.edu/neuroimaging_meta).\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\nimport scipy\nfrom nilearn.plotting import plot_stat_map\n\nfrom nimare.generate import create_coordinate_dataset\nfrom nimare.meta import models\nfrom nimare.transforms import StandardizeField"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Dataset\nHere, we're going to simulate a dataset\n(using [nimare.generate.create_coordinate_dataset](https://nimare.readthedocs.io/en/latest/generated/nimare.generate.create_coordinate_dataset.html)\nthat includes 100 studies, each with 10 reported foci and sample size varying between\n20 and 40. We separate them into four groups according to diagnosis (schizophrenia or depression)\nand drug status (Yes or No). We also add two continuous study-level moderators (sample size and\naverage age) and a categorical study-level moderator (schizophrenia subtype).\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# data simulation\nground_truth_foci, dset = create_coordinate_dataset(foci=10, sample_size=(20, 40), n_studies=1000)\n# set up group columns: diagnosis & drug_status\nn_rows = dset.annotations.shape[0]\ndset.annotations[\"diagnosis\"] = [\n \"schizophrenia\" if i % 2 == 0 else \"depression\" for i in range(n_rows)\n]\ndset.annotations[\"drug_status\"] = [\"Yes\" if i % 2 == 0 else \"No\" for i in range(n_rows)]\ndset.annotations[\"drug_status\"] = (\n dset.annotations[\"drug_status\"].sample(frac=1).reset_index(drop=True)\n) # random shuffle drug_status column\n# set up continuous moderators: sample sizes & avg_age\ndset.annotations[\"sample_sizes\"] = [dset.metadata.sample_sizes[i][0] for i in range(n_rows)]\ndset.annotations[\"avg_age\"] = np.arange(n_rows)\n# set up categorical moderators: schizophrenia_subtype (as not enough data to be interpreted\n# as groups)\ndset.annotations[\"schizophrenia_subtype\"] = [\"type1\", \"type2\", \"type3\", \"type4\", \"type5\"] * int(\n n_rows / 5\n)\ndset.annotations[\"schizophrenia_subtype\"] = (\n dset.annotations[\"schizophrenia_subtype\"].sample(frac=1).reset_index(drop=True)\n) # random shuffle drug_status column"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimation of group-specific spatial intensity functions\nCBMR can generate estimation of group-specific spatial internsity\nfunctions for multiple groups simultaneously, with different group-specific\nspatial regression coefficients.\n\nCBMR can also consider the effects of study-level moderators\n(e.g. sample size, year of publication) by estimating regression coefficients\nof moderators (shared by all groups).\n\nNote that study-level moderators can only have global effects instead of localized\neffects within CBMR framework. In the scenario that there're multiple subgroups\nwithin a group (e.g., indexed as subgroup-1 to subgroup-n, but one or more of them\ndon't have enough number of studies to be inferred as a separate group). Using\ncategorical encoding, CBMR can interpret the subgroups as categorical moderators\nfor each study (either 0 or 1), and estimate the global activation intensity\nassociated with each subgroup (comparing to the average).\n\n"
]
},
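{
"cell_type": "markdown",
"metadata": {},
"source": [
"The categorical encoding described above can be illustrated with dummy (one-hot)\ncoding in pandas. This is a sketch of the general idea on hypothetical data,\nnot CBMR's internal encoding; dropping the reference level leaves one 0/1\nindicator column per remaining subtype.\n\n```python\nimport pandas as pd\n\n# Hypothetical study-level annotations with a categorical moderator.\nannotations = pd.DataFrame(\n    {'schizophrenia_subtype': ['type1', 'type2', 'type3', 'type1', 'type2']}\n)\n\n# Dummy-code the subtypes, treating 'type1' as the reference level.\ndummies = pd.get_dummies(annotations['schizophrenia_subtype'], prefix='subtype')\ndummies = dummies.drop(columns='subtype_type1')\nprint(dummies.columns.tolist())  # ['subtype_type2', 'subtype_type3']\n```\n"
]
},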
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.cbmr import CBMREstimator\n\ndset = StandardizeField(fields=[\"sample_sizes\", \"avg_age\"]).transform(dset)\n\ncbmr = CBMREstimator(\n group_categories=[\"diagnosis\", \"drug_status\"],\n moderators=[\n \"standardized_sample_sizes\",\n \"standardized_avg_age\",\n \"schizophrenia_subtype:reference=type1\",\n ],\n spline_spacing=100, # a reasonable choice is 10 or 5, 100 is for speed\n model=models.PoissonEstimator,\n penalty=False,\n lr=1e-1,\n tol=1e3, # a reasonable choice is 1e-2, 1e3 is for speed\n device=\"cpu\", # \"cuda\" if you have GPU\n)\nresults = cbmr.fit(dataset=dset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have fitted the model, we can plot the spatial intensity maps.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"plot_stat_map(\n results.get_map(\"spatialIntensity_group-SchizophreniaYes\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"Schizophrenia with drug treatment\",\n threshold=1e-4,\n vmax=1e-3,\n)\nplot_stat_map(\n results.get_map(\"spatialIntensity_group-SchizophreniaNo\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"Schizophrenia without drug treatment\",\n threshold=1e-4,\n vmax=1e-3,\n)\nplot_stat_map(\n results.get_map(\"spatialIntensity_group-DepressionYes\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"Depression with drug treatment\",\n threshold=1e-4,\n vmax=1e-3,\n)\nplot_stat_map(\n results.get_map(\"spatialIntensity_group-DepressionNo\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"Depression without drug treatment\",\n threshold=1e-4,\n vmax=1e-3,\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The four figures correspond to the group-specific spatial intensity maps of the four\ngroups (\"SchizophreniaYes\", \"SchizophreniaNo\", \"DepressionYes\", \"DepressionNo\").\nAreas with stronger spatial intensity are highlighted.\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Generalized Linear Hypothesis (GLH) testing for spatial homogeneity\nIn the most basic scenario of a spatial homogeneity test, the contrast matrix\n`t_con_groups` can be generated by the `create_contrast` function, with the group\nnames specified.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.cbmr import CBMRInference\n\ninference = CBMRInference(device=\"cuda\")\ninference.fit(result=results)\nt_con_groups = inference.create_contrast(\n [\"SchizophreniaYes\", \"SchizophreniaNo\", \"DepressionYes\", \"DepressionNo\"], source=\"groups\"\n)\ncontrast_result = inference.transform(t_con_groups=t_con_groups)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have done spatial homogeneity tests, we can plot the z-score maps.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# generate z-score maps for group-wise spatial homogeneity test\nplot_stat_map(\n contrast_result.get_map(\"z_group-SchizophreniaYes\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"SchizophreniaYes\",\n threshold=scipy.stats.norm.isf(0.05),\n vmax=30,\n)\n\nplot_stat_map(\n contrast_result.get_map(\"z_group-SchizophreniaNo\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"SchizophreniaNo\",\n threshold=scipy.stats.norm.isf(0.05),\n vmax=30,\n)\n\nplot_stat_map(\n contrast_result.get_map(\"z_group-DepressionYes\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"DepressionYes\",\n threshold=scipy.stats.norm.isf(0.05),\n vmax=30,\n)\n\nplot_stat_map(\n contrast_result.get_map(\"z_group-DepressionNo\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"DepressionNo\",\n threshold=scipy.stats.norm.isf(0.05),\n vmax=30,\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The four figures (displayed as z-statistic maps) correspond to the homogeneity test of\ngroup-specific spatial intensity for the four groups. The null hypothesis assumes\nhomogeneous spatial intensity over the whole brain,\n$H_0: \\mu_j = \\mu_0 = \\sum n_{\\text{foci}} / N$, $j=1, \\cdots, N$, where\n$N$ is the number of voxels within the brain mask and $j$ is the voxel index.\nAreas with significant p-values are highlighted (at significance level $0.05$).\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Perform false discovery rate (FDR) correction on the spatial homogeneity test\nThe default FDR correction method is \"indep\", using the Benjamini-Hochberg (BH) procedure.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.correct import FDRCorrector\n\ncorr = FDRCorrector(method=\"indep\", alpha=0.05)\ncres = corr.transform(contrast_result)"
]
},
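{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside, the BH step-up procedure behind the \"indep\" method is simple enough\nto sketch directly. The toy function and p-values below are purely illustrative\n(not NiMARE code): the sorted p-values are compared against the line\n$\\alpha k / m$, and the largest passing set is rejected.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\n\n\ndef bh_reject(pvals, alpha=0.05):\n    # sort p-values and compare against the BH line alpha * k / m\n    pvals = np.asarray(pvals)\n    m = pvals.size\n    order = np.argsort(pvals)\n    thresh = alpha * np.arange(1, m + 1) / m\n    below = pvals[order] <= thresh\n    # step-up: reject everything up to the largest k passing the line\n    n_reject = below.nonzero()[0].max() + 1 if below.any() else 0\n    reject = np.zeros(m, dtype=bool)\n    reject[order[:n_reject]] = True\n    return reject\n\n\n# toy p-values: the first four survive at alpha=0.05\nprint(bh_reject([0.005, 0.009, 0.02, 0.04, 0.3]))"
]
},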
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have applied the FDR correction methods,\nwe can plot the FDR corrected z-score maps.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# generate FDR corrected z-score maps for group-wise spatial homogeneity test\nplot_stat_map(\n cres.get_map(\"z_group-SchizophreniaYes_corr-FDR_method-indep\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"Schizophrenia with drug treatment (FDR corrected)\",\n threshold=scipy.stats.norm.isf(0.05),\n vmax=30,\n)\n\nplot_stat_map(\n cres.get_map(\"z_group-SchizophreniaNo_corr-FDR_method-indep\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"Schizophrenia without drug treatment (FDR corrected)\",\n threshold=scipy.stats.norm.isf(0.05),\n vmax=30,\n)\n\nplot_stat_map(\n cres.get_map(\"z_group-DepressionYes_corr-FDR_method-indep\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"Depression with drug treatment (FDR corrected)\",\n threshold=scipy.stats.norm.isf(0.05),\n vmax=30,\n)\n\nplot_stat_map(\n cres.get_map(\"z_group-DepressionNo_corr-FDR_method-indep\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"Depression without drug treatment (FDR corrected)\",\n threshold=scipy.stats.norm.isf(0.05),\n vmax=30,\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After FDR correction (via the BH procedure), the significance threshold is more\nstringent, so the number of voxels with significant p-values is reduced.\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## GLH testing for group comparisons between any two groups\nIn the most basic scenario of a group comparison test, the contrast matrix\n`t_con_groups` can be generated by the `create_contrast` function, with\n`contrast_name` specified as \"group1-group2\".\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"t_con_groups = inference.create_contrast(\n [\n \"SchizophreniaYes-SchizophreniaNo\",\n \"SchizophreniaNo-DepressionNo\",\n \"DepressionYes-DepressionNo\",\n ],\n source=\"groups\",\n)\ncontrast_result = inference.transform(t_con_groups=t_con_groups, t_con_moderators=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have done group comparison tests,\nwe can plot the z-score maps indicating difference in spatial intensity between two groups.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# generate z-statistics maps for each group\nplot_stat_map(\n contrast_result.get_map(\"z_group-SchizophreniaYes-SchizophreniaNo\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"Drug Treatment Effect for Schizophrenia\",\n threshold=scipy.stats.norm.isf(0.4),\n vmax=2,\n)\n\nplot_stat_map(\n contrast_result.get_map(\"z_group-SchizophreniaNo-DepressionNo\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"Untreated Schizophrenia vs. Untreated Depression\",\n threshold=scipy.stats.norm.isf(0.4),\n vmax=2,\n)\n\nplot_stat_map(\n contrast_result.get_map(\"z_group-DepressionYes-DepressionNo\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"Drug Treatment Effect for Depression\",\n threshold=scipy.stats.norm.isf(0.4),\n vmax=2,\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The three figures (displayed as z-statistic maps) correspond to group comparison\ntests of spatial intensity between two groups. The null hypothesis assumes that the\nspatial intensity estimates of the two groups are equal at the voxel level,\n$H_0: \\mu_{1j}=\\mu_{2j}$, $j=1, \\cdots, N$, where $N$ is the number\nof voxels within the brain mask and $j$ is the voxel index. Areas with significant\np-values (a significant difference in spatial intensity between the two groups)\nare highlighted (at significance level $0.05$).\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## GLH testing with a specified contrast matrix\nCBMR supports more flexible GLH tests via an explicit contrast matrix.\nFor example, the group comparison `2xgroup_0-1xgroup_1-1xgroup_2` can be\nrepresented as `t_con_group=[2, -1, -1, 0]` and passed to the `compute_contrast`\nfunction. Multiple independent GLH tests can be conducted simultaneously by\nincluding multiple contrast vectors/matrices in `t_con_group`.\n\nCBMR also allows simultaneous GLH tests (consisting of multiple contrast vectors)\nwhen they are represented as a single element of `t_con_group` (datatype: list).\nP-values are significant only if all of the null hypotheses are rejected at the\nvoxel level. For example, `t_con_group=[[1,-1,0,0], [1,0,-1,0], [0,0,1,-1]]` tests\nthe equality of spatial intensity estimates among all four groups (finding the\nconsistently activated regions). Note that only $n-1$ contrast vectors are\nnecessary for testing the equality of $n$ groups.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"contrast_result = inference.transform(\n t_con_groups=[[[1, -1, 0, 0], [1, 0, -1, 0], [0, 0, 1, -1]]], t_con_moderators=False\n)"
]
},
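{
"cell_type": "markdown",
"metadata": {},
"source": [
"To build intuition, the toy snippet below (illustrative numbers only, not NiMARE\noutput) shows how a contrast vector forms a linear combination of the\ngroup-specific regression coefficients at a single voxel, and how $n-1$ stacked\nrows test the equality of $n$ groups.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\n\n# hypothetical group-wise coefficients at one voxel, ordered as\n# [SchizophreniaYes, SchizophreniaNo, DepressionYes, DepressionNo]\nbeta = np.array([0.8, 0.5, 0.4, 0.4])\n\n# 2xgroup_0 - 1xgroup_1 - 1xgroup_2 as a contrast vector\nt_con = np.array([2, -1, -1, 0])\nprint(t_con @ beta)\n\n# three rows suffice to test equality of all four groups\nt_con_equality = np.array([[1, -1, 0, 0], [1, 0, -1, 0], [0, 0, 1, -1]])\nprint(t_con_equality @ beta)"
]
},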
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have done group comparison tests with the specified contrast matrix,\nwe can plot the z-score maps indicating uniformity in activation regions among\nall four groups.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"plot_stat_map(\n contrast_result.get_map(\"z_GLH_groups_0\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n title=\"GLH_groups_0\",\n threshold=scipy.stats.norm.isf(0.4),\n)\nprint(\"The contrast matrix of GLH_0 is {}\".format(contrast_result.metadata[\"GLH_groups_0\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## GLH testing for study-level moderators\nThe CBMR framework can estimate global study-level moderator effects,\nand it allows inference on the existence of these moderator effects.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"contrast_name = results.estimator.moderators\nt_con_moderators = inference.create_contrast(contrast_name, source=\"moderators\")\ncontrast_result = inference.transform(t_con_moderators=t_con_moderators)\nprint(contrast_result.tables[\"moderators_regression_coef\"])\nprint(\n \"P-values of moderator effects `sample_sizes` is {}\".format(\n contrast_result.tables[\"p_standardized_sample_sizes\"]\n )\n)\nprint(\n \"P-value of moderator effects `avg_age` is {}\".format(\n contrast_result.tables[\"p_standardized_avg_age\"]\n )\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This table shows the regression coefficients of the study-level moderators; here,\n`sample_sizes` and `avg_age` were standardized in the preprocessing step.\nThe moderator effects of both `sample_sizes` and `avg_age` are not significant at\nsignificance level $0.05$. With reference to the spatial intensity estimate of\na chosen subtype, the spatial intensity estimates of the other $4$ subtypes of\nschizophrenia are moderated globally.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"t_con_moderators = inference.create_contrast(\n [\"standardized_sample_sizes-standardized_avg_age\"], source=\"moderators\"\n)\ncontrast_result = inference.transform(t_con_moderators=t_con_moderators)\nprint(\n \"P-values of difference in two moderator effectors (`sample_size-avg_age`) is {}\".format(\n contrast_result.tables[\"p_standardized_sample_sizes-standardized_avg_age\"]\n )\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"CBMR also allows flexible contrasts between study-level covariates.\nFor example, we can write `contrast_name` (an input to `create_contrast`\nfunction) as `standardized_sample_sizes-standardized_avg_age` when exploring\nif the moderator effects of `sample_sizes` and `avg_age` are equivalent.\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
02_meta-analyses/02_plot_ibma.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Image-based meta-analysis algorithms\n\nA tour of IBMA algorithms in NiMARE.\n\nThis tutorial is intended to provide a brief description and example of each of\nthe IBMA algorithms implemented in NiMARE.\nFor a more detailed introduction to the elements of an image-based\nmeta-analysis, see the NiMARE documentation.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nilearn.plotting import plot_stat_map"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download data\n\nNote: The data used in this example come from a collection of NIDM-Results packs\ndownloaded from Neurovault collection 1425, uploaded by Dr. Camille Maumet.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.extract import download_nidm_pain\n\ndset_dir = download_nidm_pain()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Dataset\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\nfrom pprint import pprint\n\nfrom nimare.dataset import Dataset\nfrom nimare.transforms import ImageTransformer\nfrom nimare.utils import get_resource_path\n\ndset_file = os.path.join(get_resource_path(), \"nidm_pain_dset.json\")\ndset = Dataset(dset_file)\ndset.update_path(dset_dir)\n\n# Calculate missing images\nxformer = ImageTransformer(target=[\"varcope\", \"z\"])\ndset = xformer.transform(dset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Stouffer's\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.ibma import Stouffers\n\nmeta = Stouffers(use_sample_size=False)\nresults = meta.fit(dset)\n\nplot_stat_map(\n results.get_map(\"z\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n)\n\nprint(\"Description:\")\npprint(results.description_)\nprint(\"References:\")\npprint(results.bibtex_)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Stouffer's with weighting by sample size\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"meta = Stouffers(use_sample_size=True)\nresults = meta.fit(dset)\n\nplot_stat_map(\n results.get_map(\"z\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n)\n\nprint(\"Description:\")\npprint(results.description_)\nprint(\"References:\")\npprint(results.bibtex_)"
]
},
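{
"cell_type": "markdown",
"metadata": {},
"source": [
"Conceptually, Stouffer's method combines the per-study z-statistics at each voxel\nas $z = \\sum_i z_i / \\sqrt{k}$, and the sample-size-weighted variant uses\n$z = \\sum_i \\sqrt{n_i} z_i / \\sqrt{\\sum_i n_i}$. The numbers below are made up\nfor illustration and are not NiMARE output.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\n\n# hypothetical z-scores for one voxel across four studies\nz_studies = np.array([1.2, 2.0, 0.5, 1.8])\nsample_sizes = np.array([10, 20, 15, 30])\n\n# unweighted Stouffer combination\nz_combined = z_studies.sum() / np.sqrt(z_studies.size)\n\n# sample-size weighting upweights larger studies\nweights = np.sqrt(sample_sizes)\nz_weighted = (weights * z_studies).sum() / np.sqrt(sample_sizes.sum())\n\nprint(z_combined, z_weighted)"
]
},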
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Fisher's\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.ibma import Fishers\n\nmeta = Fishers()\nresults = meta.fit(dset)\n\nplot_stat_map(\n results.get_map(\"z\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n)\n\nprint(\"Description:\")\npprint(results.description_)\nprint(\"References:\")\npprint(results.bibtex_)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Permuted OLS\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.correct import FWECorrector\nfrom nimare.meta.ibma import PermutedOLS\n\nmeta = PermutedOLS(two_sided=True)\nresults = meta.fit(dset)\n\nplot_stat_map(\n results.get_map(\"z\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n)\n\ncorrector = FWECorrector(method=\"montecarlo\", n_iters=100, n_cores=1)\ncresult = corrector.transform(results)\n\nplot_stat_map(\n cresult.get_map(\"z_level-voxel_corr-FWE_method-montecarlo\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n)\n\nprint(\"Description:\")\npprint(cresult.description_)\nprint(\"References:\")\npprint(cresult.bibtex_)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Weighted Least Squares\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.ibma import WeightedLeastSquares\n\nmeta = WeightedLeastSquares(tau2=0)\nresults = meta.fit(dset)\n\nplot_stat_map(\n results.get_map(\"z\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n)\n\nprint(\"Description:\")\npprint(results.description_)\nprint(\"References:\")\npprint(results.bibtex_)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## DerSimonian-Laird\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.ibma import DerSimonianLaird\n\nmeta = DerSimonianLaird()\nresults = meta.fit(dset)\n\nplot_stat_map(\n results.get_map(\"z\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n)\n\nprint(\"Description:\")\npprint(results.description_)\nprint(\"References:\")\npprint(results.bibtex_)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Hedges\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.ibma import Hedges\n\nmeta = Hedges()\nresults = meta.fit(dset)\n\nplot_stat_map(\n results.get_map(\"z\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n)\n\nprint(\"Description:\")\npprint(results.description_)\nprint(\"References:\")\npprint(results.bibtex_)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
02_meta-analyses/12_plot_ibma_workflow.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Run an image-based meta-analysis (IBMA) workflow\n\nNiMARE provides a plethora of tools for performing meta-analyses on neuroimaging data.\nSometimes it's difficult to know where to start, especially if you're new to meta-analysis.\nThis tutorial will walk you through using an IBMA workflow function, which puts together\nthe fundamental steps of an IBMA meta-analysis.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\nfrom pathlib import Path\n\nimport matplotlib.pyplot as plt\nfrom nilearn.plotting import plot_stat_map\n\nfrom nimare.extract import download_nidm_pain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download data\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset_dir = download_nidm_pain()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Dataset\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.dataset import Dataset\nfrom nimare.utils import get_resource_path\n\ndset_file = os.path.join(get_resource_path(), \"nidm_pain_dset.json\")\ndset = Dataset(dset_file)\ndset.update_path(dset_dir)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run IBMA Workflow\nThe fit method of an IBMA workflow class runs the following steps:\n\n1. Runs a meta-analysis using the specified method (default: Stouffers)\n2. Applies a corrector to the meta-analysis results (default: FDRCorrector, indep)\n3. Generates cluster tables and runs diagnostics on the corrected results (default: Jackknife)\n\nAll in one call!\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.workflows.ibma import IBMAWorkflow\n\nworkflow = IBMAWorkflow()\nresult = workflow.fit(dset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plot Results\nThe fit method of the IBMA workflow class returns a :class:`~nimare.results.MetaResult` object,\nwhere you can access the corrected results of the meta-analysis and diagnostics tables.\n\nCorrected map:\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"img = result.get_map(\"z_corr-FDR_method-indep\")\nplot_stat_map(\n img,\n cut_coords=4,\n display_mode=\"z\",\n threshold=1.65, # voxel_thresh p < .05, one-tailed\n cmap=\"RdBu_r\",\n vmax=4,\n)\nplt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Clusters table\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"result.tables[\"z_corr-FDR_method-indep_tab-clust\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Contribution table\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"result.tables[\"z_corr-FDR_method-indep_diag-Jackknife_tab-counts_tail-positive\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Report\nFinally, a NiMARE report is generated from the MetaResult.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.reports.base import run_reports\n\n# root_dir = Path(os.getcwd()).parents[1] / \"docs\" / \"_build\"\n# Use the previous root to run the documentation locally.\nroot_dir = Path(os.getcwd()).parents[1] / \"_readthedocs\"\nhtml_dir = root_dir / \"html\" / \"auto_examples\" / \"02_meta-analyses\" / \"12_plot_ibma_workflow\"\nhtml_dir.mkdir(parents=True, exist_ok=True)\n\nrun_reports(result, html_dir)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
02_meta-analyses/06_plot_compare_ibma_and_cbma.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Compare image and coordinate based meta-analyses\n\nRun IBMAs and CBMAs on a toy dataset, then compare the results qualitatively.\n\nCollection of NIDM-Results packs downloaded from Neurovault collection 1425,\nuploaded by Dr. Camille Maumet.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\n\nimport pandas as pd\nfrom nilearn.plotting import plot_stat_map\n\nfrom nimare.dataset import Dataset\nfrom nimare.extract import download_nidm_pain\nfrom nimare.meta.cbma import ALE\nfrom nimare.meta.ibma import DerSimonianLaird\nfrom nimare.transforms import ImagesToCoordinates, ImageTransformer\nfrom nimare.utils import get_resource_path"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download data\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset_dir = download_nidm_pain()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Dataset\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"dset_file = os.path.join(get_resource_path(), \"nidm_pain_dset.json\")\ndset = Dataset(dset_file)\ndset.update_path(dset_dir)\n\n# Calculate missing statistical images from the available stats.\nxformer = ImageTransformer(target=[\"varcope\"])\ndset = xformer.transform(dset)\n\n# create coordinates from statistical maps\ncoord_gen = ImagesToCoordinates(merge_strategy=\"fill\")\ndset = coord_gen.transform(dset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ALE (CBMA)\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"meta_cbma = ALE()\ncbma_results = meta_cbma.fit(dset)\nplot_stat_map(\n cbma_results.get_map(\"z\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## DerSimonian-Laird (IBMA)\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"meta_ibma = DerSimonianLaird()\nibma_results = meta_ibma.fit(dset)\nplot_stat_map(\n ibma_results.get_map(\"z\"),\n cut_coords=[0, 0, -8],\n draw_cross=False,\n cmap=\"RdBu_r\",\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compare CBMA and IBMA Z-maps\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"stat_df = pd.DataFrame(\n {\n \"CBMA\": cbma_results.get_map(\"z\", return_type=\"array\"),\n \"IBMA\": ibma_results.get_map(\"z\", return_type=\"array\"),\n }\n)\nprint(stat_df.corr())"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
02_meta-analyses/03_plot_kernel_transformers.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# KernelTransformers and CBMA\n\n``KernelTransformer`` classes are tools for converting individual studies'\ncoordinates into images.\n\nFor coordinate-based meta-analyses, individual studies' statistical maps are\nmimicked by generating \"modeled activation\" (MA) maps from the coordinates.\nThese MA maps are used in the CBMA algorithms, although the specific method\nused to generate the MA maps differs by algorithm.\n\nThis example provides an introduction to the ``KernelTransformer`` class and\na tour of available types.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\n\nimport matplotlib.pyplot as plt\nfrom nilearn.plotting import plot_stat_map"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Dataset\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.dataset import Dataset\nfrom nimare.utils import get_resource_path\n\ndset_file = os.path.join(get_resource_path(), \"nidm_pain_dset.json\")\ndset = Dataset(dset_file)\n\n# First, let us reduce this Dataset to only two studies\ndset = dset.slice(dset.ids[2:4])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Kernels ingest Datasets and can produce a few types of outputs\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.kernel import MKDAKernel\n\n# First, the kernel should be initialized with any parameters.\nkernel = MKDAKernel()\n\n# Then, the ``transform`` method takes in the Dataset and produces the MA maps.\noutput = kernel.transform(dset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"``return_type=\"image\"`` returns a list of 3D niimg objects.\n\nThis is the default option.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"image_output = kernel.transform(dset, return_type=\"image\")\nprint(type(image_output))\nprint(type(image_output[0]))\nprint(image_output[0].shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"``return_type=\"array\"`` returns a 2D numpy array\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"array_output = kernel.transform(dset, return_type=\"array\")\nprint(type(array_output))\nprint(array_output.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There is also an option to return an updated Dataset\n(``return_type=\"dataset\"``), with the MA maps saved as nifti files and\nreferences in the Dataset's images attribute.\nHowever, this will only work if the Dataset has a location set for its\nimages.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"try:\n dataset_output = kernel.transform(dset, return_type=\"dataset\")\nexcept ValueError as error:\n print(error)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Each kernel can accept certain parameters that control behavior\nYou can see what options are available via the API documentation or through\nthe help string.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"help(MKDAKernel)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, :class:`~nimare.meta.kernel.MKDAKernel` kernel accepts an ``r``\nargument to control the radius of the kernel.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"RADIUS_VALUES = [4, 8, 12]\nfig, axes = plt.subplots(ncols=3, figsize=(20, 10))\n\nfor i, radius in enumerate(RADIUS_VALUES):\n kernel = MKDAKernel(r=radius)\n ma_maps = kernel.transform(dset, return_type=\"image\")\n\n plot_stat_map(\n ma_maps[0],\n display_mode=\"z\",\n cut_coords=[-2],\n title=f\"r={radius}mm\",\n axes=axes[i],\n draw_cross=False,\n annotate=False,\n colorbar=False,\n cmap=\"RdBu_r\",\n )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## There are several kernels available\n:class:`~nimare.meta.kernel.MKDAKernel` convolves coordinates with a\nsphere and takes the union across voxels.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"kernel = MKDAKernel(r=10)\nma_maps = kernel.transform(dset, return_type=\"image\")\n\nplot_stat_map(\n ma_maps[0],\n cut_coords=[-2, -10, -4],\n title=\"MKDA\",\n draw_cross=False,\n cmap=\"RdBu_r\",\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
":class:`~nimare.meta.kernel.KDAKernel` convolves coordinates with a\nsphere as well, but takes the *sum* across voxels.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.kernel import KDAKernel\n\nkernel = KDAKernel(r=10)\nma_maps = kernel.transform(dset, return_type=\"image\")\n\nplot_stat_map(\n ma_maps[0],\n cut_coords=[-2, -10, -4],\n title=\"KDA\",\n draw_cross=False,\n cmap=\"RdBu_r\",\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
":class:`~nimare.meta.kernel.ALEKernel` convolves coordinates with a 3D\nGaussian, for which the FWHM is determined by the sample size of each study.\nThis sample size will be inferred automatically, if that information is\navailable in the Dataset, or it can be set as a constant value across all\nstudies in the Dataset with the ``sample_size`` argument.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.kernel import ALEKernel\n\nkernel = ALEKernel(sample_size=20)\nma_maps = kernel.transform(dset, return_type=\"image\")\n\nplot_stat_map(\n ma_maps[0],\n cut_coords=[-2, -10, -4],\n title=\"ALE\",\n draw_cross=False,\n cmap=\"RdBu_r\",\n)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
02_meta-analyses/08_plot_cbma_subtraction_conjunction.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n\n# Two-sample ALE meta-analysis\n\nMeta-analytic projects often involve a common set of steps for comparing two or more samples.\n\nIn this example, we replicate the ALE-based analyses from :footcite:t:`enge2021meta`.\n\nA common project workflow with two meta-analytic samples involves the following:\n\n1. Run a within-sample meta-analysis of the first sample.\n2. Characterize/summarize the results of the first meta-analysis.\n3. Run a within-sample meta-analysis of the second sample.\n4. Characterize/summarize the results of the second meta-analysis.\n5. Compare the two samples with a subtraction analysis.\n6. Compare the two within-sample meta-analyses with a conjunction analysis.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\nfrom pathlib import Path\n\nimport matplotlib.pyplot as plt\nfrom nilearn.plotting import plot_stat_map"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Sleuth text files into Datasets\nThe data for this example are a subset of studies from a meta-analysis on\nsemantic cognition in children :footcite:p:`enge2021meta`.\nA first group of studies probed children's semantic world knowledge\n(e.g., correctly naming an object after hearing its auditory description)\nwhile a second group of studies asked children to decide if two (or more)\nwords were semantically related to one another or not.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.io import convert_sleuth_to_dataset\nfrom nimare.utils import get_resource_path\n\nknowledge_file = os.path.join(get_resource_path(), \"semantic_knowledge_children.txt\")\nrelated_file = os.path.join(get_resource_path(), \"semantic_relatedness_children.txt\")\n\nknowledge_dset = convert_sleuth_to_dataset(knowledge_file)\nrelated_dset = convert_sleuth_to_dataset(related_file)"
]
},
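{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (not part of the original workflow), we can inspect how\nmany studies each Dataset contains and preview the coordinate table.\n``Dataset.ids`` and ``Dataset.coordinates`` are standard NiMARE attributes; the\nexact counts depend on the bundled resource files.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Number of studies in each group and a preview of the coordinate table.\nprint(f\"Semantic knowledge studies: {len(knowledge_dset.ids)}\")\nprint(f\"Semantic relatedness studies: {len(related_dset.ids)}\")\nknowledge_dset.coordinates.head()"
]
},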
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Individual group ALEs\nComputing separate ALE analyses for each group is not strictly necessary for\nperforming the subtraction analysis, but it will help the experimenter appreciate\nthe similarities and differences between the groups.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.correct import FWECorrector\nfrom nimare.meta.cbma import ALE\n\nale = ALE(null_method=\"approximate\")\nknowledge_results = ale.fit(knowledge_dset)\nrelated_results = ale.fit(related_dset)\n\ncorr = FWECorrector(method=\"montecarlo\", voxel_thresh=0.001, n_iters=100, n_cores=2)\nknowledge_corrected_results = corr.transform(knowledge_results)\nrelated_corrected_results = corr.transform(related_results)\n\nfig, axes = plt.subplots(figsize=(12, 10), nrows=2)\nknowledge_img = knowledge_corrected_results.get_map(\n \"z_desc-size_level-cluster_corr-FWE_method-montecarlo\"\n)\nplot_stat_map(\n knowledge_img,\n cut_coords=4,\n display_mode=\"z\",\n title=\"Semantic knowledge\",\n threshold=2.326, # cluster-level p < .01, one-tailed\n cmap=\"RdBu_r\",\n vmax=4,\n axes=axes[0],\n figure=fig,\n)\n\nrelated_img = related_corrected_results.get_map(\n \"z_desc-size_level-cluster_corr-FWE_method-montecarlo\"\n)\nplot_stat_map(\n related_img,\n cut_coords=4,\n display_mode=\"z\",\n title=\"Semantic relatedness\",\n threshold=2.326, # cluster-level p < .01, one-tailed\n cmap=\"RdBu_r\",\n vmax=4,\n axes=axes[1],\n figure=fig,\n)\nfig.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Characterize the relative contributions of experiments in the ALE results\nNiMARE contains two methods for this: :class:`~nimare.diagnostics.Jackknife`\nand :class:`~nimare.diagnostics.FocusCounter`.\nWe will show both below, but for the sake of speed we will only apply one to\neach subgroup meta-analysis.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.diagnostics import FocusCounter\n\ncounter = FocusCounter(\n target_image=\"z_desc-size_level-cluster_corr-FWE_method-montecarlo\",\n voxel_thresh=None,\n)\nknowledge_diagnostic_results = counter.transform(knowledge_corrected_results)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Display the clusters table.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"knowledge_clusters_table = knowledge_diagnostic_results.tables[\n \"z_desc-size_level-cluster_corr-FWE_method-montecarlo_tab-clust\"\n]\nknowledge_clusters_table.head(10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Contribution table. Here ``PositiveTail`` refers to clusters with positive statistics.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"knowledge_count_table = knowledge_diagnostic_results.tables[\n \"z_desc-size_level-cluster_corr-FWE_method-montecarlo_diag-FocusCounter\"\n \"_tab-counts_tail-positive\"\n]\nknowledge_count_table.head(10)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.diagnostics import Jackknife\n\njackknife = Jackknife(\n target_image=\"z_desc-size_level-cluster_corr-FWE_method-montecarlo\",\n voxel_thresh=None,\n)\nrelated_diagnostic_results = jackknife.transform(related_corrected_results)\nrelated_jackknife_table = related_diagnostic_results.tables[\n \"z_desc-size_level-cluster_corr-FWE_method-montecarlo_diag-Jackknife_tab-counts_tail-positive\"\n]\nrelated_jackknife_table.head(10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Subtraction analysis\nTypically, one would use at least 5000 iterations for a subtraction analysis.\nHowever, we have reduced this to 10 iterations for this example.\nSimilarly, we use a lenient voxel-level z-threshold of 0.01 here, but in practice\none would use a more stringent threshold (e.g., 1.65).\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.meta.cbma import ALESubtraction\nfrom nimare.reports.base import run_reports\nfrom nimare.workflows import PairwiseCBMAWorkflow\n\nworkflow = PairwiseCBMAWorkflow(\n estimator=ALESubtraction(n_iters=10, n_cores=1),\n corrector=\"fdr\",\n diagnostics=FocusCounter(voxel_thresh=0.01, display_second_group=True),\n)\nres_sub = workflow.fit(knowledge_dset, related_dset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Report\nFinally, a NiMARE report is generated from the MetaResult.\nTo build the documentation locally, use\n``root_dir = Path(os.getcwd()).parents[1] / \"docs\" / \"_build\"`` instead.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"root_dir = Path(os.getcwd()).parents[1] / \"_readthedocs\"\nhtml_dir = root_dir / \"html\" / \"auto_examples\" / \"02_meta-analyses\" / \"08_subtraction\"\nhtml_dir.mkdir(parents=True, exist_ok=True)\n\nrun_reports(res_sub, html_dir)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conjunction analysis\nTo determine the overlap of the meta-analytic results, a conjunction image\ncan be computed by (a) identifying voxels that were statistically significant\nin *both* individual group maps and (b) selecting, for each of these voxels,\nthe smaller of the two group-specific *z* values :footcite:t:`nichols2005valid`.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nimare.workflows.misc import conjunction_analysis\n\nimg_conj = conjunction_analysis([knowledge_img, related_img])\n\nplot_stat_map(\n img_conj,\n cut_coords=4,\n display_mode=\"z\",\n title=\"Conjunction\",\n threshold=2.326, # cluster-level p < .01, one-tailed\n cmap=\"RdBu_r\",\n vmax=4,\n)"
]
},
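{
"cell_type": "markdown",
"metadata": {},
"source": [
"The minimum-statistic conjunction described above can also be sketched directly\nwith NumPy (a minimal illustration, assuming both maps share the same grid;\n``conjunction_analysis`` remains the recommended interface):\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\nfrom nilearn.image import new_img_like\n\n# Voxel-wise minimum of the two z maps: after thresholding, a voxel survives\n# only if it is significant in *both* maps, and it keeps the smaller z value.\nconj_data = np.minimum(knowledge_img.get_fdata(), related_img.get_fdata())\nmanual_conj = new_img_like(knowledge_img, conj_data)"
]
},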
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n.. footbibliography::\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 0
}PK ]QX+&