NiMARE: Neuroimaging Meta-Analysis Research Environment¶
NiMARE is a Python package for neuroimaging meta-analyses. It makes conducting scary meta-analyses a dream!
To install NiMARE, check out our installation guide.

About NiMARE¶
NiMARE is a Python package for performing meta-analyses of the neuroimaging literature, as well as derivative analyses that use meta-analytic data. While meta-analytic packages exist which implement one or two algorithms each, NiMARE provides a standard syntax for performing a wide range of analyses and for interacting with databases of coordinates and images from fMRI studies (e.g., brainspell, Neurosynth, and NeuroVault).
NiMARE joins a growing Python ecosystem for neuroimaging research, which includes such tools as Nipype, Nistats, and Nilearn. As with these other tools, NiMARE is open source, collaboratively developed, and built with ease of use in mind.
This page outlines NiMARE’s purpose and its role in a proposed meta-analytic ecosystem.
A Proposed Meta-Analytic Ecosystem¶

NiMARE aims to fill a gap in a burgeoning meta-analytic ecosystem. The goal of NiMARE is to collect a wide range of meta-analytic tools in one Python library. Currently, those methods are spread out across a range of programming languages and user interfaces, or have never been translated from the original papers into usable tools. NiMARE operates on NIMADS-format datasets, which users will be able to compile by searching the NeuroStuff database with the pyNIMADS library. A number of other services in the ecosystem will then use NiMARE functions to perform meta-analyses, including Neurosynth 2.0, NeuroVault, and metaCurious.
Note
This page outlines a tentative plan for a system of services for neuroimaging meta-analysis. Several of the services detailed here do not currently exist or only partially support the functionality described below. This plan is likely to change over time.
Neurosynth 2.0¶
Neurosynth currently stores a coordinate-based database of over 14,000 neuroimaging papers (automatically curated by ACE), provides a web interface for automated meta-analyses, functional decoding, and gene expression visualization, and provides a Python package implementing the above methods.
In order to improve modularization, the next iteration of Neurosynth will limit itself to providing a web interface for meta-analytic model specification and centralized storage for large-scale meta-analyses; it will not implement the algorithms used to run those meta-analyses or perform the other services provided on the website (e.g., functional decoding and topic modeling). The algorithms currently implemented in the Neurosynth Python package will be implemented (among many others) in NiMARE. Under the current plan, the database currently stored by Neurosynth will instead be stored in the NeuroStuff database, which will also store other coordinate- and image-based meta-analytic databases in NIMADS format.
NeuroVault¶
NeuroVault is a database for unthresholded images. Users may upload individual maps or NIDM Results, which can be exported from a number of fMRI analysis tools, like AFNI, SPM, FSL, and NeuroScout.
NeuroVault also has integrations with NeuroPower (for power analyses) and Neurosynth (for functional decoding), and supports simple image-based meta-analyses.
brainspell¶
brainspell is a clone of the Neurosynth database meant for crowdsourced manual annotation. It provides a website where users can correct mistakes made by ACE or can add labels from multiple cognitive ontologies (including the Cognitive Paradigm Ontology and the Cognitive Atlas) to experiments.
metaCurious¶
metaCurious is a new frontend (i.e., website) for brainspell, oriented toward meta-analysts. MetaCurious provides search and curation tools for researchers to build meta-analytic samples for analysis. Search criteria, reasons for exclusion, and other labels may be added by the researcher and fed back into the underlying database, resulting in goal-oriented manual annotation. MetaCurious generates GitHub repositories for meta-analytic samples, which will also be NiMARE-compatible in the future.
NIMADS¶
NIMADS is a new standard for organizing and representing meta-analytic neuroimaging data. NIMADS will be used by NeuroStuff, pyNIMADS, metaCurious, and NiMARE.
NeuroStuff¶
NeuroStuff (tentatively named) will act as a centralized repository for coordinates and maps from neuroimaging studies, stored in NIMADS format. Users will be able to query and add to the repository using its API and the pyNIMADS Python library.
pyNIMADS¶
pyNIMADS (also tentatively named) is a planned Python library that will act as a wrapper for the NeuroStuff API, allowing users to query the database and to build NiMARE-compatible datasets for analysis.
Installation¶
NiMARE can be installed with pip. To install the latest official release:
pip install nimare
If you want to use the most up-to-date version, you can install from master:
pip install git+https://github.com/neurostuff/NiMARE.git
NiMARE requires Python >=3.6 and the following packages:
- nibabel
- numpy
- scipy
- pandas
- statsmodels
- nipype
- scikit-learn
- nilearn
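To confirm that the installation worked, you can import the package and print its version from Python (the exact version string will depend on the release you installed):
import nimare
print(nimare.__version__)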
What Next?¶
For an overview of what you can do with NiMARE, see the NiMARE Documentation.
To get right to using NiMARE, see the documentation on the command line interface.
If you have questions, or need help with using NiMARE, check out NeuroStars.
Command Line Interface¶
NiMARE provides several workflows as command-line interfaces, including ALE meta-analysis, meta-analytic coactivation modeling (MACM) analysis, peaks2maps image reconstruction, and contrast map meta-analysis. Each workflow should generate a boilerplate paragraph with details about the workflow and citations that can be used in a manuscript.
To use NiMARE from the command line, open a terminal window and type:
nimare --help
This will print usage instructions for the command line interface to your terminal.
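Each workflow is available as its own subcommand. For example, assuming the ALE workflow is exposed as the ale subcommand, you can view its options with:
nimare ale --help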
Examples¶
Working with datasets¶
Download and convert the Neurosynth database¶
Download and convert the Neurosynth database (with abstracts) for analysis with NiMARE.
Note
This will likely change as we work to shift database querying to a remote database, rather than handling it locally with NiMARE.
Start with the necessary imports¶
import os
from neurosynth.base.dataset import download
import nimare
Download Neurosynth¶
out_dir = os.path.abspath('../example_data/')
if not os.path.isdir(out_dir):
    os.mkdir(out_dir)

if not os.path.isfile(os.path.join(out_dir, 'database.txt')):
    download(out_dir, unpack=True)
Convert Neurosynth database to NiMARE dataset file¶
dset = nimare.io.convert_neurosynth_to_dataset(
    os.path.join(out_dir, 'database.txt'),
    os.path.join(out_dir, 'features.txt'))
dset.save(os.path.join(out_dir, 'neurosynth_dataset.pkl.gz'))
Add article abstracts to dataset¶
dset = nimare.extract.download_abstracts(dset, 'tsalo006@fiu.edu')
dset.save(os.path.join(out_dir, 'neurosynth_nimare_with_abstracts.pkl.gz'))
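In a later session, the saved dataset can be loaded back with the Dataset class's load method (a minimal sketch using the file saved above):
dset = nimare.dataset.Dataset.load(
    os.path.join(out_dir, 'neurosynth_nimare_with_abstracts.pkl.gz'))
print('{0} studies in the dataset.'.format(len(dset.ids)))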
Performing meta-analyses¶
Generate modeled activation maps with peaks2maps¶
Start with the necessary imports¶
import os
import numpy as np
import matplotlib.pyplot as plt
from nilearn.plotting import plot_glass_brain
import nimare
from nimare.tests.utils import get_test_data_path
Load Dataset¶
dset_file = os.path.join(get_test_data_path(), 'nidm_pain_dset.json')
dset = nimare.dataset.Dataset(dset_file)
Run peaks2maps¶
k = nimare.meta.cbma.kernel.Peaks2MapsKernel()
imgs = k.transform(dset, masked=True)
Plot modeled activation maps¶
for img in imgs:
    display = plot_glass_brain(img, display_mode='lyrz',
                               plot_abs=False, colorbar=True,
                               vmax=1, threshold=0)
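If you want to keep the figures, each nilearn display object can be saved to an image file. A minimal sketch (the output file names are hypothetical):
for i, img in enumerate(imgs):
    display = plot_glass_brain(img, display_mode='lyrz',
                               plot_abs=False, colorbar=True,
                               vmax=1, threshold=0)
    # Save the glass brain figure for this study (hypothetical file name)
    display.savefig('peaks2maps_study-{0:02d}.png'.format(i))
    display.close()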
Generate modeled activation maps¶
Start with the necessary imports¶
import os
import numpy as np
import matplotlib.pyplot as plt
from nilearn.plotting import plot_stat_map
import nimare
from nimare.tests.utils import get_test_data_path
Load Dataset¶
dset_file = os.path.join(get_test_data_path(), 'nidm_pain_dset.json')
dset = nimare.dataset.Dataset(dset_file)
MKDA kernel maps¶
kernel = nimare.meta.cbma.MKDAKernel(r=8)
mkda_r08 = kernel.transform(dset)
kernel = nimare.meta.cbma.MKDAKernel(r=9)
mkda_r09 = kernel.transform(dset)
kernel = nimare.meta.cbma.MKDAKernel(r=10)
mkda_r10 = kernel.transform(dset)
kernel = nimare.meta.cbma.MKDAKernel(r=11)
mkda_r11 = kernel.transform(dset)
fig, axes = plt.subplots(nrows=4, ncols=1, figsize=(10, 17.5))
plot_stat_map(mkda_r08[2], cut_coords=[-2, -10, -4],
              title='r=8mm', vmax=2, axes=axes[0],
              draw_cross=False)
plot_stat_map(mkda_r09[2], cut_coords=[-2, -10, -4],
              title='r=9mm', vmax=2, axes=axes[1],
              draw_cross=False)
plot_stat_map(mkda_r10[2], cut_coords=[-2, -10, -4],
              title='r=10mm', vmax=2, axes=axes[2],
              draw_cross=False)
plot_stat_map(mkda_r11[2], cut_coords=[-2, -10, -4],
              title='r=11mm', vmax=2, axes=axes[3],
              draw_cross=False)
fig.show()
Show different kernel types together¶
kernel = nimare.meta.cbma.MKDAKernel(r=10)
mkda_res = kernel.transform(dset)
kernel = nimare.meta.cbma.KDAKernel(r=10)
kda_res = kernel.transform(dset)
kernel = nimare.meta.cbma.ALEKernel(n=20)
ale_res = kernel.transform(dset)
max_conv = np.max(kda_res[2].get_data())
plot_stat_map(ale_res[2], cut_coords=[-2, -10, -4], title='ALE')
plot_stat_map(mkda_res[2], cut_coords=[-2, -10, -4], title='MKDA', vmax=max_conv)
plot_stat_map(kda_res[2], cut_coords=[-2, -10, -4], title='KDA', vmax=max_conv)
Run image-based meta-analyses on 21 pain studies¶
A collection of NIDM-Results packs downloaded from NeuroVault collection 1425, uploaded by Dr. Camille Maumet.
Note
This will likely change as we work to shift database querying to a remote database, rather than handling it locally with NiMARE.
Start with the necessary imports¶
import os
import numpy as np
import pandas as pd
import nibabel as nib
from nilearn.masking import apply_mask, unmask
from nilearn.plotting import plot_stat_map
import nimare
from nimare.tests.utils import get_test_data_path
from nimare.meta.esma import fishers
from nimare.meta.ibma import (Fishers, Stouffers, WeightedStouffers,
                              RFX_GLM, FFX_GLM, ffx_glm)
Download data¶
dset_dir = nimare.extract.download_nidm_pain()
Load Dataset¶
dset_file = os.path.join(get_test_data_path(), 'nidm_pain_dset.json')
dset = nimare.dataset.Dataset(dset_file)
dset.update_path(dset_dir)
mask_img = dset.masker.mask_img
logp_thresh = -np.log(.05)
Fisher’s (using functions)¶
Get images for analysis
files = dset.get_images(imtype='z')
files = [f for f in files if f]
z_imgs = [nib.load(f) for f in files]
z_data = apply_mask(z_imgs, mask_img)
print('{0} studies found.'.format(z_data.shape[0]))
result = fishers(z_data, mask_img)
fishers_result = unmask(result['z'], mask_img)
plot_stat_map(fishers_result, cut_coords=[0, 0, -8],
              draw_cross=False, cmap='RdBu_r')
Fisher’s (using Estimators)¶
Here is the object-oriented approach
meta = Fishers()
meta.fit(dset)
plot_stat_map(meta.results.get_map('z'), cut_coords=[0, 0, -8],
              draw_cross=False, cmap='RdBu_r')
Stouffer’s with fixed-effects inference¶
meta = Stouffers(inference='ffx', null='theoretical', n_iters=None)
meta.fit(dset)
plot_stat_map(meta.results.get_map('z'), cut_coords=[0, 0, -8],
              draw_cross=False, cmap='RdBu_r')
Stouffer’s with random-effects inference using theoretical null distribution¶
meta = Stouffers(inference='rfx', null='theoretical', n_iters=None)
meta.fit(dset)
plot_stat_map(meta.results.get_map('z'), cut_coords=[0, 0, -8],
              draw_cross=False, cmap='RdBu_r')
Stouffer’s with random-effects inference using empirical null distribution¶
meta = Stouffers(inference='rfx', null='empirical', n_iters=1000)
meta.fit(dset)
plot_stat_map(meta.results.get_map('z'), cut_coords=[0, 0, -8],
              draw_cross=False, cmap='RdBu_r')
Weighted Stouffer’s¶
meta = WeightedStouffers()
meta.fit(dset)
plot_stat_map(meta.results.get_map('z'), cut_coords=[0, 0, -8],
              draw_cross=False, cmap='RdBu_r')
RFX GLM with theoretical null distribution¶
meta = RFX_GLM(null='theoretical', n_iters=None)
meta.fit(dset)
plot_stat_map(meta.results.get_map('z'), cut_coords=[0, 0, -8],
              draw_cross=False, cmap='RdBu_r')
RFX GLM with empirical null distribution¶
meta = RFX_GLM(null='empirical', n_iters=1000)
meta.fit(dset)
plot_stat_map(meta.results.get_map('z'), cut_coords=[0, 0, -8],
              draw_cross=False, cmap='RdBu_r')
Run coordinate-based meta-analyses on 21 pain studies¶
A collection of NIDM-Results packs downloaded from NeuroVault collection 1425, uploaded by Dr. Camille Maumet.
Note
This will likely change as we work to shift database querying to a remote database, rather than handling it locally with NiMARE.
Start with the necessary imports¶
import os
import json
from glob import glob
import numpy as np
import pandas as pd
import nibabel as nib
from scipy.stats import t
from nilearn.masking import apply_mask
from nilearn.plotting import plot_stat_map
import nimare
from nimare.tests.utils import get_test_data_path
Load Dataset¶
dset_file = os.path.join(get_test_data_path(), 'nidm_pain_dset.json')
dset = nimare.dataset.Dataset(dset_file)
mask_img = dset.masker.mask_img
MKDA density analysis¶
mkda = nimare.meta.cbma.MKDADensity(kernel__r=10)
mkda.fit(dset)
corr = nimare.correct.FWECorrector(method='permutation', n_iters=10, n_cores=1)
cres = corr.transform(mkda.results)
plot_stat_map(cres.get_map('logp_level-voxel_corr-FWE_method-permutation'),
              cut_coords=[0, 0, -8], draw_cross=False, cmap='RdBu_r')
MKDA Chi2 with FDR correction¶
mkda = nimare.meta.cbma.MKDAChi2(kernel__r=10)
dset1 = dset.slice(dset.ids)
dset2 = dset.slice(dset.ids)
mkda.fit(dset1, dset2)
corr = nimare.correct.FDRCorrector(method='fdr_bh', alpha=0.001)
cres = corr.transform(mkda.results)
plot_stat_map(cres.get_map('consistency_z_FDR_corr-FDR_method-fdr_bh'),
              threshold=1.65, cut_coords=[0, 0, -8], draw_cross=False,
              cmap='RdBu_r')
MKDA Chi2 with FWE correction¶
corr = nimare.correct.FWECorrector(method='permutation', n_iters=10, n_cores=1)
cres = corr.transform(mkda.results)
plot_stat_map(cres.get_map('consistency_z'), threshold=1.65,
              cut_coords=[0, 0, -8], draw_cross=False, cmap='RdBu_r')
KDA¶
kda = nimare.meta.cbma.KDA(kernel__r=10)
kda.fit(dset)
corr = nimare.correct.FWECorrector(method='permutation', n_iters=10, n_cores=1)
cres = corr.transform(kda.results)
plot_stat_map(cres.get_map('logp_level-voxel_corr-FWE_method-permutation'),
              cut_coords=[0, 0, -8], draw_cross=False, cmap='RdBu_r')
ALE¶
ale = nimare.meta.cbma.ALE()
ale.fit(dset)
corr = nimare.correct.FWECorrector(method='permutation', n_iters=10, n_cores=1)
cres = corr.transform(ale.results)
plot_stat_map(cres.get_map('logp_level-cluster_corr-FWE_method-permutation'),
              cut_coords=[0, 0, -8], draw_cross=False, cmap='RdBu_r')
SCALE¶
ijk = np.vstack(np.where(mask_img.get_data())).T
scale = nimare.meta.cbma.SCALE(ijk=ijk, n_iters=10, n_cores=1)
scale.fit(dset)
plot_stat_map(scale.results.get_map('z_vthresh'), cut_coords=[0, 0, -8],
              draw_cross=False, cmap='RdBu_r')
Automated annotation¶
Train an LDA model and use it¶
This example trains a latent Dirichlet allocation (LDA) model with MALLET using abstracts from Neurosynth.
Start with the necessary imports¶
import os
import nimare
from nimare import annotate
from nimare.tests.utils import get_test_data_path
Load dataset with abstracts¶
dset = nimare.dataset.Dataset.load(
    os.path.join(get_test_data_path(), 'neurosynth_laird_studies.pkl.gz'))
Run model¶
Five iterations will take ~10 minutes
model = annotate.topic.LDAModel(dset.texts, text_column='abstract', n_iters=5)
model.fit()
model.save('lda_model.pkl.gz')
Train a GCLDA model and use it¶
This example trains a generalized correspondence latent Dirichlet allocation (GCLDA) model using abstracts from Neurosynth and then uses it for decoding.
Start with the necessary imports¶
import os
import numpy as np
import nibabel as nib
import nimare
from nimare import annotate, decode
from nimare.tests.utils import get_test_data_path
Load dataset with abstracts¶
dset = nimare.dataset.Dataset.load(
    os.path.join(get_test_data_path(), 'neurosynth_laird_studies.pkl.gz'))
Generate term counts¶
counts_df = annotate.text.generate_counts(
    dset.texts, text_column='abstract', tfidf=False, max_df=0.99, min_df=0)
Run model¶
Five iterations will take ~10 minutes
model = annotate.topic.GCLDAModel(
    counts_df, dset.coordinates, mask=dset.masker.mask_img)
model.fit(n_iters=5, loglikely_freq=5)
model.save('gclda_model.pkl.gz')
Decode an ROI image¶
Make an ROI from a small cube of voxels
arr = np.zeros(dset.masker.mask_img.shape, int)
arr[40:44, 45:49, 40:44] = 1
mask_img = nib.Nifti1Image(arr, dset.masker.mask_img.affine)
# Run the decoder
decoded_df, _ = decode.discrete.gclda_decode_roi(model, mask_img)
decoded_df.sort_values(by='Weight', ascending=False).head(10)
Contributing to NiMARE¶
Welcome to the NiMARE repository! We’re excited you’re here and want to contribute.
These guidelines are designed to make it as easy as possible to get involved. If you have any questions that aren’t discussed below, please let us know by opening an issue!
Before you start you’ll need to set up a free GitHub account and sign in. Here are some instructions.
Governance¶
Governance is a hugely important part of any project.
It is especially important to have clear process and communication channels for open source projects that rely on a distributed network of volunteers, such as NiMARE.
NiMARE is currently supported by a small group of core developers. Even with only a couple of individuals involved in decision making processes, we've found that setting expectations and communicating a shared vision has great value. By starting the governance structure early in our development, we hope to welcome more people into the contributing team. We are committed to continuing to update the governance structures as necessary. Every member of the NiMARE community is encouraged to comment on these processes and suggest improvements.
As the first interim Benevolent Dictator for Life (BDFL), Taylor Salo is ultimately responsible for any major decisions pertaining to NiMARE development. However, all potential changes are explicitly and openly discussed in the described channels of communication, and we strive for consensus amongst all community members.
Code of conduct¶
All NiMARE community members are expected to follow our code of conduct during any interaction with the project. That includes, but is not limited to, online conversations, in-person workshops or development sprints, and when giving talks about the software.
As stated in the code, severe or repeated violations by community members may result in exclusion from collective decision-making and rejection of future contributions to the NiMARE project.
Asking questions about using NiMARE¶
Please direct usage-related questions to NeuroStars, with the tag “nimare”. The NiMARE developers follow NeuroStars, and will be able to answer your question there.
Labels¶
The current list of labels is available here and includes:
- Good first issues contain a task that a member of the team has determined should require minimal knowledge of the existing codebase, and should be good for people new to the project. If you are interested in contributing to NiMARE but aren't sure where to start, we encourage you to take a look at these issues in particular.
- Issues where help is wanted contain a task that a member of the team has determined we need additional help with. If you feel that you can contribute to one of these issues, we especially encourage you to do so!
- Bug reports point to problems in the project. If you find a new bug, please give as much detail as possible in your issue, including steps to recreate the error. If you experience the same bug as one already listed, please add any additional information that you have as a comment.
- Feature requests ask for new features to be added to the project. Please try to make sure that your requested feature is distinct from any others that have already been requested or implemented. If you find one that's similar but there are subtle differences, please reference the other request in your issue.
Making a change¶
We appreciate all contributions to NiMARE, but those accepted fastest will follow a workflow similar to the following:
1. Comment on an existing issue or open a new issue referencing your addition.
This allows other members of the NiMARE development team to confirm that you aren’t overlapping with work that’s currently underway and that everyone is on the same page with the goal of the work you’re going to carry out.
This blog is a nice explanation of why putting this work in up front is so useful to everyone involved.
2. Fork NiMARE.
Fork the NiMARE repository to your profile.
This is now your own unique copy of NiMARE. Changes here won't affect anyone else's work, so it's a safe space to explore edits to the code!
Make sure to keep your fork up to date with the master repository.
3. Make the changes you’ve discussed.
Try to keep the changes focused. We’ve found that working on a new branch makes it easier to keep your changes targeted.
When you’re creating your pull request, please do your best to follow NiMARE’s preferred style conventions. Namely, documentation should follow the numpydoc convention and code should adhere to PEP8 as much as possible.
4. Submit a pull request.
A member of the development team will review your changes to confirm that they can be merged into the main codebase.
Recognizing contributions¶
We welcome and recognize all contributions from documentation to testing to code development. You can see a list of current contributors in our zenodo file. If you are new to the project, don’t forget to add your name and affiliation there!
Thank you!¶
You’re awesome.
- NOTE: These guidelines are based on contributing guidelines from the STEMMRoleModels project.
NiMARE Developer Guide¶
This guide provides a more detailed description of the organization and preferred coding style for NiMARE, for prospective code contributors.
Coding Style¶
NiMARE code should follow PEP8 recommendations. Additionally, we have modeled NiMARE’s code on scikit-learn.
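For reference, a minimal function documented in the numpydoc style might look like the following (the function itself is purely illustrative and not part of NiMARE):
import numpy as np


def count_nonzero_voxels(img):
    """Count the nonzero voxels in an image.

    Parameters
    ----------
    img : :obj:`nibabel.Nifti1Image`
        Image to summarize.

    Returns
    -------
    :obj:`int`
        Number of nonzero voxels in ``img``.
    """
    return int(np.count_nonzero(img.get_data()))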
API¶
nimare.dataset: Dataset IO¶
Classes for representing datasets of images and/or coordinates.
- nimare.dataset.Dataset(source[, target, mask]): Storage container for a coordinate- and/or image-based meta-analytic dataset/database.
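A rough sketch of how a Dataset is typically used (the json file name here is hypothetical; see the examples above for complete workflows):
import nimare

dset = nimare.dataset.Dataset('my_dataset.json')  # hypothetical NIMADS-format json file
z_files = dset.get_images(imtype='z')             # paths to z-statistic images, if available
coordinates = dset.coordinates                    # DataFrame of reported peak coordinates
subset = dset.slice(dset.ids[:10])                # new Dataset containing the first ten studies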
nimare.meta: Meta-analytic algorithms¶
Coordinate-, image-, and effect-size-based meta-analysis estimators.
- nimare.meta.esma: Effect-size meta-analysis functions.
- nimare.meta.ibma: Image-based meta-analysis estimators.
- nimare.meta.cbma.kernel: Methods for estimating thresholded cluster maps from neuroimaging contrasts (Contrasts) from sets of foci and optional additional information (e.g., sample size and test statistic values).
- nimare.meta.cbma.ale: CBMA methods from the activation likelihood estimation (ALE) family.
- nimare.meta.cbma.mkda: CBMA methods from the multilevel kernel density analysis (MKDA) family.
- nimare.meta.cbma.model: Model-based coordinate-based meta-analysis estimators.
- nimare.meta.base
nimare.results: Meta-analytic results¶
Base classes for datasets.
- nimare.results.MetaResult(estimator, mask[, …]): Base class for meta-analytic results.
nimare.correct: Multiple comparisons correction¶
Multiple comparisons correction.
- nimare.correct.FWECorrector([method]): Perform family-wise error rate correction on a meta-analysis.
- nimare.correct.FDRCorrector([alpha, method]): Perform false discovery rate correction on a meta-analysis.
nimare.annotate: Automated annotation¶
Automated annotation tools.
- nimare.annotate.ontology: Automated annotation tools for existing ontologies.
- nimare.annotate.topic: Automated annotation with text-derived topic models.
- nimare.annotate.vector: Automated annotation with text-derived vector models.
- nimare.annotate.text: Text extraction tools.
nimare.decode: Functional characterization analysis¶
Functional decoding tools.
- nimare.decode.discrete: Methods for decoding subsets of voxels (e.g., ROIs) or experiments (e.g., from meta-analytic clustering on a database) into text.
- nimare.decode.continuous: Methods for decoding unthresholded brain maps into text.
- nimare.decode.encode: Methods for encoding text into brain maps.
nimare.parcellate: Meta-analytic parcellation¶
Meta-analytic parcellation tools.
- nimare.parcellate.cbp: Coactivation-based parcellation.
- nimare.parcellate.mamp: Meta-analytic activation modeling-based parcellation (MAMP).
- nimare.parcellate.mapbot: Meta-analytic parcellation based on text (MAPBOT).
nimare.io: Input/Output¶
Input/Output operations.
- nimare.io.convert_neurosynth_to_dict(text_file): Convert Neurosynth database files to a dictionary.
- nimare.io.convert_neurosynth_to_json(…[, …]): Convert Neurosynth dataset text file to a NiMARE json file.
- nimare.io.convert_neurosynth_to_dataset(…): Convert Neurosynth database files into a dictionary and create a NiMARE Dataset from it.
- nimare.io.convert_sleuth_to_dict(text_file): Convert Sleuth text file to a dictionary.
- nimare.io.convert_sleuth_to_json(text_file, …): Convert Sleuth output text file into json.
- nimare.io.convert_sleuth_to_dataset(text_file): Convert Sleuth output text file into a dictionary and create a NiMARE Dataset from it.
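For example, a Sleuth-format text file (as exported for GingerALE) can be converted directly into a NiMARE Dataset. A minimal sketch with hypothetical file names:
import nimare

dset = nimare.io.convert_sleuth_to_dataset('my_sleuth_foci.txt')  # hypothetical Sleuth file
dset.save('sleuth_dataset.pkl.gz')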
nimare.extract: Dataset and model fetching¶
Dataset and trained model downloading functions.
- nimare.extract.download_nidm_pain([…]): Download NIDM Results for 21 pain studies from NeuroVault for tests.
- nimare.extract.download_mallet([data_dir, …]): Download the MALLET toolbox for LDA topic modeling.
- nimare.extract.download_cognitive_atlas([…]): Download Cognitive Atlas ontology and combine Concepts, Tasks, and Disorders to create ID and relationship DataFrames.
- nimare.extract.download_abstracts(dataset, email): Download the abstracts for a list of PubMed IDs.
- nimare.extract.download_peaks2maps_model([…]): Download the trained Peaks2Maps model from OHBM 2018.
nimare.stats: Statistical functions¶
Various statistical helper functions.
- nimare.stats.one_way(data, n): One-way chi-square test of independence.
- nimare.stats.two_way(cells): Two-way chi-square test of independence.
- nimare.stats.pearson(x, y): Correlates row vector x with each row vector in 2D array y.
- nimare.stats.null_to_p(test_value, null_array): Return two-sided p-value for test value against null array.
- nimare.stats.p_to_z(p[, tail]): Convert p-values to z-values.
- nimare.stats.t_to_z(t_values, dof): From Vanessa Sochat's TtoZ package.
- nimare.stats.fdr(p[, q]): Determine FDR threshold given a p value array and desired false discovery rate q.
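A quick sketch of the p-to-z helper (the 'two' tail value is an assumption about how the two-sided option is selected):
import numpy as np
import nimare

p_values = np.array([0.05, 0.01, 0.001])
z_values = nimare.stats.p_to_z(p_values, tail='two')  # 'two' assumed to request two-sided conversion
print(z_values)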
nimare.utils: Utility functions and submodules¶
Utilities.
- nimare.utils.get_template([space, mask]): Load template file.
- nimare.utils.listify(obj): Wraps all non-list or tuple objects in a list; provides a simple way to accept flexible arguments.
- nimare.utils.round2(ndarray): Numpy rounds X.5 values to nearest even integer.
- nimare.utils.vox2mm(ijk, affine): Convert matrix subscripts to coordinates.
- nimare.utils.mm2vox(xyz, affine): Convert coordinates to matrix subscripts.
- nimare.utils.tal2mni(coords): Python version of BrainMap's tal2icbm_other.m.
- nimare.utils.mni2tal(coords): Python version of BrainMap's icbm_other2tal.m.
- nimare.utils.get_resource_path(): Returns the path to general resources, terminated with separator.
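For instance, peak coordinates can be converted from millimeters to matrix indices and back using an image's affine (a sketch; the image file name is hypothetical):
import nibabel as nib
import nimare

img = nib.load('my_template.nii.gz')             # hypothetical image in the target space
xyz = [[-2, -10, -4]]                            # coordinates in mm
ijk = nimare.utils.mm2vox(xyz, img.affine)       # matrix subscripts
xyz_back = nimare.utils.vox2mm(ijk, img.affine)  # back to mm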
nimare.workflows: Common workflows¶
Common meta-analytic workflows.
- nimare.workflows.ale: Workflow for running an ALE meta-analysis from a Sleuth text file.
- nimare.workflows.conperm: Workflow for running a contrast permutation meta-analysis on a set of images.
- nimare.workflows.macm: Perform MACM with ALE algorithm.
- nimare.workflows.peaks2maps: Workflow for contrast permutation meta-analysis on images constructed from coordinates using the Peaks2Maps kernel.
- nimare.workflows.scale: Workflow for running a SCALE meta-analysis from a Sleuth text file.
nimare.base: Base classes¶
Base classes for datasets.
- nimare.base.NiMAREBase(): Base class for NiMARE.
- nimare.base.Estimator(): Estimators take in Datasets and return MetaResults.
- nimare.base.Transformer(): Transformers take in Datasets and return Datasets.
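As used throughout the examples above, Estimators follow a scikit-learn-like pattern: they are fit to a Dataset and expose their outputs through a MetaResult. A sketch using the 21-study pain dataset (this assumes the NIDM-Results images have been downloaded, as in the image-based meta-analysis example):
import os
from nimare.dataset import Dataset
from nimare.extract import download_nidm_pain
from nimare.meta.ibma import Fishers
from nimare.tests.utils import get_test_data_path

# Download the 21-study pain dataset and point the Dataset at the local files
dset_dir = download_nidm_pain()
dset = Dataset(os.path.join(get_test_data_path(), 'nidm_pain_dset.json'))
dset.update_path(dset_dir)

meta = Fishers()                   # any Estimator subclass works the same way
meta.fit(dset)                     # Estimators are fit to a Dataset
z_map = meta.results.get_map('z')  # outputs are exposed through a MetaResult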