Two-sample ALE meta-analysis

Meta-analytic projects often involve a number of common steps when comparing two or more samples.

In this example, we replicate the ALE-based analyses from Enge et al. [1].

A common project workflow with two meta-analytic samples involves the following:

  1. Run a within-sample meta-analysis of the first sample.

  2. Characterize/summarize the results of the first meta-analysis.

  3. Run a within-sample meta-analysis of the second sample.

  4. Characterize/summarize the results of the second meta-analysis.

  5. Compare the two samples with a subtraction analysis.

  6. Compare the two within-sample meta-analyses with a conjunction analysis.

import os
from pathlib import Path

import matplotlib.pyplot as plt
from nilearn.plotting import plot_stat_map

Load Sleuth text files into Datasets

The data for this example are a subset of studies from a meta-analysis on semantic cognition in children [1]. A first group of studies probed children’s semantic world knowledge (e.g., correctly naming an object after hearing its auditory description) while a second group of studies asked children to decide if two (or more) words were semantically related to one another or not.

Individual group ALEs

Computing separate ALE analyses for each group is not strictly necessary for performing the subtraction analysis but will help the experimenter to appreciate the similarities and differences between the groups.
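As a reminder of what the estimator computes: each experiment's foci are smoothed into a modeled activation (MA) map, and the ALE value at each voxel is the probabilistic union of the MA values across experiments. A minimal NumPy sketch of that combination step (kernel smoothing and the null distribution are handled by NiMARE itself):

```python
import numpy as np

def ale_union(ma_maps):
    """Combine per-experiment modeled activation (MA) maps into an ALE map.

    The ALE value at each voxel is the probabilistic union of the MA values:
    ALE = 1 - prod_i (1 - MA_i).
    """
    ma = np.asarray(ma_maps)  # shape: (n_experiments, n_voxels)
    return 1.0 - np.prod(1.0 - ma, axis=0)

# Toy example: two experiments, three voxels.
ma_maps = np.array([[0.1, 0.5, 0.0],
                    [0.2, 0.5, 0.0]])
print(ale_union(ma_maps))  # -> approximately [0.28, 0.75, 0.0]
```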

from nimare.correct import FWECorrector
from nimare.meta.cbma import ALE

ale = ALE(null_method="approximate")
knowledge_results = ale.fit(knowledge_dset)
related_results = ale.fit(related_dset)

corr = FWECorrector(method="montecarlo", voxel_thresh=0.001, n_iters=100, n_cores=2)
knowledge_corrected_results = corr.transform(knowledge_results)
related_corrected_results = corr.transform(related_results)

fig, axes = plt.subplots(figsize=(12, 10), nrows=2)
knowledge_img = knowledge_corrected_results.get_map(
    "z_desc-size_level-cluster_corr-FWE_method-montecarlo"
)
plot_stat_map(
    knowledge_img,
    cut_coords=4,
    display_mode="z",
    title="Semantic knowledge",
    threshold=2.326,  # cluster-level p < .01, one-tailed
    cmap="RdBu_r",
    symmetric_cbar=True,
    vmax=4,
    axes=axes[0],
    figure=fig,
)

related_img = related_corrected_results.get_map(
    "z_desc-size_level-cluster_corr-FWE_method-montecarlo"
)
plot_stat_map(
    related_img,
    cut_coords=4,
    display_mode="z",
    title="Semantic relatedness",
    threshold=2.326,  # cluster-level p < .01, one-tailed
    cmap="RdBu_r",
    symmetric_cbar=True,
    vmax=4,
    axes=axes[1],
    figure=fig,
)
fig.show()
[Figure: cluster-level FWE-corrected z maps for the semantic knowledge (top) and semantic relatedness (bottom) ALE analyses]

Characterize the relative contributions of experiments in the ALE results

NiMARE contains two methods for this: Jackknife and FocusCounter. We will show both below, but for the sake of speed we will only apply one to each subgroup meta-analysis.
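Jackknife re-runs the meta-analysis with each experiment held out and measures the resulting change in each cluster, while FocusCounter simply tallies how many of each experiment's reported foci fall inside each significant cluster. A toy sketch of the counting logic, using hypothetical voxel coordinates:

```python
import numpy as np

def count_foci(foci_ijk, cluster_mask):
    """Count how many foci (voxel ijk coordinates) fall inside a cluster mask."""
    return int(sum(cluster_mask[tuple(ijk)] for ijk in foci_ijk))

# Toy 4x4x4 grid with one 2x2x2 "cluster" in the corner.
mask = np.zeros((4, 4, 4), dtype=bool)
mask[:2, :2, :2] = True

study_a = [(0, 0, 0), (1, 1, 1), (3, 3, 3)]  # two foci inside, one outside
study_b = [(3, 0, 0)]                        # no foci inside

print(count_foci(study_a, mask))  # 2
print(count_foci(study_b, mask))  # 0
```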

from nimare.diagnostics import FocusCounter

counter = FocusCounter(
    target_image="z_desc-size_level-cluster_corr-FWE_method-montecarlo",
    voxel_thresh=None,
)
knowledge_diagnostic_results = counter.transform(knowledge_corrected_results)

Clusters table.

knowledge_clusters_table = knowledge_diagnostic_results.tables[
    "z_desc-size_level-cluster_corr-FWE_method-montecarlo_tab-clust"
]
knowledge_clusters_table.head(10)
   Cluster ID          X      Y     Z   Peak Stat  Cluster Size (mm3)
0  PositiveTail 1   36.0   24.0  -6.0    2.326348                1128
1  PositiveTail 2   -2.0   20.0  46.0    2.326348                3472
2  PositiveTail 3  -44.0   12.0  30.0    2.326348                3096
3  PositiveTail 4  -34.0   22.0   0.0    2.326348                 992
4  PositiveTail 5  -52.0  -38.0   4.0    1.475791                 672
5  PositiveTail 6   54.0  -28.0   4.0    0.806421                 512
6  PositiveTail 7   -6.0  -14.0  12.0    0.806421                 512


Contribution table. Here, PositiveTail refers to clusters with positive statistics.

knowledge_count_table = knowledge_diagnostic_results.tables[
    "z_desc-size_level-cluster_corr-FWE_method-montecarlo_diag-FocusCounter"
    "_tab-counts_tail-positive"
]
knowledge_count_table.head(10)
   id                  PositiveTail 1  PositiveTail 2  PositiveTail 3  PositiveTail 4  PositiveTail 5  PositiveTail 6  PositiveTail 7
0  arnoldussen2006nc-               0               0               1               0               0               0               0
1  arnoldussen2006rm-               0               0               1               0               0               0               0
2  backes2002-                      0               1               1               0               0               0               0
3  balsamo2002-                     0               0               0               0               1               0               0
4  balsamo2006-                     0               1               1               0               0               0               0
5  bauer2017-                       1               0               0               0               0               0               1
6  berl2014-                        1               2               2               0               0               1               1
7  brauer2007-                      0               1               0               0               0               0               0
8  gaillard2001-                    0               0               1               0               0               0               0
9  gaillard2003-                    1               1               1               0               0               0               0


from nimare.diagnostics import Jackknife

jackknife = Jackknife(
    target_image="z_desc-size_level-cluster_corr-FWE_method-montecarlo",
    voxel_thresh=None,
)
related_diagnostic_results = jackknife.transform(related_corrected_results)
related_jackknife_table = related_diagnostic_results.tables[
    "z_desc-size_level-cluster_corr-FWE_method-montecarlo_diag-Jackknife_tab-counts_tail-positive"
]
related_jackknife_table.head(10)
   id           PositiveTail 1  PositiveTail 2  PositiveTail 3  PositiveTail 4  PositiveTail 5
0  booth2001-         0.000000        0.000000        0.029415        0.054821        0.000000
1  booth2003-         0.000000        0.000000        0.000000        0.000008        0.000000
2  booth2007-         0.021799        0.068333        0.051598        0.000028        0.000019
3  cao2008-           0.134026        0.065610        0.035607        0.094222        0.000000
4  chou2006a-         0.211268        0.123099        0.000075        0.095687        0.000000
5  chou2006b-         0.210240        0.117018        0.004724        0.117981        0.000000
6  chou2009-          0.188491        0.157500        0.223250        0.212472        0.330668
7  chou2019-          0.000000        0.061734        0.000000        0.000000        0.000000
8  fan2020-           0.000000        0.070051        0.002654        0.101595        0.000000
9  lee2011aud-        0.000026        0.000246        0.164283        0.068827        0.243831


Subtraction analysis

Typically, one would use at least 5000 iterations for a subtraction analysis, but we have reduced this to 10 for this example. Similarly, we use a voxel-level z-threshold of 0.01 for the diagnostics, but in practice one would use a more stringent threshold (e.g., 1.65).
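Conceptually, the subtraction null is built by shuffling the group labels and recomputing the group difference many times, then standardizing the observed difference against that permutation distribution. A schematic of this logic on toy per-study maps (not ALESubtraction's exact implementation, which operates on ALE values):

```python
import numpy as np

def subtraction_z(group1, group2, n_iters=1000, seed=0):
    """Permutation-based z-like score for the difference of group means.

    group1, group2: arrays of shape (n_studies, n_voxels) of per-study values.
    """
    rng = np.random.default_rng(seed)
    observed = group1.mean(axis=0) - group2.mean(axis=0)
    pooled = np.vstack([group1, group2])
    n1 = group1.shape[0]
    null = np.empty((n_iters, pooled.shape[1]))
    for i in range(n_iters):
        # Shuffle group assignments and recompute the difference map.
        perm = rng.permutation(pooled.shape[0])
        null[i] = pooled[perm[:n1]].mean(axis=0) - pooled[perm[n1:]].mean(axis=0)
    # Standardize the observed difference against the permutation null.
    return (observed - null.mean(axis=0)) / null.std(axis=0)

rng = np.random.default_rng(42)
g1 = rng.normal(1.0, 0.5, size=(20, 5))  # group 1 "activates" more strongly
g2 = rng.normal(0.0, 0.5, size=(20, 5))
z = subtraction_z(g1, g2, n_iters=500)  # strongly positive z at every voxel
```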

from nimare.meta.cbma import ALESubtraction
from nimare.reports.base import run_reports
from nimare.workflows import PairwiseCBMAWorkflow

workflow = PairwiseCBMAWorkflow(
    estimator=ALESubtraction(n_iters=10, n_cores=1),
    corrector="fdr",
    diagnostics=FocusCounter(voxel_thresh=0.01, display_second_group=True),
)
res_sub = workflow.fit(knowledge_dset, related_dset)

Report

Finally, a NiMARE report is generated from the MetaResult. To run the documentation locally, use root_dir = Path(os.getcwd()).parents[1] / "docs" / "_build" instead of the root below.

root_dir = Path(os.getcwd()).parents[1] / "_readthedocs"
html_dir = root_dir / "html" / "auto_examples" / "02_meta-analyses" / "08_subtraction"
html_dir.mkdir(parents=True, exist_ok=True)

run_reports(res_sub, html_dir)

Conjunction analysis

To determine the overlap of the meta-analytic results, a conjunction image can be computed by (a) identifying voxels that were statistically significant in both individual group maps and (b) selecting, for each of these voxels, the smaller of the two group-specific z values (Nichols et al. [2]).
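On plain arrays, this minimum-statistic conjunction reduces to a voxel-wise minimum, zeroed wherever either map falls below threshold (a sketch; NiMARE's conjunction_analysis works on the NIfTI images directly):

```python
import numpy as np

def min_statistic_conjunction(z1, z2, threshold=2.326):
    """Minimum-statistic conjunction of two z maps at a common threshold."""
    both_significant = (z1 >= threshold) & (z2 >= threshold)
    # Keep the smaller z value where both maps are significant; zero elsewhere.
    return np.where(both_significant, np.minimum(z1, z2), 0.0)

z1 = np.array([3.1, 2.5, 1.0, 4.0])
z2 = np.array([2.8, 1.2, 3.0, 3.5])
print(min_statistic_conjunction(z1, z2))  # -> [2.8, 0.0, 0.0, 3.5]
```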

from nimare.workflows.misc import conjunction_analysis

img_conj = conjunction_analysis([knowledge_img, related_img])

plot_stat_map(
    img_conj,
    cut_coords=4,
    display_mode="z",
    title="Conjunction",
    threshold=2.326,  # cluster-level p < .01, one-tailed
    cmap="RdBu_r",
    symmetric_cbar=True,
    vmax=4,
)
[Figure: minimum-statistic conjunction of the semantic knowledge and semantic relatedness z maps]

References

[1] Enge, A., Abdel Rahman, R., & Skeide, M. A. (2021). A meta-analysis of fMRI studies of semantic cognition in children. NeuroImage, 241, 118436.

[2] Nichols, T., Brett, M., Andersson, J., Wager, T., & Poline, J.-B. (2005). Valid conjunction inference with the minimum statistic. NeuroImage, 25(3), 653-660.

Total running time of the script: (3 minutes 38.752 seconds)
