nimare.meta.cbma.ale
.ALESubtraction
- class ALESubtraction(kernel_transformer=<class 'nimare.meta.kernel.ALEKernel'>, n_iters=10000, memory_limit=None, **kwargs)[source]
Bases:
nimare.meta.cbma.base.PairwiseCBMAEstimator
ALE subtraction analysis.
Changed in version 0.0.8:
[FIX] Assume non-symmetric null distribution.
Changed in version 0.0.7:
[FIX] Assume a zero-centered and symmetric null distribution.
- Parameters
kernel_transformer (nimare.meta.kernel.KernelTransformer, optional) – Kernel with which to convolve coordinates from the dataset. Default is ALEKernel.
n_iters (int, optional) – Default is 10000.
memory_limit (str or None, optional) – Memory limit to apply to data. If None, no memory management will be applied. Otherwise, the memory limit will be used to (1) assign memory-mapped files and (2) restrict memory during array creation to the limit. Default is None.
**kwargs – Keyword arguments. Arguments for the kernel_transformer can be assigned here, with the prefix ‘kernel__’ in the variable name. Another optional argument is mask.
Notes
This method was originally developed in [1] and refined in [2].
The ALE subtraction algorithm is also implemented as part of the GingerALE app provided by the BrainMap organization (https://www.brainmap.org/ale/).
Warning
This implementation contains one key difference from the original version. In the original version, group 1 > group 2 difference values are only evaluated for voxels significant in the group 1 meta-analysis, and group 2 > group 1 difference values are only evaluated for voxels significant in the group 2 meta-analysis. In NiMARE’s implementation, the analysis is run in a two-sided manner for all voxels in the mask.
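The difference described above can be illustrated with a small numpy sketch. This is not NiMARE's implementation: the MA arrays below are random stand-ins for real modeled-activation maps, and the two-sided p-value computation is a simplified permutation scheme in which group labels are shuffled and the voxel-wise null is used as-is, without forcing symmetry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-study modeled-activation (MA) maps for two groups,
# shaped (n_studies, n_voxels). Real NiMARE derives these by convolving
# reported coordinates with a kernel (e.g., ALEKernel).
ma_group1 = rng.uniform(0, 0.3, size=(8, 100))
ma_group2 = rng.uniform(0, 0.3, size=(6, 100))

def ale(ma):
    """Combine per-study MA maps into an ALE map: 1 - prod(1 - MA)."""
    return 1 - np.prod(1 - ma, axis=0)

observed_diff = ale(ma_group1) - ale(ma_group2)

# Build a voxel-wise null by permuting group membership; the empirical
# null is used directly rather than assumed symmetric.
pooled = np.vstack([ma_group1, ma_group2])
n1 = ma_group1.shape[0]
n_iters = 200
null_diffs = np.empty((n_iters, pooled.shape[1]))
for i in range(n_iters):
    perm = rng.permutation(pooled.shape[0])
    null_diffs[i] = ale(pooled[perm[:n1]]) - ale(pooled[perm[n1:]])

# Two-sided p-values, evaluated at every voxel in the mask rather than
# only at voxels significant in either single-group meta-analysis.
p = np.mean(np.abs(null_diffs) >= np.abs(observed_diff), axis=0)
```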
References
- 1
Laird, Angela R., et al. “ALE meta‐analysis: Controlling the false discovery rate and performing statistical contrasts.” Human brain mapping 25.1 (2005): 155-164. https://doi.org/10.1002/hbm.20136
- 2
Eickhoff, Simon B., et al. “Activation likelihood estimation meta-analysis revisited.” Neuroimage 59.3 (2012): 2349-2361. https://doi.org/10.1016/j.neuroimage.2011.09.017
- compute_summarystat(data)[source]
Compute summary statistics from data.
The actual summary statistic varies across Estimators. For ALE and SCALE, the values are known as ALE values. For (M)KDA, they are “OF” scores.
- Parameters
data (array, pandas.DataFrame, or list of img_like) – Data from which to estimate summary statistics. The data can be: (1) a 1D contrast-length or 2D contrast-by-voxel array of MA values, (2) a DataFrame containing coordinates to produce MA values, or (3) a list of imgs containing MA values.
- Returns
stat_values (1d array) – Summary statistic values. One value per voxel.
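For ALE, the summary statistic at each voxel is the probability that at least one study activates there, combined across studies as 1 − ∏(1 − MA). A minimal numpy sketch for input case (1), a 2D contrast-by-voxel array:

```python
import numpy as np

# Two contrasts (rows) by three voxels (columns) of MA values.
ma_values = np.array(
    [
        [0.5, 0.1, 0.0],
        [0.5, 0.2, 0.0],
    ]
)

# ALE value per voxel: 1 - prod_i (1 - MA_i).
stat_values = 1 - np.prod(1 - ma_values, axis=0)
# First voxel: 1 - (0.5 * 0.5) = 0.75
```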
- correct_fwe_montecarlo(result, voxel_thresh=0.001, n_iters=10000, n_cores=-1, vfwe_only=False)[source]
Perform FWE correction using the max-value permutation method.
Only call this method from within a Corrector.
- Parameters
result (nimare.results.MetaResult) – Result object from a KDA meta-analysis.
voxel_thresh (float, optional) – Cluster-defining p-value threshold. Default is 0.001.
n_iters (int, optional) – Number of iterations to build the vFWE and cFWE null distributions. Default is 10000.
n_cores (int, optional) – Number of cores to use for parallelization. If <=0, defaults to using all available cores. Default is -1.
- Returns
images (dict) – Dictionary of 1D arrays corresponding to masked images generated by the correction procedure. The following arrays are generated by this method: ‘vthresh’, ‘logp_level-cluster’, and ‘logp_level-voxel’.
See also
nimare.correct.FWECorrector
The Corrector from which to call this method.
Examples
>>> meta = MKDADensity()
>>> result = meta.fit(dset)
>>> corrector = FWECorrector(method='montecarlo', voxel_thresh=0.01, n_iters=5, n_cores=1)
>>> cresult = corrector.transform(result)
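As a rough illustration of the max-value permutation idea (not NiMARE's implementation), the voxel-level FWE-corrected p-value compares each observed statistic against the null distribution of the maximum statistic across all voxels. The null maps below are random draws standing in for maps regenerated under permuted coordinates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed statistic map with one strong voxel.
observed = rng.normal(0, 1, size=500)
observed[10] = 6.0

# Null distribution of the per-map maximum, built over iterations. A
# real analysis would recompute summary-statistic maps from randomized
# coordinates instead of drawing random values.
n_iters = 1000
max_null = np.empty(n_iters)
for i in range(n_iters):
    max_null[i] = rng.normal(0, 1, size=500).max()

# Voxel-level FWE-corrected p: fraction of permutation maxima that
# meet or exceed each observed value.
p_vfwe = (max_null[None, :] >= observed[:, None]).mean(axis=1)
```

Because the null uses the per-map maximum, a voxel is only deemed significant if its statistic beats the most extreme value expected anywhere in the map, which controls the family-wise error rate.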
- fit(dataset1, dataset2, drop_invalid=True)[source]
Fit Estimator to two Datasets.
- Parameters
dataset1/dataset2 (nimare.dataset.Dataset) – Dataset objects to analyze.
- Returns
nimare.results.MetaResult – Results of Estimator fitting.
Notes
The fit method is a light wrapper that runs input validation and preprocessing before fitting the actual model. Estimators’ individual “fitting” methods are implemented as _fit, although users should call fit.
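The fit/_fit split described in the note can be sketched as follows. This is a minimal illustration of the pattern, not NiMARE's actual code; the class and validation logic are hypothetical:

```python
# Sketch of a public fit() wrapper delegating to a private _fit().
class PairwiseEstimatorSketch:
    def fit(self, dataset1, dataset2, drop_invalid=True):
        # Input validation and preprocessing happen in the wrapper.
        for name, dset in (("dataset1", dataset1), ("dataset2", dataset2)):
            if dset is None:
                raise ValueError(f"{name} must be provided")
        # ... coordinate validation, mask checks, etc. would go here ...
        return self._fit(dataset1, dataset2)

    def _fit(self, dataset1, dataset2):
        # Subclasses implement the actual model fitting here.
        return {"datasets": (dataset1, dataset2)}

result = PairwiseEstimatorSketch().fit("dset1", "dset2")
```

Keeping validation in the public method lets every subclass inherit consistent input checking while overriding only the model-specific _fit.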