nimare.meta.cbma.ale.ALE

class ALE(kernel_transformer=<class 'nimare.meta.kernel.ALEKernel'>, null_method='approximate', n_iters=5000, memory=Memory(location=None), memory_level=0, n_cores=1, **kwargs)[source]

Bases: CBMAEstimator

Activation likelihood estimation.

Changed in version 0.2.1:

  • New parameters: memory and memory_level for memory caching.

Changed in version 0.0.12:

  • Use a 4D sparse array for modeled activation maps.

Parameters:
  • kernel_transformer (KernelTransformer, optional) – Kernel with which to convolve coordinates from dataset. Default is ALEKernel.

  • null_method ({"approximate", "montecarlo"}, optional) –

    Method by which to determine uncorrected p-values. The available options are

    "approximate" (default)

    Build a histogram of summary-statistic values and their expected frequencies under the assumption of random spatial association between studies, via a weighted convolution, as described in Eickhoff et al.[1].

    This method is much faster, but slightly less accurate, than the "montecarlo" option.

    "montecarlo"

    Perform a large number of permutations, in which the coordinates in the studies are randomly drawn from the Estimator's brain mask and the full set of resulting summary-statistic values are incorporated into a null distribution (stored as a histogram for memory reasons).

    This method is much slower, and only slightly more accurate.

  • n_iters (int, default=5000) – Number of iterations to use to define the null distribution. This is only used if null_method=="montecarlo". Default is 5000.

  • memory (instance of joblib.Memory, str, or pathlib.Path) – Used to cache the output of a function. By default, no caching is done. If a str is given, it is the path to the caching directory.

  • memory_level (int, default=0) – Rough estimator of the amount of memory used by caching. Higher value means more memory for caching. Zero means no caching.

  • n_cores (int, default=1) – Number of cores to use for parallelization. This is only used if null_method=="montecarlo". If <=0, defaults to using all available cores. Default is 1.

  • **kwargs – Keyword arguments. Arguments for the kernel_transformer can be assigned here, with the prefix kernel__ in the variable name. Another optional argument is mask.
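The kernel__ prefix routing can be sketched in plain Python. Note that split_kernel_kwargs is a hypothetical helper written for illustration; NiMARE performs this separation internally:

```python
def split_kernel_kwargs(kwargs):
    """Separate kernel__-prefixed arguments from the rest (hypothetical helper)."""
    kernel_args = {}
    other_args = {}
    for key, value in kwargs.items():
        if key.startswith("kernel__"):
            # Strip the prefix so the kernel sees its own parameter name.
            kernel_args[key[len("kernel__"):]] = value
        else:
            other_args[key] = value
    return kernel_args, other_args

# kernel__sample_size would be forwarded to ALEKernel as sample_size.
kernel_args, other_args = split_kernel_kwargs(
    {"kernel__sample_size": 20, "mask": "brain_mask.nii.gz"}
)
```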

Variables:
  • masker (NiftiMasker or similar) – Masker object.

  • inputs (dict) – Inputs to the Estimator. For CBMA estimators, there is only one key: coordinates. This is an edited version of the dataset’s coordinates DataFrame.

  • null_distributions (dict of numpy.ndarray) –

    Null distributions for the uncorrected summary-statistic-to-p-value conversion and any multiple-comparisons correction methods. Entries are added to this attribute if and when the corresponding method is applied.

    If null_method == "approximate":

    • histogram_bins: Array of bin centers for the null distribution histogram, ranging from zero to the maximum possible summary statistic value for the Dataset.

    • histweights_corr-none_method-approximate: Array of weights for the null distribution histogram, with one value for each bin in histogram_bins.

    If null_method == "montecarlo":

    • histogram_bins: Array of bin centers for the null distribution histogram, ranging from zero to the maximum possible summary statistic value for the Dataset.

    • histweights_corr-none_method-montecarlo: Array of weights for the null distribution histogram, with one value for each bin in histogram_bins. These values are derived from the full set of summary statistics from each iteration of the Monte Carlo procedure.

    • histweights_level-voxel_corr-fwe_method-montecarlo: Array of weights for the voxel-level FWE-correction null distribution, with one value for each bin in histogram_bins. These values are derived from the maximum summary statistic from each iteration of the Monte Carlo procedure.

    If correct_fwe_montecarlo() is applied:

    • values_level-voxel_corr-fwe_method-montecarlo: The maximum summary statistic value from each Monte Carlo iteration. An array of shape (n_iters,).

    • values_desc-size_level-cluster_corr-fwe_method-montecarlo: The maximum cluster size from each Monte Carlo iteration. An array of shape (n_iters,).

    • values_desc-mass_level-cluster_corr-fwe_method-montecarlo: The maximum cluster mass from each Monte Carlo iteration. An array of shape (n_iters,).
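The histogram-based null distributions above support a simple right-tailed lookup: normalize the bin weights to probabilities, then sum the mass at or above an observed statistic. A minimal numpy sketch of that idea (not NiMARE's actual lookup, which also handles details such as log-scaled p-values):

```python
import numpy as np

def p_from_null_histogram(stat, bin_centers, weights):
    """Right-tailed p-value from a histogram null distribution.

    `bin_centers` and `weights` play the roles of histogram_bins and
    the histweights_* arrays described above.
    """
    probs = weights / weights.sum()          # normalize weights to probabilities
    return probs[bin_centers >= stat].sum()  # null mass at or above the statistic

# Toy null: probability mass concentrated near zero with a thin right tail
bin_centers = np.linspace(0, 1, 101)
weights = np.exp(-20 * bin_centers)
```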

Notes

The ALE algorithm was originally developed in Turkeltaub et al.[2], then updated in Turkeltaub et al.[3] and Eickhoff et al.[1].

The ALE algorithm is also implemented as part of the GingerALE app provided by the BrainMap organization (https://www.brainmap.org/ale/).

Available correction methods: correct_fwe_montecarlo().

References

Methods

  • correct_fwe_montecarlo(result[, ...]) – Perform FWE correction using the max-value permutation method.

  • fit(dataset[, drop_invalid]) – Fit Estimator to Dataset.

  • get_params([deep]) – Get parameters for this estimator.

  • load(filename[, compressed]) – Load a pickled class instance from file.

  • save(filename[, compress]) – Pickle the class instance to the provided file.

  • set_params(**params) – Set the parameters of this estimator.

correct_fwe_montecarlo(result, voxel_thresh=0.001, n_iters=5000, n_cores=1, vfwe_only=False)[source]

Perform FWE correction using the max-value permutation method.

Only call this method from within a Corrector.

Changed in version 0.0.13: Change cluster neighborhood from faces+edges to faces, to match Nilearn.

Changed in version 0.0.12:

  • Fix the vfwe_only option.

Changed in version 0.0.11:

  • Rename *_level-cluster maps to *_desc-size_level-cluster.

  • Add new *_desc-mass_level-cluster maps that use cluster mass-based inference.

Parameters:
  • result (MetaResult) – Result object from a CBMA meta-analysis.

  • voxel_thresh (float, default=0.001) – Cluster-defining p-value threshold. Default is 0.001.

  • n_iters (int, default=5000) – Number of iterations to build the voxel-level, cluster-size, and cluster-mass FWE null distributions. Default is 5000.

  • n_cores (int, default=1) – Number of cores to use for parallelization. If <=0, defaults to using all available cores. Default is 1.

  • vfwe_only (bool, default=False) – If True, only calculate the voxel-level FWE-corrected maps. Voxel-level correction can be performed very quickly if the Estimator’s null_method was “montecarlo”. Default is False.

Returns:

  • images (dict) – Dictionary of 1D arrays corresponding to masked images generated by the correction procedure. The following arrays are generated by this method:

    • logp_desc-size_level-cluster: Cluster-level FWE-corrected -log10(p) map based on cluster size. This was previously simply called “logp_level-cluster”. This array is not generated if vfwe_only is True.

    • logp_desc-mass_level-cluster: Cluster-level FWE-corrected -log10(p) map based on cluster mass. According to Bullmore et al.[4] and Zhang et al.[5], cluster mass-based inference is more powerful than cluster size. This array is not generated if vfwe_only is True.

    • logp_level-voxel: Voxel-level FWE-corrected -log10(p) map. Voxel-level correction is generally more conservative than cluster-level correction, so it is only recommended for very large meta-analyses (i.e., hundreds of studies), per Eickhoff et al.[6].

  • description_ (str) – A text description of the correction procedure.

Notes

If vfwe_only is False, this method adds three new keys to the null_distributions_ attribute:

  • values_level-voxel_corr-fwe_method-montecarlo: The maximum summary statistic value from each Monte Carlo iteration. An array of shape (n_iters,).

  • values_desc-size_level-cluster_corr-fwe_method-montecarlo: The maximum cluster size from each Monte Carlo iteration. An array of shape (n_iters,).

  • values_desc-mass_level-cluster_corr-fwe_method-montecarlo: The maximum cluster mass from each Monte Carlo iteration. An array of shape (n_iters,).
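These values_* arrays implement the standard max-statistic counting rule: an observed statistic's FWE-corrected p-value is the fraction of Monte Carlo maxima that meet or exceed it. A minimal numpy sketch of that rule (NiMARE's implementation may differ in details such as small-sample corrections):

```python
import numpy as np

def fwe_p_value(observed, null_maxima):
    """FWE-corrected p-value under the max-value permutation method."""
    null_maxima = np.asarray(null_maxima)
    # Fraction of iterations whose maximum statistic reaches the observed value.
    return (null_maxima >= observed).mean()

# Toy null maxima standing in for values_level-voxel_corr-fwe_method-montecarlo
rng = np.random.default_rng(0)
null_maxima = rng.normal(loc=3.0, scale=0.5, size=5000)
```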

See also

nimare.correct.FWECorrector

The Corrector from which to call this method.

References

Examples

>>> from nimare.correct import FWECorrector
>>> from nimare.meta.cbma.mkda import MKDADensity
>>> meta = MKDADensity()
>>> result = meta.fit(dset)
>>> corrector = FWECorrector(method='montecarlo', voxel_thresh=0.01,
...                          n_iters=5, n_cores=1)
>>> cresult = corrector.transform(result)
fit(dataset, drop_invalid=True)[source]

Fit Estimator to Dataset.

Parameters:
  • dataset (Dataset) – Dataset object to analyze.

  • drop_invalid (bool, optional) – Whether to automatically ignore studies missing the required data. Default is True.

Returns:

Results of Estimator fitting.

Return type:

MetaResult

Variables:

inputs (dict) – Inputs used in _fit.

Notes

The fit method is a light wrapper that runs input validation and preprocessing before fitting the actual model. Estimators’ individual “fitting” methods are implemented as _fit, although users should call fit.
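The fit/_fit split described above is a template-method pattern, and can be sketched with a toy estimator (SimpleEstimator and its validation rule are hypothetical, for illustration only):

```python
class SimpleEstimator:
    """Toy estimator following the public-fit / private-_fit convention."""

    def fit(self, dataset, drop_invalid=True):
        # Light wrapper: validate and preprocess, then delegate to _fit.
        if drop_invalid:
            dataset = [study for study in dataset if study is not None]
        return self._fit(dataset)

    def _fit(self, dataset):
        # Subclasses would implement the actual model fitting here.
        return {"n_studies": len(dataset)}

result = SimpleEstimator().fit(["study1", None, "study2"])
```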

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters:

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params – Parameter names mapped to their values.

Return type:

dict

classmethod load(filename, compressed=True)[source]

Load a pickled class instance from file.

Parameters:
  • filename (str) – Name of file containing object.

  • compressed (bool, default=True) – If True, the file is assumed to be compressed and gzip will be used to load it. Otherwise, it will assume that the file is not compressed. Default is True.

Returns:

obj – Loaded class object.

Return type:

class object

save(filename, compress=True)[source]

Pickle the class instance to the provided file.

Parameters:
  • filename (str) – File to which object will be saved.

  • compress (bool, optional) – If True, the file will be compressed with gzip. Otherwise, the uncompressed version will be saved. Default is True.

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Return type:

self