nimare.meta.kernel.ALEKernel
- class ALEKernel(fwhm=None, sample_size=None, memory=Memory(location=None), memory_level=0)[source]
Bases: KernelTransformer
Generate ALE modeled activation images from coordinates and sample size.
By default (if neither fwhm nor sample_size is provided), the FWHM of the kernel is determined on a study-wise basis, based on the sample sizes available in the input, via the method described in Eickhoff et al.[1].
Changed in version 0.2.1: New parameters: memory and memory_level for memory caching.
Changed in version 0.0.13: Removed the "dataset" return_type option.
Changed in version 0.0.12: Removed the low-memory option in favor of sparse arrays for kernel transformers.
- Parameters:
fwhm (float, optional) – Full-width at half-maximum for the Gaussian kernel, if you want a constant kernel across Contrasts. Mutually exclusive with sample_size.
sample_size (int, optional) – Sample size, used to derive the FWHM for the Gaussian kernel based on formulae from Eickhoff et al. (2012). This sample size overwrites the Contrast-specific sample sizes in the dataset, in order to hold the kernel constant across Contrasts. Mutually exclusive with fwhm.
memory (instance of joblib.Memory, str, or pathlib.Path) – Used to cache the output of a function. By default, no caching is done. If a str is given, it is the path to the caching directory.
memory_level (int, default=0) – Rough estimator of the amount of memory used by caching. Higher value means more memory for caching. Zero means no caching.
References
Methods
get_params([deep]) – Get parameters for this estimator.
load(filename[, compressed]) – Load a pickled class instance from file.
save(filename[, compress]) – Pickle the class instance to the provided file.
set_params(**params) – Set the parameters of this estimator.
transform(dataset[, masker, return_type]) – Generate modeled activation images for each Contrast in dataset.
- classmethod load(filename, compressed=True)[source]
Load a pickled class instance from file.
- Parameters:
filename (str) – Name of the file containing the pickled object.
compressed (bool, optional) – Whether the file is compressed. Default is True.
- Returns:
obj – Loaded class object.
- Return type:
class object
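The save/load pair is a pickle round trip. A minimal sketch with the standard library, assuming a gzip-compressed pickle (a common convention for `compressed=True`; check NiMARE's implementation for the exact format):

```python
import gzip
import os
import pickle
import tempfile

obj = {"fwhm": 8.0, "sample_size": None}  # stand-in for a kernel instance

path = os.path.join(tempfile.mkdtemp(), "kernel.pkl.gz")
with gzip.open(path, "wb") as f:          # analogous to save(filename, compress=True)
    pickle.dump(obj, f)

with gzip.open(path, "rb") as f:          # analogous to load(filename, compressed=True)
    loaded = pickle.load(f)
```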
- set_params(**params)[source]
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter>, so that it is possible to update each component of a nested object.
- Return type:
self
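The `<component>__<parameter>` convention comes from the scikit-learn estimator interface. A minimal sketch of how such a `set_params` behaves, using a hypothetical `Estimator` class (not NiMARE code):

```python
class Estimator:
    """Toy estimator illustrating the scikit-learn set_params convention."""

    def __init__(self, fwhm=None, sample_size=None):
        self.fwhm = fwhm
        self.sample_size = sample_size

    def set_params(self, **params):
        for key, value in params.items():
            if "__" in key:
                # '<component>__<parameter>' routes the value to a nested object
                component, _, subkey = key.partition("__")
                getattr(self, component).set_params(**{subkey: value})
            else:
                setattr(self, key, value)
        return self  # returns self, matching the documented return type

est = Estimator().set_params(fwhm=8.0)
```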
- transform(dataset, masker=None, return_type='image')[source]
Generate modeled activation images for each Contrast in dataset.
- Parameters:
dataset (Dataset or pandas.DataFrame) – Dataset for which to make images. Can be a DataFrame if necessary.
masker (img_like or None, optional) – Mask to apply to MA maps. Required if dataset is a DataFrame. If None (and dataset is a Dataset), the Dataset's masker attribute will be used. Default is None.
return_type ({'sparse', 'array', 'image', 'summary_array'}, optional) – Whether to return a sparse matrix ('sparse'), a 2D numpy array ('array'), a list of niimgs ('image'), or a 1D array summarizing across experiments ('summary_array'). Default is 'image'.
- Returns:
imgs – If return_type is 'sparse', a 4D sparse array (E x S), where E is the number of unique experiments, and the remaining 3 dimensions are equal to the shape of the images. If return_type is 'array', a 2D numpy array (C x V), where C is contrast and V is voxel. If return_type is 'summary_array', a 1D numpy array (V,) containing a summary measure for each voxel that has been combined across experiments. If return_type is 'image', a list of modeled activation images (one for each of the Contrasts in the input dataset).
- Return type:
(C x V) numpy.ndarray or list of nibabel.Nifti1Image or Dataset
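The relationship between the array-style return types can be sketched with NumPy. The shapes below mirror the documented ones; the combination rule shown for the summary, 1 − ∏(1 − MA), is the standard ALE union across experiments, used here for illustration rather than taken from NiMARE's source:

```python
import numpy as np

n_experiments, shape = 3, (4, 4, 4)
rng = np.random.default_rng(0)

# Stand-in for the 'sparse' return: E experiments x 3 spatial dimensions
ma_maps = rng.uniform(0, 0.2, size=(n_experiments, *shape))

# 'array' return: flatten space to get a C x V matrix (contrasts x voxels)
ma_2d = ma_maps.reshape(n_experiments, -1)

# 'summary_array' return: one value per voxel, combined across experiments
summary = 1 - np.prod(1 - ma_2d, axis=0)
```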