nimare.decode.continuous.gclda_decode_map

gclda_decode_map(model, image, topic_priors=None, prior_weight=1)[source]

Perform image-to-text decoding for continuous inputs using the method from Rubin et al. (2017).

The method used in this function was originally described in Rubin et al.[1].

Parameters:
  • model (GCLDAModel) – Model object needed for decoding.

  • image (nibabel.nifti1.Nifti1Image or str) – Whole-brain image to decode into text. Must be in the same space as the model and dataset. The model’s template is available in model.dataset.mask_img.

  • topic_priors (numpy.ndarray of float, optional) – A 1d array of shape (n_topics,) with values for topic weighting. If None, no weighting is done. Default is None.

  • prior_weight (float, optional) – The weight by which the prior will affect the decoding. Default is 1.

Returns:

  • decoded_df (pandas.DataFrame) – A DataFrame with the word-tokens and their associated weights.

  • topic_weights (numpy.ndarray of float) – The weights of the topics used in decoding.
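To illustrate how the return values are typically consumed, here is a hedged sketch with synthetic stand-ins (the word tokens, column name, and weight values below are made-up examples, not real model output):

```python
import numpy as np
import pandas as pd

# Synthetic stand-ins for the function's outputs (values are made up).
decoded_df = pd.DataFrame(
    {"Weight": [0.9, 0.4, 0.1]},
    index=pd.Index(["pain", "motor", "memory"], name="Term"),
)
topic_weights = np.array([0.7, 0.3])

# Rank word tokens by weight to find the terms that best describe the map.
top_terms = decoded_df.sort_values(by="Weight", ascending=False)
print(top_terms.head())
```

Because the weights are arbitrarily scaled, only their ranking is meaningful; comparing absolute weight values across different input images is not.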

Notes

  Notation      Meaning
  v             Voxel
  t             Topic
  w             Word type
  i             Input image
  p(t|v)        Probability of topic given voxel (p_topic_g_voxel)
  \tau_{t}      Topic weight vector (topic_weights)
  p(w|t)        Probability of word type given topic (p_word_g_topic)
  \omega        1d array from input image (input_values)

  1. Compute p(t|v) (p_topic_g_voxel).

    • From gclda.model.Model.get_spatial_probs()

  2. Squeeze the input image to a 1d array \omega (input_values).

  3. Compute the topic weight vector (\tau_{t}) by multiplying p(t|v) by the input image.

    • \tau_{t} = p(t|v) \cdot \omega

  4. Multiply \tau_{t} by p(w|t).

    • p(w|i) \propto \tau_{t} \cdot p(w|t)

  5. The resulting vector (word_weights) contains arbitrarily scaled term weights for the input image.
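The steps above can be sketched with plain NumPy. This is an illustrative re-implementation, not NiMARE's internal code; the array shapes, the function name, and the multiplicative handling of topic_priors are assumptions made for the sketch:

```python
import numpy as np

def decode_map_sketch(p_topic_g_voxel, p_word_g_topic, input_values,
                      topic_priors=None, prior_weight=1.0):
    """Illustrative sketch of the decoding steps (shapes are assumptions).

    p_topic_g_voxel : (n_voxels, n_topics) array, p(t|v)
    p_word_g_topic  : (n_word_types, n_topics) array, p(w|t)
    input_values    : (n_voxels,) array, the flattened input image (omega)
    """
    # Step 3: topic weight vector tau_t = p(t|v) . omega
    topic_weights = np.dot(p_topic_g_voxel.T, input_values)

    # Optional topic weighting (assumed multiplicative, scaled by prior_weight).
    if topic_priors is not None:
        priors = np.asarray(topic_priors, dtype=float)
        topic_weights *= (priors / priors.sum()) * prior_weight

    # Step 4: word weights proportional to tau_t . p(w|t)
    word_weights = np.dot(p_word_g_topic, topic_weights)
    return word_weights, topic_weights
```

Because step 4 yields weights only up to proportionality, the output is useful for ranking terms, not for interpreting absolute magnitudes.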

References

[1] Rubin, T. N., Koyejo, O., Gorgolewski, K. J., Jones, M. N., Poldrack, R. A., & Yarkoni, T. (2017). Decoding brain activity using a large-scale probabilistic functional-anatomical atlas of human cognition. PLOS Computational Biology, 13(10), e1005649.