nimare.decode.encode.gclda_encode

gclda_encode(model, text, out_file=None, topic_priors=None, prior_weight=1.0)

Perform text-to-image encoding according to the method described in Rubin et al. (2017) [1].

Parameters:
  • model (GCLDAModel) – Model object needed for decoding.

  • text (str or list) – Text to encode into an image.

  • out_file (str, optional) – If not None, writes the encoded image to a file.

  • topic_priors (numpy.ndarray of float, optional) – A 1d array of size (n_topics) with values for topic weighting. If None, no weighting is done. Default is None.

  • prior_weight (float, optional) – The weight by which the prior will affect the encoding. Default is 1.

Returns:
  • img (nibabel.nifti1.Nifti1Image) – The encoded image.

  • topic_weights (numpy.ndarray of float) – The weights of the topics used in the encoding.
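
A minimal usage sketch follows; the model file name and query text are placeholders, and it assumes a GCLDAModel that was previously trained and saved (the class is importable from nimare.annotate.gclda in recent NiMARE releases):

    from nimare.annotate.gclda import GCLDAModel
    from nimare.decode.encode import gclda_encode

    # Hypothetical file names and query text, for illustration only.
    model = GCLDAModel.load("gclda_model.pkl.gz")  # model saved earlier with its save method
    encoded_img, topic_weights = gclda_encode(
        model,
        "motor cortex finger tapping",
        out_file="gclda_encoded.nii.gz",
    )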

Notes

Notation      Meaning
------------  ------------------------------------------------------
v             Voxel
t             Topic
w             Word type
h             Input text
p(v|t)        Probability of voxel given topic (p_voxel_g_topic_)
\tau_{t}      Topic weight vector (topic_weights)
p(w|t)        Probability of word type given topic (p_word_g_topic)
\omega        1d array from input image (input_values)

  1. Compute p(v|t) (p_voxel_g_topic).

    • From gclda.model.Model.get_spatial_probs()

  2. Compute p(t|w) (p_topic_g_word).

  3. Vectorize input text according to model vocabulary.

  4. Reduce p(t|w) to only include word types in input text.

  5. Compute p(t|h) (p_topic_g_text) by multiplying p(t|w) by word counts for input text.

  6. Sum topic weights (\tau_{t}) across words.

    • \tau_{t} = \sum_{i}{p(t|h_{i})}

  7. Compute voxel weights.

    • p(v|h) \propto p(v|t) \cdot \tau_{t}

  8. The resulting array (voxel_weights) reflects arbitrarily scaled voxel weights for the input text.

  9. Unmask and reshape voxel_weights into brain image.
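
As a rough illustration of steps 3 through 8, here is a minimal NumPy sketch; the function name, array names, and shapes are assumptions for illustration rather than NiMARE's internal implementation, and it ignores topic_priors and prior_weight:

    import numpy as np

    def encode_sketch(p_voxel_g_topic, p_topic_g_word, word_counts):
        """Toy encoding: weight topics by the input text, then project to voxels.

        p_voxel_g_topic : (n_voxels, n_topics) array, p(v|t)
        p_topic_g_word  : (n_words, n_topics) array, p(t|w)
        word_counts     : (n_words,) counts of vocabulary words in the input text
        """
        in_text = word_counts > 0
        # Steps 4-5: keep only word types that appear in the text and weight
        # each word's topic distribution by its count.
        p_topic_g_text = p_topic_g_word[in_text] * word_counts[in_text][:, np.newaxis]
        # Step 6: topic weight vector, tau_t = sum_i p(t|h_i).
        topic_weights = p_topic_g_text.sum(axis=0)
        # Step 7: voxel weights proportional to p(v|t) . tau_t.
        voxel_weights = p_voxel_g_topic.dot(topic_weights)
        # Step 8: voxel_weights is arbitrarily scaled; step 9 would unmask it
        # into a brain image (e.g., with a nilearn masker's inverse_transform).
        return voxel_weights, topic_weights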

References

  1. Rubin, T. N., Koyejo, O., Gorgolewski, K. J., Jones, M. N., Poldrack, R. A., & Yarkoni, T. (2017). Decoding brain activity using a large-scale probabilistic functional-anatomical atlas of human cognition. PLOS Computational Biology, 13(10), e1005649.