Guiding Brain-to-Vocalization Decoder Design Using Structured Generalization Error

Abstract

State-of-the-art intracortical neuroprostheses currently enable communication at over 60 words per minute for anarthric individuals by training on more than 10,000 sentences to account for phoneme variability across word contexts. It remains unclear whether this performance can be maintained when decoding naturalistic speech production. We introduce a vocal-unit-level generalization test to explicitly evaluate neural decoder performance on a diverse behavioral repertoire. Using neural decoders that model zebra finch vocalization, an analog of human vocal production, we compare three decoders with different input types: spikes, factors, and rates. The factors and rates are inferred by trained LFADS models that capture population neural dynamics. While the conventional random-holdout generalization error is similar across all three models, factor- and rate-based decoders outperform spike-based decoders on the vocal-unit-level generalization test. This suggests adaptability to flexible vocalization inference from partially observed behavioral variation during training, and motivates further exploration of decoders that incorporate neural and vocalization dynamics.

Date
Jul 17, 2024 3:30 PM
Event
Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 2024
Location
Orlando, FL