Probabilistic Co-Control in Brain-Computer Interfaces: Uncertainty as a Control Signal in Brain-to-Text Decoding

Abstract

Neural decoders serve as probabilistic interfaces in co-control brain-to-text BCIs, where predicted uncertainty shapes hypothesis generation and language-model integration, enabling safe decision-making under uncertainty. However, it remains unclear whether these decoders produce reliable and informative uncertainty estimates, or how training objectives shape these properties. This work characterizes and improves uncertainty representations in brain-to-text decoding. We extend two metrics, expected calibration error (ECE) and resolution (RES), to evaluate sequential probabilistic predictions from frame-level phoneme estimates to word-level hypotheses, quantifying the reliability and informativeness of model uncertainty. Using this framework, we analyze neural decoders trained with connectionist temporal classification (CTC). To isolate the causal role of uncertainty independent of accuracy, we manipulate predicted probability distributions while holding predicted sequences fixed. Motivated by the observed failures, we further examine the role of the training objective and propose a two-stage cross-entropy (CE) formulation that decouples alignment inference from classification. We show that widely used CTC-trained neural decoders in brain-to-text BCIs produce systematically over-confident predictions, with high confidence persisting even when predictions are incorrect. Controlled manipulations of the predicted distributions reveal that improved ECE and RES enhance hypothesis generation and language-model integration by promoting diverse alternatives and more effective re-ranking of hypotheses aligned with user intent. Mechanistically, CTC relies on over-confident predictions to resolve alignment ambiguity. Replacing CTC with CE loss yields significantly more reliable and informative probabilistic predictions without degrading decoding accuracy. Uncertainty thus emerges as a system-level design variable in brain-to-text interfaces.
Calibrated uncertainty from neural decoders enables effective integration with independently trained language models and reliable error detection. This work reframes uncertainty from a passive output into an active control signal, identifies key components and evaluation criteria for probabilistic co-control, and outlines a pathway toward next-generation BCIs that support increasingly complex interactions with the world.
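To make the calibration metric concrete: expected calibration error (ECE) measures the gap between a model's stated confidence and its empirical accuracy, averaged over confidence bins. The sketch below is a generic binned-ECE implementation, not the paper's sequential extension; the function name and the example numbers are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: the bin-frequency-weighted absolute gap between mean
    predicted confidence and empirical accuracy within each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Illustrative over-confident decoder: ~0.99 confidence on every
# prediction, but only 60% of predictions are correct.
conf = np.full(1000, 0.99)
hits = np.array([1] * 600 + [0] * 400)
print(round(expected_calibration_error(conf, hits), 2))  # ~0.39
```

A well-calibrated decoder whose confidence tracks its accuracy would drive this gap, and hence the ECE, toward zero.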

Publication
bioRxiv