Probabilistic filtering methods such as the Kalman filter and unscented Kalman filter enable decoding of movement commands in brain-machine interfaces (BMIs). In most prior work, the tuning models defining the relationship between desired movements and neural recordings have been held fixed after initial model fitting. However, several studies of neuronal variability have shown that the modulation patterns of many neurons change over time, making filtering with fixed models suboptimal. Updating tuning models without access to the desired movements (or assumed proxies for them) for re-fitting is a difficult machine learning problem and has not previously been demonstrated for real neural data.
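To make the decoding setting concrete, the following is a minimal sketch of Kalman-filter decoding with a fixed linear tuning model, the baseline the abstract contrasts against. All names and the linear observation model are illustrative assumptions, not the paper's actual (unscented, nonlinear) implementation; the point is that the tuning matrix `H` is fit once and never revisited, so decoding degrades if neural modulation drifts.

```python
import numpy as np

def kalman_decode(Y, A, W, H, Q, x0, P0):
    """Decode neural observations Y (T x n_neurons) into movement states.

    A, W: linear movement dynamics and process-noise covariance.
    H, Q: fixed linear tuning model (firing rate = H @ movement) and
          observation-noise covariance -- held constant after fitting.
    x0, P0: initial state mean and covariance.
    """
    x, P = x0, P0
    states = []
    for y in Y:
        # Predict step: propagate the movement state through its dynamics.
        x = A @ x
        P = A @ P @ A.T + W
        # Update step: correct with the neural observation via tuning model H.
        S = H @ P @ H.T + Q                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (y - H @ x)
        P = P - K @ H @ P
        states.append(x.copy())
    return np.array(states)
```

If the true tuning drifts away from the `H` used here, the innovation `y - H @ x` becomes systematically biased and decoding accuracy drops, which is the motivation for updating the tuning model online.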

In this study, we show that a Bayesian tuning model update method built upon the unscented Kalman filter can address this problem. Our method periodically updates tuning model parameters using its own filter output as training data ("self-training"). These periodic updates are computationally light enough that, when amortized as a background computation, they can keep up with changing tuning in real time. To improve the accuracy of the filter output used to update the tuning model, our method smooths the filter output with the Rauch-Tung-Striebel smoother. We track the uncertainty in the tuning model parameters and perform Bayesian updates.

Our update method was tested on neuronal ensemble data recorded in two monkeys that performed joystick reaching tasks. The monkeys were chronically implanted with multielectrode arrays in multiple cortical areas, including primary motor cortex, dorsal premotor cortex, and supplementary motor area. We used recordings collected over 4-6 months to perform off-line reconstructions. Our update method significantly improved filtering accuracy versus an analogous filter with a fixed tuning model. The results suggest that our update method can be used to improve BMI performance for neuronal populations with time-varying properties.