When we analyze behavioral and neural data, most of the methods we use assume the data came from a stationary process - that is, the observations were all sampled from the same distribution. But how often is this true? Can we tell? In what ways could we be fooled if this assumption is false?
The presocratic philosopher Heraclitus is commonly credited with the saying "You can never step in the same river twice" (*). By much the same argument, you can never test the same neuron or behavioral subject twice. Brains are in constant flux. They are changed by experience on multiple timescales, including development, learning, short-term memory, adaptation to the statistics of the environment, sensitivity to context (location, social setting, etc.), modulation by the animal's changing internal states (hunger, fatigue, stress, etc.), behavioral sequential dependencies, and interactions with other "task-irrelevant" neural and behavioral processes. We therefore think it is more reasonable to assume that all neural and behavioral time series are non-stationary.
Systems neuroscience needs better ways of detecting and characterizing non-stationarity in neural and behavioral data. Basic "stationarity tests" are both too narrow and too weak. We need a better understanding of the types and timescales of non-stationarity present in our data, and of the extent to which undetectable or unavoidable non-stationarity confounds the statistical methods we use. Finally, we need to develop statistical methods that are more robust to non-stationarity, formalizing and extending approaches already in the literature. Such methods could also prove useful for other non-stationary biological signals, and for non-biological time series.
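To make the "too narrow" point concrete, here is a minimal toy sketch (our own illustration, not one of the project's methods): a crude split-half comparison of means easily flags a slow drift in the mean of a simulated signal, yet the very same check is blind to other forms of non-stationarity, such as changes in variance or correlation structure, and says nothing about the timescale of the change.

```python
import random
import statistics

random.seed(0)

# Toy "neural" time series whose mean drifts slowly upward:
# a hypothetical non-stationary process, for illustration only.
n = 2000
drift = [0.001 * t for t in range(n)]          # slow linear drift in the mean
series = [random.gauss(mu, 1.0) for mu in drift]

# Crude split-half check: compare the means of the two halves.
# This catches slow mean drift, but would miss, e.g., a change in
# variance with constant mean -- one sense in which simple tests
# are "too narrow".
first, second = series[:n // 2], series[n // 2:]
m1, m2 = statistics.mean(first), statistics.mean(second)

# Rough z-score for the difference of half-means (ignores any
# autocorrelation, which in real neural data would bias this test).
pooled_sd = statistics.stdev(series)
se = pooled_sd * (2 / (n // 2)) ** 0.5
z = (m2 - m1) / se
print(f"half-means differ by {m2 - m1:.2f} (z = {z:.1f})")
```

The comment about autocorrelation hints at the "too weak" side: temporal dependence inflates or deflates such test statistics, so naive significance thresholds cannot be trusted on real recordings.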
We are tackling these problems in collaboration with statistician Armin Schwartzman in a new NIH BRAIN Initiative project funded by the National Institute of Neurological Disorders and Stroke (NINDS).