The first quarter of my PhD program in Neurobiology at the University of Chicago is finished. It was an intense and exciting first step, and it has brought major changes in how I think about different aspects of neuroscience. Since I am between quarters, I want to report what I have done and learned over the past 2.5 months.
- Cellular and molecular mechanisms are the starting point of neuroscience and should always be at the center of theory.
As my background suggests, I did not take any biology classes during my undergraduate education. When I was first introduced to computational neuroscience as a branch of applied mathematics, I focused heavily on the function of the individual unit. Because of Hodgkin and Huxley's fascinating work on the squid giant axon, I conceived of (nearly) every compartment of the nervous system as a piece of micro-machinery that functions as evolution designed it. But as I read the literature on the many kinds of neurons in different parts of the brain, I began to realize that a deterministic description of the system is not sufficient to stand in for the real unit of the system. That was the point at which I lost interest in the classic dynamical-systems view of neuroscience. When we write down a mathematical description to model a system or a single unit, we lock the system in place, allowing it no variability. Yet, unlike the computers and machines we have created, even a single neuron displays great variability in its responses. Even if we allow for noise, i.e., a stochastic description of the system, it is the system itself that changes its response depending on context, not only on the input signal. This has been an issue for many theoretical neuroscientists and is usually discussed as the reliability of the system.
How can we get a handle on the reliability of neurons so that we can describe their true responses and plasticity? Since a single neuron is a physical object, and its response is the combined cellular and molecular activity of even smaller units, there must be some bounded range that describes the variability. The difficulty arises from the fact that so many different molecules and mechanisms are involved in even a single action potential; it is almost impossible (or so I still think) to model all of the reactions of the different ions, receptors, channels, and protein kinases. The plasticity of neurons happens at the molecular level, e.g., the intracellular and extracellular concentrations of Ca2+ ions affect the generation of LTD or LTP. This plasticity in turn affects populations of neurons and, in the end, the function of the whole brain. Thus, we should not lock the system up with a fixed set of equations that ultimately leads us to a false description of brain function.
From this point of view, a statistical description of the microsystem with some distribution will play a huge role, and we need to impose stochasticity not only on the input but also on the system itself. In a previous study of 'context-dependent coding in single neurons', we used a generalized linear model (GLM) to describe the coding of a motoneuron that displays a medium-duration afterhyperpolarization (mAHP). This internal feedback depends on the mean level of the input stimulus, which drives the calcium dynamics. As a class of simplified models, the GLM alters the firing rate (or the probability of firing in a given time window) according to the projection of the input signal onto a stimulus filter plus the projection of the recent history of neural activity onto a spike-history filter. This description was quite successful in predicting the exact timing of spikes for a given input stimulus, with some restrictions on the input statistics.
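Such a GLM can be sketched in a few lines. Everything below — the filter shapes, the baseline rate, and the bin size — is a hypothetical choice of mine, meant only to illustrate how the stimulus projection and an mAHP-like suppressive history projection combine to set the spike probability in each time bin:

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.001                     # 1 ms bins
T = 2000                       # number of bins (2 s)
stim = rng.standard_normal(T)  # white-noise input (arbitrary units)

# Hypothetical filters: an excitatory stimulus filter and a
# suppressive spike-history filter mimicking the mAHP.
t_k = np.arange(50) * dt
k = np.exp(-t_k / 0.01) - 0.5 * np.exp(-t_k / 0.02)  # stimulus filter (50 ms)
t_h = np.arange(200) * dt
h = -3.0 * np.exp(-t_h / 0.05)                       # history filter (200 ms)
b = -4.0                                             # baseline log-rate

spikes = np.zeros(T)
for t in range(T):
    drive = b
    # project the recent stimulus onto the stimulus filter
    n = min(t + 1, len(k))
    drive += stim[t - n + 1:t + 1][::-1] @ k[:n]
    # project recent spiking onto the history filter
    m = min(t, len(h))
    if m > 0:
        drive += spikes[t - m:t][::-1] @ h[:m]
    # exponential nonlinearity -> Bernoulli spike probability this bin
    p = 1.0 - np.exp(-np.exp(drive) * dt)
    spikes[t] = rng.random() < p

print("spike count:", int(spikes.sum()))
```

The key point for the discussion that follows: the filters k and h here are fixed, so the history only shifts the output probability — the system itself never changes state.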
However, as discussed above, the system itself should alter its state depending on its own recent activity, not merely the probability of its response. In this sense, altering the parameter values of the linear system is preferable to a static linear filter that depends only on the temporal pattern of the input; that is, the temporal patterns of the input and the response should determine the state of the system, and the state should in turn determine the probability of firing.
These thoughts introduced me to the notion of a higher-dimensional linear filter that takes the temporal patterns of the input and the response as its domain and returns the probability of a spike as its output, i.e., a functional k : R^n x R^m -> [0, 1].
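One way to make this concrete, as a rough sketch rather than a worked-out model: let the response history set an internal state (here, a simple adaptation gain) that reshapes the stimulus filter before the usual nonlinearity is applied. The 1/(1 + spike count) gain rule and all parameter values below are my own assumptions:

```python
import numpy as np

def spike_prob(stim_hist, spike_hist, k, h, b=-4.0, dt=0.001):
    # Internal state: a gain that adapts with recent spiking, so the
    # filter itself -- not just the output probability -- depends on
    # the response history. The 1/(1 + spikes) rule is an assumption.
    gain = 1.0 / (1.0 + spike_hist.sum())
    drive = b + gain * (stim_hist @ k) + spike_hist @ h
    rate = np.exp(drive)             # conditional intensity
    return 1.0 - np.exp(-rate * dt)  # spike probability in this bin

# Toy usage: the same stimulus under different response histories
# puts the system in different states.
k = np.full(50, 0.2)     # flat stimulus filter (hypothetical)
h = np.full(200, -0.5)   # suppressive history filter (hypothetical)
stim = np.ones(50)
quiet = np.zeros(200)
busy = np.zeros(200); busy[:5] = 1.0

p_quiet = spike_prob(stim, quiet, k, h)
p_busy = spike_prob(stim, busy, k, h)   # lower: adapted state
```

Here the recent response enters the mapping twice — once additively through h, and once by rescaling k — which is the simplest instance of letting the (input, response) pair select the system's state.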
- Target motion predictability determines the predictability of gaze decisions from retinal inputs.
In order to stabilize a moving target’s retinal image, the brain must make continuous visual estimates of target motion and evaluate the trade-off between smoothly modulating eye movement and issuing a saccade. Pursuit offers the advantage of uninterrupted visual information but is not able to compensate for large retinal errors; saccades, on the other hand, are able to reduce large errors quickly, but they are likely to degrade visual information during the process. If target motion is unpredictable, gaze behavior must be driven by delayed visual estimates. But if the target trajectory can be extrapolated into the future because its motion is predictable, then pursuit and saccades may be coordinated to maximize both visual information and tracking performance.
We investigate an existing formulation of the decision rule between pursuit and saccade introduced by Lefevre and colleagues (2002). This quantity, Eye-Crossing Time (TxE), is defined as the time it would take the eye trajectory to cross the target. We tracked eye movements in human subjects with a Dual Purkinje Image eye tracker (Fourward Technologies) and in monkeys with scleral coils. We used three experimental paradigms: 1D and 2D double step-ramp experiments, and a single-player version of the video game Pong. In the double step-ramp paradigm, randomized trial presentation and a large parameter space minimized predictability. In Pong, the target dynamics – a small spot target with constant speed and elastic collisions with the arena walls – were predictable.
We extend the existing definition of Eye-Crossing Time (TxE) to two dimensions and show that there is a decision rule that captures gaze behavior across both experimental paradigms and both species. When conditioned on a saccade 125 ms in the future, TxE is distributed equivalently for both the double step-ramp experiment and Pong, and is consistent between humans and non-human primates. Saccades are most likely when TxE is less than zero; pursuit is most likely when TxE is between 0 and 200 ms. That means that the occurrence of a saccade tells us something about the Eye-Crossing Time in the recent past.
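For concreteness, here is a minimal 1D sketch of how TxE and the threshold rule above could be computed. The constant-velocity extrapolation and the exact treatment of the thresholds are my simplifications, not the formulation used in the study:

```python
import numpy as np

def eye_crossing_time(eye_pos, eye_vel, tgt_pos, tgt_vel):
    # TxE: time for the eye trajectory to cross the target, assuming
    # both keep their current velocity:
    #   eye_pos + eye_vel * t == tgt_pos + tgt_vel * t
    rel_vel = eye_vel - tgt_vel
    if np.isclose(rel_vel, 0.0):
        return np.inf                 # parallel trajectories never cross
    return (tgt_pos - eye_pos) / rel_vel

def gaze_decision(txe):
    # Threshold rule from the text: saccades are most likely when
    # TxE < 0, pursuit when TxE is between 0 and 200 ms. Treating
    # TxE > 200 ms as a saccade trigger is my assumption.
    return "pursuit" if 0.0 <= txe <= 0.2 else "saccade"

# Eye 2 deg behind a 10 deg/s target, moving at 20 deg/s:
# it crosses the target in 0.2 s, so smooth pursuit suffices.
print(gaze_decision(eye_crossing_time(0.0, 20.0, 2.0, 10.0)))  # pursuit
```

A negative TxE (the eye falling ever further behind the target) correctly maps to the saccade branch, matching the conditioning result described above.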
But is the converse true? Can we predict gaze behavior from TxE? We find that the likelihood of observing a saccade given the occurrence of an appropriate TxE peaks at ~130 ms, as expected. But this holds true only in the double step-ramp experiment, not in Pong. We find that a TxE-based decision rule holds when gaze behavior is driven by feedforward visual estimates. When motion becomes predictable, gaze behavior is no longer captured by the same decision rule. We apply information-theoretic analysis to quantify the interaction between target, gaze, and time.
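As one example of the kind of analysis meant here — a minimal sketch, not the analysis actually used in the study — a plug-in estimate of the mutual information between, say, a binned TxE signal and a binary saccade indicator:

```python
import numpy as np

def mutual_information(x, y, bins=10):
    # Plug-in mutual information estimate (in bits) from a joint
    # histogram. A minimal sketch: real gaze analyses need bias
    # correction for finite sample sizes.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

rng = np.random.default_rng(0)
saccade = rng.integers(0, 2, 10000).astype(float)
print(mutual_information(saccade, saccade))            # ~1 bit: dependent
print(mutual_information(saccade, rng.random(10000)))  # ~0 bits: independent
```

Estimates like this, computed at a range of time lags between target, gaze, and the candidate decision variable, are one way to quantify how much of the gaze decision is predictable from each signal.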