Computational neuroscience

June 14, 2013

In this edition of the U.S. Army Research Laboratory Fellows Corner, ARL Fellow Dr. Piotr Franaszczuk discusses the field of computational neuroscience from its roots to the present day and how this field of research can lead to a better understanding of brain structure and function. Franaszczuk believes that advances across the field and ongoing research efforts at ARL will eventually lead to models and measurements precise enough to predict how differences in individual brains will influence function and, ultimately, performance.

The field of computational neuroscience can trace its roots to a 1907 article by Louis Lapicque1, who introduced the first biologically inspired neuron model: integrate-and-fire. This model captured an essential principle of neural processing, the all-or-none response of nerve and muscle (discovered experimentally in the late 19th century) to changes in membrane potential. This simple model, along with its many modified versions, is used to this day in simulations of large neural networks.
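The integrate-and-fire principle described above fits in a few lines of code. Below is a minimal leaky integrate-and-fire neuron; the parameter values (time constant, threshold, input level) are illustrative choices of my own, not values from the article or from Lapicque's work:

```python
# Minimal leaky integrate-and-fire neuron (illustrative parameters).
def simulate_lif(i_input, dt=0.1, tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Euler integration of tau * dV/dt = -(V - v_rest) + I; returns spike times (ms)."""
    v = v_rest
    spikes = []
    for step, i in enumerate(i_input):
        v += dt / tau * (-(v - v_rest) + i)
        if v >= v_thresh:           # all-or-none response: threshold crossing
            spikes.append(step * dt)
            v = v_reset             # membrane potential resets after a spike
    return spikes

# Constant suprathreshold drive produces regular, periodic firing.
spike_times = simulate_lif([20.0] * 1000)   # 100 ms of constant input (arbitrary units)
```

Because the model ignores everything below threshold except a leaky integration of input, it is cheap enough to replicate millions of times, which is why variants of it still dominate large network simulations.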

The development of contemporary biophysical models of realistic neurons and neural networks began only in 1952, after the publication of the Hodgkin-Huxley model2, which was based on experimental measurements of the squid giant axon. Many improvements and alternatives to this model followed, along two main directions. The first uses more elaborate geometries, with more detailed descriptions of dendrites and axons, and includes more types of ionic channels. Unfortunately, this approach increases the demand for computational power, which in practice limits the number of neurons that can be connected into network models. The second direction simplifies the Hodgkin-Huxley model while preserving the ability to model ionic currents, allowing larger-scale neural network simulations.
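For reference, the original single-compartment Hodgkin-Huxley model itself fits in a short script. The sketch below uses the standard textbook parameter set for the squid giant axon with simple Euler integration; it is illustrative only, and is not one of the extended or simplified models discussed above:

```python
import math

# Single-compartment Hodgkin-Huxley model (standard textbook parameters).
def simulate_hh(i_inj, t_end=50.0, dt=0.01):
    """Membrane-potential trace (mV) for a constant injected current i_inj (uA/cm^2)."""
    C, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3     # capacitance, maximal conductances
    e_na, e_k, e_l = 50.0, -77.0, -54.387         # reversal potentials (mV)
    v = -65.0
    m, h, n = 0.053, 0.596, 0.318                 # gating variables at rest
    trace = []
    for _ in range(int(t_end / dt)):
        # Voltage-dependent opening/closing rates of the gating variables.
        a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        # Ionic currents: sodium, potassium, leak.
        i_na = g_na * m**3 * h * (v - e_na)
        i_k = g_k * n**4 * (v - e_k)
        i_l = g_l * (v - e_l)
        v += dt / C * (i_inj - i_na - i_k - i_l)
        trace.append(v)
    return trace
```

With `i_inj = 10.0` this produces repetitive action potentials overshooting 0 mV; with `i_inj = 0.0` the membrane stays near its -65 mV resting potential. Even this single compartment costs roughly an order of magnitude more arithmetic per time step than an integrate-and-fire unit, which makes the trade-off between the two modeling directions concrete.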

My interest in modeling neural networks originated in the early 1990s. During the 1980s I was involved in developing and implementing signal processing methods to analyze brain activity measured with the electroencephalogram (EEG). Since the EEG measures the electrical potential generated by the neural activity of billions of neurons, it can be modeled as a stochastic process. This approach proved very fruitful, and many connectivity measures currently used in brain research (including my own) rely on it. As a postdoctoral fellow at the University of Maryland, I had an opportunity to make single-neuron measurements in neural cultures. This enabled me to investigate how changing the interactions between neurons by blocking inhibition results in hyperexcitability and bursting, which may represent what happens during an epileptic seizure in humans. This prompted me to start developing a simulation model that would initially reproduce network behavior in neural cultures and eventually lead to an explanation of some of the patterns observed in EEG recordings. Our first study3 in 1997 included just a two-neuron excitatory loop, which already provided some interpretation for patterns of bursting activity in neural cultures. This was done on a single personal computer, whereas larger studies performed at that time required big mainframe computers. Since then, we have expanded our model to simulations of networks of thousands (on 16-node single-core clusters) or even millions (on multicore servers) of neurons, depending on connection density and the complexity of single neurons. Sometimes it is not necessary, or even advisable, to simulate all neurons in real brain tissue. In our recent study4 of an abnormal epileptic network, a 16x16 grid of cortical minicolumns comprising 65,536 neurons was enough to simulate network activity patterns resembling those observed in human epileptic seizures.
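The idea behind a two-neuron excitatory loop sustaining activity can be illustrated with a toy model. The sketch below is not the model from the 1997 study; it is a hypothetical pair of leaky integrate-and-fire neurons with delayed mutual excitation, in which a single initial input is enough to produce self-sustained reverberating firing:

```python
# Toy two-neuron excitatory loop (hypothetical parameters, not the 1997 model).
# Each spike delivers, after a conduction delay, a suprathreshold kick to the
# partner neuron, so one initial input sustains reverberating activity.
def excitatory_loop(t_end=50.0, dt=0.1, tau=10.0, w=1.2, delay=2.0, v_th=1.0):
    """Two leaky integrate-and-fire neurons; returns spike times per neuron (ms)."""
    v = [0.0, 0.0]                    # membrane potentials (rest = 0, threshold = v_th)
    pending = [(0.0, 0)]              # (arrival time, target neuron): one initial kick
    spikes = ([], [])
    for step in range(int(t_end / dt)):
        t = step * dt
        for arrival, target in [p for p in pending if p[0] <= t]:
            v[target] += w            # delayed excitatory input arrives
            pending.remove((arrival, target))
        for i in (0, 1):
            v[i] += dt / tau * (-v[i])          # leak back toward rest
            if v[i] >= v_th:
                spikes[i].append(t)
                v[i] = 0.0
                pending.append((t + delay, 1 - i))  # excite the partner after the delay
    return spikes
```

Because the synaptic weight `w` exceeds the threshold, the two neurons keep re-exciting each other indefinitely; weakening `w` below threshold lets the leak win and the activity dies out, which is the kind of qualitative regime change such minimal loops can expose.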
Figure 1 shows a simplified representation of a modeled patch of cortex and snapshots of evolving bursting activity in a subset of the modeled neurons. In our current, not yet published work, we were able to simulate the cortical response to sensory input in agreement with experimental data using only 8,000 modeled cells. This significantly increased the speed of computations without losing the ability to interpret the simulated results when compared with the experimentally recorded local field potential and neuronal firing rates.

Generation of an EEG-like signal from physiologically motivated models of neural networks also provides a tool for validating commonly used measures of EEG activity. Some of these measures, particularly various connectivity measures, were developed under the very strong assumption that the EEG signal is a linear stochastic process, and it is not clear whether this assumption is justified for activity generated by nonlinear neural networks. In our current study we are investigating the limits of applicability of connectivity measures on signals simulated from networks with known connectivity patterns. This provides a valuable tool for validating conclusions about connectivity patterns in the brain derived from experimental studies.
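The validation logic can be illustrated with a toy example: generate signals with a known, built-in coupling and check whether a simple linear measure recovers it. The sketch below uses a lagged cross-correlation, chosen purely for simplicity (it is not one of the connectivity measures discussed above), on a hypothetical pair of signals in which x drives y with a known delay:

```python
import random

# Toy validation of a linear connectivity estimate on signals with a
# known, built-in coupling: x drives y with a fixed delay, so a lagged
# cross-correlation should peak at that delay.
def lagged_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    xs, ys = x[:len(x) - lag], y[lag:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
true_delay = 5
x = [random.gauss(0, 1) for _ in range(2000)]
# y is x delayed by true_delay samples, plus independent noise.
y = [0.0] * true_delay + [xi + 0.3 * random.gauss(0, 1) for xi in x[:-true_delay]]
best_lag = max(range(1, 11), key=lambda k: lagged_corr(x, y, k))
```

Here the estimate recovers the built-in delay because the coupling really is linear; the open question raised above is precisely whether such measures remain reliable when the "ground truth" signals come from nonlinear spiking networks instead.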

The major obstacle to current modeling efforts is the rather limited and widely diverse data available for model development. Most of the parameters for our modeling are based on data from neural cultures, slices of brain tissue, or limited animal studies. Also, since not all brain regions have been studied at the same level of detail, there is a need for a more systematic approach to collecting basic data. The recent White House Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative provides seed funds for doing this. Hopefully, further development of recording methods, including novel optical imaging techniques, will provide more of the information needed for realistic computational models. At the same time, continuous improvements in computational capabilities will allow larger models capturing more detail. We are now at an exciting moment in the development of computational neuroscience that may lead to a better understanding of brain structure and function. This requires an interdisciplinary effort: developing and refining recording methods, developing analysis and visualization tools for "big data," and solving the numerical problems that arise as the number of interconnected neurons in neural network models grows. ARL is well positioned to make strong contributions in this field, with both the computational resources and the engineering and scientific expertise needed. We already have two ARL Director's Strategic Initiative (DSI) projects joining experts from different ARL directorates in the effort to better understand the structure and function of the human brain, as well as to build an understanding of mild traumatic brain injury (mTBI), using measurements and computational modeling at different levels of neural structure.
All these efforts will eventually lead to models and measurements precise enough to predict how differences in individual brains, whether normal or pathologically altered by damage, will influence function and, ultimately, performance.

1 Lapicque, L. (1907). "Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation". J. Physiol. Pathol. Gen. 9: 620–635.

2 Hodgkin, A. L.; Huxley, A. F. (1952). "A quantitative description of membrane current and its application to conduction and excitation in nerve". The Journal of physiology 117 (4): 500–544. PMC 1392413

3 Kudela, P.; Franaszczuk, P. J.; Bergey, G. K. (1997). "A simple computer model of excitable synaptically connected neurons". Biol. Cybern. 77: 71–77.

4 Anderson, W. S.; Azhar, F.; Kudela, P.; Bergey, G. K.; Franaszczuk, P. J. (2012). "Epileptic seizures from abnormal networks: Why some seizures defy predictability". Epilepsy Res. 99 (3): 202–213.


Last Update / Reviewed: June 14, 2013