A Project Encephalon & The Science Paradox Collaboration article
"Out of intense complexities, intense simplicities emerge." - Winston Churchill
What is the brain?
A 3lb organ composed of nerves and tissue that integrates sensory and motor responses? Right, simplify that.
A mass of nervous tissue divided into lobes, that controls all behavior? Simplify that again.
A network of neurons that is connected to form trillions of synapses…
Perfect. Stripped down to its roots, the brain is simply a network. A “network algorithm” so beautifully configured that it earned itself the reputation of being the seat of intelligence itself.
Undeniably, we live in a complex world. From the cells we are made of to the ecosystems we make up, we form innumerable connections and interactions that render the whole greater than the sum of its parts, and somehow qualitatively different from them. These connections and interactions converge into a system of their own, and this intricate clockwork is known as a complex system.
In essence, complex systems theory in data science studies how a society is greater than the sum of its people, how an online community differs from its connections taken in isolation, or how the brain is so much more than a network of neurons.
Walking down Memory Lane
Since time immemorial, scientists and philosophers have been trying to decipher how the brain does what it does. Back in 1669, Nicolaus Steno, a pioneering anatomist, said, "The brain, the masterpiece of creation, is almost unknown to us." (Findlen & Bence, 2000) Unfortunately, the brain has remained such an enigma that the saying holds even today.
However, between the late 17th century and today, we have passed many incredible milestones, some of which have been instrumental in forging the path to where the brain sciences stand today. So before we move into the story of the brain as a complex system, it is only fitting to 'walk down memory lane' (quite literally) and laud some brains for the progress they made in cracking the "brain-igma"!
From Aristotle and Galen in antiquity through Leonardo da Vinci in the Renaissance, thinkers described the brain as the "seat of the Soul", whose function was to "regulate the temperature of the blood and spirit". (Findlen & Bence, 2000)
Following the advent of the microscope around the turn of the 17th century and the proposal of the cell theory in the 1830s, the understanding of the brain blossomed into two opposing schools of thought - the reticular theory and the neuron theory. The reticular theory postulated that the brain was one continuous structure in which information flows freely, while the neuron theory held that the brain, like the rest of the body, is composed of a network of discrete functional units, the cells. The outstanding work of Santiago Ramón y Cajal in proving this neuron doctrine, using the silver staining technique invented by Camillo Golgi, was quite the proverbial birth of cellular and molecular neuroscience, and it laid the foundation of modern neuroscience as we know it. (Cajal, 1906; Golgi, 1906)
Now that the presence of neuronal cells was explained through the neuronal doctrine, the next million-dollar question was, “If the brain is composed of a network of cells, how is there a continuous flow of information in it?”
Surprisingly, this question had already been answered by the prodigy Luigi Galvani. As far as neuroscience was concerned, Galvani and his wife, Lucia, worked way ahead of their time. Their discovery in the late 18th century (well before Cajal worked out his doctrine) that nerves make muscles contract and move through electrical signals laid the foundation for the field of electrophysiology. This set into motion a cascade of electrophysiological experiments whose description would require an entire book. Suffice it to say that the problem that began with Galvani discovering 'animal electricity' in frog legs matured through the minds of experts like Emil du Bois-Reymond, Johannes Müller, Hermann von Helmholtz, Julius Bernstein, Ludimar Hermann, and several others, to finally reach Alan Hodgkin and Andrew Huxley (the 1963 Nobel Prize-winning duo) and their work on measuring action potentials in squid axons. (Piccolino, 1998; Schwiening, 2012)
Around the same time, neuronal communication through synapses was discovered. Two centuries’ worth of rigorous work finally culminated in the proverbial “coming of age” of neuroscience with important insights into nerve conduction, membrane potential, and neuronal synapses.
All this time, the only way to study nervous function was through dissected animal brains. These dissections were useful, but studying human brains was still not feasible. The need of the hour was a method through which living brains could be studied and monitored, without using invasive surgical techniques. With this necessity, the era of neuroimaging was born.
Neuroimaging, a field that developed gradually over the 20th century, has changed the course of neuroscience. It started in the early 1900s with "pneumoencephalography", in which cerebrospinal fluid (CSF) was drained from around the brain and replaced with air before an X-ray was taken, so that the denser brain tissue stood out. This primitive technique was soon superseded by Computerized Axial Tomography (CT or CAT) scanning, pioneered by William H. Oldendorf in 1961 and developed by Godfrey Newbold Hounsfield and Allan McLeod Cormack; the latter two received the 1979 Nobel Prize for their work. Finally, in the 1970s, Magnetic Resonance Imaging (MRI) came into vogue, with functional (fMRI) and structural variants that are currently used to visualize different aspects of the brain's function and structure. (Raichle, 2009)
Just like other scientific puzzles, for every ten questions we solve about the brain, twenty more crop up. With a better understanding of the brain, we have realized that puzzles in neuroscience involve trillions of pieces linked to puzzles in other fields like genetics, biochemistry, immunology, physics, mathematics, and computer science, and the further we try to simplify this grand puzzle, the more complex it gets. However, owing to the rapid development of computer and data science, these complexities are finally within our reach. Having traversed the journey of neuroscience through the ages, we arrive at last in the digital age, where neuronal networks can be analogized to digital networks and theories from data science can be applied to neural systems.
Fig 1 - A Complexity approach: From Imaging to Network Analysis in Brain (Telesford et al., 2011)
Complexity theory has diverse applications ranging from microbial ecosystems to social networks. With the advent of an interdisciplinary approach towards neuroscience research in the last decade, it finds valuable applications in modeling brain assembly and functions.
It’s All Complex
Complexity, ironically, is the most easily observed characteristic of nature. James Gleick, author of Chaos: Making a New Science, goes to the extent of calling simple shapes and systems inhuman. "They fail to resonate with the way nature organizes itself or with the way human perception sees the world," he writes. Complexity theory emerged as a paradigm shift from the reductionist approach of studying systems, i.e., analyzing them in terms of their simple or fundamental constituents. Complex systems are instead examined through their emergent properties, born out of interconnectedness and unpredictability. Their study shifts science's conventional interest in prediction and control toward better understanding the conditions such systems are sensitive to and the dynamics they exhibit.
The properties of a complex system underscore the existence of randomness in nature. Certainty is an illusion established by limiting the boundaries of the system. Werner Heisenberg’s quote puts this very well into perspective, “What we observe is not nature herself, but nature exposed to our method of questioning.”
Typically, the behavior of a complex system is non-linear, i.e., the output is not proportional to changes in the input, owing to systemic interactions and interconnectivity. This makes the system's causality circular rather than linear: effects become causes, and causes are themselves the effects of other system dynamics. Non-linearity is studied using dynamical systems theory, which characterizes the system's state as a set of real numbers (say, a tuple or a vector) that evolves in time according to a nonlinear function. The evolution can be deterministic, where exactly one future state follows the current state over a time interval, or stochastic, where random events affect the state's evolution. A deterministic system, however, is not necessarily a predictable one.
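To make "deterministic yet unpredictable" concrete, the logistic map is a one-line deterministic dynamical system whose trajectories from two nearly identical starting points soon bear no resemblance to each other. The sketch below is illustrative (the parameter r = 3.9 and the step counts are arbitrary choices, not taken from the sources cited here):

```python
# Logistic map: a deterministic one-dimensional system,
# x_{t+1} = r * x_t * (1 - x_t), which is chaotic for r near 4.
def logistic_trajectory(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two trajectories starting a millionth apart...
a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)
# ...stay close for a few steps, then diverge completely.
print(abs(a[5] - b[5]))    # still tiny
print(abs(a[50] - b[50]))  # typically on the order of the whole interval
```

Every step is fully determined by the previous one, yet any uncertainty in the starting point, however small, eventually swamps the prediction.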
The emergence of complexity from simple local interactions is also through a spontaneous mechanism, independent of the system’s finely tuned details and parameters making the system self-organized into a critical state. This makes it possible for complexity to be described by certain ubiquitous mathematical laws that look for a pattern underlying the chaos in nature. (Self-organized criticality - Wikipedia, 2021)
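The canonical toy example of self-organized criticality is the Bak-Tang-Wiesenfeld sandpile. The stdlib sketch below (grid size, number of drops, and seed are arbitrary illustrative choices) drops grains one at a time; with no tuning at all, the pile organizes itself into a state where avalanches of all sizes occur:

```python
import random

# Bak-Tang-Wiesenfeld sandpile: a cell holding 4 or more grains
# "topples", sending one grain to each neighbour (grains at the edge
# fall off). The number of topples triggered by one dropped grain is
# the avalanche size; their distribution is heavy-tailed.
def sandpile(size=20, drops=5000, seed=1):
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    avalanches = []
    for _ in range(drops):
        i, j = random.randrange(size), random.randrange(size)
        grid[i][j] += 1
        topples = 0
        unstable = [(i, j)] if grid[i][j] >= 4 else []
        while unstable:
            x, y = unstable.pop()
            if grid[x][y] < 4:
                continue
            grid[x][y] -= 4
            topples += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < size and 0 <= ny < size:
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= 4:
                        unstable.append((nx, ny))
        avalanches.append(topples)
    return avalanches

sizes = sandpile()
print("largest avalanche:", max(sizes))
print("drops causing no avalanche:", sizes.count(0))
```

The same drop rule produces both tiny and system-spanning avalanches; no parameter selects the critical state, which is the hallmark of self-organized criticality.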
Fig 2 - Subfields of Complex Systems Science
(A Brief History of Systems Science, Chaos and Complexity, 2021)
Looking into the Complexity in the Brain
Fig 3: An artist’s interpretation of the Butterfly effect and Chaos theory: Any perturbation at the molecular level will affect the neuronal level and manifest at the behavioural level
A fascinating effect that underlies chaos theory is the butterfly effect. 'A butterfly flapping its wings in Texas can cause a hurricane in China' is a metaphor that hits home when studying complex systems. Similarly, a protein misfolding at the molecular level can lead to emotional imbalances at the behavioral level - the brain's own butterfly effect. This propagation of a small-scale change into a far more destructive large-scale change is stark evidence of complexity in the brain.
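The butterfly effect has a precise mathematical face in the Lorenz system, the convection model in which it was first observed. The sketch below is purely illustrative (simple Euler integration with the textbook parameters; it is not a brain model): two trajectories starting a millionth apart end up in entirely different regions of state space.

```python
# Lorenz system with the classic parameters sigma=10, rho=28, beta=8/3.
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

# Crude forward-Euler integration (small dt keeps it well-behaved here).
def integrate(state, dt=0.005, steps=4000):
    for _ in range(steps):
        dx, dy, dz = lorenz(state)
        state = (state[0] + dt * dx, state[1] + dt * dy, state[2] + dt * dz)
    return state

a = integrate((1.0, 1.0, 1.0))
b = integrate((1.000001, 1.0, 1.0))  # differs by one part in a million
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)  # comparable to the attractor's size, not to the initial 1e-6
```

The initial difference of one part in a million is amplified exponentially, which is why long-range prediction fails even though the equations are perfectly deterministic.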
The brain is subdivided into phylogenetically differentiated parts with specialized areas. Its basic unit is the neuron, which has further subunits like synapses, channels, etc. The temporal dynamics are equally rich: thought, emotion, and memory span timescales ranging from microseconds to years. Brain systems thus have intricate dynamics, composed of diverse populations of interacting neurons forming networks that behave in highly non-trivial ways.
The brain encodes physiological processes across these wide and intermingled spatiotemporal scales, unlike the cleanly separable macroscopic and microscopic scales of a gas in thermodynamics. A biological system like the brain is not large enough (in contrast to the effectively infinite particles of a gas) for the complexity at lower, microscopic levels to average out, and its anatomy is far more intricate. So, unlike in physical systems, we cannot write reduced state equations that "abstract away" the lower-level dynamics when studying emergent phenomena in the brain.
Probing only the main types of neurons in functional detail does not explain mental states and disorders, because the levels are mutually dependent. Interdisciplinary complexity theory offers a multi-scale approach that, applied to the brain, takes us closer to its emergent properties and intricate organization. (Olbrich, Achermann & Wennekers, 2011)
Manifestations of Complexity in the Brain
One such emergent property is the collective synchronization of oscillations in neural populations due to network interactions. For example, the STN-GPe (SubThalamic Nucleus - Globus Pallidus externa) system of the basal ganglia (a forebrain structure involved in movement) is known to exhibit synchronized limit-cycle oscillations modulated by dopamine, with behavioral effects such as the selection of suboptimal movements or exploration (Chakravarthy & Balasubramani, 2015). Modeling it as a complex system paves the way for an efficient understanding of its function and dysfunction, by observing phase transitions through simulations and parameter control.
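To see how limit-cycle oscillations can emerge from two coupled neural populations, here is a generic excitatory-inhibitory model of the Wilson-Cowan type. This is an illustrative sketch using the classic textbook parameters, not a calibrated STN-GPe model: the oscillation arises from the mutual coupling, not from any oscillatory input.

```python
import math

def S(x, a, theta):
    # Logistic response function, shifted so that S(0) = 0.
    return 1 / (1 + math.exp(-a * (x - theta))) - 1 / (1 + math.exp(a * theta))

# Wilson-Cowan excitatory (E) / inhibitory (I) populations with the
# classic 1972 constants; P is a constant external drive to E.
def simulate(steps=20000, dt=0.01, P=1.25):
    E, I = 0.1, 0.1
    trace = []
    for _ in range(steps):
        dE = -E + (1 - E) * S(16 * E - 12 * I + P, 1.3, 4)
        dI = -I + (1 - I) * S(15 * E - 3 * I, 2, 3.7)
        E += dt * dE
        I += dt * dI
        trace.append(E)
    return trace

trace = simulate()
late = trace[-5000:]  # discard the transient, keep the steady behaviour
print("sustained oscillation amplitude:", max(late) - min(late))
```

A constant input produces sustained rhythmic activity: a simple picture of how network interactions alone can synchronize a population into oscillation, and how sweeping a parameter (here P, loosely analogous to dopaminergic modulation) moves the system between quiescent and oscillatory regimes.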
Complex network theory applied to neuroscience represents neurons as nodes in a network with universal features such as the degree distribution and the small-world property (Telesford et al., 2011). Information-flow phenomena such as pattern classification and memory-reconsolidation dynamics have been studied using attractor network designs, which helps explain how existing memories can be modified shortly after their revival. (Siegelmann, 2008)
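The small-world property can be demonstrated with the Watts-Strogatz construction: start from a regular ring lattice, rewire a small fraction of edges at random, and the average path length collapses while clustering stays high. A minimal stdlib sketch (network size, neighborhood size, and rewiring probability are arbitrary illustrative choices):

```python
import random
from collections import deque

def ring_lattice(n, k):
    # Each node connects to its k nearest neighbours on a ring.
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            adj[i].add(j); adj[j].add(i)
    return adj

def rewire(adj, p, seed=0):
    # Rewire each edge with probability p to a random non-neighbour.
    rng = random.Random(seed)
    n = len(adj)
    for i, j in [(i, j) for i in adj for j in adj[i] if i < j]:
        if rng.random() < p:
            choices = [m for m in range(n) if m != i and m not in adj[i]]
            if choices:
                m = rng.choice(choices)
                adj[i].discard(j); adj[j].discard(i)
                adj[i].add(m); adj[m].add(i)
    return adj

def avg_clustering(adj):
    # Fraction of each node's neighbour pairs that are themselves linked.
    total = 0.0
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    # Mean shortest-path length via breadth-first search from every node.
    n, total = len(adj), 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

lattice = ring_lattice(100, 6)
small_world = rewire(ring_lattice(100, 6), p=0.1)
print("lattice:     C=%.2f L=%.2f" % (avg_clustering(lattice), avg_path_length(lattice)))
print("small-world: C=%.2f L=%.2f" % (avg_clustering(small_world), avg_path_length(small_world)))
```

Rewiring only 10% of the edges adds long-range "shortcuts" that sharply shorten paths while most local clustering survives, the same combination of local specialization and global integration that network analyses report for brain connectivity.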
Fig 4 - Exemplar Patterns of Information Flow in Neuronal Networks (Telesford et al., 2011)
Even the sleeping brain can be treated as a complex system, as it exhibits dynamic oscillations over multiple temporal scales as well as state transitions. An interaction between homeostatic and circadian (cellular-clock-like) processes has been found to generate sleep-wake patterns, and the neural dynamics underlying sleep regulation are modeled using coupled non-linear ordinary differential equations. (Olbrich, Achermann & Wennekers, 2011)
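The flavor of such coupled models can be conveyed with a toy version of the classic "two-process" account of sleep regulation: a homeostatic pressure that builds during wake and dissipates during sleep, gated by circadian thresholds. All parameters below are illustrative round numbers, not fitted values from the literature:

```python
import math

# Toy two-process sleep model: homeostatic pressure S rises toward an
# asymptote while awake and decays while asleep; sinusoidal circadian
# thresholds decide when the state flips.
def simulate_days(days=3, dt=0.1):
    S, awake = 0.5, True
    log = []
    for step in range(int(days * 24 / dt)):
        t = step * dt                                  # time in hours
        circadian = 0.12 * math.sin(2 * math.pi * t / 24)
        upper = 0.75 + circadian                       # fall-asleep threshold
        lower = 0.25 + circadian                       # wake-up threshold
        if awake:
            S += dt * (1 - S) / 18.2                   # pressure builds while awake
            if S > upper:
                awake = False
        else:
            S -= dt * S / 4.2                          # pressure dissipates in sleep
            if S < lower:
                awake = True
        log.append((t, S, awake))
    return log

log = simulate_days()
transitions = sum(1 for a, b in zip(log, log[1:]) if a[2] != b[2])
print("sleep/wake transitions over 3 days:", transitions)
```

Neither process alone produces a sleep-wake rhythm; the alternation emerges from the coupling of the two, which is the essential point of modeling sleep with interacting non-linear processes.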
Fig 5: Change that happens in the brain, eventually casts its shadow on society and manifests into the proverbial “social hurricane”
A plethora of such applications is making its way into neuroscience, and the future looks bright (and complex!). Viewing the brain as a complex system is more than just a better approach: it makes us aware of the complex social structures we are part of, and of how dysfunction in a single neuronal system could seed social hurricanes beyond any individual's control. The aim is also to better comprehend our own influence on nature's complexity, so that we can participate in it more wisely.
West, G., 2013. Big Data Needs a Big Theory to Go with It. [Blog] Scientific American, Available at: <https://www.scientificamerican.com/article/big-data-needs-big-theory/> [Accessed 7 June 2021].
Findlen, P. and Bence, R., 2000. History of the Brain. [online] Web.stanford.edu. Available at: <https://web.stanford.edu/class/history13/earlysciencelab/body/brainpages/brain.html> [Accessed 7 June 2021].
Ramón y Cajal, S., 1906. The Structure and Connexions of Neurons. Nobel Lecture. Available at: <http://www.nobelprize.org/nobel_prizes/medicine/laureates/1906/cajal-lecture.html>
Golgi, C., 1906. The Neuron Doctrine - Theory and Facts. Nobel Lecture. Available at: <https://www.nobelprize.org/prizes/medicine/1906/golgi/lecture/>
Piccolino, M. (1998). Animal electricity and the birth of electrophysiology: the legacy of Luigi Galvani. Brain Research Bulletin, 46(5), 381-407. https://doi.org/10.1016/s0361-9230(98)00026-4
Schwiening C. J. (2012). A brief historical perspective: Hodgkin and Huxley. The Journal of physiology, 590(11), 2571–2575. https://doi.org/10.1113/jphysiol.2012.230458
Raichle, M. (2009). A brief history of human brain mapping. Trends In Neurosciences, 32(2), 118-126. https://doi.org/10.1016/j.tins.2008.11.001
Siegelmann, H., 2010. Complex Systems Science and Brain Dynamics: A Frontiers in Computational Neuroscience Special Topic. Frontiers in Computational Neuroscience, 4.
Medium. 2021. A Brief History of Systems Science, Chaos and Complexity. [online] Available at:<https://medium.com/age-of-awareness/a-brief-history-of-systems-science-chaos-and-complexity-d9198b1a198d> [Accessed 7 June 2021].
Telesford, Q., Simpson, S., Burdette, J., Hayasaka, S. and Laurienti, P., 2011. The Brain as a Complex System: Using Network Science as a Tool for Understanding the Brain. Brain Connectivity, 1(4), pp.295-308.
Olbrich, E., Achermann, P. and Wennekers, T., 2011. The sleeping brain as a complex system. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 369(1952), pp.3697-3707.
Chakravarthy V.S., Balasubramani P.P. (2014) Basal Ganglia System as an Engine for Exploration. In: Jaeger D., Jung R. (eds) Encyclopedia of Computational Neuroscience. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-7320-6_81-1