There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether the algorithmically generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among the few artificial information processing systems that are both complex and understood at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests that current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue that scientists should use complex non-linear dynamical systems with known ground truth, such as the microprocessor, as a validation platform for time-series and structure discovery methods.
The mathematical structure of Sudoku puzzles is akin to that of the hard constraint satisfaction problems lying at the basis of many applications, including protein folding and the ground-state problem of glassy spin systems. Via an exact mapping of Sudoku into a deterministic, continuous-time dynamical system, here we show that the difficulty of Sudoku translates into transient chaotic behavior exhibited by this system. We also show that the escape rate κ, an invariant of transient chaos, provides a scalar measure of the puzzle’s hardness that correlates well with human difficulty ratings. Accordingly, η = -log₁₀ κ can be used to define a “Richter”-type scale for puzzle hardness, with easy puzzles having 0 < η ≤ 1, medium ones 1 < η ≤ 2, hard ones 2 < η ≤ 3, and ultra-hard ones η > 3. To the best of our knowledge, there are no known puzzles with η > 4.
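The η scale above can be sketched directly in code; the escape-rate values fed to it here are placeholders for illustration, not figures from the paper:

```python
# Minimal sketch of the "Richter"-type hardness scale: eta = -log10(kappa),
# binned into the four difficulty bands stated above. The kappa values used
# in any example call are hypothetical.
import math

def hardness(kappa: float) -> tuple[float, str]:
    """Map an escape rate kappa (> 0) to (eta, difficulty label)."""
    eta = -math.log10(kappa)
    if eta <= 1:
        label = "easy"        # 0 < eta <= 1
    elif eta <= 2:
        label = "medium"      # 1 < eta <= 2
    elif eta <= 3:
        label = "hard"        # 2 < eta <= 3
    else:
        label = "ultra-hard"  # eta > 3
    return eta, label
```

Because η is logarithmic in κ, each band corresponds to an order-of-magnitude drop in the escape rate, which is what makes the "Richter" analogy apt.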
We present a novel formulation for biochemical reaction networks in the context of protein signal transduction. The model consists of input-output transfer functions, which are derived from differential equations, using stable equilibria. We select a set of “source” species, which are interpreted as input signals. Signals are transmitted to all other species in the system (the “target” species) with a specific delay and with a specific transmission strength. The delay is computed as the maximal reaction time until a stable equilibrium for the target species is reached, in the context of all other reactions in the system. The transmission strength is the concentration change of the target species. The computed input-output transfer functions can be stored in a matrix, fitted with parameters, and even recalled to build dynamical models on the basis of state changes. By separating the temporal and the magnitudinal domain we can greatly simplify the computational model, circumventing typical problems of complex dynamical systems. The transfer function transformation of biochemical reaction systems can be applied to mass-action kinetic models of signal transduction. The paper shows that this approach yields significant novel insights while remaining a fully testable and executable dynamical model for signal transduction. In particular we can deconstruct the complex system into local transfer functions between individual species. As an example, we examine modularity and signal integration using a published model of striatal neural plasticity. The modularizations that emerge correspond to a known biological distinction between calcium-dependent and cAMP-dependent pathways. Remarkably, we found that overall interconnectedness depends on the magnitude of inputs, with higher connectivity at low input concentrations and significant modularization at moderate to high input concentrations. 
This general result, which directly follows from the properties of individual transfer functions, contradicts notions of ubiquitous complexity by showing input-dependent signal transmission inactivation.
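The delay/strength construction described above can be roughly illustrated on a single source-to-target link. The toy one-species ODE, rate constants, and tolerance below are my own invented stand-ins, not the published striatal model:

```python
# Illustrative sketch only: extract one (delay, transmission strength) pair
# for a hypothetical link where a target species y responds to a source
# concentration x via dy/dt = k*x - d*y. Delay is taken as the time until y
# settles near its stable equilibrium; strength is y's net concentration change.
def transfer_function(x_input, k=1.0, d=0.5, y0=0.0, dt=0.01, tol=1e-3, t_max=200.0):
    y, t = y0, 0.0
    y_eq = k * x_input / d               # stable equilibrium of the toy ODE
    delay = None                         # stays None if no settling before t_max
    while t < t_max:
        y += dt * (k * x_input - d * y)  # forward-Euler step
        t += dt
        if abs(y - y_eq) < tol * max(abs(y_eq), 1.0):
            delay = t                    # maximal reaction time to equilibrium
            break
    strength = y_eq - y0                 # concentration change of the target
    return delay, strength
```

Tabulating such pairs over all source-target combinations yields the transfer-function matrix the abstract describes, with time and magnitude handled separately.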
- Journal of the Royal Society Interface
- Published over 7 years ago
This paper presents a heuristic proof (and simulations of a primordial soup) suggesting that life, or biological self-organization, is an inevitable and emergent property of any (ergodic) random dynamical system that possesses a Markov blanket. This conclusion is based on the following arguments: if the coupling among an ensemble of dynamical systems is mediated by short-range forces, then the states of remote systems must be conditionally independent. These independencies induce a Markov blanket that separates internal and external states in a statistical sense. The existence of a Markov blanket means that internal states will appear to minimize a free energy functional of the states of their Markov blanket. Crucially, this is the same quantity that is optimized in Bayesian inference. Therefore, the internal states (and their blanket) will appear to engage in active Bayesian inference. In other words, they will appear to model, and act on, their world to preserve their functional and structural integrity, leading to homoeostasis and a simple form of autopoiesis.
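The Markov-blanket partition at the core of this argument has a concrete graph-theoretic form: in a directed dependency graph, a node's blanket is its parents, its children, and its children's other parents. A minimal sketch (the example graph and node names are invented for illustration, not taken from the paper):

```python
# Compute the Markov blanket of a node in a directed dependency graph.
# edges is a set of (parent, child) pairs.
def markov_blanket(node, edges):
    parents   = {p for (p, c) in edges if c == node}
    children  = {c for (p, c) in edges if p == node}
    # "Co-parents": other parents of this node's children.
    coparents = {p for (p, c) in edges if c in children and p != node}
    return parents | children | coparents
```

Conditioned on this set, the node is statistically independent of everything else in the graph, which is exactly the internal/external separation the paper builds on.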
Tensegrity structures with detached struts are naturally suitable for deployable applications, both in terrestrial and outer-space structures, as well as morphing devices. Composed of discontinuous struts and continuous cables, such systems are structurally stable only when self-stress is induced; otherwise, they lose their original geometric configuration (while keeping the topology) and thus can be tightly packed. We exploit this feature by using stimulus-responsive polymers to introduce a paradigm for creating actively deployable 3D structures with complex shapes. The shape change of 3D-printed smart materials adds an active dimension to the configurational space of some structural components. We then achieve dramatic global volume expansion by amplifying component-wise deformations into global configurational change via the inherent deployability of tensegrity. Through modular design, we can generate active tensegrities of various complexities that are relatively stiff yet resilient. Such unique properties enable structural systems that can achieve gigantic shape change, making them ideal as a platform for super-lightweight structures, shape-changing soft robots, morphing antennas and RF devices, and biomedical devices.
An important challenge in heart research is to relate the features of external stimuli to heart activity. Olfactory stimulation is an important type of stimulation that affects heart activity, which is reflected in the electrocardiogram (ECG) signal. Yet no relation between the structure of olfactory stimuli and the ECG signal has previously been reported. This study investigates the relation between the structure of the heart rate and that of the olfactory stimulus (odorant). We show that the complexity of the heart rate is coupled with the molecular complexity of the odorant: a more structurally complex odorant elicits a less fractal heart rate. Likewise, an odorant with higher entropy produces a heart rate with lower approximate entropy. The method discussed here can also be investigated in patients with heart disease for rehabilitation purposes.
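The approximate-entropy statistic used above can be sketched as follows. This is a standard formulation of Pincus's ApEn, with the conventional defaults m = 2 and r = 0.2 times the series' standard deviation; these are generic choices, not parameters reported by this study:

```python
# Approximate entropy (ApEn): lower values indicate a more regular signal.
import math

def approximate_entropy(x, m=2, r=None):
    n = len(x)
    if r is None:
        mean = sum(x) / n
        r = 0.2 * (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    def phi(m):
        # Fraction of length-m templates within Chebyshev distance r of each template.
        templates = [x[i:i + m] for i in range(n - m + 1)]
        counts = [sum(1 for b in templates
                      if max(abs(u - v) for u, v in zip(a, b)) <= r) / len(templates)
                  for a in templates]
        return sum(math.log(c) for c in counts) / len(templates)
    return phi(m) - phi(m + 1)
```

A strictly periodic series scores near zero, while an irregular series scores higher, which is the sense in which a high-entropy odorant "lowering" the heart rate's ApEn means the heart rate becomes more regular.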
Resilience, a system’s ability to adjust its activity to retain its basic functionality when errors, failures and environmental changes occur, is a defining property of many complex systems. Despite widespread consequences for human health, the economy and the environment, events leading to loss of resilience, from cascading failures in technological systems to mass extinctions in ecological networks, are rarely predictable and are often irreversible. These limitations are rooted in a theoretical gap: the current analytical framework of resilience is designed to treat low-dimensional models with a few interacting components, and is unsuitable for multi-dimensional systems consisting of a large number of components that interact through a complex network. Here we bridge this theoretical gap by developing a set of analytical tools with which to identify the natural control and state parameters of a multi-dimensional complex system, helping us derive effective one-dimensional dynamics that accurately predict the system’s resilience. The proposed analytical framework allows us to systematically separate the roles of the system’s dynamics and topology, collapsing the behaviour of different networks onto a single universal resilience function. The analytical results unveil the network characteristics that can enhance or diminish resilience, offering ways to prevent the collapse of ecological, biological or economic systems, and guiding the design of technological systems resilient to both internal failures and environmental changes.
- Proceedings of the National Academy of Sciences of the United States of America
- Published almost 8 years ago
A quantitative description of a complex system is inherently limited by our ability to estimate the system’s internal state from experimentally accessible outputs. Although the simultaneous measurement of all internal variables, such as all metabolite concentrations in a cell, offers a complete description of a system’s state, in practice experimental access is limited to only a subset of variables, or sensors. A system is called observable if we can reconstruct the system’s complete internal state from its outputs. Here, we adopt a graphical approach derived from the dynamical laws that govern a system to determine the sensors that are necessary to reconstruct the full internal state of a complex system. We apply this approach to biochemical reaction systems, finding that the identified sensors are not only necessary but also sufficient for observability. The developed approach can also identify the optimal sensors for target or partial observability, helping us reconstruct selected state variables from appropriately chosen outputs, a prerequisite for optimal biomarker design. Given the fundamental role observability plays in complex systems, these results offer avenues to systematically explore the dynamics of a wide range of natural, technological and socioeconomic systems.
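The graphical criterion can be sketched roughly as follows. The reduction to "monitor one node per root strongly connected component of the inference diagram" is my paraphrase of this style of approach, not the paper's exact algorithm, and the edge orientation of the inference diagram is assumed here for illustration:

```python
# Sketch: condense a directed inference diagram into strongly connected
# components (SCCs) via Kosaraju's algorithm, then pick one candidate sensor
# from each root SCC, i.e. each SCC receiving no edges from other SCCs.
def sensor_nodes(nodes, edges):
    adj = {n: [] for n in nodes}
    radj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)

    # Pass 1: order nodes by DFS finishing time.
    order, seen = [], set()
    def visit(u):
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                visit(v)
        order.append(u)
    for n in nodes:
        if n not in seen:
            visit(n)

    # Pass 2: label SCCs by flooding the reversed graph in reverse finish order.
    comp = {}
    for root in reversed(order):
        if root in comp:
            continue
        comp[root] = root
        stack = [root]
        while stack:
            x = stack.pop()
            for y in radj[x]:
                if y not in comp:
                    comp[y] = root
                    stack.append(y)

    # Root SCCs receive no inter-SCC edges; monitor one representative of each.
    incoming = {comp[v] for u, v in edges if comp[u] != comp[v]}
    return sorted(c for c in set(comp.values()) if c not in incoming)
```

The intuition matches the abstract's necessity-and-sufficiency claim: information flows downstream out of a root SCC but never into it, so every root SCC must contain a sensor, and within an SCC every variable can be inferred from any member.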
Complex systems are characterized by many independent components whose low-level actions produce collective high-level results. Predicting high-level results given low-level rules is a key open challenge; the inverse problem, finding low-level rules that give specific outcomes, is in general even less well understood. We present a multi-agent construction system inspired by mound-building termites, solving such an inverse problem. A user specifies a desired structure, and the system automatically generates low-level rules for independent climbing robots that guarantee production of that structure. Robots use only local sensing and coordinate their activity via the shared environment. We demonstrate the approach via a physical realization with three autonomous climbing robots limited to onboard sensing. This work advances the aim of engineering complex systems that achieve specific human-designed goals.
Social-ecological systems research suffers from a disconnect between hierarchical (top-down or bottom-up) and network (peer-to-peer) analyses. The concept of the heterarchy unifies these perspectives in a single framework. Here, I review the history and application of ‘heterarchy’ in neuroscience, ecology, archaeology, multiagent control systems, business and organisational studies, and politics. Recognising complex system architecture as a continuum along vertical and lateral axes (‘flat versus hierarchical’ and ‘individual versus networked’) suggests four basic types of heterarchy: reticulated, polycentric, pyramidal, and individualistic. Each has different implications for system functioning and resilience. Systems can also shift predictably and abruptly between architectures. Heterarchies suggest new ways of contextualising and generalising from case studies and new methods for analysing complex structure-function relations.