Recently we reported the development of prominent exostoses (10-31 mm) emanating from the external occipital protuberance (EOP) in 41% of young adults' skulls. These findings contrast with existing reports that large enthesophytes are not seen in young adults. Here we show that a combination of sex, the degree of forward head protraction (FHP), and age predicted the presence of an enlarged EOP (EEOP) (n = 1200, age 18-86). While being male and increased FHP had a positive effect on prominent exostosis, paradoxically, an increase in age was linked to a decrease in enthesophyte size. This latter finding presents a conundrum, as the frequency and severity of degenerative skeletal features in humans are typically associated with aging. Our findings and the literature provide evidence that mechanical load plays a vital role in the development and maintenance of the enthesis (insertion) and draw a direct link between aberrant loading of the enthesis and related pathologies. We hypothesize that EEOP may be linked to sustained aberrant postures associated with the emergence and extensive use of hand-held contemporary technologies, such as smartphones and tablets. Our findings raise a concern about the future musculoskeletal health of the young adult population and reinforce the need for preventive intervention through posture improvement education.
How does network structure affect diffusion? Recent studies suggest that the answer depends on the type of contagion. Complex contagions, unlike infectious diseases (simple contagions), are affected by social reinforcement and homophily. Hence, spread within highly clustered communities is enhanced, while diffusion across communities is hampered. A common hypothesis is that memes and behaviors are complex contagions. We show that, while most memes indeed spread like complex contagions, a few viral memes spread across many communities, like diseases. We demonstrate that the future popularity of a meme can be predicted by quantifying its early spreading pattern in terms of community concentration. The more communities a meme permeates, the more viral it is. We present a practical method to translate data about community structure into predictive knowledge about what information will spread widely. This connection contributes to our understanding of information diffusion, with applications in computational social science, social media analytics, and marketing.
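The notion of community concentration can be made concrete with a toy sketch. One natural way to quantify how spread out a meme's early adopters are across communities is the Shannon entropy of the adopters' community distribution (the metric and the example traces below are illustrative assumptions, not the paper's exact measures):

```python
import math
from collections import Counter

def community_entropy(adopter_communities):
    """Shannon entropy (bits) of the community distribution of a meme's
    early adopters: low entropy = concentrated in few communities
    (complex-contagion-like), high entropy = spread across many
    communities (disease-like, a signal of future virality)."""
    counts = Counter(adopter_communities)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical early-adopter traces, one community label per adopter:
niche_meme = ["c1"] * 9 + ["c2"]                  # stuck in one community
viral_meme = ["c1", "c2", "c3", "c4", "c5"] * 2   # evenly spread

print(community_entropy(niche_meme) < community_entropy(viral_meme))  # True
```

The comparison illustrates the paper's predictive signal: the meme that permeates more communities early on scores higher on the concentration-based measure.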
Feelings of loneliness are common among young adults, and are hypothesized to impair the quality of sleep. In the present study, we tested associations between loneliness and sleep quality in a nationally representative sample of young adults. Further, based on the hypothesis that sleep problems in lonely individuals are driven by increased vigilance for threat, we tested whether past exposure to violence exacerbated this association.
Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name “deep patient”. We evaluated this representation as broadly predictive of health states by assessing the probability that patients would develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers was among the best. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.
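A minimal sketch of the core technique, greedy layer-wise stacking of denoising autoencoders, is below. All hyperparameters, the masking-noise corruption, tied weights, and the toy random data standing in for EHR features are our illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_dae(X, n_hidden, noise=0.3, lr=0.1, epochs=50):
    """Train one denoising-autoencoder layer: corrupt inputs by random
    masking, then reconstruct the *clean* input (squared-error loss,
    plain batch gradient descent, tied encoder/decoder weights)."""
    n_vis = X.shape[1]
    W = rng.normal(0, 0.1, (n_vis, n_hidden))
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_vis)
    for _ in range(epochs):
        mask = rng.random(X.shape) > noise       # masking corruption
        Xc = X * mask
        H = sigmoid(Xc @ W + b_h)                # encode corrupted input
        R = sigmoid(H @ W.T + b_v)               # decode with tied weights
        err = R - X                              # target is the clean input
        dR = err * R * (1 - R)                   # gradient at decoder pre-activation
        dH = (dR @ W) * H * (1 - H)              # gradient at encoder pre-activation
        W -= lr * (Xc.T @ dH + dR.T @ H) / len(X)
        b_h -= lr * dH.mean(0)
        b_v -= lr * dR.mean(0)
    return W, b_h

# Greedy layer-wise stacking: each layer's hidden code feeds the next.
X = rng.random((200, 40))                        # toy stand-in for EHR features
reps = [X]
for n_hidden in (30, 20, 10):                    # three-layer stack
    W, b_h = train_dae(reps[-1], n_hidden)
    reps.append(sigmoid(reps[-1] @ W + b_h))
deep_patient = reps[-1]                          # final patient representation
print(deep_patient.shape)                        # (200, 10)
```

The final hidden code plays the role of the "deep patient" representation, which a downstream classifier would then consume to score disease risk.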
Standard theories of decision-making involving delayed outcomes predict that people should defer a punishment, whilst advancing a reward. In some cases, such as pain, people seem to prefer to expedite punishment, implying that its anticipation carries a cost, often conceptualized as ‘dread’. Despite empirical support for the existence of dread, whether and how it depends on prospective delay is unknown. Furthermore, it is unclear whether dread represents a stable component of value, or is modulated by biases such as framing effects. Here, we examine choices made between different numbers of painful shocks to be delivered faithfully at different time points up to 15 minutes in the future, as well as choices between hypothetical painful dental appointments at time points of up to approximately eight months in the future, to test alternative models for how future pain is disvalued. We show that future pain initially becomes increasingly aversive with increasing delay, but does so at a decreasing rate. This is consistent with a value model in which moment-by-moment dread increases up to the time of expected pain, such that dread becomes equivalent to the discounted expectation of pain. For a minority of individuals pain has maximum negative value at intermediate delay, suggesting that the dread function may itself be prospectively discounted in time. Framing an outcome as relief reduces the overall preference to expedite pain, which can be parameterized by reducing the rate of the dread-discounting function. Our data support an account of disvaluation for primary punishments such as pain, which differs fundamentally from existing models applied to financial punishments, in which dread exerts a powerful but time-dependent influence over choice.
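The verbal model can be made concrete with a simple functional form (the form below is our illustrative assumption, not the paper's fitted model; $u > 0$ is the disvalue of the pain outcome, $T$ the delay, and $\delta \in (0,1)$ a per-period discount factor):

\[
V(T) \;=\; -\underbrace{\delta^{T} u}_{\text{discounted pain}} \;-\; \underbrace{\sum_{t=0}^{T-1} \delta^{\,T-t} u}_{\text{accumulated dread}} \;=\; -\,u\left(\delta^{T} + \delta\,\frac{1-\delta^{T}}{1-\delta}\right).
\]

Here moment-by-moment dread at time $t$ equals the discounted expectation of the pain, $\delta^{\,T-t}u$, as the abstract describes. For $\delta > 1/2$ the total disvalue $|V(T)|$ grows with delay at a decreasing rate, matching the dominant pattern in the data; additionally discounting the dread term itself (e.g. weighting each summand by an extra factor $\beta^{\,t}$) can produce the non-monotonic peak at intermediate delay seen in a minority of individuals.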
To investigate cognitive operations underlying sequential problem solving, we confronted ten Goffin’s cockatoos with a baited box locked by five different inter-locking devices. Subjects were either naïve or had watched a conspecific demonstration, and either faced all devices at once or incrementally. One naïve subject solved the problem without demonstration and with all locks present within the first five sessions (each consisting of one trial of up to 20 minutes), while five others did so after social demonstrations or incremental experience. Performance was aided by species-specific traits including neophilia, a haptic modality and persistence. Most birds showed ratchet-like progress, rarely failing at a stage once they had first solved it. In most transfer tests subjects reacted flexibly and sensitively to alterations of the locks' sequencing and functionality, consistent with predictive inferences about the mechanical interactions between the locks.
Use of socially generated “big data” to access information about collective states of mind in human societies has become a new paradigm in the emerging field of computational social science. A natural application is predicting society’s reaction to a new product in terms of popularity and adoption rate. However, bridging the gap between “real-time monitoring” and “early predicting” remains a big challenge. Here we report on an endeavor to build a minimalistic predictive model for the financial success of movies based on collective activity data of online users. We show that the popularity of a movie can be predicted well before its release by measuring and analyzing the activity level of editors and viewers of the movie's corresponding entry in Wikipedia, the well-known online encyclopedia.
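A minimalistic predictor in this spirit can be sketched as a log-linear regression of revenue on pre-release Wikipedia activity. The feature set, the toy numbers, and the log-linear form below are our illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

# Hypothetical pre-release Wikipedia activity for a handful of movies:
# columns = [number of edits, number of distinct editors, page views]
activity = np.array([
    [120.0,  35.0,  50_000.0],
    [ 40.0,  12.0,   8_000.0],
    [300.0,  80.0, 200_000.0],
    [ 15.0,   5.0,   2_000.0],
])
box_office = np.array([90.0, 12.0, 250.0, 3.0])  # made-up grosses ($M)

# Fit log-linear least squares: log(revenue) ~ w . log(activity) + b
X = np.hstack([np.log(activity), np.ones((len(activity), 1))])
y = np.log(box_office)
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Score a new movie from its pre-release activity alone:
new = np.array([200.0, 60.0, 120_000.0])
pred = np.exp(np.hstack([np.log(new), [1.0]]) @ w)
print(f"predicted gross: ${pred:.1f}M")
```

The point of the sketch is the pipeline, not the numbers: activity counts observable before release are turned into a single early-prediction score.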
Cooperative decision rules have so far been demonstrated experimentally mainly in mammal species that have variable and complex social networks. However, these traits should not necessarily be restricted to mammals. Therefore, we tested cooperative problem solving in ravens. We showed that, without training, nine ravens spontaneously cooperated in a loose-string task. Corroborating findings in several species, ravens' cooperative success increased with increasing inter-individual tolerance levels. Importantly, we found this both in a forced dyadic setting and in a group setting where individuals had an open choice to cooperate with whomever they wished. The ravens, moreover, also paid attention to the resulting reward distribution and ceased cooperation when cheated. Nevertheless, the ravens did not seem to pay attention to the behavior of their partners while cooperating, and future research should reveal whether this is task-specific or a general pattern. Given their natural propensity to cooperate and the results we present here, we consider ravens an interesting model species for studying the evolution of, and the mechanisms underlying, cooperation.
Correctly assessing a scientist’s past research impact and potential for future impact is key in recruitment decisions and other evaluation processes. While a candidate’s future impact is the main concern for these decisions, most measures only quantify the impact of previous work. Recently, it has been argued that linear regression models are capable of predicting a scientist’s future impact. By applying such a future impact model to 762 careers drawn from three disciplines (physics, biology, and mathematics), we identify a number of subtle, but critical, flaws in current models. Specifically, cumulative non-decreasing measures like the h-index contain intrinsic autocorrelation, resulting in significant overestimation of their “predictive power”. Moreover, the predictive power of these models depends heavily upon scientists' career age, producing the least accurate estimates for young researchers. Our results cast doubt on the suitability of such models and indicate that further investigation is required before they can be used in recruitment decisions.
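The autocorrelation pitfall can be reproduced with a toy simulation (the career model below, yearly increments with no memory at all, is our illustrative assumption): even when yearly gains are completely random, a cumulative non-decreasing index "predicts" its own future value strongly, simply because the future value contains the present one.

```python
import random

random.seed(1)

def simulate_h_trajectory(years=20):
    """Toy career: each year adds a random, serially *independent*
    increment to a cumulative non-decreasing index (like the h-index)."""
    h, traj = 0, []
    for _ in range(years):
        h += random.randint(0, 3)   # yearly gain, no memory whatsoever
        traj.append(h)
    return traj

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

careers = [simulate_h_trajectory() for _ in range(500)]
now = [c[9] for c in careers]        # index at career year 10
future = [c[19] for c in careers]    # index at career year 20
gain = [f - n for n, f in zip(now, future)]

# The cumulative index correlates strongly with its own future value ...
print(round(pearson(now, future), 2))
# ... yet is uninformative about the increment, the part worth predicting:
print(round(pearson(now, gain), 2))
```

The first correlation is high by construction (future = now + independent gain), while the second hovers near zero: exactly the overestimation of "predictive power" the abstract describes.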
Following the demise of the polygraph, supporters of assisted scientific lie detection tools have enthusiastically appropriated neuroimaging technologies “as the savior of scientifically verifiable lie detection in the courtroom” (Gerard, 2008: 5). These proponents believe the future impact of neuroscience “will be inevitable, dramatic, and will fundamentally alter the way the law does business” (Erickson, 2010: 29); however, such enthusiasm may prove premature. In nearly every article published by independent researchers in peer-reviewed journals, the authors acknowledge that fMRI research, processes, and technology are insufficiently developed and understood for gatekeepers to consider admitting these neuroimaging measures into criminal courts, as they stand today, for the purpose of determining the veracity of statements. However favorable their analyses of fMRI or its future potential, all acknowledge issues yet to be resolved. Even assuming a future in which these issues are resolved and an appropriate fMRI lie-detection process is developed, its integration into criminal trials is not assured, because the very success of such a system may necessitate its exclusion from courtrooms on the basis of existing legal and ethical prohibitions. In this piece, aimed at a multidisciplinary readership, we seek to highlight and bring together the multitude of hurdles that would need to be overcome before fMRI can (if ever) become a viable applied lie-detection system. We argue that the current status of fMRI studies on lie detection meets neither basic legal nor scientific standards. We identify four general classes of hurdles (scientific, legal and ethical, operational, and social) and provide an overview of the stages and operations involved in fMRI studies, as well as the difficulties of translating these laboratory protocols into a practical criminal justice environment. It is our overall conclusion that fMRI is unlikely to constitute a viable lie detector for criminal courts.