Experts in animal locomotion have known the characteristics of quadruped walking since the pioneering work of Eadweard Muybridge in the 1880s. Most quadrupeds advance their legs in the same lateral sequence when walking; only the timing of their supporting feet differs to a greater or lesser extent. How did this scientific knowledge influence the correctness of quadruped walking depictions in the fine arts? Did the proportion of erroneous quadruped walking illustrations relative to their total number (i.e. the error rate) decrease after Muybridge? How correctly did cavemen (Upper Palaeolithic Homo sapiens) illustrate the walking of their quadruped prey in prehistoric times? The aim of this work is to answer these questions. We analyzed 1000 prehistoric and modern artistic depictions of walking quadrupeds and determined whether they are correct with respect to the limb attitudes presented, assuming that the other aspects of the depictions used to determine the animal's gait are illustrated correctly. The error rate of modern pre-Muybridgean quadruped walking illustrations was 83.5%, considerably higher than the 73.3% error rate expected by mere chance. It decreased to 57.9% after 1887, that is, in the post-Muybridgean period. Most surprisingly, the prehistoric quadruped walking depictions had the lowest error rate, 46.2%. All these differences were statistically significant. Thus, cavemen observed the slower motion of their prey animals more keenly and illustrated quadruped walking more precisely than later artists.
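The significance of the reported drop in error rate can be illustrated with a standard two-proportion z-test. The group sizes below are assumptions for illustration only (the abstract reports just the 1000-depiction total); this is a minimal sketch, not the study's actual analysis:

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """Two-sided two-proportion z-test; returns (z statistic, p-value)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # p-value from the standard normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, pval

# Assumed (illustrative) group sizes: 400 pre-Muybridgean and 400
# post-Muybridgean depictions, with the error rates reported above.
z, p = two_proportion_z(round(0.835 * 400), 400, round(0.579 * 400), 400)
print(f"z = {z:.2f}, p = {p:.2g}")
```

With group sizes of this order, the 83.5% vs 57.9% difference is far beyond conventional significance thresholds, consistent with the abstract's claim.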
Appropriate decisions involve at least two aspects: the speed of the decision and its correctness. Although a quick and correct decision is generally believed to be advantageous, these two aspects may be interdependent in terms of overall task performance. In this study, we scrutinized learning behaviors in an operant task in which rats were required to poke their noses into either of two holes by referring to a light cue. All 22 rats reached the learning criterion, an 80% correct rate, within 4 days of testing, but they varied in the number of sessions needed to reach it. Individual analyses revealed that the mean response latency was negatively correlated with the number of sessions until learning, suggesting that rats that responded more rapidly to the cues learned the task more slowly. Across individual trials, the mean response latency in correct trials (LC) was significantly longer than that in incorrect trials (LI), suggesting that, on average, longer deliberation times led to correct answers. The success ratio before learning was not correlated with the learning speed. Thus, deliberative decision-making, rather than overall correctness, is critical for learning.
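A negative latency-sessions correlation of the kind reported can be computed with a plain Pearson coefficient. The data below are invented illustrative values, not the study's measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented illustrative data: per-rat mean response latency (s) vs.
# sessions needed to reach the 80%-correct learning criterion.
latency = [0.8, 1.1, 1.5, 2.0, 2.4, 3.0]
sessions = [9, 8, 6, 5, 4, 3]
r = pearson_r(latency, sessions)
print(f"r = {r:.3f}")  # strongly negative, as in the reported pattern
```

Here longer latencies go with fewer sessions, producing the negative correlation the abstract describes.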
Re-evaluation of the VIDAS(®) cytomegalovirus (CMV) IgG avidity assay: Determination of new cut-off values based on the study of kinetics of CMV-IgG maturation.
- Journal of clinical virology : the official publication of the Pan American Society for Clinical Virology
- Published about 8 years ago
BACKGROUND: In cases of cytomegalovirus (CMV) infection, differentiating between primary and non-primary CMV infection can be of major importance for the correct management of pregnant women or immunocompromised patients. Besides CMV-IgM and IgG, CMV-IgG avidity measurement is now commonly used to distinguish primary from non-primary infection. OBJECTIVE: To re-evaluate the performance of the VIDAS CMV-IgG avidity assay in comparison with two other techniques (Architect, Abbott; Liaison, DiaSorin) and to study the kinetics of CMV-IgG avidity maturation. STUDY DESIGN: A panel of 135 sequential samples collected from 31 patients with a proven primary infection (attested by very recent CMV-IgG seroconversion) was tested with the VIDAS, Liaison, and Architect CMV-IgG avidity assays. Moreover, 235 routinely collected samples, positive for both CMV-IgG and CMV-IgM, were analyzed with the Liaison, VIDAS, and an in-house CMV-IgG avidity assay. RESULTS AND CONCLUSIONS: The analysis of all the data led us to propose new VIDAS cut-off values of 0.40 for low avidity and 0.65 for high avidity, which significantly increase the test's performance and enable better patient management. Using these new VIDAS cut-off values, all 31 primary infections were correctly dated. Comparatively, 25 out of 31 were correctly dated with the Architect assay and 29 out of 31 with the Liaison assay. We also demonstrated that the VIDAS CMV-IgG avidity assay correctly tracks the maturation of CMV-IgG avidity, which could be useful as an additional parameter for the diagnosis of a recent CMV infection.
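The proposed interpretation rule can be sketched as a small function. The cut-offs 0.40 and 0.65 come from the abstract; the function name and the handling of the intermediate "grey zone" are assumptions for illustration:

```python
def interpret_vidas_avidity(index: float) -> str:
    """Classify a VIDAS CMV-IgG avidity index using the proposed cut-offs:
    below 0.40, low avidity (recent primary infection more likely);
    above 0.65, high avidity (past infection more likely);
    otherwise an intermediate (equivocal) result."""
    if index < 0.40:
        return "low"
    if index > 0.65:
        return "high"
    return "intermediate"

print(interpret_vidas_avidity(0.32))  # low
print(interpret_vidas_avidity(0.50))  # intermediate
print(interpret_vidas_avidity(0.71))  # high
```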
Distinguishing between the bones of sheep and goat is a notorious challenge in zooarchaeology. Several methodological contributions have been published at different times and by various researchers to facilitate this task, largely relying on a macro-morphological approach. This is now routinely adopted by zooarchaeologists but, although it certainly has its value, has also been shown to have limitations. Morphological discriminant criteria can vary in different populations, and correct identification is highly dependent upon a researcher's experience, the availability of appropriate reference collections, and many other factors that are difficult to quantify. There is therefore a need to establish a more objective system, susceptible to scrutiny. In order to fulfil such a requirement, this paper offers a comprehensive morphometric method for the identification of sheep and goat postcranial bones, using a sample of more than 150 modern skeletons as a basis, and building on previous pioneering work. The proposed method is based on measurements (some newly created, others previously published), and its use is recommended in combination with the more traditional morphological approach. Measurement ratios, used to translate morphological traits into biometrical attributes, are demonstrated to have substantial diagnostic potential, with the vast majority of specimens correctly assigned to species. The efficacy of the new method is also tested with Discriminant Analysis, which provides a successful verification of the biometrical indices, a statistical means to select the most promising measurements, and an additional line of analysis to be used in conjunction with the others.
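The idea of turning a measurement ratio into a species classifier can be sketched with a nearest-group-mean rule on a single biometrical index. The ratios below are invented illustrative values, not the paper's data, and a real analysis would use many indices and a full discriminant model:

```python
from statistics import mean

def make_ratio_classifier(sheep_ratios, goat_ratios):
    """Build a one-dimensional classifier from a biometrical index
    (e.g. a breadth/length ratio of one postcranial measurement):
    assign a specimen to the species whose group mean is nearer."""
    ms, mg = mean(sheep_ratios), mean(goat_ratios)
    def classify(r):
        return "sheep" if abs(r - ms) < abs(r - mg) else "goat"
    return classify

# Invented illustrative ratios from modern reference specimens.
sheep = [0.41, 0.43, 0.44, 0.46]
goat = [0.52, 0.54, 0.55, 0.57]
classify = make_ratio_classifier(sheep, goat)
print(classify(0.45))  # sheep
print(classify(0.56))  # goat
```

The implicit decision boundary sits midway between the two group means, which is the one-variable analogue of the linear discriminant the paper applies to its full measurement set.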
Sleep has been demonstrated to improve consolidation of many types of new memories. However, few prior studies have examined how sleep impacts learning of face-name associations. The recognition of a new face along with the associated name is an important human cognitive skill. Here we investigated whether post-presentation sleep impacts recognition memory of new face-name associations in healthy adults. Fourteen participants were tested twice. Each time, they were presented with 20 photos of faces, each with a corresponding name. Twelve hours later, they were shown each face twice, once with the correct and once with an incorrect name, and asked whether each face-name combination was correct and to rate their confidence. In one condition the 12-h interval between presentation and recall included an 8-h nighttime sleep opportunity ("Sleep"), while in the other condition they remained awake ("Wake"). There were more correct and highly confident correct responses when the interval between presentation and recall included a sleep opportunity, although improvement between the "Wake" and "Sleep" conditions was not related to duration of sleep or any sleep stage. These data suggest that a nighttime sleep opportunity improves the ability to correctly recognize face-name associations. Further studies investigating the mechanism of this improvement are important, as this finding has implications for individuals with sleep disturbances and/or memory impairments.
In recent years, segmental stable isotope analysis of hair has been a focus of research in animal dietary ecology and migration. To correctly assign tail hair segments to seasons or even Julian dates, information on tail hair growth rates is a key parameter, but is lacking for most species.
Three chimpanzees performed a computerized memory task in which auditory feedback about the accuracy of each response was delayed. The delivery of food rewards for correct responses also was delayed and occurred in a separate location from the response. Crucially, if the chimpanzees did not move to the reward-delivery site before food was dispensed, the reward was lost and could not be recovered. Chimpanzees were significantly more likely to move to the dispenser on trials they had completed correctly than on those they had completed incorrectly, and these movements occurred before any external feedback about the outcome of their responses. Thus, chimpanzees moved (or not) on the basis of their confidence in their responses, and these confidence movements aligned closely with objective task performance. These untrained, spontaneous confidence judgments demonstrated that chimpanzees monitored their own states of knowing and not knowing and adjusted their behavior accordingly.
- Scandinavian journal of medicine & science in sports
- Published about 4 years ago
We investigated the effects of supplement identification on exercise performance with caffeine supplementation. Forty-two trained cyclists (age 37 ± 8 years, body mass [BM] 74.3 ± 8.4 kg, height 1.76 ± 0.06 m, maximum oxygen uptake 50.0 ± 6.8 mL/kg/min) performed a ~30 min cycling time-trial 1 h following either 6 mg/kgBM caffeine (CAF) or placebo (PLA) supplementation and one control (CON) session without supplementation. Participants identified which supplement they believed they had ingested (“caffeine”, “placebo”, “don’t know”) pre- and post-exercise. Subsequently, participants were allocated to subgroups for analysis according to their identifications. Overall and subgroup analyses were performed using mixed-model and magnitude-based inference analyses. Caffeine improved performance vs PLA and CON (P ≤ 0.001). Correct pre- and post-exercise identification of caffeine in CAF improved exercise performance (+4.8 and +6.5%) vs CON, with slightly greater relative increases than the overall effect of caffeine (+4.1%). Performance was not different between PLA and CON within subgroups (all P > 0.05), although there was a tendency toward improved performance when participants believed they had ingested caffeine post-exercise (P = 0.06; 87% likely beneficial). Participants who correctly identified placebo in PLA showed possible harmful effects on performance compared to CON. Supplement identification appeared to influence exercise outcome and may be a source of bias in sports nutrition.
Formal verification is a computational approach that checks system correctness (in relation to a desired functionality). It has been widely used in engineering applications to verify that systems work correctly. Model checking, an algorithmic approach to verification, looks at whether a system model satisfies its requirements specification. This approach has been applied to a large number of models in systems and synthetic biology as well as in systems medicine. Model checking is, however, computationally very expensive, and is not scalable to large models and systems. Consequently, statistical model checking (SMC), which relaxes some of the constraints of model checking, has been introduced to address this drawback. Several SMC tools have been developed; however, the performance of each tool significantly varies according to the system model in question and the type of requirements being verified. This makes it hard to know, a priori, which one to use for a given model and requirement, as choosing the most efficient tool for any biological application requires a significant degree of computational expertise, not usually available in biology labs. The objective of this paper is to introduce a method and provide a tool leading to the automatic selection of the most appropriate model checker for the system of interest.
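The core idea of SMC (replacing exhaustive state-space exploration with repeated stochastic simulation, using a sample size justified by a concentration bound) can be sketched as follows. The toy degradation model and all its parameters are assumptions for illustration only, not taken from the paper:

```python
import math
import random

def hoeffding_samples(eps, delta):
    """Number of simulations so the estimated probability is within eps
    of the true value with confidence 1 - delta (Hoeffding bound)."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

def simulate_degradation(rng, p_decay=0.3, steps=10, start=5):
    """Toy stochastic model: at each step every remaining molecule decays
    with probability p_decay. Property checked: all molecules are gone
    within `steps` steps."""
    n = start
    for _ in range(steps):
        n = sum(1 for _ in range(n) if rng.random() >= p_decay)
    return n == 0

def smc_estimate(eps=0.01, delta=0.05, seed=1):
    """Monte Carlo estimate of P(property) to the requested precision."""
    rng = random.Random(seed)
    n = hoeffding_samples(eps, delta)
    hits = sum(simulate_degradation(rng) for _ in range(n))
    return hits / n

print(f"P(property) = {smc_estimate():.3f}")
```

Unlike exact model checking, the cost here depends only on the requested precision (eps, delta) and per-run simulation cost, not on the size of the underlying state space, which is why SMC scales to models that defeat exhaustive methods.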
Behaviour change communication (BCC) can improve infant and young child nutrition (IYCN) knowledge, practices, and health outcomes. However, few studies have examined whether the improved knowledge persists after BCC activities end. This paper assesses the effect of nutrition-sensitive social protection interventions on IYCN knowledge in rural Bangladesh, both during and after intervention activities. We use data from two 2-year cluster-randomised controlled trials that included nutrition BCC in some treatment arms. These data were collected at intervention baseline, midline, and endline, and 6-10 months after the intervention ended. We analyse data on IYCN knowledge from the same 2,341 women over these 4 survey rounds. We construct a number-correct score on 18 IYCN knowledge questions and assess whether the impact of the BCC changes over time for the different treatment groups. Effects are estimated using ordinary least squares, accounting for the clustered design of the study. There are three main findings. First, the BCC improves IYCN knowledge substantially in the first year of the intervention; participants correctly answer 3.0-3.2 more questions (36% more) than the non-BCC groups. Second, the increase in knowledge between the first and second years was smaller, an additional 0.7-0.9 correct answers. Third, knowledge persists: there are no significant decreases in IYCN knowledge 6-10 months after nutrition BCC activities ended.
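Why the clustered design matters for inference can be illustrated with a stratified cluster bootstrap, which resamples whole clusters rather than individual women. The data, cluster structure, and score values below are invented for illustration; the actual study used OLS with cluster-adjusted inference:

```python
import random
from statistics import mean

def arm_scores(cs, arm):
    """Pool individual knowledge scores across all clusters in one arm."""
    return [s for a, scores in cs if a == arm for s in scores]

def diff_in_means(cs):
    """Treatment-minus-control difference in mean number-correct score."""
    return mean(arm_scores(cs, "T")) - mean(arm_scores(cs, "C"))

def cluster_bootstrap(clusters, n_boot=2000, seed=7):
    """Percentile CI for the treatment effect. Resampling whole clusters,
    stratified by arm, respects the cluster-randomised design: scores of
    women in the same cluster are not treated as independent."""
    rng = random.Random(seed)
    t = [cl for cl in clusters if cl[0] == "T"]
    c = [cl for cl in clusters if cl[0] == "C"]
    draws = sorted(
        diff_in_means([rng.choice(t) for _ in t] +
                      [rng.choice(c) for _ in c])
        for _ in range(n_boot))
    ci = (draws[int(0.025 * n_boot)], draws[int(0.975 * n_boot)])
    return diff_in_means(clusters), ci

# Invented illustrative data: (arm, knowledge scores of women in cluster).
clusters = [("T", [12, 11, 13]), ("T", [11, 12, 12]), ("T", [12, 13, 11]),
            ("C", [9, 8, 9]), ("C", [8, 9, 10]), ("C", [9, 9, 8])]
effect, ci = cluster_bootstrap(clusters)
print(f"effect = {effect:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Resampling clusters instead of individuals widens the interval when scores are correlated within clusters, which is the same concern the study's cluster-adjusted OLS addresses.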