SciCombinator

Discover the most talked about and latest scientific content & concepts.

Concept: Selection bias

45

Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localization, but these studies used small sample sizes and the results have been mixed. Therefore, it is not clear (1) whether the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine the presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. The data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled when both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact or cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but its magnitude was reduced compared to unisensory conditions. Therefore, multisensory integration improves not only the precision of perceptual estimates but also their accuracy.
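
The Bayesian Causal Inference model family referenced above has a standard closed form: weigh a "common cause" interpretation of the two sensory samples against a "separate causes" interpretation, and average the corresponding estimates. Below is a minimal Python sketch of the model-averaging variant; the parameter values, the central prior at 0°, and all names are illustrative assumptions, not quantities fitted in the study.

```python
import numpy as np

# Illustrative parameters (assumptions, not fitted values from the paper):
SIG_V, SIG_A = 2.0, 8.0    # visual / auditory sensory noise (deg)
SIG_P, MU_P = 15.0, 0.0    # spatial prior: width and center (deg)
P_COMMON = 0.5             # prior probability of a single cause

def bci_estimate(x_v, x_a):
    """Model-averaged visual location estimate for one audiovisual trial."""
    vv, aa, pp = SIG_V**2, SIG_A**2, SIG_P**2
    # Likelihood of the two sensory samples under a common cause (C=1)...
    var1 = vv * aa + vv * pp + aa * pp
    l1 = (np.exp(-0.5 * ((x_v - x_a)**2 * pp + (x_v - MU_P)**2 * aa
                         + (x_a - MU_P)**2 * vv) / var1)
          / (2 * np.pi * np.sqrt(var1)))
    # ...and under two independent causes (C=2).
    l2 = (np.exp(-0.5 * ((x_v - MU_P)**2 / (vv + pp)
                         + (x_a - MU_P)**2 / (aa + pp)))
          / (2 * np.pi * np.sqrt((vv + pp) * (aa + pp))))
    pc1 = P_COMMON * l1 / (P_COMMON * l1 + (1 - P_COMMON) * l2)
    # Reliability-weighted estimates under each causal structure.
    s_fused = (x_v / vv + x_a / aa + MU_P / pp) / (1 / vv + 1 / aa + 1 / pp)
    s_vis = (x_v / vv + MU_P / pp) / (1 / vv + 1 / pp)
    return pc1 * s_fused + (1 - pc1) * s_vis  # model averaging
```

With a reliable visual cue (small SIG_V) the fused estimate is pulled towards x_v, mirroring the visual dominance reported above, while the central prior MU_P produces the centerward pull.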

Concepts: Scientific method, Critical thinking, Sample size, Bias, Selection bias, Inductive bias

37

The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but also has inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is further biased by differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543-2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of the species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic-Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic.
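
The core move of TRiPS, inferring detectability from how often each observed species recurs in the record, can be sketched with a zero-truncated Poisson fit. This is a simplified illustration of the idea, not the authors' published implementation; the function and variable names are ours.

```python
import numpy as np
from scipy.optimize import brentq

def trips_estimate(counts):
    """Estimate sampling rate and true richness from per-species occurrence counts.

    counts: number of times each *observed* species occurs (all values >= 1);
    requires mean(counts) > 1, i.e. some species must be seen more than once.
    """
    counts = np.asarray(counts, dtype=float)
    xbar = counts.mean()
    # Zero-truncated Poisson MLE: solve lam / (1 - exp(-lam)) = mean count.
    lam = brentq(lambda l: l / (1.0 - np.exp(-l)) - xbar, 1e-9, 1e3)
    p_detect = 1.0 - np.exp(-lam)        # Pr(a species is sampled at least once)
    richness = counts.size / p_detect    # correct observed richness for non-detection
    return lam, p_detect, richness
```

Applied separately per geological stage, estimates of this kind are what yield the per-stage richness trajectories described above.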

Concepts: DNA, Evolution, Fossil, Paleontology, Charles Darwin, Bias, Dinosaur, Selection bias

33

Previous studies suggest that factual learning, that is, learning from obtained outcomes, is biased, such that participants preferentially take into account positive, as compared to negative, prediction errors. However, whether prediction error valence also affects counterfactual learning, that is, learning from forgone outcomes, is unknown. To address this question, we analysed the performance of two groups of participants on reinforcement learning tasks using a computational model that was adapted to test whether prediction error valence influences learning. We carried out two experiments: in the factual learning experiment, participants learned from partial feedback (i.e., the outcome of the chosen option only); in the counterfactual learning experiment, participants learned from complete feedback (i.e., the outcomes of both the chosen and unchosen options were displayed). In the factual learning experiment, we replicated previous findings of a valence-induced bias, whereby participants learned preferentially from positive, relative to negative, prediction errors. In contrast, for counterfactual learning, we found the opposite valence-induced bias: negative prediction errors were preferentially taken into account, relative to positive ones. When considering valence-induced bias in the context of both factual and counterfactual learning, it appears that people tend to preferentially take into account information that confirms their current choice.
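
Valence-dependent learning of this kind is typically modeled with a Rescorla-Wagner rule whose learning rate depends on the sign of the prediction error, plus a mirror-image update for the forgone option. A minimal sketch assuming two options and complete feedback (parameter names are ours, not the paper's):

```python
import numpy as np

def update_q(q, choice, outcomes, a_plus, a_minus, a_plus_cf, a_minus_cf):
    """One trial of valence-dependent Q-learning with complete feedback.

    q: length-2 array of option values; outcomes: observed payoff of each option.
    """
    # Factual update (chosen option): rate depends on prediction-error sign.
    pe = outcomes[choice] - q[choice]
    q[choice] += (a_plus if pe > 0 else a_minus) * pe
    # Counterfactual update (forgone option); skipped under partial feedback.
    other = 1 - choice
    pe_cf = outcomes[other] - q[other]
    q[other] += (a_plus_cf if pe_cf > 0 else a_minus_cf) * pe_cf
    return q
```

The confirmation-bias pattern described above corresponds to a_plus > a_minus for factual updates but a_minus_cf > a_plus_cf for counterfactual ones.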

Concepts: Regression analysis, Critical thinking, Experiment, Machine learning, Selection bias

24

The glucose view of self-control posited glucose as the physiological substrate of the self-control “resource”, which results in three direct corollaries: 1) engaging in a specific self-control activity would result in a reduced glucose level; 2) the remaining glucose level after initial exertion of self-control would be positively correlated with subsequent self-control performance; and 3) restoring glucose by ingestion would help to improve impaired self-control performance. The current research conducted a meta-analysis to test how well each of the three corollaries of the glucose view is empirically supported. We also tested the restorative effect of glucose rinsing on subsequent self-control performance after initial exertion. The results provided clear and consistent evidence against the glucose view of self-control: none of the three corollaries was supported. In contrast, the effect of glucose rinsing turned out to be significant, but with alarming signs of publication bias. The implications and future directions are discussed.
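
The two workhorse computations behind such a meta-analysis, random-effects pooling and a small-study check for publication bias, can be sketched generically. This is standard DerSimonian-Laird pooling plus an Egger-style regression, assuming arrays of per-study standardized effects and their variances; it is not the authors' analysis script.

```python
import numpy as np
from scipy import stats

def random_effects_meta(d, var):
    """DerSimonian-Laird random-effects pooling of effects d with variances var."""
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1.0 / var
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)              # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)         # between-study variance
    w_re = 1.0 / (var + tau2)
    return np.sum(w_re * d) / np.sum(w_re), np.sqrt(1.0 / np.sum(w_re))

def egger_test(d, var):
    """Egger's regression: a nonzero intercept flags funnel-plot asymmetry."""
    d, se = np.asarray(d, float), np.sqrt(np.asarray(var, float))
    x, y, n = 1.0 / se, d / se, len(d)      # precision vs. standardized effect
    b1, b0 = np.polyfit(x, y, 1)
    s2 = np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2)
    se_b0 = np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / np.sum((x - x.mean()) ** 2)))
    p = 2 * stats.t.sf(abs(b0 / se_b0), df=n - 2)   # t-test on the intercept
    return b0, p
```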

Concepts: Enzyme, Evidence-based medicine, Systematic review, Effect size, Meta-analysis, Publication bias, Selection bias

23

In recent years, researchers have attempted to gauge the prevalence of inflated Type 1 error rates by analyzing the distribution of p-values in the published literature. De Winter & Dodou (2015) analyzed the distribution (and its change over time) of a large number of p-values automatically extracted from abstracts in the scientific literature. They concluded there is a ‘surge of p-values between 0.041-0.049 in recent decades’ which ‘suggests (but does not prove) questionable research practices have increased over the past 25 years.’ I show that the changes in the ratio of fractions of p-values between 0.041-0.049 over the years are better explained by assuming that average power has decreased over time. Furthermore, I propose that their observation that p-values just below 0.05 increase more strongly than p-values above 0.05 can be explained by an increase in publication bias (or the file drawer effect) over the years (cf. Fanelli, 2012; Pautasso, 2010), which has led to a relative decrease of ‘marginally significant’ p-values in abstracts in the literature (rather than an increase in p-values just below 0.05). I explain why researchers analyzing large numbers of p-values need to relate their assumptions to a model of p-value distributions that takes into account the average power of the performed studies, the ratio of true positives to false positives in the literature, the effects of publication bias, and the Type 1 error rate (and possible mechanisms through which it has become inflated). Finally, I discuss why publication bias and underpowered studies might be a bigger problem for science than inflated Type 1 error rates, and explain the challenges in attempting to draw conclusions about inflated Type 1 error rates from a large heterogeneous set of p-values.
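
The argument hinges on how the p-value distribution depends on statistical power. For a one-sided z-test this has a closed form, so the expected share of p-values in the 0.041-0.049 band can be computed directly; the sketch below is our own illustration of that dependence, not the author's code.

```python
from scipy.stats import norm

def frac_p_in_band(power, a=0.041, b=0.049, alpha=0.05):
    """Expected fraction of one-sided z-test p-values in (a, b), given power at alpha."""
    delta = norm.ppf(1 - alpha) + norm.ppf(power)  # noncentrality implied by the power
    lo, hi = norm.ppf(1 - b), norm.ppf(1 - a)      # z thresholds bounding the p band
    return norm.cdf(hi - delta) - norm.cdf(lo - delta)

for pw in (0.2, 0.5, 0.8):
    print(f"power={pw:.1f}  share in band={frac_p_in_band(pw):.4f}")
```

With these defaults, the band's share peaks when the typical z-statistic lands near the band (power around 0.5) and shrinks for well-powered studies, illustrating why the mix of study powers, and not only questionable research practices, shapes the published p-curve.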

Concepts: Scientific method, Critical thinking, Statistics, Type I and type II errors, Academic publishing, Statistical hypothesis testing, Selection bias, Counternull

19

In this review, the author discusses several of the weak spots in contemporary science, including scientific misconduct, the problems of post hoc hypothesizing (HARKing), outcome switching, theoretical bloopers in formulating research questions and hypotheses, selective reading of the literature, selective citing of previous results, improper blinding and other design failures, p-hacking (researchers' tendency to analyze data in many different ways to find positive, typically significant, results), errors and biases in the reporting of results, and publication bias. The author presents some empirical results highlighting problems that lower the trustworthiness of reported results in scientific literatures, including that of animal welfare studies. Some of the underlying causes of these biases are discussed, based on the notion that researchers are only human and hence not immune to confirmation bias, hindsight bias, and minor ethical transgressions. The author discusses solutions in the form of enhanced transparency, sharing of data and materials, (post-publication) peer review, pre-registration, registered reports, improved training, reporting guidelines, replication, dealing with publication bias, alternative inferential techniques, power, and other statistical tools.

Concepts: Scientific method, Academic publishing, Science, Experiment, Empirical, Theory, Falsifiability, Selection bias

19

Conspiracist beliefs are widespread and potentially hazardous. A growing body of research suggests that cognitive biases may play a role in the endorsement of conspiracy theories. The current research examines the novel hypothesis that individuals who are biased towards inferring intentional explanations for ambiguous actions are more likely to endorse conspiracy theories, which portray events as the exclusive product of intentional agency. Study 1 replicated a previously observed relationship between conspiracist ideation and individual differences in anthropomorphism. Studies 2 and 3 found a relationship between conspiracism and inferences of intentionality for imagined ambiguous events. Additionally, Study 3 again found conspiracist ideation to be predicted by individual differences in anthropomorphism. Contrary to expectations, however, this relationship was not mediated by the intentionality bias. The findings are discussed in terms of a domain-general intentionality bias making conspiracy theories appear particularly plausible. Alternative explanations are suggested for the association between conspiracism and anthropomorphism.
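
Mediation claims like the one tested in Study 3 are commonly checked by bootstrapping the indirect effect. A generic sketch, with the variable roles assumed purely for illustration (x = anthropomorphism, m = intentionality bias, y = conspiracist ideation) and not taken from the authors' analysis:

```python
import numpy as np

def boot_mediation(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b in x -> m -> y."""
    x, m, y = (np.asarray(v, float) for v in (x, m, y))
    rng = np.random.default_rng(seed)
    n, ab = len(x), np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]                  # path a: slope of m ~ x
        X = np.column_stack([np.ones(n), mb, xb])
        b = np.linalg.lstsq(X, yb, rcond=None)[0][1]  # path b: y ~ m, controlling for x
        ab[i] = a * b
    lo, hi = np.percentile(ab, [2.5, 97.5])
    return ab.mean(), (lo, hi)  # a CI excluding 0 would support mediation
```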

Concepts: Scientific method, Critical thinking, Hypothesis, Cognitive bias, Bias, Selection bias, Conspiracy theory

19

In this study, we analyzed age variation in the association between obesity status and US adult mortality risk. Previous studies have found that the association between obesity and mortality risk weakens with age. We argue that existing results were derived from biased estimates of the obesity-mortality relationship because models failed to account for confounding influences of respondents' age at survey and/or cohort membership. We applied a series of Cox regression models to data from 19 cross-sectional, nationally representative waves of the US National Health Interview Survey (1986-2004), linked to the National Death Index through 2006, to examine age patterns in the obesity-mortality association between ages 25 and 100 years. Findings suggest that survey-based estimates of age patterns in the obesity-mortality relationship are significantly confounded by disparate cohort mortality and age-related survey selection bias. When these factors are accounted for in Cox survival models, the obesity-mortality relationship is estimated to grow stronger with age.
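
In Python, the kind of specification the authors describe, a proportional-hazards model with an obesity-by-age interaction plus cohort controls, might be sketched with lifelines as below. Every column name and the file name are hypothetical placeholders; the original analysis involved NHIS design details (e.g., survey weights) not reproduced here.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns: 'futime' = years of follow-up, 'died' = 0/1 from the
# National Death Index linkage, 'obese' = 0/1 at survey, 'age_at_survey',
# 'cohort' = birth-decade label.
df = pd.read_csv("nhis_linked_mortality.csv")
df["obese_x_age"] = df["obese"] * df["age_at_survey"]         # lets the obesity HR vary with age
df = pd.get_dummies(df, columns=["cohort"], drop_first=True)  # cohort-membership controls

cols = ["futime", "died", "obese", "age_at_survey", "obese_x_age"] + \
       [c for c in df.columns if c.startswith("cohort_")]
cph = CoxPHFitter()
cph.fit(df[cols], duration_col="futime", event_col="died")    # other columns enter as covariates
cph.print_summary()  # a positive obese_x_age term = association strengthening with age
```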

Concepts: Regression analysis, Statistics, Death, Proportional hazards models, Age, Actuarial science, Bias, Selection bias

8

The NHS needs valid information on the safety and effectiveness of healthcare interventions. Cochrane systematic reviews are an important source of this information. Traditionally, Cochrane has attempted to identify and include all relevant trials in systematic reviews on the basis that if all trials are identified and included, there should be no selection bias. However, a predictable consequence of the drive to include all trials is that some studies are included that are not trials (false positives). Including such studies in reviews might increase bias. More effort is needed to authenticate trials to be included in reviews, but this task is bedevilled by the enormous increase in the number of ‘trials’ conducted each year. We argue that excluding small trials from reviews would release resources for more detailed appraisal of larger trials. Conducting fewer but broader reviews that contain fewer but properly validated trials might better serve patients' interests.

Concepts: Critical thinking, Need, Bias, Selection bias

7

When a series of studies fails to replicate a well-documented effect, researchers might be tempted to use a “vote counting” approach to decide whether the effect is reliable; that is, simply comparing the number of successful and unsuccessful replications. Vohs's (2015) response to the absence of money priming effects reported by Rohrer, Pashler, and Harris (2015) provides an example of this approach. Unfortunately, vote counting is a poor strategy for assessing the reliability of psychological findings because it neglects the impact of selection bias and questionable research practices. In the present comment, we show that a range of meta-analytic tools indicate irregularities in the money priming literature discussed by Rohrer et al. and Vohs, all pointing to the conclusion that these effects are distorted by selection bias, reporting biases, or p-hacking. This could help to explain why money-priming effects have proven unreliable in a number of direct replication attempts in which biases were minimized through preregistration or transparent reporting. Our major conclusion is that the simple proportion of significant findings is a poor guide to the reliability of research, and that preregistered replications are an essential means of assessing the reliability of money-priming effects.
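
One concrete meta-analytic irregularity check of the kind alluded to here is a test of excess significance: compare how many studies reached p < .05 with how many the studies' power could plausibly deliver. A rough sketch using the pooled fixed-effect estimate and a mean-power binomial approximation (the exact test uses the Poisson-binomial distribution); this is an illustration, not the authors' code.

```python
import numpy as np
from scipy.stats import norm, binom

def excess_significance(d, var, alpha=0.05):
    """Observed vs. expected number of significant studies (two-sided z-tests)."""
    d, se = np.asarray(d, float), np.sqrt(np.asarray(var, float))
    w = 1.0 / se**2
    d_pool = np.sum(w * d) / np.sum(w)               # fixed-effect pooled estimate
    z_crit = norm.ppf(1 - alpha / 2)
    theta = d_pool / se                              # per-study noncentrality
    power = norm.sf(z_crit - theta) + norm.cdf(-z_crit - theta)
    observed = int(np.sum(np.abs(d / se) >= z_crit))
    p = binom.sf(observed - 1, len(d), power.mean()) # P(at least `observed` hits)
    return observed, power.sum(), p
```

A small p here indicates more significant results than the literature's power warrants, the signature of selection bias or p-hacking that the comment describes.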

Concepts: Scientific method, Critical thinking, Replication, Bias, Selection bias