Concept: Systemic bias
Humans possess a remarkable ability to discriminate structure from randomness in the environment. However, this ability appears to be systematically biased. This is nowhere more evident than in the Gambler’s Fallacy (GF): the mistaken belief that observing an increasingly long sequence of “heads” from an unbiased coin makes the occurrence of “tails” on the next trial ever more likely. Although the GF appears to provide evidence of “cognitive bias,” a recent theoretical account (Hahn & Warren, 2009) has suggested the GF might be understandable if constraints on actual experience of random sources (such as attention and short-term memory) are taken into account. Here we test this experiential account by exposing participants to 200 outcomes from a genuinely random (p = .5) Bernoulli process. All participants saw the same overall sequence; however, we manipulated experience across groups such that the sequence was divided into chunks of length 100, 10, or 5. Both before and after the exposure, participants (a) generated random sequences and (b) judged the randomness of presented sequences. In contrast to other accounts in the literature, the experiential account suggests that this manipulation will lead to systematic differences in postexposure behavior. Our data were strongly in line with this prediction and provide support for a general account of randomness perception in which biases are actually apt reflections of environmental statistics under experiential constraints. This suggests that deeper insight into human cognition may be gained if, instead of dismissing apparent biases as failings, we assume humans are rational under constraints.
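The experiential account rests on a statistical fact about finite windows of experience: a non-self-overlapping pattern such as HHHT appears at least once in a short window more often than the equally probable streak HHHH. A minimal Monte Carlo sketch of that fact (helper names are invented for illustration; this is not the study's material):

```python
import random

def contains(seq, pattern):
    """True if `pattern` occurs as a contiguous sub-sequence of `seq`."""
    k = len(pattern)
    return any(seq[i:i + k] == pattern for i in range(len(seq) - k + 1))

def occurrence_prob(pattern, window=10, trials=50_000, seed=1):
    """Monte Carlo estimate of P(pattern appears at least once in a window)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        seq = [rng.randint(0, 1) for _ in range(window)]
        hits += contains(seq, pattern)
    return hits / trials

# 1 = heads, 0 = tails. Both patterns are equally probable at any fixed
# location, yet HHHT fits into a short window more often than HHHH,
# because occurrences of HHHH can overlap while those of HHHT cannot.
print(occurrence_prob([1, 1, 1, 0]), occurrence_prob([1, 1, 1, 1]))
```

Under the experiential account, an observer whose experience is limited to short windows would rationally treat streak continuations as the rarer event.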
Federal funding for basic scientific research is the cornerstone of societal progress, the economy, health, and well-being. There is a direct relationship between financial investment in science and a nation’s scientific discoveries, making it a priority for governments to distribute public funding appropriately in support of the best science. However, research grant proposal success rates and funding levels can be skewed toward certain groups of applicants, and such skew may be driven by systemic bias arising during grant proposal evaluation and scoring. Policies to best redress this problem are not well established. Here, we show that funding success and grant amounts for applications to Canada’s Natural Sciences and Engineering Research Council (NSERC) Discovery Grant program (2011-2014) are consistently lower for applicants from small institutions. This pattern persists across applicant experience levels, is consistent among three criteria used to score grant proposals, and therefore is interpreted as representing systemic bias targeting applicants from small institutions. When current funding success rates are projected forward, forecasts reveal that future science funding at small schools in Canada will decline precipitously in the next decade if skews are left uncorrected. We show that a recently adopted pilot program to bolster success by lowering standards for select applicants from small institutions will not erase funding skew, nor will several other post-evaluation corrective measures. Rather, to support objective and robust review of grant applications, it is necessary for research councils to address evaluation skew directly, by adopting procedures such as blind review of research proposals and bibliometric assessment of performance. Such measures will be important in restoring confidence in the objectivity and fairness of science funding decisions.
Likewise, small institutions can improve their research success by more strongly supporting productive researchers and developing competitive graduate programming opportunities.
Low-grade systemic inflammation associated with obesity leads to cardiovascular complications, caused partly by infiltration of adipose and vascular tissue by effector T cells. The signals leading to T cell differentiation and tissue infiltration during obesity are poorly understood. We tested whether saturated fatty acid-induced metabolic stress affects differentiation and trafficking patterns of CD4(+) T cells. Memory CD4(+) T cells primed in high-fat diet-fed donors preferentially migrated to non-lymphoid, inflammatory sites, independent of the metabolic status of the hosts. This was due to biased CD4(+) T cell differentiation into CD44(hi)-CCR7(lo)-CD62L(lo)-CXCR3(+)-LFA1(+) effector memory-like T cells upon priming in high-fat diet-fed animals. A similar phenotype was observed in obese subjects in a cohort of free-living people. This developmental bias was independent of any crosstalk between CD4(+) T cells and dendritic cells and was mediated via direct exposure of CD4(+) T cells to palmitate, leading to increased activation of a PI3K p110δ-Akt-dependent pathway upon priming.
Meta-research is research about research. Meta-research may not be as click-worthy as a meta-pug (a pug dog dressed up in a pug costume), but it is crucial to understanding research. A particularly valuable contribution of meta-research is to identify biases in a body of evidence. Bias can occur in the design, conduct, or publication of research and is a systematic deviation from the truth in results or inferences. The findings of meta-research can tell us which evidence to trust and what must be done to improve future research. We should be using meta-research to provide the evidence base for implementing systemic changes to improve research, not for discrediting it.
Basic research has shown that the motoric system (i.e., motor actions or stable postures) can strongly affect emotional processes. The present study sought to investigate the effects of sitting posture on the tendency of depressed individuals to recall a higher proportion of negative self-referent material. Thirty currently depressed inpatients sat either in a slumped (depressed) or in an upright (non-depressed) posture while imagining a visual scene of themselves in connection with positive or depression-related words presented to them on a computer screen. An incidental recall test of these words was conducted after a distraction task. Results of a mixed ANOVA showed a significant posture × word type interaction, with upright-sitting patients showing unbiased recall of positive and negative words but slumped patients showing recall biased towards more negative words. The findings indicate that relatively minor changes in the motoric system can affect one of the best-documented cognitive biases in depression. Practical implications of the findings are discussed. Copyright © 2014 John Wiley & Sons, Ltd.
High-throughput, ‘omic’ methods provide sensitive measures of biological responses to perturbations. However, inherent biases in high-throughput assays make it difficult to interpret experiments in which more than one type of data is collected. In this work, we introduce Omics Integrator, a software package that takes a variety of ‘omic’ data as input and identifies putative underlying molecular pathways. The approach applies advanced network optimization algorithms to a network of thousands of molecular interactions to find high-confidence, interpretable subnetworks that best explain the data. These subnetworks connect changes observed in gene expression, protein abundance or other global assays to proteins that may not have been measured in the screens due to inherent bias or noise in measurement. This approach reveals unannotated molecular pathways that would not be detectable by searching pathway databases. Omics Integrator also provides an elegant framework to incorporate not only positive data, but also negative evidence. Incorporating negative evidence allows Omics Integrator to avoid unexpressed genes and avoid being biased toward highly-studied hub proteins, except when they are strongly implicated by the data. The software comprises two individual tools, Garnet and Forest, that can be run together or independently to allow a user to perform advanced integration of multiple types of high-throughput data as well as create condition-specific subnetworks of protein interactions that best connect the observed changes in various datasets. It is available at http://fraenkel.mit.edu/omicsintegrator and on GitHub at https://github.com/fraenkel-lab/OmicsIntegrator.
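The core idea, connecting "hit" proteins from different assays through unmeasured intermediaries in a large interaction network, can be caricatured with plain shortest paths. Note this is a deliberately simplified sketch: Omics Integrator solves a harder prize-collecting optimization over weighted networks, and the toy graph and protein names below are invented.

```python
from collections import deque
from itertools import combinations

# Toy undirected interaction network (hypothetical protein names).
GRAPH = {
    "A": ["B"], "B": ["A", "C", "H"], "C": ["B", "D"],
    "D": ["C"], "H": ["B", "E"], "E": ["H", "F"], "F": ["E"],
}

def shortest_path(graph, src, dst):
    """Breadth-first search; returns a shortest node path or None."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in graph[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None

def connecting_subnetwork(graph, hits):
    """Union of pairwise shortest paths between the 'hit' nodes."""
    nodes = set()
    for a, b in combinations(hits, 2):
        path = shortest_path(graph, a, b)
        if path:
            nodes.update(path)
    return nodes

# Hits from, say, an expression screen (A, D) and a proteomics screen (E):
# the subnetwork recruits the unmeasured intermediaries B, C, H but not F.
print(sorted(connecting_subnetwork(GRAPH, ["A", "D", "E"])))
```

The prize-collecting formulation used by the actual tool additionally trades off the "prize" of including hits against edge costs, which is what keeps it from being dragged toward highly studied hub proteins.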
Are individuals responsible for behaviour that is implicitly biased? Implicitly biased actions are those which manifest the distorting influence of implicit associations. That they express these ‘implicit’ features of our cognitive and motivational make-up has been appealed to in support of the claim that, because individuals lack the relevant awareness of their morally problematic discriminatory behaviour, they are not responsible for behaving in ways that manifest implicit bias. However, the claim that such influences are implicit is, in fact, not straightforwardly related to the claim that individuals lack awareness of the morally problematic dimensions of their behaviour. Nor is it clear that lack of awareness does absolve from responsibility. This may depend on whether individuals culpably fail to know something that they should know. I propose that an answer to this question, in turn, depends on whether other imperfect cognitions are implicated in any lack of the relevant kind of awareness. In this paper I clarify our understanding of ‘implicitly biased actions’ and then argue that there are three different dimensions of awareness that might be at issue in the claim that individuals lack awareness of implicit bias. Having identified the relevant sense of awareness, I argue that only one of these senses is defensibly incorporated into a condition for responsibility, rejecting recent arguments from Washington & Kelly for an ‘externalist’ epistemic condition. Having identified what individuals should, and can, know about their implicitly biased actions, I turn to the question of whether failures to know this are culpable. This brings us to consider the role of implicit biases in relation to other imperfect cognitions. I conclude that responsibility for implicitly biased actions may depend on answers to further questions about their relationship to other imperfect cognitions.
Background: Classification using class-imbalanced data is biased in favor of the majority class. The bias is even larger for high-dimensional data, where the number of variables greatly exceeds the number of samples. The problem can be attenuated by undersampling or oversampling, which produce class-balanced data. Generally undersampling is helpful, while random oversampling is not. Synthetic Minority Oversampling TEchnique (SMOTE) is a very popular oversampling method that was proposed to improve random oversampling, but its behavior on high-dimensional data has not been thoroughly investigated. In this paper we investigate the properties of SMOTE from a theoretical and empirical point of view, using simulated and real high-dimensional data. Results: While in most cases SMOTE seems beneficial with low-dimensional data, it does not attenuate the bias towards classification into the majority class for most classifiers when data are high-dimensional, and it is less effective than random undersampling. SMOTE is beneficial for k-NN classifiers for high-dimensional data if the number of variables is reduced by performing some type of variable selection; we explain why, otherwise, the k-NN classification is biased towards the minority class. Furthermore, we show that on high-dimensional data SMOTE does not change the class-specific mean values while it decreases the data variability and it introduces correlation between samples. We explain how our findings impact the class-prediction for high-dimensional data. Conclusions: In practice, in the high-dimensional setting only k-NN classifiers based on the Euclidean distance seem to benefit substantially from the use of SMOTE, provided that variable selection is performed before using SMOTE; the benefit is larger if more neighbors are used. SMOTE for k-NN without variable selection should not be used, because it strongly biases the classification towards the minority class.
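The mean-preserving, variance-shrinking behavior described above is easy to reproduce with a bare-bones SMOTE: each synthetic point interpolates between a minority sample and one of its k nearest minority neighbors. This is an illustrative sketch, not the reference implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def smote(X, n_new, k=5, rng=rng):
    """Minimal SMOTE: synthetic points on segments between minority
    samples and their k nearest minority-class neighbors."""
    n = len(X)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    nbrs = np.argsort(d, axis=1)[:, :k]  # k nearest neighbors per sample
    base = rng.integers(0, n, n_new)     # random base point per synthetic
    pick = nbrs[base, rng.integers(0, k, n_new)]  # random neighbor of it
    u = rng.random((n_new, 1))           # interpolation weight in [0, 1)
    return X[base] + u * (X[pick] - X[base])

X = rng.normal(size=(50, 200))   # high-dimensional minority class
S = smote(X, 5000)
print(X.mean(), S.mean())        # class means agree closely
print(X.var(), S.var())          # synthetic variance is smaller
```

In high dimensions the nearest neighbors are far apart, so interpolation pulls the synthetic variance well below the original, which is one source of the distortions the paper documents.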
The methods and results of health research are documented in study protocols, full study reports (detailing all analyses), journal reports, and participant-level datasets. However, protocols, full study reports, and participant-level datasets are rarely available, and journal reports are available for only half of all studies and are plagued by selective reporting of methods and results. Furthermore, information provided in study protocols and reports varies in quality and is often incomplete. When full information about studies is inaccessible, billions of dollars in investment are wasted, bias is introduced, and research and care of patients are detrimentally affected. To help to improve this situation at a systemic level, three main actions are warranted. First, academic institutions and funders should reward investigators who fully disseminate their research protocols, reports, and participant-level datasets. Second, standards for the content of protocols and full study reports and for data sharing practices should be rigorously developed and adopted for all types of health research. Finally, journals, funders, sponsors, research ethics committees, regulators, and legislators should endorse and enforce policies supporting study registration and wide availability of journal reports, full study reports, and participant-level datasets.
Single-pixel interior filling function approach for detecting and correcting errors in particle tracking
- Proceedings of the National Academy of Sciences of the United States of America
We present a general method for detecting and correcting biases in the outputs of particle-tracking experiments. Our approach is based on the histogram of estimated positions within pixels, which we term the single-pixel interior filling function (SPIFF). We use the deviation of the SPIFF from a uniform distribution to test the veracity of tracking analyses from different algorithms. Unbiased SPIFFs correspond to uniform pixel filling, whereas biased ones exhibit pixel locking, in which the estimated particle positions concentrate toward the centers of pixels. Although pixel locking is a well-known phenomenon, we go beyond existing methods to show how the SPIFF can be used to correct errors. The key is that the SPIFF aggregates statistical information from many single-particle images and localizations that are gathered over time or across an ensemble, and this information augments the single-particle data. We explicitly consider two cases that give rise to significant errors in estimated particle locations: undersampling the point spread function due to small emitter size and intensity overlap of proximal objects. In these situations, we show how errors in positions can be corrected essentially completely with little added computational cost. Additional situations and applications to experimental data are explored in SI Appendix. In the presence of experimental-like shot noise, the precision of the SPIFF-based correction achieves (and can even exceed) the unbiased Cramér-Rao lower bound. We expect the SPIFF approach to be useful in a wide range of localization applications, including single-molecule imaging and particle tracking, in fields ranging from biology to materials science to astronomy.
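The diagnostic itself is simple to sketch: histogram the fractional (sub-pixel) parts of estimated positions and measure how far that histogram departs from uniform. The simulated estimators below are invented for illustration and are not the paper's correction procedure:

```python
import numpy as np

rng = np.random.default_rng(42)

def spiff(positions, bins=20):
    """Histogram (density) of the sub-pixel parts of estimated positions."""
    frac = positions % 1.0
    hist, _ = np.histogram(frac, bins=bins, range=(0.0, 1.0), density=True)
    return hist

def uniformity_deviation(hist):
    """RMS deviation of the SPIFF from the uniform density (1.0)."""
    return np.sqrt(np.mean((hist - 1.0) ** 2))

# True positions in pixel units, uniformly filling pixel interiors.
true_pos = rng.uniform(0, 100, 50_000)

# An unbiased estimator: small symmetric localization noise only.
unbiased = true_pos + rng.normal(0, 0.05, true_pos.size)

# A pixel-locked estimator: estimates pulled toward pixel centers.
centers = np.floor(true_pos) + 0.5
locked = true_pos + 0.5 * (centers - true_pos)

print(uniformity_deviation(spiff(unbiased)))  # near zero: uniform filling
print(uniformity_deviation(spiff(locked)))    # large: pixel locking
```

In this toy setting the locked estimator's fractional positions pile up around 0.5, which is exactly the non-uniform SPIFF signature the method exploits.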