Concept: Source criticism
Implicit biases involve associations outside conscious awareness that lead to a negative evaluation of a person on the basis of irrelevant characteristics such as race or gender. This review examines the evidence that healthcare professionals display implicit biases towards patients.
Scientific misconduct, defined as fabrication, falsification, and plagiarism, has occurred throughout the history of science. The US government began to take systematic interest in such misconduct in the 1980s. Since then, a number of studies have examined how frequently individual scientists have observed scientific misconduct or been involved in it. Although the studies vary considerably in their methodology and in the nature and size of their samples, in most studies at least 10% of the scientists sampled reported having observed scientific misconduct. In addition to studies of the incidence of scientific misconduct, this review considers the recent increase in paper retractions, the role of social media in scientific ethics, several instructional examples of egregious scientific misconduct, and potential methods to reduce research misconduct. (Annual Review of Psychology, Volume 67, January 2016.)
- Proceedings of the National Academy of Sciences of the United States of America
To provide social exchange on a global level, sharing-economy companies leverage interpersonal trust between their members on a scale unimaginable even a few years ago. A challenge to this mission is the presence of social biases among a large heterogeneous and independent population of users, a factor that hinders the growth of these services. We investigate whether and to what extent a sharing-economy platform can design artificially engineered features, such as reputation systems, to override people’s natural tendency to base judgments of trustworthiness on social biases. We focus on the common tendency to trust others who are similar (i.e., homophily) as a source of bias. We test this argument through an online experiment with 8,906 users of Airbnb, a leading hospitality company in the sharing economy. The experiment is based on an interpersonal investment game, in which we vary the characteristics of recipients to study trust through the interplay between homophily and reputation. Our findings show that reputation systems can significantly increase the trust between dissimilar users and that risk aversion has an inverse relationship with trust given high reputation. We also present evidence that our experimental findings are confirmed by analyses of 1 million actual hospitality interactions among users of Airbnb.
Meta-research is research about research. Meta-research may not be as click-worthy as a meta-pug (a pug dog dressed up in a pug costume), but it is crucial to understanding research. A particularly valuable contribution of meta-research is to identify biases in a body of evidence. Bias can occur in the design, conduct, or publication of research and is a systematic deviation from the truth in results or inferences. The findings of meta-research can tell us which evidence to trust and what must be done to improve future research. We should be using meta-research to provide the evidence base for implementing systemic changes to improve research, not for discrediting it.
Much diagnostic error is caused by cognitive bias. More than 100 biases affecting clinical decision making have been described, and many medical disciplines acknowledge their pervasive influence on our thinking. Training in critical thinking may ameliorate the problem.
Geraghty (2016) outlines a range of controversies surrounding publication of results from the PACE trial and discusses a freedom of information case brought by a patient refused access to data from the trial. The PACE authors offer a response, writing ‘Dr Geraghty’s views are based on misunderstandings and misrepresentations of the PACE trial’. This article draws on expert commentaries to further detail the critical methodological failures and biases identified in the PACE trial, which undermine the reliability and credibility of the major findings to emerge from this trial.
Honesty is a crucial aspect of a trusting parent-child relationship. Given that close relationships often impair our ability to detect lies and are related to a truth bias, parents may have difficulty detecting their own children’s lies. The current investigation examined the lie detection abilities (accuracy, biases, and confidence) of three groups of participants: a non-parent group (undergraduates), a parent-other group (parents who evaluated other people's children’s statements), and a parent-own group (parents who evaluated their own children’s statements). Participants were presented with videos of 8- to 16-year-olds telling either the truth or a lie about having peeked at the answers to a test and were asked to evaluate the veracity of the statement along with their confidence in their judgment. All groups performed at chance in the accuracy of their veracity judgments. Furthermore, although all groups tended to hold a truth bias for 8- to 16-year-olds, the parent-own group held a much stronger truth bias than the other two groups. All groups were also highly confident in their judgments (70%-76%), but confidence ratings failed to predict accuracy. These findings, taken together, suggest that the close relationship that parents share with their own children may be related to a bias toward believing their children’s statements and, hence, a failure to detect their lies.
The persuasive power of brain images has captivated scholars in many disciplines. Like others, we too were intrigued by the finding that a brain image makes accompanying information more credible (McCabe & Castel in Cognition 107:343-352, 2008). But when our attempts to build on this effect failed, we instead ran a series of systematic replications of the original study, comprising 10 experiments and nearly 2,000 subjects. When we combined the original data with ours in a meta-analysis, we arrived at a more precise estimate of the effect, determining that a brain image exerted little to no influence. The persistent meme of the influential brain image should be viewed with a critical eye.
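The meta-analytic pooling described above can be sketched in a few lines. The effect estimates and standard errors below are invented for illustration (they are not the study's data); the point is only that inverse-variance weighting yields a combined estimate more precise than any single experiment.

```python
# Fixed-effect inverse-variance pooling: each estimate is weighted by
# 1/SE^2, so larger, more precise studies dominate the combined result.

def pool(effects, ses):
    """Return the pooled effect and its standard error."""
    weights = [1.0 / s ** 2 for s in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5  # shrinks as studies accumulate
    return pooled, pooled_se

# Hypothetical numbers: one small positive original study, two larger
# near-null replications. The pooled estimate sits near zero.
effect, se = pool([0.30, 0.05, -0.02], [0.20, 0.10, 0.08])
print(effect, se)
```

The pooled standard error is smaller than that of any individual study, which is what makes a combined analysis a "more precise estimate" than the original experiment alone.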
Publication bias is a form of scientific misconduct. It threatens the validity of research results and the credibility of science. Although several tests for publication bias exist, no in-depth evaluations are available that examine which test performs best for different research settings.
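One widely used test in this family is Egger's regression for funnel-plot asymmetry. A minimal pure-Python sketch, with invented study data, looks like this:

```python
# Egger's test regresses the standardized effect (effect/SE) on precision
# (1/SE); an intercept far from zero signals funnel-plot asymmetry, one
# common symptom of publication bias.

def egger_intercept(effects, ses):
    z = [e / s for e, s in zip(effects, ses)]   # standardized effects
    prec = [1.0 / s for s in ses]               # precisions
    n = len(z)
    mx, my = sum(prec) / n, sum(z) / n
    sxx = sum((x - mx) ** 2 for x in prec)
    sxy = sum((x - mx) * (y - my) for x, y in zip(prec, z))
    slope = sxy / sxx
    return my - slope * mx                      # OLS intercept

# Invented studies in which the small studies (large SE) report inflated
# effects, the classic signature of publication bias: the intercept comes
# out clearly positive.
print(egger_intercept([0.8, 0.7, 0.55, 0.45, 0.35, 0.30],
                      [0.40, 0.35, 0.25, 0.20, 0.15, 0.10]))
```

In practice the intercept would be tested against zero with a t-test, and no single test settles the question, which is precisely the gap in comparative evaluation that the abstract above points to.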
- Health Psychology: Official Journal of the Division of Health Psychology, American Psychological Association
Objective: Information about risks is often contradictory, especially in the health domain. A vast amount of bizarre information on vaccine-adverse events (VAE) can be found on the Internet; most of it is posted by antivaccination activists. Several actors in the health sector struggle against these statements by negating claimed risks with scientific explanations. The goal of the present work is to find optimal ways of negating risk to decrease risk perceptions. Methods: In two online experiments, we varied the extremity of risk negations and their source. Outcome measures were the perceived probability of VAE, their expected severity (both variables serving as indicators of perceived risk), and vaccination intentions. Results: Paradoxically, messages strongly indicating that there is “no risk” led to a higher perceived vaccination risk than weak negations. This finding extends previous work on the negativity bias, which has shown that information stating the presence of risk decreases risk perceptions, while information negating the existence of risk increases such perceptions. Several moderators were also tested; however, the effect occurred independently of the number of negations, recipient involvement, and attitude. Only the credibility of the information source interacted with the extremity of risk negation: for credible sources (governmental institutions), strong and weak risk negations led to similar perceived risk, while for less credible sources (pharmaceutical industries) weak negations led to less perceived risk than strong negations. Conclusions: Optimal risk negation may profit from moderate rather than extreme formulations, as a source’s trustworthiness can vary.