A list of puns related to "Publication bias"
I started writing this as a reply to a comment but it got so goddamned long I decided to make a thread.
It is a case study of a case study. The paper at issue: Kratom, an Emerging Drug of Abuse, Raises Prolactin and Causes Secondary Hypogonadism (2018).
I look forward to the peer review.
Background
Kratom is a plant used for both medicinal and recreational purposes. In recent years, it has become more popular and has faced serious attacks from the American FDA, with encouragement from the NPA. Of salience here, the FDA has been found to be actively misusing available information to pursue a ban of one sort or another.^11 This has led to a sense of persecution among kratom users, some of whom had preexisting critiques of government, science, or medicine. Some kratom users whose medical providers learned of their kratom use have had negative experiences as a result, leading to poor health outcomes. These accounts are shared online and have exacerbated mistrust of medical workers.
The article here was extremely annoying to one /r/kratom community member due to a perception that the authors were really banging on about how shitty and illegal kratom is, or might be, a point whose relevance to endocrinology is questionable.
Methods
I used the color highlight tool in my PDF reader to label the text of the Abstract, Keywords, and main body of the article by subject. I did not include the Conflict of Interest or other mandatory disclosures, nor the References. I then copied the text into an editor to count the words in each category. Where possible, sentences were counted as wholly in one category or another. A scripted version of the tally is sketched below.
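Here is a minimal sketch of that tally, assuming each highlighted snippet has been pasted into a per-category list; the category names and snippet text are illustrative, not taken from the paper:

```python
# Hypothetical per-category snippet lists; fill these with the text
# highlighted in each color.
snippets = {
    "expected background info": [
        "Kratom is a plant whose leaves have been used ...",
    ],
    "direct case description": [
        "The patient reported daily kratom use ...",
    ],
}

# Word counts per category, plus each category's share of the total.
total = sum(len(t.split()) for texts in snippets.values() for t in texts)
for category, texts in snippets.items():
    words = sum(len(t.split()) for t in texts)
    print(f"{category} | {words} | {words / total:.0%}")
```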
The categories used were mostly self-explanatory; however, to clarify a couple of them:
Results
Here is a screenshot of the paper, color-coded, along with a nicely formatted table.
Just the table in text (percentages rounded):
| subject | words | share |
| --- | --- | --- |
| expected background info | 408 | 40% |
| direct cas |
There are many common stories that happen to most people in the church. Many people have had their tithing miracle where they pay tithing and miraculously get money for their other needs. Or there's the story where someone gets lost or loses something important and finds it after praying.
I've been thinking: what if the miracle didn't happen? People wouldn't share that story in sacrament meeting talks or in church meetings because it wouldn't be as interesting. Maybe they just ignore that experience, or even come away with some doubt.
Do you have a story that didn't go the way it was supposed to? What did you learn from it? Did you see a different kind of miracle than the usual one?
Hi all,
I am wondering if there is a way to convert publication bias statistics into some common metric. For instance, it is possible that the various Egger tests and their statistics (e.g., Z, t, etc.) are akin to a standard Z-test or t-test and so can be easily converted to an r effect size. Some of the others might be harder (e.g., I am not sure whether a fail-safe N can be converted to an effect size).
Let me know if anything like this exists!
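For the Z and t cases, the textbook conversions are straightforward; whether an Egger statistic can legitimately be treated as an ordinary Z or t for this purpose is exactly the open question here, so take this as a sketch of the standard formulas rather than a settled answer. (Fail-safe N, being a count of hypothetical unpublished studies rather than a test statistic, likely has no such conversion.)

```python
import math

def z_to_r(z: float, n: int) -> float:
    """Standard conversion of a Z statistic to an r effect size: r = Z / sqrt(N)."""
    return z / math.sqrt(n)

def t_to_r(t: float, df: int) -> float:
    """Standard conversion of a t statistic to r: r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t**2 / (t**2 + df))

# Illustrative example: a test reported as t(28) = 2.10
print(round(t_to_r(2.10, 28), 3))  # ~0.369
```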
The article: https://www.pjp.psychreg.org/wp-content/uploads/2020/12/nuzzo-120-150.pdf
Part of the male psychology network's publications. Excellent to see MRAs succeeding; they have created a non-feminist men's psychology section with publications etc.
Summary:
ABSTRACT: Males fare worse than females on many health outcomes, but more attention, particularly at a national level, is given to women's issues. This apparent paradox might be explained by gamma bias or a similar gender bias construct. Such potential biases require exploration. The purpose of the current paper is to present six streams of evidence that illustrate a bias against men's issues within the United Nations (UN) and World Health Organization (WHO). First, the UN's sustainable development goal on "gender equality" is exclusive to females. Second, the UN observes nine International Days for women's issues/achievements and one day for men's issues/achievements. Third, the UN operates 69 Twitter accounts dedicated to women's issues, culminating in 328,251 tweets since 2008. The UN does not operate a Twitter account for men's issues. Fourth, female words (e.g., "women") appear more frequently than male words (e.g., "men") in documents archived in the UN and WHO databases, indicating more attention to women's issues. Fifth, in WHO reports where simi…
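The fourth stream of evidence is essentially a word-frequency count over archived documents. A minimal sketch of that kind of tally, with illustrative word lists rather than the paper's actual search terms:

```python
import re
from collections import Counter

# Illustrative word lists; the paper's actual search terms are not given here.
FEMALE_WORDS = {"women", "woman", "female", "girls"}
MALE_WORDS = {"men", "man", "male", "boys"}

def gendered_counts(document: str) -> tuple[int, int]:
    """Count occurrences of female vs. male words in one document."""
    tokens = Counter(re.findall(r"[a-z]+", document.lower()))
    female = sum(tokens[w] for w in FEMALE_WORDS)
    male = sum(tokens[w] for w in MALE_WORDS)
    return female, male

print(gendered_counts("Women and men: a report on women's health."))  # (2, 1)
```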
https://www.vox.com/future-perfect/2019/5/17/18624812/publication-bias-economics-journal
This was a piece about a novel economics journal [literally called "Series of Unsurprising Results in Economics," or SURE] based on publishing null results that would otherwise be seen as unremarkable and not interesting enough for traditional scientific journals: conclusions that certain variables have no significant effect on each other, interventions that had no significant effect in their designated setting, confirmations of previously reached conclusions (thereby showing they are replicable), and generally any dead ends to relevant questions that other researchers in the field now know to avoid. The goal of the journal, which is open access, is to help combat the replication crisis by reducing the publication bias toward novel, eye-catching research that can come from questionable methodology and turn out to be difficult to replicate.
For scientists of Reddit: do you think journals set up like SURE in your specific fields could serve a similar role in reforming the incentive structures of your specialties to help combat publication bias? Do any such journals already exist? And are there any unique quirks of your research community that would make it more receptive to, or more skeptical of, them?
I just watched a talk Ben Goldacre gave at TEDMED in which he decries publication bias as 'a cancer at the heart of evidence-based medicine' and rails against the 'fake fixes,' such as the trial registers where researchers were supposed to record their protocol and hypothesis before going ahead.
Goldacre said that people simply didn't bother much of the time and that journals continued to publish regardless of whether they did.
But the talk was given in 2012. Have things gotten any better since then?