Fun with Math, and Muslim Suicide
Torture Data, and Data Will Start Getting Uncomfortably Dark
You are reading a note from the Working Towards Ehsan Newsletter, covering Muslim Non-Profits and Leadership, a service of Islamic Estate Planning Attorney Ahmed Shaikh. Subscribe to this newsletter below:
In a previous post (“The Suicide Trick”), I described JAMA (Journal of the American Medical Association) Psychiatry’s peer-reviewed “research letter,” whose claim that American Muslims are twice as likely to attempt suicide is often accepted as fact by Muslim leaders, especially Imams (the target of Maristan’s marketing), and Zakat donors. I explained that the numbers, as found by ISPU, were all within the margin of error and therefore meaningless. A trick made the numbers appear to show that Muslims were twice as likely to attempt suicide. This is how it was done:
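To make the “margin of error” point concrete, here is a minimal Python sketch using invented counts (these are not ISPU’s actual numbers): even when one group’s raw rate is literally twice the other’s, the confidence interval for the difference can easily straddle zero, which is exactly what “within the margin of error” means.

```python
import math

# Illustrative counts only -- not ISPU's data. Suppose 8 of 500
# respondents in group A report a suicide attempt vs 4 of 500 in group B.
n_a, attempts_a = 500, 8
n_b, attempts_b = 500, 4

p_a = attempts_a / n_a          # 0.016 -- "twice the rate" of group B
p_b = attempts_b / n_b          # 0.008
diff = p_a - p_b                # observed gap: 0.8 percentage points

# Standard error of the difference between two independent proportions
se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)

# 95% confidence interval for the difference
low, high = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:.4f}, 95% CI = ({low:.4f}, {high:.4f})")
# The interval straddles zero: "twice as likely" here is not
# statistically distinguishable from "no difference at all".
```

With samples this size, a doubled raw rate is indistinguishable from noise; that is the starting material the letter’s authors had to work with.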
When Adjusting For Demographic Factors
The JAMA letter’s authors include an important qualifier that carries mountains of weight here: their conclusion that American Muslims are twice as suicidal as everyone else only holds “when adjusting for demographic factors.” The authors took numbers with no statistical significance and ran them through statistical software to perform a “regression analysis,” a technique for modeling relationships between variables. Only after doing this did they get results that would otherwise have been impossible to obtain.
Let’s Frankenstein the Data
Yaqeen Institute scholar and statistician Dr. Osman Umarji (writing through his University of California, Irvine affiliation) wrote to JAMA Psychiatry (his long-form reanalysis is here) and then posted a comment to the research letter after re-running the data. In it, he describes two principal critiques, which I will endeavor to simplify:
The first is that the authors started with nothing of value. There was nothing to report about the rate of attempted suicide among Muslims relative to other faith groups, at least not from the ISPU data, so there was no justification for running a regression analysis in the first place. To use the analogy from my previous article: if you flip several coins and find no greater chance of getting heads than tails, there is no reason to start running regressions hoping to find something different.
The second criticism is that the result of this data torture is a “suppressor effect,” an error known to statisticians for over 100 years. Umarji contended that the authors were wrong to run such an analysis with the data they had, and that the result was false mainly because of the way the authors treated race. Going back to the coin-flip example in my prior post: the statistician added the year engraved on each coin as a variable, but the coins that came up tails were from different years than the coins that came up heads, so the engraved year tells you nothing useful. The results of a regression analysis would be predictably worthless, just as a confession extracted under torture is worthless. If you torture data enough, it will tell you whatever you want it to say.
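The suppressor effect itself can be demonstrated with a small synthetic simulation (everything below is invented for illustration; it is not the authors’ data or model). A variable that is essentially uncorrelated with the outcome can, once added to a regression, roughly double the apparent effect of another variable:

```python
import random

random.seed(42)

# Synthetic suppressor-effect demo: x2 barely correlates with the
# outcome y, yet adding it to the model inflates the coefficient on x1.
n = 2000
signal = [random.gauss(0, 1) for _ in range(n)]
noise = [random.gauss(0, 1) for _ in range(n)]

y = signal[:]                                  # outcome: pure signal
x1 = [s + e for s, e in zip(signal, noise)]    # predictor: signal + noise
x2 = noise                                     # suppressor: the noise itself

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Simple regression y ~ x1 (no intercept; all variables are mean-zero)
b_simple = dot(x1, y) / dot(x1, x1)            # about 0.5

# Multiple regression y ~ x1 + x2, solving the 2x2 normal equations
a11, a12, a22 = dot(x1, x1), dot(x1, x2), dot(x2, x2)
c1, c2 = dot(x1, y), dot(x2, y)
det = a11 * a22 - a12 * a12
b1 = (c1 * a22 - c2 * a12) / det               # jumps to about 1.0
b2 = (c2 * a11 - c1 * a12) / det               # about -1.0

print(f"y ~ x1 alone: b1 = {b_simple:.2f}")
print(f"y ~ x1 + x2:  b1 = {b1:.2f}, b2 = {b2:.2f}")
```

In a controlled simulation like this, the doubling is real (x2 genuinely soaks up x1’s noise). Umarji’s point is that with observational survey data, the same arithmetic can manufacture a doubled “effect” that reflects how the variables were coded, not anything about the people surveyed.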
Fun with Race
The ISPU data, combined with the JAMA authors’ work, portrayed the American Muslim community as more non-Arab white than Asian.
According to the JAMA authors, the US Muslim community in 2019 (the ISPU report showed this somewhat differently) was 26% Black, 26% white, 24% Asian, 14% Arab, and 10% other.
The factors that led the JAMA authors’ regression to erroneous results were:
The reference group (regression analysis with categorical variables needs one) was white, a group with a high rate of suicide attempts that was also likely overrepresented in the American Muslim Survey.
ISPU, in its 2019 report, excluded Asians completely from its “general population” survey because the sample size was too small to support confident numbers. The Muslim survey did include Asians.
According to ISPU data from 2019, “Arab” is a race in the United States that is exclusively Muslim. It is not that ISPU had a small sample of non-Muslim Arabs in its data; it had none. The absence of any non-Muslim Arabs created a “correlation” between “Arab” and “Muslim”: when every Arab in the sample is Muslim, there is nothing to compare against. High correlation between variables is known to produce inaccurate results in the kind of analysis the authors were doing. Dr. Umarji emphasized this point.
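The empty-cell problem can be seen in a hypothetical cross-tab (the 14% Arab share among Muslims mirrors the figure quoted above; every other count here is invented for illustration):

```python
# Hypothetical religion-by-race counts. The structural problem mirrors
# the one described above: the non-Muslim Arab cell is empty.
table = {
    ("Muslim", "Arab"): 112,
    ("Muslim", "Non-Arab"): 688,
    ("Non-Muslim", "Arab"): 0,       # the empty cell: no non-Muslim Arabs
    ("Non-Muslim", "Non-Arab"): 800,
}

# Odds of being Arab within each religious group
odds_muslim = table[("Muslim", "Arab")] / table[("Muslim", "Non-Arab")]
odds_non_muslim = table[("Non-Muslim", "Arab")] / table[("Non-Muslim", "Non-Arab")]

# The Arab/religion odds ratio needs both odds; with a zero cell the
# denominator is zero and the comparison is simply undefined.
try:
    odds_ratio = odds_muslim / odds_non_muslim
except ZeroDivisionError:
    odds_ratio = None

print(f"odds ratio: {odds_ratio}")  # None -- nothing to estimate
```

Any regression dummy built on that empty cell is asking the model to estimate a comparison the data cannot support.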
It should not be surprising that nobody has yet come up with a way to generate accurate conclusions from data that does not exist. No software program is that good.
You don’t need a degree in statistics, or to understand terms like “suppressor effect” or “multicollinearity,” to see that this kind of game-playing goes disastrously wrong, or exactly right if you were fishing for a specific result.
Race is Important, Right?
Rania Awaad, the lead author of the JAMA Psychiatry letter, responded in her comment, writing about race:
[I]ncluding race in the model improves the predictive validity of the model and may provide a more accurate representation of the relationship between religion and suicide attempt. Race is a fundamental control variable that must be considered when studying any suicide epidemiologic study.
She further explains:
We can’t simply remove the variable because Arab Christians were underrepresented in this sample.
Awaad was overly generous to herself here. Arab Christians were not merely “underrepresented”; they did not exist in the sample. If race were a “fundamental control variable,” she should have obtained adequate data on the races she deemed important.
The JAMA Psychiatry authors mangled the data to arrive at their desired conclusion. There was no way to arrive at this conclusion other than by running these numbers through Dr. Awaad’s gratuitous regression-analysis blender.
Phantom Biostatisticians, Phantom Response, A Phantom JAMA “No Error” Certification
ISPU’s Dalia Mogahed, a credited co-author of the study, publicly claimed after Dr. Umarji’s critique that the authors had three independent biostatisticians review the work. The identity of these biostatisticians appears to be a secret, even from co-authors (including a couple of authors I spoke to who publicly bragged about their supposed existence) and from JAMA itself.
Mogahed also claimed JAMA reevaluated the author’s work and confirmed the accuracy of their findings. The lead author, Dr. Rania Awaad, took to Twitter to make a similar claim:
One might wonder: what 10-page detailed response? Like the phantom biostatisticians, Dr. Awaad appeared content to appeal only to the authority of a document she would not share. JAMA’s editors never concurred that there were no errors in the authors’ analysis (naturally, Dr. Awaad never produced this certification), and Dr. Awaad does not accurately state what “standard protocol” was at JAMA. I checked on these things with JAMA editors.
Separately, the JAMA “Muslim Suicide” authors publicly accused Dr. Umarji of disinformation, of being a liar and a troll.
As it happens, I obtained the “detailed response” the authors provided to JAMA. As I will discuss, it makes perfect sense why the authors kept it from the Muslim community and why the reaction to Dr. Umarji’s legitimate critique was so strikingly unscholarly and ad hominem. It will also be evident that it does not matter if the three “biostatisticians” were real, and if they are, it does not matter what they said. The “detailed response” will tell us plenty.
This Zakat-Suicide grift in the Muslim community is about to get worse. You will see how in the final part of this series.
Please be sure to subscribe to this newsletter for more.