In London I regularly walk past people sharing or promoting garbage – it might be why we’re going to hell, why we need to invest in X or why we need to buy this product. I even get automated phone calls selling me things. Nobody listens to them but, for some reason, the amount of respect people give to random ideas increases dramatically when it’s in the form of an article or website. Particularly so when it’s dressed up to closely resemble a scientific review.
For some reason the written word has always been held in higher regard than the spoken one. In our culture written opinions, published anywhere, still carry connotations of authority and trustworthiness. However, since internet opinions aren’t any more regulated than the verbal ones your mate might give you in the pub, we need to bring the same scepticism to reading as we do to listening to people who aren’t qualified to give advice.
In fact, whenever I see a patient receiving a diagnosis, the doctor explaining it never fails to warn the patient about the dangers of ‘googling’ their illness.
Articles claiming that a particular symptom means inevitable death, or articles where ‘experts’ explain how a disease is actually the result of a deficiency of the very thing their product supplies, far outnumber legitimate, informed opinions.
The internet is awash with fallacies and it isn’t always easy to tell what can be trusted. Health gurus regularly misuse science to back up their ideas and enthusiastic amateurs share conclusions from studies without being able to assess their quality.
Common types of internet health fallacies:
1) The accidental, well-meant fallacy. Normally from people who are careless with the scientific knowledge they’ve acquired from reading articles and abstracts on PubMed. They are often well-meaning enthusiasts, but they lack the understanding needed to assess the reliability of a paper.
2) The intentional, research-selective fallacy. This often comes from people selling products that have some basis in the scientific literature, but because the research is overwhelmingly against their product, they cherry-pick the studies they present.
3) The intentional, misrepresentation fallacy. This is similar to the last, but worse. Rather than finding a few papers that agree with them, there are no papers on the topic at all, because the claim is so bizarre. So they make huge leaps of logic, extrapolating a theory from a few grains of truth and dressing it in technical terms so that many non-scientists can’t spot the obvious lies.
Intentional fallacies are far more malicious and manipulative – these people are parasites living in the belly of scientific research. They scour scientific journals, pick out the one or two very weak positive results from a hundred negative ones, and use them to sell a product.
Science education at university places a huge emphasis on criticising other scientists’ work. Few people outside science seem to be aware of this, but the conclusion of an abstract in a poorly ranked journal means almost nothing. I wrote my undergraduate dissertation in Neuroscience on the molecular mechanisms behind a certain brain tumour, yet nearly half the marks came from pointing out the flaws and limitations in respected academic papers. The scientists who wrote them weren’t even trying to deceive; they often simply couldn’t carry out the required tests or didn’t have the best samples. It’s accepted that studies aren’t perfect and can lead to erroneous results – that’s why they’re re-tested again and again. But imagine how easy it is to ‘prove’ a point by intentionally fishing the really low-quality abstracts out of the haystack of preliminary studies and poorly tested ideas.
This doesn’t mean science is flawed, it just means it can be manipulated by the insincere and passed on to the naive.
Legitimate medical doctors and scientists never act on single papers, even those in the best journals. Their conclusions are based on a thorough review of ALL the studies on a question – in what is called a meta-analysis. And even then, many studies are excluded because their methods were too poor.
Many of these scientific fallacies are examples of what in philosophy is called a false syllogism:
If all As are Bs, then all Bs must be As.
The above statement is false: all scientists are human, but not all humans are scientists. This may be patronisingly obvious, but it is a simplified version of how many health gurus sell ideas and products. For example:
X can prevent cancer and X is found in supplement Y, therefore supplement Y prevents cancer.
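For readers who like their logic formal, the broken step can be sketched in predicate-logic notation (with A and B standing in for any two properties, such as ‘prevents cancer’ and ‘is in supplement Y’):

```latex
% Illicit conversion: the premise does not entail its converse.
\forall x\,\bigl(A(x) \rightarrow B(x)\bigr)
  \quad\not\Rightarrow\quad
\forall x\,\bigl(B(x) \rightarrow A(x)\bigr)
% Counterexample: let A(x) mean ``x is a scientist''
% and B(x) mean ``x is human''.
% Every scientist is human, yet not every human is a scientist.
```

One counterexample is enough to sink the pattern, which is exactly why the scientist/human case above settles it.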
A study will find that a certain substance, at very high concentrations, has a small positive effect in some people; companies then sell a supplement containing a fraction of the dosage used in the study, because the substance is expensive.
However, even if you do check the research first and find out the dose required, the supplement company usually doesn’t tell you how much is in its product. It hides behind a ‘proprietary formula’, whereby it is no longer legally required to disclose the dosages of individual ingredients.
There are many high-quality products and writers out there, though, and even those who can’t always back up their theories can be worth listening to, provided they make it obvious that what they’re describing is their own, totally unproven idea. Far too many bloggers and gurus confine that admission to the small print on their pages, while in articles and podcasts they state things in a way one naturally associates with fact.