A pocket guide to the Internet’s reddest flags


It’s 9:48 p.m., and you’ve just finished the second half of your PSYC 100 readings. After capping highlighters and tucking your notes into your textbook, you pull your laptop over. You close Word, turn up the music, and check out who’s putting what on Facebook.

A couple of posts later, you furrow your brow. A new FB friend from two res buildings over has posted a FoodBabe image of a Starbucks to-go cup, captioned “Dissecting Starbucks ‘Pumpkin’ Spice Latte: Think Before You Drink,” which lists a number of ingredients and accuses the beverage of being a toxic mess.

You scroll on, but between Instagram pics and watching a suuuuuper cute piglet bouncing through grass for all of six seconds (three times in a row) you can’t help but worry, and scroll back up to the FoodBabe’s post. Is the Pumpkin Spice Latte toxic? How would you find out? After all, you’re not studying toxicology, and don’t know how to scientifically assess the claims Vani Hari, the FoodBabe, is making.


Unlike the textbook clapped shut and pushed aside minutes earlier, or the carefully selected readings in class coursepacks, online materials vary wildly in their quality, accuracy, and the evidence used to support their claims. Hari’s latte meme is a good example: lots of claims with little explanation to back them up. We’ll walk through a couple of red flags in Hari’s post as an example. It’s not exhaustive (she makes a lot of assertions), but we’ll give you a good idea of how to spot some flags.

What we’re really talking about is critical thinking, which requires self-reflection, being aware of our own shortcomings, and being willing to admit when we are wrong. Indeed, the real key to critical reading is figuring out when it’s important to read something thoroughly, since we can’t do it all the time. Psychologist Daniel Kahneman calls this challenge “thinking fast and slow” in his 2011 book Thinking, Fast and Slow. Most of the time we “think fast” (i.e. rely on shortcuts and intuition), and only occasionally do we “think slow” (i.e. carefully and critically).

How would you start this process? Generally speaking, we advocate for asking this approximate series of questions: What evidence supports the information being presented? How are these arguments made? What are the sources, and where do they come from? Was the evidence used correctly or not? And who is making the argument in the first place?

Being able to identify assertions is an important skill for a critical reader. An assertion is any claim offered without support (i.e. with no explanation or evidence), and assertions are everywhere. For instance, “veganism is unhealthy,” “corporations are out to get us,” and “Katy Perry is terrible” are all assertions. In Hari’s case: “Starbucks’ Pumpkin Spice Latte is toxic.”

Identifying a statement as an assertion doesn’t mean that it is untrue; it means we need more information to determine whether the assertion is valid. Assertions are common because writers and speakers often assume that their audience already agrees with them, or think their claim is so obvious that it doesn’t need support. But anyone who makes a claim should have to earn our agreement through explanation and evidence, instead of presuming we will be persuaded by rhetorical statements. This is where asking a lot of questions is really important.

[pullquote]In short: assertions on their own should not be enough to convince us.[/pullquote]

Sometimes we’ll see people defend assertions not by providing additional explanation and evidence, but by trying to shift the burden of proof onto their critics. They might say, “Prove me wrong,” or, “Well, my claim hasn’t been disproven.” This should be a red flag, since it suggests that they don’t have any real way to support their assertion, or that they’re unwilling to share additional evidence. Thankfully, this doesn’t appear in Hari’s meme, but such tactics are used by Internet trolls worldwide.

Of course, it’s ideal if evidence is also used in challenging assertions (e.g., “Do you have any evidence for your claim? Because here’s some evidence that says the opposite”), but ultimately the burden of proof should fall on the person making the initial claim. We should always demand evidence from those who seek to persuade us.

Because many arguments on the Internet are now quickly communicated in the form of brief memes, quotes, or jokes, they rarely include any real reference to evidence or explanation. They expect us, the readers, to take them at face value. Because so little information accompanies Hari’s meme, anything we accept from it is accepted essentially on good faith.

At this point, we don’t know anything about Hari—who she is, or how she came to this conclusion—so we shouldn’t just trust her by default. At the very least we should click through to the FoodBabe’s website, where we can look into the longer discussion she’s posted, check out some of her sources, and generally find out more. In short: assertions on their own should not be enough to convince us.

While there are many objectionable characterizations in her post, including claims about GMOs and “Monsanto milk,” we’ll focus on just one assertion in particular: one about the ingredient Caramel Colouring IV and its associated byproduct, 4-MEI. Hari tries to argue that 4-MEI is carcinogenic, citing research from the US Food and Drug Administration (FDA) and the Center for Science in the Public Interest (CSPI), and concluding that Starbucks is poisoning people with their seasonal latte, and so we shouldn’t drink it.

Hari does seem to support her claim with evidence: the FDA and CSPI. But we should go a bit further to discover whether the evidence was used correctly. Depending on the discipline, there are a number of resources for helping us figure this kind of thing out. A website like scienceornot.net identifies a wide range of science-related assertions and misuses (or fallacies) that people make—a convenient way to identify major red flags.

By working through some of the documents she cites and reading other FDA sources, we crinkle our brows when we find that Hari cherry-picks some of her data: she cites the FDA, but ignores other evidence where the FDA (as well as the European Food Safety Authority and two other major health organizations) has plainly stated that Caramel Colouring IV is safe. That’s definitely a red flag. And in this case, we’re more inclined to trust the FDA than someone who takes only a part of their research and runs with it.

At this point, we can assess the quality of the evidence itself, as we’ll do with the CSPI. Ideally, we want writers and speakers to use top-notch evidence to support their claims. When we’re trying to understand sources outside our own field, we need to do a bit of extra poking around to establish whether a source is credible or not.

Peer review is an excellent way to get a sense of a source’s credibility and accuracy, and is the backbone of all academic disciplines. If an individual or group has been negligent in reporting facts, has a political bias in their arguments, or simply misrepresents information, peer review will often let us know.

What about Hari herself? She uses science-y words and investigates chemical ingredients… Is this a case of “looks like science, smells like science, is science”? On the contrary, scientists and doctors (in this case, a proxy for peer review) have criticized Hari for her positions on flu vaccines and microwaves—another red flag.

How do we weigh these out? Well, doctors and scientists at least have specialized training in the field. From browsing Hari’s website, we learn that she is a blogger—not a bad thing—but she has not provided any evidence of post-secondary training in dietetics, chemistry, toxicology, or any related field, which may help explain why people with science backgrounds criticize her seemingly scientific claims.

Taking this more broadly: maybe we don’t want a blogger posing as a dietician or a scientist without the appropriate training, especially when her arguments don’t mesh with the evidence. Hari makes major mistakes with even the one example we looked into. The issue here is that she speaks from a place of authority that her qualifications don’t support.

This may seem like a fairly involved process to go through for one online claim, so do you really have to do this for every piece of information you encounter? In an ideal world, yes. But for those without the time or mental energy, you can build a list of trusted sources by examining them critically once, while staying wary of the dangers of shortcuts.

Whether we’re conscious of it or not, we constantly take mental shortcuts when processing information. For example, the “halo effect” makes people seem more reasonable to us if we already like something else about them. Some people consider Hari very telegenic and attractive; this alone might sway them to trust her and consider her reasonable.

Similarly, the “affect heuristic” causes us to process information based on our emotional response to a given person or thing, so it’s no wonder that writers often use overly dramatic language. But, as shown above, there are big red flags with just the first major point Hari makes in her post, which makes us seriously question using her as a source for dietary considerations, and the affect heuristic doesn’t help us out in this case. Are there any good shortcuts?

One useful shortcut is to trust certain experts and sources once you’ve evaluated them initially. After you’ve read a few articles critically, you will probably find that there are a number of authors or organizations that have (unlike Hari) met your standards for evidence and seem to be fairly objective in their claims. In the future, you can apply less scrutiny to these sources, deferring to their judgment on the other perspectives that arise in the same field. Similarly, if we find that a certain source embodies bias, misinformation, and rhetoric, it is probably safe to not give it much weight in the future. Of course, these perceptions should be viewed as tentative—we should always be open to the idea that we were initially mistaken, or that our trusted source is reasonable on some issues, but not others.

As we acknowledged, you can’t scrutinize everything, and for you the claim that matters might not be a FoodBabe post about a seasonal latte. Find out what issues matter to you most, and apply your critical capacity there. For other issues, be willing to say “I’m not sure about that yet,” and listen to those you trust (ideally experts) who have thought critically about that issue instead.

There’s a lot of good, a lot of wacky, and a lot of plain wrong stuff on the Internet. As university students, we learn to develop these analysis strategies and apply them in academia and, just as importantly, outside of it. By spotting red flags, we can learn to distinguish good arguments from bad and good sources from bad (most of the time), and start to decide (tentatively) who and what we can trust.

While we recognize it can be difficult to use this kind of approach all the time, we think it’s definitely worth it, because sometimes, it really can make the difference between being duped or seeing through a terrible argument. Asking the right questions is something everyone can do.


Editor’s note: This article previously stated that Hari had made an error in referring to 4-MEI as 4-MEL; in fact, both names are acceptable for the chemical product described. We regret the error.