What do we trust when research and lived experience conflict? (Part 1)

Mosaic of Minds and Other Musings
11 min read · May 17, 2023

--

Edits, mostly to alt text and captions, were made June 15, 2023.

What Research Can Tell Us

Until recently, scientific researchers and clinicians were considered the sole authorities on developmental disabilities. Sometimes, they disagreed with the people they were researching. For example, some autism researchers claimed autistic people don’t understand that other people have thoughts and feelings different from their own. When autistic people or their parents described evidence to the contrary, researchers (and the media, and the general public) dismissed this testimony out of hand. After all, the researchers were the Experts, so their pronouncements must be right.

No more.

We are witnessing a revolution. Autistic people are claiming the right to speak about the nature of their disability, and questioning whether researchers are as accurate or objective as they claim.

Starting in the 2000s, non-autistic researchers like Morton Ann Gernsbacher and Laurent Mottron lent their support. Gradually, more followed suit. The media started taking notice: first by writing think pieces, later by praising Steve Silberman’s Neurotribes, a deliberate re-writing of autism history influenced by autistic people’s own narratives.

The cover of Steve Silberman’s book Neurotribes: The Legacy of Autism and the Future of Neurodiversity, with a foreword by Oliver Sacks.
Image from Emily Willingham’s book review in Forbes.

Meanwhile, psychology, psychiatry and neuroscience have come under fire. These fields struggle with publication bias, questionable statistical practices, and continual failure to replicate findings. These issues exploded into public awareness with reports of finding BOLD activation in a dead salmon’s brain, and John Ioannidis’s claim that “most published research findings are false”. Many educated people probably know that one can design a study that supports almost any opinion. [1]

An MRI of a dead salmon with red dots showing statistically significant “brain activity,” accompanied by the headline, “Scanning Dead Salmon in fMRI Machine Highlights Risk of Red Herrings” (fishy pun presumably intended).
Sourced from an Odyssey article, “The Dead Salmon Problem in AB Testing,” by Robin Fisher.

Psychology, psychiatry, and neuroscience aren’t uniquely flawed: any field that refuses to publish most null results and fails to correct for multiple comparisons will have a similar problem. Medicine, neuroimaging, and genetics are probably especially affected: medicine has limited control over much of what it seeks to measure, while neuroimaging and genetics involve enormous numbers of statistical comparisons.
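The multiple-comparisons problem behind the dead-salmon result can be demonstrated with a few lines of code. This is a sketch, not an analysis of any real dataset: it uses the fact that, when there is no real effect anywhere, p-values are uniformly distributed, so testing thousands of “voxels” at the usual 5% threshold guarantees hundreds of false alarms.

```python
import random

random.seed(0)
n_tests = 10_000   # e.g. voxels in a brain scan, all pure noise
alpha = 0.05

# Under the null hypothesis (no real effect anywhere),
# each test's p-value is uniformly distributed between 0 and 1.
p_values = [random.random() for _ in range(n_tests)]

# Count "significant" results at the usual threshold.
false_positives = sum(p < alpha for p in p_values)
print(false_positives)   # roughly 500 voxels "light up" by chance alone

# A Bonferroni correction divides the threshold by the number of tests.
corrected = sum(p < alpha / n_tests for p in p_values)
print(corrected)
```

With correction, the chance noise almost always disappears, which is exactly why journals and the EQUATOR guidelines push researchers to account for how many comparisons they ran.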

To their credit, psychologists and neuroscientists responded by publicly soul-searching and debating how to do better on blogs and Twitter. They also created guidelines for every imaginable type of study, which can be found on the EQUATOR Network website. Many scientific journals adopted these as requirements for publication, which pressures researchers to make their work more transparent and replicable.

Given the challenges for research in general, how far should skepticism about disability research go? What do psychology and neuroscience have to offer that lived experience does not? Should researchers be considered an authority? If so, on what topics, and to what extent?

I occasionally see disability activists who seem “anti-science,” ignoring or dismissing research unless it supports their beliefs. Is that going too far?

How can disabled researchers navigate tensions between the assumptions of their discipline and their understanding of their own disability?

In short: who should we believe when research and lived experience come into conflict?

That’s an especially tough challenge when both the research and the lived experience are your own.

Illustration of a dark-skinned man in glasses and a lab coat writing on a notepad while looking in the mirror.
Illustration by Sam Chivers from an article in Spectrum called “Meet the autistic scientists redefining autism research.”

My answer: experimental research can help us understand disabilities, but it tells us different kinds of things than lived experience. We need both. Which one should be taken as the authority depends on the claim being made.

I’m going to deal with this issue by talking about the strengths and weaknesses of research, and not so much about those of lived experience. Research is just easier to pin down and talk about.

Also, when I say “research” or “science,” I’m talking about quantitative research: experiments plus statistical analysis.[2] In my opinion, what makes something “science” is that the results aren’t a foregone conclusion: your hypothesis can be falsified.

In this post, I’ll describe what we gain from scientific research on disabilities. In the next, I’ll discuss the limitations of research and why lived experience fills in these gaps. Ultimately, research and lived experience can complement each other.

What Research Does Well

Strength #1: Research can tell you what’s true of a whole group of people.

Researchers have an advantage most people do not. They can provide evidence of whether what’s true of one person is likely true of a whole category of people. An autistic person might notice she jumps at sudden loud noises. A researcher can find out whether any random autistic person you might meet on the street does the same thing.

Researchers can draw such conclusions for several reasons.

First, they have observed many autistic people over a long period of time.

The more autistic people you meet, and the longer you spend with them, the more you can learn about their characteristics. The more you compare them to people with other disabilities, the more you can say about what is true about autism, beyond what is true of people with disabilities in general. The wider the variety of age, socioeconomic status, and other characteristics you see, the more sure you can be that you are noticing something about autism, and not about people of a certain age, socioeconomic status, etc.

Researchers are not the only people with such expertise. Autistic people who have been part of disability communities for decades and interacted with hundreds of peers also know a lot about what autistic people are like. So do people who have taught autistic students for years. So do clinicians who have spent their careers diagnosing autistic people, comparing them both to neurotypicals and people with other disabilities.

However, each sort of expert has limits to their knowledge. Autistic people in disability communities tend to meet the sorts of people who participate in disability advocacy: those who are willing, able, and allowed to participate. Teachers may not know what autistic people are like when not in a student role. Clinicians tend to encounter people in their geographic area, and can only guess what autistic people are like in other countries.

By contrast, researchers deliberately recruit a mix of participants as close as possible to the whole autistic population. Experiments are designed to test people who are as representative of the population of interest as possible, and researchers can potentially work with people around the world.

Most importantly, researchers have statistics. They can test whether an observation would still be true if you “took away” the effects of age, socioeconomic status, etc.

In practice, researchers are limited by practicalities of funding, scheduling, time, language barriers, participants’ willingness to volunteer, and so on. Still, of all the types of experts, they come closest to being able to observe a representative population.

Strength #2: Research lets you define and measure things precisely.

Researchers can describe and measure people’s behavior more precisely than anyone else.

Here’s what I mean by precisely. Imagine you’re at a party and someone asks you how tall a guest is. Like most people, you don’t carry a tape measure or ruler with you. What would you answer?

You can see, for example, that some people seem “tall” and others “short.” You can compare heights. Given enough time, you could rank everyone in order. You could also say that Bob is taller than Mary.

However, you could not say how much taller Bob is than Mary, or how that difference compares to the size of the difference between Billie and Jean. You also could not say how tall Bob is or how tall Mary is.

Photo of 11 teenagers and 2 children wearing casual clothes, standing roughly in order of height, tallest on the left and shortest on the right. They vary in gender and ethnicity.
This image from a CBS article shows people of both sexes and varying ages standing roughly in order of height (tallest on the left, shortest on the right). How much taller is the man on the left than the girl on the right?

To get that information, you need a ruler or tape measure. Having a standard unit of height, and a device to measure it, gives you information.

The units themselves are arbitrary. Why is a centimeter the size that it is? It doesn’t really matter how big a centimeter is, only that we know how big it is, and it is always the same size. What matters is not the unit itself, but the fact that one exists.

There are different units you could choose to measure height, which seem equally informative. Do we use inches, centimeters, or something else? Which unit we choose is influenced by cultural, historical, and political factors. However, that does not mean all claims about height are suspect or meaningless.
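The point that units are arbitrary but interchangeable can be shown with a two-line conversion (the heights here are made up for illustration):

```python
# Hypothetical heights; the choice of unit is arbitrary,
# but the information it carries is the same.
bob_cm, mary_cm = 183.0, 165.0

CM_PER_INCH = 2.54
bob_in = bob_cm / CM_PER_INCH
mary_in = mary_cm / CM_PER_INCH

# The gap between Bob and Mary is the same physical quantity
# whether we express it in centimeters or inches.
gap_cm = bob_cm - mary_cm
gap_in = bob_in - mary_in
print(f"{gap_cm:.1f} cm == {gap_in:.2f} in")
```

Switching units rescales every number but changes no conclusion, which is why the cultural origin of a unit doesn’t make claims stated in it suspect.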

Most things we want to know about people are not as straightforward as height, and it’s less obvious how to measure them. How do you measure happiness, anxiety, or introversion? What about the intensity and focus of someone’s interests? How about someone’s charisma?

Psychology and neuroscience are about finding standard, observable measures of these processes (“operationalizing” them). These fields develop ways of measuring, debate which to use, and choose a few as the “gold standard.” (For example, the “go/no-go task” is a standard measure of “response inhibition,” the ability to stop yourself from performing an action.)

Researchers develop measures of psychological characteristics the way historical cultures developed measures of height. Just as measuring height gives us information we would otherwise lack, so do psychological measures.

Psychological characteristics are a little more complicated than height. For height, we just get out a ruler and hold it from the ground to the top of a person’s head, then read the highest number. For psychological characteristics, we conduct an experiment. In both cases, though, we go through a predetermined process to get a number.

An Example Experiment

How does all of this apply to understanding disabilities?

Suppose you want to know how well autistic people recognize faces. How do they compare to “the average person” or to people with prosopagnosia (inability to recognize faces)? You’re a regular person without access to a psychology lab. How would you try to find out?

You might first think of finding a bunch of autistic people, a bunch of neurotypical people, and a bunch of people with prosopagnosia and asking them, “How good are you at recognizing faces?”

The accuracy of the results would depend on their self-knowledge and ability to judge how they compare with other people. You will probably find that people answer based on how hard it feels to recognize a face. That tells you something interesting, but doesn’t tell you how skilled they actually are.

Let’s say neurotypical people rate themselves as better at recognizing faces than autistic people do. Do autistic people actually have difficulty recognizing faces, do they lack confidence in their abilities, or both?

There’s a lot else you can’t tell just by asking people. How much more trouble do the autistic people have recognizing faces than the neurotypical people do? Answering that question is like trying to measure height without a ruler: you have no consistent unit for comparing people.

Fortunately, you can answer such questions by doing an experiment. You can watch the same people while they recognize faces and measure their accuracy: the number or percent of faces named correctly.[3]

A 10x8 grid of photos of the faces of men and women. Each box shows a different individual.
This image is a 10x8 grid of photos of faces used in a face recognition experiment. Image from Autism Eye. The test these images come from can be accessed here.

Doing this experiment gives you more information than you had before.

You now understand more precisely how well your participants can recognize faces. You might find that some autistic people recognized all the faces, others recognized none, and most were in the middle. You might also find that most autistic people recognized more faces than people with prosopagnosia, but fewer faces than most neurotypical people.

You can now see how large the difference is between the group of autistic people and the group of neurotypical people. You can also see the size of the difference between the group of autistic people and the group of people with prosopagnosia. You can even compare whether the gap between the autistic group and the prosopagnosia group is as big as the gap between the neurotypical group and the prosopagnosia group.

This experiment is more precise than everyday observation in two ways.

  1. We have a unit that allows us to talk in terms of quantities, not just “more” or “less.”
  2. We have an agreed-upon method for obtaining this unit (showing faces and asking people to identify them later).
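The group comparisons above can be sketched in a few lines. All the accuracy scores here are invented for illustration; the point is only that a shared unit (percent of faces named correctly) turns “more” and “less” into quantities you can subtract and compare.

```python
from statistics import mean

# Hypothetical face-recognition accuracy scores (% of faces named
# correctly) for each group -- made-up numbers, not real data.
autistic      = [55, 70, 40, 65, 60, 50, 75, 45]
neurotypical  = [80, 85, 75, 90, 70, 88, 82, 78]
prosopagnosia = [20, 30, 15, 25, 35, 10, 28, 22]

aut, nt, pro = mean(autistic), mean(neurotypical), mean(prosopagnosia)

# With a common unit, group gaps become measurable quantities.
print(f"autistic vs neurotypical gap:   {nt - aut:.1f} points")
print(f"autistic vs prosopagnosia gap:  {aut - pro:.1f} points")
print(f"neurotypical vs prosopagnosia:  {nt - pro:.1f} points")
```

Once the gaps are numbers, you can ask the questions from the text directly, such as whether the autistic–prosopagnosia gap is as big as the neurotypical–prosopagnosia gap.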

Real experiments are a more complex version of this example. The units of measure and measurement process are less intuitive, and reasonable people can disagree on whether researchers are really measuring what they want to understand. However, in principle, real experiments work the same way.

Researchers can talk about amount and size where the rest of us have only “more” or “less.”

Strength #3: Research lets you eliminate alternative explanations, so long as you’ve thought of those alternatives in advance.

When researchers design an experiment, they consider alternatives to their hypotheses. Well-designed experiments will test and eliminate at least one of these.

For example, autistic people often avoid looking directly into other people’s eyes while interacting. Different explanations have been proposed for why they look away. Some non-autistic researchers in the twentieth century thought it was because autistic people lack interest in the emotions that can be inferred by looking at a person’s eyes. Autistic people said that instead, they found eye contact uncomfortable — even painful.

A cartoon shows two girls talking while sitting cross legged face to face. The one on the right has a speech bubble over her head showing she is talking. There is a dotted line from the eye of the girl on the left to the eye of the girl on the right. The girl on the left is thinking, “Am I making enough eye contact? Is this too much or too little? What did she just say? I can’t concentrate, this makes me anxious.”
Cartoon by Autism_sketches, from the Autism Sketches Twitter account.

You might think it would be easy to decide who was right. All non-autistic researchers have to do is add a condition measuring how (un)comfortable eye contact felt to autistic participants. If eye contact was uncomfortable, then that could be at least part of the explanation for looking away. The researchers could rule out autistic people’s explanation only if most or all participants said eye contact was comfortable.

To my knowledge, such experiments didn’t happen. (Please correct me if I’m wrong!) These researchers failed to use one of the great advantages of their profession.

In short, quantitative research gives us information about large groups of people that no other field can. However, it also has limitations.

There are some things research can’t do — and lived experience can do well. Find out what those things are in the next post.

Footnotes

[1] Well, almost any claim. Only Andrew Wakefield ever claimed to find evidence that vaccines cause autism, and his study was retracted as fraudulent. Meanwhile, even though the James Randi Educational Foundation offered a million-dollar prize for decades for evidence of psychic powers, it was never claimed.

[2] Qualitative research is a whole other issue, which I’m not addressing here. I think it falls between lived experience and quantitative research in various ways.

[3] There are different ways you could set up the experiment. Whose faces do you show? Do viewers know the people pictured in real life? You could show everyone pictures of celebrities whose names they recognize. You could introduce them to strangers and test for recognition later. You could even show people pictures of friends and acquaintances they know. Some of these tasks are easier than others, and people’s accuracy will rise or fall accordingly.

Loved this story? Hated this story? Got tales of your own to share? Tell me all about it at Mosaic of Minds’ current home on Substack.

Further Reading:

Michelle Dawson’s excellent critiques of autism research, especially researchers’ definitions of “high functioning” and “low functioning” and “severity.”

Mel Baggs (2016). Don’t ever assume autism researchers know what they’re doing.

Morton Ann Gernsbacher’s APS Presidential columns on bias in autism research, including interpretation based on group rather than the actual results; interpreting all results (even opposite ones) as bad if they come from a disabled group and good if they come from a neurotypical group; and dehumanization.


Mosaic of Minds and Other Musings

Emily Morson explains research on neurodivergent brains through the lens of cognitive neuroscience, SLP, & lived experience.