In 1968, a group of teachers and students participated in a psychology study. The researchers gave the students an IQ test but did not disclose the results to the teachers. Instead, they told the teachers that five students had scored unusually high and could be expected to outperform their classmates that year. The twist: those five students weren’t the highest scorers. They had been chosen entirely at random.
At the end of the year, the students who had been randomly labeled as having “unusually high IQ” outperformed their peers by a statistically significant margin. Perhaps the teachers spent more time with those students or treated them differently; they may have subconsciously behaved in ways that boosted the children’s performance. The finding became known as the observer-expectancy effect: performance can be positively or negatively shaped by the expectations of others.
The Clever Hans effect is another example of expectations shaping behavior. Clever Hans was a horse that appeared to spell, do arithmetic, and answer questions by tapping his hooves in a pattern. In reality, the horse couldn’t compute anything; he was keenly attuned to subtle, involuntary cues from his trainer and audience that signaled when he had tapped out the correct answer.
For a fictional example: in the first season of One Piece, the protagonists face an enemy who can hypnotize his minions into believing they are stronger, and then they actually become stronger. The show takes it to an extreme, but sometimes just believing is enough to make things happen.
I think about observer-expectancy as it relates to generative AI. We now have image models that can convincingly place us in any scenario. Just as representation in media matters for children, could AI achieve a similar effect, helping us visualize ourselves on successful paths?
Or consider AI biographers that could tell us encouraging narratives about ourselves: past, present, or future. We might all get a teacher who treats us like we’re in the experimental group of “unusually high IQ” students expected to outperform.
There’s rightly a lot of focus on the opposite: how AI can generate negative narratives and nudge us into believing them, or how AI and recommendation algorithms can feed on our insecurities and fears and serve us more of the same. But I think there’s a flip side that could be even more powerful if we learn how to harness it.
And the Rosenthal study has drawn its share of criticism; social science research faces a replication crisis, with many studies producing results that can’t be replicated. But sometimes, believing something is enough to make it true.