I’ve noticed that I can be slow to switch contexts, compared to other people. For example, I’m at my desk and a coworker comes up, makes eye contact, and asks me a question. I hear the words they say, but I haven’t disengaged from whatever I was working on before. So, I mentally buffer the audio while the previous context drains, and then try again to parse it. But by this point, I can’t remember the audio clearly enough for this to work, so I have to ask them to repeat the question. I haven’t really noticed other people struggling like this.
I am terrible at multitasking in general. When my attention bandwidth is extra narrow, like if I haven’t slept enough, sometimes my thoughts can’t make it past the event horizon: I may forget what I wanted to write down halfway through taking a note.
But my superpower is that I can concentrate on one thing for hours, even days. My favorite activity as a kid was to spend long afternoons into evenings reading books or tinkering with my computer, and years later, if anything has changed, it is only that I have stopped feeling self-conscious about it (…as much).
When I encountered the idea of monotropism, I realized that traits like these could have a single underlying source with a name. Monotropism and polytropism are two extremes on a theoretical continuum of ways to distribute scarce attention, either by putting more intense attention into fewer things at once, or shallower attention into more things at once. It turns out that no one actually “multitasks”;1 when we think we’re multitasking, we’re really doing rapid task switching, which also happens to be how a computer runs multiple applications. In the case of computers, a natural specialization arises: long-running, intensive jobs are more efficiently handled by “back end” systems or dedicated threads, and the computation results can be surfaced by “front end” systems which are better adapted to handle real-time interfaces with users or downstream systems. The idea of monotropism suggests that humans have this kind of specialization, too. Maybe the ability to task switch easily vs. the ability to concentrate deeply is a characteristic like being left-handed or right-handed.
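The “rapid task switching” that passes for multitasking can be sketched as a toy round-robin scheduler. This is only a minimal illustration of the idea of time slicing, not how any particular operating system actually schedules work:

```python
from collections import deque

def task(name, steps):
    """A toy job that yields control after each unit of work."""
    for i in range(steps):
        yield f"{name} step {i}"

def round_robin(tasks):
    """'Multitasking' as rapid task switching: run each task for one
    time slice, then send it to the back of the queue."""
    queue = deque(tasks)
    trace = []
    while queue:
        t = queue.popleft()
        try:
            trace.append(next(t))  # one time slice of work
            queue.append(t)        # not finished: requeue it
        except StopIteration:
            pass                   # finished: drop it
    return trace

print(round_robin([task("A", 2), task("B", 2)]))
# ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

Neither task ever runs “at the same time” as the other; the interleaving just happens fast enough to look simultaneous.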
Monotropism is the basis of one theory of autism.2 After studying this a bit I still don’t know if I have (or had and maybe “got better” from) Asperger Syndrome, and I don’t have the background to know if there is a consensus that monotropism is a credible explanation for Asperger’s,3 and honestly I’m not even sure I properly understand Asperger’s,4 but I do think monotropism can account pretty well for some of the classic features. Encyclopedic knowledge about a few subjects of interest—obvious fit. Then there’s the difficulty in social situations, and I think that can be explained as follows. If it’s true that communication is 80% nonverbal (body language and tone of voice) and only 20% content (choice of words), then when you speak with someone in person, you need to pay attention to several channels at once. But in a real-time situation, being monotropic can be kind of like having tunnel vision: when you focus on one aspect, the others disappear, and as soon as you try to recover one of the other aspects, the first one disappears. So you are forced to choose a single aspect to lock focus on, in order to get any traction. That aspect is most likely to be the content, because that is the one thing a listener is liable to be quizzed on. This could explain the phenomenon of “missing social cues”—it’s an intentional triage. Monotropic theory could also explain other confusing and sometimes contradictory ways that AS presents, for example, physical coordination. 
Penelope Trunk (who says she has AS) wrote that she is clumsy most of the time, bumps into things and so on, yet she is very coordinated when she concentrates on doing something physical, and in fact was almost an Olympic-level volleyball player.5 And finally, the fact that it’s possible to eventually learn even complex behaviors well enough to put them on autopilot—such as driving—suggests that tasks that are originally difficult for a monotropic person, like face-to-face communication, may eventually get easier. Sure enough, there’s evidence that autism improves in adulthood.6
I suspect, but can’t prove, that garden-variety introversion is also driven by monotropism, perhaps in milder form. The current consensus definition of introversion seems to be a lower preference or tolerance for stimulation.7 That’s compatible with an explanation based on monotropism—what is stimulation but exposure to novel attentional objects? The more traditional definition of introversion is a preference for less social interaction. Explaining this in terms of monotropism also seems straightforward: when you are focused on something, interfacing with another human is a distraction; so if you are a person who feels most comfortable when focusing on something, you may be unlikely to seek out social interaction.
In software algorithms, there is often a choice to search a branching structure depth-first or breadth-first. Imagine a tree whose root has two children, a “7” and a “5,” with the next tier down holding the nodes “2”, “10”, “6”, and “9”, and one final tier below those. Searching depth-first might mean you would look at the entire subtree under the “7” before moving on to examine the “5” or anything under it. Searching breadth-first would mean evaluating both the “7” and the “5” before moving on to look at any of the nodes beneath either of them (and then you would look at all of “2”, “10”, “6”, and “9” before getting around to the last tier).
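The two search orders can be sketched in a few lines. This is a minimal illustration using the tree described above; the placement of 2 and 10 under the 7, and 6 and 9 under the 5, is an assumption, and the final tier is omitted for brevity:

```python
from collections import deque

# Adjacency list for the example tree: root -> {7, 5} -> {2, 10, 6, 9}.
tree = {
    "root": [7, 5],
    7: [2, 10],
    5: [6, 9],
    2: [], 10: [], 6: [], 9: [],
}

def depth_first(node):
    """Visit a node, then fully explore each subtree before its siblings."""
    order = [node]
    for child in tree[node]:
        order.extend(depth_first(child))
    return order

def breadth_first(start):
    """Visit every node at one depth before moving to the next tier."""
    order, queue = [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree[node])
    return order

print(depth_first("root"))    # ['root', 7, 2, 10, 5, 6, 9]
print(breadth_first("root"))  # ['root', 7, 5, 2, 10, 6, 9]
```

Depth-first exhausts everything under the 7 before touching the 5; breadth-first sweeps each tier in turn.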
In my own life I see a depth-first pattern play out. I don’t have data points across multiple individuals, but maybe my story is worth sharing to illustrate the idea.
I was precocious in both language and math. I wasn’t a prodigy or anything, but one or two standard deviations out. I could read at age three, and I started first grade at age five. I’ll always remember my first grade teacher’s generous effort to assuage my boredom by allowing me to write worksheets for the other kids and use the purple mimeo machine to run them off. Around sixth grade, I was already in honors math but they wanted to advance me further, so I joined the honors class at the next higher grade level. I applied to college as a high school junior and got into one of my top choices, so I skipped my senior year, and never did get my high school diploma.
However, per the stereotype, I had trouble fitting in. In early grades, I was comfortable playing by myself, and when I did talk with other kids, I later learned that I was perceived as conceited because I used words they didn’t know. So I was teased, and occasionally had kickballs thrown at me during recess. In junior high and high school, I did my level best to fit in, by copying how my classmates dressed and how they styled their hair with hairspray and curling irons. I think I developed an unconscious habit of smiling a lot, because one day as I entered a classroom the teacher greeted me as “smiley,” which felt kind of like looking in the mirror and discovering that you’ve had something on your face all day. When I arrived at college, I was delighted to discover the existence of other nerdy types who I could relate to, but after an initial, unsustainable binge of all-night one-on-one conversations, I settled into a tendency to spend hours on end hanging out with familiar groups and not saying much. Over the next few years, from time to time I would latch on to someone outgoing and follow them like a shadow, and I think a lot of the folks I came into contact with during this period weren’t sure if I had a distinct personality. In hindsight, I think in this phase I was collecting data and trying to build up missing depth.
At sixteen, I had an epiphany that music lyrics were often trying to express something. It was like one of those “stereogram” images, which looks like noise, but if you cross your eyes just enough, a figure pops out. Until that moment I literally had no idea why there were words in songs. So I went through my music collection and listened to all my favorite songs to see if there was a story in the lyrics. I made a mix tape with the ones that had the most prominent or interesting stories, and I enthusiastically gave the tape to one of my newer friends (I was a college freshman at the time) and said to him, “Listen to the words!” But I didn’t tell him why, and thinking back on it now, the poor guy probably listened to that tape thinking I was trying to send him a personal message, and he must have been very confused.
Shortly after this, I had the experience of rewatching one of my favorite childhood films, “Transformers: The Movie.” Watching the movie now as a teenager, I realized that it had a plot. The experience was like seeing something in color when it had only been in black and white before. The scenes and voices were familiar, but I remembered them as a jumble of rote images and speech fragments. I now saw that the characters were trying to achieve things and there was a progression from before to after. This layer of meaning had been invisible to me at whatever age I originally watched and re-watched this movie (nine or ten?). It’s as if someone had been holding up a finger, and it took me years to realize that they were not making an arbitrary movement, like a dance, but pointing at something. I don’t know at what age it is considered typical development to start being able to grasp movie plots (it’s pretty hard to keyword search for, and I don’t know any developmental psychologists), but everyone I have described this experience to thinks it’s strange to remember a time before having the ability to perceive story or plot.
So in summary, over the first 20 years of my life, the theme was being unusually smart and unusually dumb at the same time.
When I look back at my progression of mental development, it seems that my inner world consisted of islands of clarity surrounded by an ocean of arbitrariness. It’s not that I didn’t have the cognitive tools to put pieces together and realize what they mean; it was a matter of scope. I could understand what was happening in a movie scene, but not an entire movie. So what seemed to be different between me and my peers—what I’m hypothesizing to be a monotropic characteristic—was that I was faster than average in grasping discrete, self-contained subjects, but slower than average in lacing together the bigger picture. If human mental development were like developing a photograph in a darkroom, the typical progression might be for the entire picture to gradually emerge with consistent sharpness throughout, whereas in my case it was more like a number of pinholes appeared, grew into spotlights of full detail, and continued expanding until they merged together and the picture was complete. So, partway through this process, I was lucid enough to realize that I had some deficiency compared to others, but couldn’t understand what it was or how to fix it. When the other kids were at an equivalent level of big-picture apprehension, they probably weren’t lucid enough to see that they were missing anything. As a result, my self-confidence curve was set back a fair bit.
“Breadth first” as an approach would seem to have some real advantages. Isn’t it better to progress in a balanced way? What if it turns out something waiting to be discovered in branch Y will make all the depth sounding in branch X irrelevant? Perhaps, but the trouble is that you are relying on the top-level node of a branch to accurately represent everything below it, and keeping that summary up to date is essentially cache invalidation, one of the two hardest problems in computer science (along with naming things and off-by-one errors).8 Meanwhile, there are use cases where “depth first” as an approach stands out. One example is in the Gallup / “StrengthsFinder” research,9 which concluded that the old-school career advice to remedy your shortcomings may be misguided. If you’re great at programming and bad at marketing, a “breadth-first” attitude would say you should learn marketing, but the StrengthsFinder advice is that you are better off doubling down on your programming and hiring someone to do your marketing.
With respect to a slow developer “catching up” to the rest of the world in breadth, I still wonder what the bigger picture is that I’m still not seeing, and which perhaps almost no one is able to see, because it takes an unusually broad scope of awareness. And maybe it’s rare to have the chance to develop this because the human lifespan is so short, witnessing only a tiny fragment of history, being exposed to just a speck of total available data. What is the next layer of meaning that we are too immature to grasp, because we operate on reactive autopilot and our spotlight of attention isn’t big enough to connect the dots?
When I was growing up, forgetfulness was a problem for me. In fourth grade, which was when I switched from public school to a private Catholic school, the new school required homework, but I wasn’t used to that. It’s not that I actively refused to do homework, I just wasn’t accustomed to it and didn’t have any tools to carry that context across to a non-school domain. I wrote down the assignments in a notebook, but when I got home, did I have any reason to look in my notebook? So it became a recurring real-life nightmare that I would come to school and we would be asked to take out our homework and I wouldn’t have it. Every time I would self-flagellate and insist to my teacher that I meant to do it. I got sent to the principal’s office somewhat frequently that year. I was also constantly leaving my belongings in restaurant booths or on the bus. I knew I shouldn’t forget things, but it kept happening. Or on math problem sets, the recurrent teacher comment was “careless”—I knew how to solve the problems, but on these repetitive assignments I would make “careless” mistakes.
But I cared. A lot. And I was sometimes driven to tears in frustration.
In 2015, Tony Attwood gave a lecture on Asperger’s in females10 in which he said the gender ratio for AS is considered to be 4:1 male to female, but the true ratio is closer to 2:1 because “girls with Asperger’s are smarter and more creative than the boys in coping with their social confusion.” He explained that boys with AS usually respond to challenging social situations in one of two ways: they either withdraw and isolate themselves, or they intrude upon and dominate others. But girls with AS often show a third coping mechanism which is not in the diagnostic criteria. From the talk:
And the girls will go, “Wow. I don’t get it. I don’t understand it. But who’s popular? Rachel. Okay, how does she talk? I’ll talk like Rachel. I’ll move like Rachel. What’s popular? Pink. Right, I’ll make sure I wear pink. What are they playing with? Barbies. I’ll get a hundred Barbies.” And so what she does is observe, analyze, and imitate. To “fake it till you make it.” She wears a mask, a facade, that makes her highly successful at what she does, but it’s a fake. It’s done by intellect, not intuition.
The only thing that surprises me about this picture is the suggestion that this behavior would be restricted to females, or people with Asperger’s. Imitation is a heuristic we all use. When no one is lined up at a particular checkstand or toll booth, it is probably closed, so trying to start a line there is likely a waste of time. Of course, imitating others’ choices is a shortcut, not a substitute for validating the reasons for those choices. The trouble for someone who uses imitation to get by in a domain they find difficult is that it may take them a long time to understand the underlying reasons for the behaviors they copy—or even, if they start young enough, to realize that there are underlying reasons.
In hindsight, a basic point of confusion for me as a child was that sometimes when I performed a behavior (like, say, solving an equation), I was praised and given high marks on tests, but other times executing a trained behavior didn’t get good results. For example, I saw others writing their homework assignments in their notebooks, so I wrote the assignments in my notebook too, but imitating what I saw was not enough in that case (didn’t result in the homework getting done), and it was not immediately clear why.
I remember the moment when I realized that remembering things didn’t have to be a mysterious and magical talent that you either did or didn’t have. It was in late high school, when I found out how a computer implements a reminder feature. The computer has a record somewhere that the reminder has to be raised at X time. So then the computer needs to repeatedly check: is it time yet? Is it time yet? Until eventually, it is. Right away, I realized that I could use this approach, as long as I set up a primitive to check a reminder system at a regular interval (say, looking at a calendar or notepad twice a day). Remembering could become something systematic, something as reliable as a machine, not subject to the whims of human distraction and subconscious. And I realized I could stop losing my belongings if I trained myself to check my surroundings before I move from one place to another. And I could remember to take care of something in the morning if I left the thing (like an envelope to mail) in front of the door before going to sleep. At Amazon, devices like these are called mechanisms.11 It turns out that business operations can have a certain similarity to an autistic childhood, in which important things can get forgotten because the islands of awareness aren’t cohering well (the islands of awareness are called employees).
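The “is it time yet?” loop described above can be sketched in a few lines. This is a toy illustration of polling, not any real reminder system; the reminder store and message are made up:

```python
import time

# Hypothetical reminder store: (due_time, message) pairs.
reminders = [(time.time() + 0.5, "mail the envelope")]

def check_reminders(now):
    """The 'is it time yet?' primitive: return reminders now due,
    removing them from the store."""
    due = [msg for when, msg in reminders if when <= now]
    reminders[:] = [(when, msg) for when, msg in reminders if when > now]
    return due

# The polling loop: check at a regular interval until everything is
# handled. (The human analogue: look at your calendar twice a day.)
while True:
    for msg in check_reminders(time.time()):
        print("Reminder:", msg)  # prints: Reminder: mail the envelope
    if not reminders:
        break
    time.sleep(0.1)
```

The reliability comes from the regularity of the check, not from any cleverness in the check itself—which is exactly what makes it a mechanism rather than a talent.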
Imitative behavior writes an assignment in a notebook because others do, and depends on an external cue to make that action useful (other kids probably had parents who hovered and told them to do homework; my parents did not). Leveraging a mechanism, on the other hand, means taking action intentionally, for the sake of a result. Starting to use mechanisms gradually wore away my long-reinforced behavior to imitate first and ask questions later. It took a long time, but I eventually stopped assuming by default that other people know better than I do. I ultimately transitioned from one extreme to the other, from a chronic space cadet to a professional organizer (more or less what a Technical Program Manager is). As Susan Cain, author of “Quiet,” pointed out in her appearance on the Tim Ferriss podcast, “So often, when you see someone who’s really good at almost anything, it’s because they actually started out exactly the opposite—and then they cared so much about fixing that problem.”
Revenge of the Monotropes
I used to have a pet theory of monotropism and Asperger’s. Since humans are herd animals, who band together for resource sharing and mutual protection, there is an evolutionary reason for human minds and brains to be optimized for interaction with other humans. And some amount of computation power helps with this; for example, one suggestion is that human irises are smaller than in other animals like horses, leaving the whites of the eyes exposed, precisely so that it is conspicuous to other humans which direction the owner of the eyes is looking. The ability to reason about what others are thinking and feeling can be important for interacting effectively and building relationships. So, evolutionary pressure would start to afford the human brain some general computation power, but this would naturally show up more like a game console than a full computer. Hear me out: A game console has a processor, but the platform is so adapted to the purpose of gaming that it is rarely used for something else. A general-purpose computer can play games, and do it quite well, but this takes somewhat more effort—both in development, and in the playing—because a lot of the capabilities and peripherals that are provided as standard on the game console have to be built up or adapted on the computer. If you only wanted to play games, you would be needlessly complicating your life to get a full computer, but if you wanted to do more than play games, it might be a worthwhile tradeoff.
So the theory was that “normal” (polytropic, neurotypical) people have adaptations that give them automatic, innate abilities in social situations, like how horses can stand up immediately after they are born. But in autism, the native social abilities are attenuated or absent. These abilities can be emulated in software, given enough processing power and training, but it takes time, and thus may look like delayed social development. But this tradeoff allows an evolutionary local maximum to be avoided, which means the system overall is capable of more. After all, the time period of human gestation and immaturity is already among the longest of all animals, which seems to be necessary to support greater complexity, so maybe this represents another step in that direction. Similar to the fictional world of *X-Men*, in which some minority of people are born with mutations that give them superpowers, maybe Aspies represent the emergence of a more flexible cognition platform, capable of thought that is more rational and less influenced by in-built social biases.
It’s certainly a flattering picture, if you are on the spectrum, but the problem is, it doesn’t reflect what we see in reality. In his article “Nerd culture is destroying Silicon Valley,”12 Pete Warden points out how technologists—many of whom were once socially marginalized—do not seem to always display the rational behavior one would expect from an advanced cognition platform. It seems that once they gain power, they often use it in the same manipulative and shallow ways that they previously resented from others. What this behavior seems to actually demonstrate is tunnel vision and naïve mimicry. And maybe that isn’t so surprising.
So I have come to believe that monotropism and polytropism are simply two different, equally valid attention distribution strategies with their own strengths and weaknesses. In fact, I think both neurotypes need each other, just as the white matter and gray matter of the brain need each other.
But I think monotropes have had a bit more of an uphill climb. When my father, who almost certainly has Asperger’s, was a kid, the word for what he is was “shy.” When I was young, the term for it was “Attention Deficit Hyperactivity Disorder, inattentive type” (a giggle-worthy oxymoron, akin to “hyperactivity without the hyperactivity”). It seems clear how a trait that makes you run deep instead of broad can have side effects of isolation and even maybe invisibility. Yet in a world increasingly orchestrated by computers, monotropic talents of concentration are more relevant than ever.13 Here as in other contexts, understanding the nature of our differences is the key to mutual respect and to identifying the kind of support that each of us needs to develop our strengths.
I like to imagine that in the future, we will identify (as opposed to diagnose) monotropism. And from a young age there will be dedicated social groups, guidance and coaching, and tailored learning environments… well, a girl can dream.
Jon Hamilton, “Think You’re Multitasking? Think Again,” NPR News, 2008.
“People can’t multitask very well, and when people say they can, they’re deluding themselves,” said neuroscientist Earl Miller. And, he said, “The brain is very good at deluding itself.” ↩
Wendy Lawson, in “The Passionate Mind” (Kindle edition, 2011), promotes the idea of Asperger Syndrome as a difference in attention distribution strategy. She quotes Dr. Dinah Murray’s definition: “Monotropism is an innate tendency toward ‘having a few interests highly aroused,’ compared to a polytropic tendency of ‘many interests less highly aroused.’” ↩
Here is a list of seven theories of autism and Asperger Syndrome, of which monotropism is only one. ↩
In 2013, with the release of DSM-5 (Diagnostic and Statistical Manual of Mental Disorders version 5), the diagnosis of Asperger Syndrome was removed in favor of grouping the symptoms into Autism Spectrum Disorder. However, I’m skeptical of this, since according to my understanding (mostly shaped by reading Temple Grandin), classic Kanner’s autism has trouble with language and identifying categories (for example, telling cats from dogs), while Asperger’s tends to be hyperlexic and not only fluent with categories but even fixated on them. So it seems possible that the resemblance is more superficial than meaningful. In any case, many choose to continue to use the term “Asperger Syndrome” and in this article, I will too. ↩
I was always great at sports. In grade school, I was the only girl the boys let play kickball. In middle school, I was a regional figure skating champion. After college, I played professional volleyball.
But if I’m not focusing on the sport at hand, I lose track of my body. I bump into so many things that I almost always have bruises on my thighs, shins, and shoulders. This happens so routinely to me that it wasn’t until the past few years that I realized that not everyone bumps into each other, and people think I’m being inconsiderate. ↩
Daniel J. DeNoon, “Autism Improves in Adulthood,” WebMD, 2007. ↩
Two popular books on introversion:
Elaine N. Aron, “The Highly Sensitive Person: How to Thrive When the World Overwhelms You,” 1996
Susan Cain, “Quiet: The Power of Introverts in a World That Can’t Stop Talking,” 2012
The book “Now, Discover Your Strengths” by Marcus Buckingham and Donald Clifton was titled to read as a sequel to the book “First, Break All the Rules.” But the topic has become its own series with follow-ups like “Strengths Finder 2.0” and “Strengths Based Leadership.” ↩
Tony Attwood, “Autism in Females,” delivered at the Annual Women’s Health Update in Sydney, AU, 2015 (31 minutes). Linked is a Vimeo version on healthed.com.eu. Tony’s website also features a page on girls and women with Asperger’s. ↩
“Maintaining a Culture of Builders and Innovators at Amazon,” a Gallup interview with Beth Galetti, SVP HR at Amazon, year unknown. Galetti:
You may have heard [Amazon CEO] Jeff Bezos say, “Good intentions don’t work, but mechanisms do.” A lot of people have great intentions, but at Amazon, we work to build mechanisms so that we can take those intentions and turn them into complete processes that we implement and inspect. ↩
Pete Warden, Nerd culture is destroying Silicon Valley, 2014:
When I look around, I see the culture we’ve built turning from a liberating revolution into a repressive incumbency. We’ve built magical devices, but we don’t care enough about protecting ordinary people from harm when they use them. We don’t care that a lot of the children out there with the potential to become amazing hackers are driven away at every stage in the larval process. We don’t care about the people who lose out when we disrupt the world, but just about the winners (who tend to look a lot like us).
I’d always hoped we were more virtuous than the mainstream, but it turns out we just didn’t have enough power to cause much harm. Our ingrained sense of victimization has become a perverse justification for bullying. ↩
Well known Albert Einstein quote: “It’s not that I’m so smart, it’s just that I stay with problems longer.” ↩