Category: Articles

These posts aren’t associated with a particular podcast episode. They’re just good, old-fashioned writing on a variety of nootropic, neuroscience, or personal optimization topics.

Stumped on what to get your favorite smarty pants this holiday season?

Fear not!  We’ve put together a short list of “favorite things” that help us stay smart.

For Your Parents…

Cognitive enhancers for brains that have been around the block.

Creatine Monohydrate

Creatine’s not just for bodybuilders!  Check out Episode 75, Creatine: Brains and Brawn? to learn more about the cognitive benefits of creatine.

Phosphatidylserine

Studies show that phosphatidylserine supplementation results in significant improvement in cognition for older brains. Listen to Episode 79, Phosphatidylserine: Extra Oomph, Even For Young Brains to learn more about the benefits of phosphatidylserine.

For the Teen or College Student…

Give the gift of sleep. Teenagers and young adult students are among the most sleep-deprived populations. Sleep is critical for memory consolidation and systemic repair of both body and brain.

Natural Calm

DIY Sleep Stack

Get crafty and put together the perfect stack to promote a great night’s sleep.
  • L-tryptophan: 500–1000 mg
  • 5-HTP: 50–100 mg
  • Vitamin B6: 50 mg
  • Sustained-release melatonin: 0.3 mg

For more on the importance of sleep and tips on how to maximize your snoozing, Dr. Christopher Winter and Dr. Kirk Parsley both have a lot to say on the subject.

For Your Lady Friend…

Because holiday stress is real.

Sensory Deprivation Tank

Instead of a massage, treat her to a sensory deprivation float experience. Tip: You can usually find a great deal on LivingSocial or Groupon.

Check out Episode 49 for a discussion on the benefits of sensory deprivation tanks.

For Your Beau…

Give him a night he’ll never forget…with a supplement designed to promote Lucid Dreaming.

Galantamine

Galantamine is often used by Lucid Dreaming hobbyists to make intense, vivid dreams more likely.  (Don’t know about Lucid Dreams?  Check out Episode #59 with Dr. Deirdre Barrett.)  This can be strong stuff, and it can disrupt your sleep if you take it at the start of a full night’s sleep.  The “best practice” approach is to set your alarm to wake you after you’ve gotten a few good sleep cycles in the bag — maybe 5-6 hours after your bedtime — take a quick dose of Galantamine, and go back to sleep.  Then buckle up for some action-packed dreams!

For Your Best Friend…

Everyone’s busy these days, especially around the holiday season, so give the gift of energy to keep them going.

Methylated B Vitamin

Research shows that as much as 40 percent of the population is deficient in Vitamin B12. B12 is integral to how your body produces energy, keeping your cells fed, happy, and healthy. Low energy and low mood are common symptoms of a B12 deficiency. B12 is also very neuro-protective, specifically protecting your brain and nerve-cell myelin.

Kimera Koffee

What could be better than coffee and nootropics?

The world loves coffee.

To say “people love coffee” is a little like saying “people love sex.” In fact, going by the straight numbers — two billion cups of coffee drunk per day vs. slightly fewer fornications — people may like coffee more than sex.

Muslims, Christians, Jews, Atheists, Europeans, Africans, Americans, Flat-earthers, scientists, terrorists, and Red Cross volunteers… Coffee-drinking cuts across all social divisions and unites us all in one big human family.

Homo sapiens caffeinophilus, you might call us.

Coffee is everywhere. The thought of caffeine-abstinence for the 90% of adults worldwide who partake daily is headache-inducing — both figuratively and often literally.

And so drastically reducing the world’s coffee supply — as unpopular as that would surely be — sounds less like reality than it does a bad plot device by a villain in a James Bond movie financed by Starbucks.

The coffee bean just doesn’t fit well with our preconceptions about endangered species.

But this grim prospect may be closer to reality than we’d like to think — and without a mustache-twirling mastermind as a convenient scapegoat. If we lose coffee, the perpetrator will be climate change.

And that means that ultimately, the villain will be us.

When Canaries Aren’t Enough.

We all know the canary in a coal mine expression, right?

Just in case, here it goes: In the bad ol’ days, in deep underground mines, sometimes noxious gases would seep up and kill miners. Canaries breathe fast and are apparently more cardiovascularly fragile than your average coal miner. So in the days before high-tech early warning systems, a dead canary — assuming you’d had the presence of mind to enter your coal mine carrying a live one — was a helpful clue that you should get the hell out of there.

Canaries (and similar warning bells) work well in cases of acknowledged and immediate mortal peril.

But they don’t work so well in situations where the thing that kills you takes a while to do so. You’ve never noticed canaries smoking Marlboros or eating McDonald’s Extra Value Meals alongside consumers of those products, and there’s a good reason for this. It’s not because canaries probably wouldn’t die from these things, given enough exposure; it’s because they wouldn’t die fast enough to merit the annoyances of long-term bird ownership.

Canary systems are great when the terror groundwork has already been laid. For nineteenth-century coal miners, the inciting fear wasn’t about the dead canaries; the birds were just a disposable commodity.

What made them care in the first place were graveyards full of dead coal miners.

Sometimes, It’s Gotta Hurt.

It is said there are two ways people can learn not to do things.

  1. The first way is to put your hand on a hot stove.
  2. The second way is to watch someone else put his hand on a hot stove.

The latter method is considered preferable.

The problem is, these two options are only for learning by individuals. When it comes to whole societies, there seems to be only one of the options available… And it’s not the preferred one.

Societies learn by hands-on experience (pardon the pun). It’s not that history fails to provide national-level cautionary tales — very much the opposite. It’s just that regardless of relevant historical warning signs, nations and their leaders always seem able to cook up plausible-sounding dismissals that “things were different then/there” and sweep aside the warnings.

After all, history is endlessly debatable, and the lead-ups to most grand-scale disasters aren’t as clear-cut as a hand on a hot stove.

Today, global climate change is probably the most looming, and least acknowledged, societal train wreck in progress. And true to form, humanity seems determined to burn its own damned hand on its own damned stove.

Environmental changes have spelled doom for human societies before — at limited, regional levels. (Jared Diamond’s 2005 book Collapse contains sufficiently chilling examples for those who want to ruin a few nights’ sleep.)

But these stories aren’t well-known enough, they aren’t scary enough, and they aren’t connected enough in the popular imagination to our species-wide bad habits. They’re not an immediate, whack-you-in-the-face threat like the dead bodies of your coal mining buddies.

And so we continue to reach toward the environmental stove with our own damned hand. (In fact, it’s a better analogy to say both hands. We’ve got exactly one planet; there is no back-up hand.)

Luckily, there is a middle-ground method of learning, which societies do seem capable of. It’s the inoculation model — akin to burning your fingertip, but recoiling in time to save your hand.

“That Which Does Not Kill Us…”

…had still better scare the bejeezus out of us.

Getting these historical fingertip-burns right is a tricky thing. Societies are fickle, dispersed in their attention, and easily distracted. Death and dismemberment are almost always part of the recipe necessary to catch a whole society’s interest to the point where its self-defense instincts are triggered.

Horrible as each of them was to the victims, I would cite both the bombings of Hiroshima and Nagasaki and the recent Ebola scare as successful historical inoculations.

With Hiroshima and Nagasaki, the world saw immediately just how serious these atomic weapons technologies were. The smoky silhouettes of incinerated civilians on still-standing walls were as motivating to strangers half a world away as dead coal miners had been to their small-town coworkers three-quarters of a century before.

And despite some close calls, this terrifying debut has kept our collective hand off the nuclear stove for over 70 years since.

With the recent Ebola virus outbreaks — though some have criticized the health care community’s response as “panic-mongering” — the fearful prospect of a widespread epidemic has marshaled public-safety preparedness to a far higher level than it would otherwise have reached. “False alarms,” when it comes to biological pandemics, are something we should be cheering for, not complaining about.

(By contrast, the resources brought to bear against a distinctly regional threat like Isis are way disproportionate to the risk. Isis’ methods are designed to be newsworthy, but the organization is not a threat to people living in New Brunswick, or Johannesburg, or Detroit. For a pandemic virus, the opposite is true. And Global Climate Change, in this sense, is a lot more like Ebola than it is like Isis.)

“Remember the Cappuccino!”

The sinking of the Maine, a U.S. warship anchored in Havana harbor, sparked the Spanish-American War (Cuba, back then, was a Spanish colony) and led to an unmitigated American ass-kicking of the formerly first-rate global power that was Spain.

The Maine exploded and sank under debatable circumstances — it may have been an honest-to-goodness technical accident rather than foul play — but nevertheless, “Remember the Maine!” was trumpeted on newspaper headlines as a rallying cry for American patriotism.

It met the 1890s standards for a fingertip-burn that merited an immediate, dramatic response.

All the examples that I’ve given so far — Hiroshima, Ebola, the sinking of the Maine — had a human body count. Maybe that’s a prerequisite for societies to wake up and smell the coffee.

But maybe, just maybe, times have changed?

Could the tragic death of something non-human (but well-loved) cross the fingertip-burn threshold?

The Bitter End.

If you’re not a coffee snob, chances are good that you know one. And any self-respecting coffee snob will tell you there are two primary strains of commercial coffee:

  • Coffea arabica — More aromatic and more popular, accounting for over 70% of global consumption.
  • Coffea robusta — A more bitter runner-up variety.

If you’re drinking coffee as you read this, odds are good that it’s Arabica.

This also means that your cup-of-joe has been bred from a very distinct lineage, originating in the mountains of Ethiopia. These recent, confined origins mean that our commercial coffee has very little genetic diversity and is particularly vulnerable to climate change. Put simply — a disruption that kills one of ’em is likely to kill all of ’em.

Arabica plants grow best in a narrow temperature range between 18 and 22 degrees Celsius, and they require gentle, regular rainfall.

With global weather patterns destabilizing, Arabica is in jeopardy. Researchers predict that agricultural lands capable of supporting Arabica could fall by half in the coming decades. To make matters worse, short-sighted efforts to improve coffee yields from the diminishing acreage could speed soil depletion and further accelerate shortfalls in production.

As Arabica’s availability dips, will we switch to the bitter Robusta alternative? Will we grit our teeth and just keep paying more for the dwindling supply, until we long for the days of free refills and coffees that cost only $3.50?

And to make matters worse, some climatologists predict that the world’s prime coffee-producing regions — places such as Vietnam, India and Central America — will be among those hardest-hit by climate change.

Are coffee’s days numbered?

Ghosts of Bananas Past

To some, all this may sound like panicked fear-mongering. After all, the world drinks three-quarters of a trillion cups of coffee annually. We couldn’t just run out…

It’s happened before.

Not with coffee, but with something almost as unthinkable.

Bananas.

I can hear you scoffing. “You’re not fooling me,” you say. “We’ve still got bananas.”

Well, yes and no.

The bananas we eat today are not, it turns out, our great-grandparents’ bananas. It wasn’t long before the aforementioned Spanish-American War that bananas got globally popular. In fact, a worldwide banana craze led to the initial development of refrigerated shipping. But I digress.

At the time, the world was hooked on a strain of banana known as the Gros Michel. (There are many, many strains of bananas — thousands of them. Most, you wouldn’t want anything to do with.)

But in the 1950s, a banana-blight began ravaging the Gros Michel strain. It got worse, and worse, and, well… worse.

Devoted horticultural scientists, bullet-sweating banana plantation owners, and all the financial might of companies like Chiquita were unable to save the Gros Michel.

Season after season, the world watched helplessly as the beloved banana strain went extinct. We were powerless to stop it.

The stopgap was a second strain, the Cavendish. It’s the one that we know today, simply, as “bananas.” The Cavendish is somewhat smaller than the Gros Michel. It rots faster, it bruises more easily, and according to those who lived at a time when Gros Michels were available to taste-test and compare, our modern Cavendishes taste worse. Gros Michels were sweeter, with a creamier texture.

They were, apparently by unanimous consent, a better banana.

But it was our modern Cavendish that was immune to the banana blight. And so, like the rat-ancestors who inherited the earth when the dinosaurs died out, the Cavendish had by the early 1960s inherited the title of “banana” for a dispirited but option-less global consumer public.

The Gros Michel banana blight was bad. If you owned a banana plantation back then, it was downright disastrous.

But it was an isolated disease, affecting a solitary crop. An important crop, sure — but it wasn’t a domino poised to tumble an entire ecosystem or undercut global food production.

In short, it wasn’t worth freaking out about.

It didn’t qualify as a fingertip-burn.

“Never let a good crisis go to waste.”

The line is usually attributed to Winston Churchill, but it encapsulates an idea as old as politics.

Bad news is a great motivator. A society amped up on adrenaline, fear, and righteous indignation is a society ready to get stuff done.

So imagine the crisis of a Coffee-less Future.

We are a world of addicts. (Let’s be honest, okay?)

And Climate Change — the red-handed culprit should this happen — is a villain who won’t stop at one crop, or one industry, or one continent…

The loss of coffee wouldn’t just be a disaster for breakfast, an existential threat to baristas, or a stock sell-off for SBUX…

It could be a fresh batch of dead coal miners.

A culinary mushroom cloud.

I like coffee. I drink it. I’ve even been known to photograph it on occasion. But if a few billion caffeine-withdrawal headaches could snap us all to attention on the inadvisability of playing chicken with the global environment…

I’ll be happy to switch to tea.

If we can keep our collective hand off the stove for the comparatively low price of an extinct beverage, we should count ourselves lucky.

Maybe a tomorrow without coffee is exactly the rude awakening we all need.

There’s something it’s hard not to notice when you speak with people about psychedelics.

Most pop culture portrayals of psychedelics have discussions that begin (and all too often, end) with the word “Duuuuuuuude.”  This may originate with the character “Shaggy” from Scooby Doo, the cultural progenitor of cartoon druggies.  Something about Shaggy clearly struck a chord; he’s been spliced and cloned into dozens of equivalent, well-meaning imbeciles across all media ever since.

But with all due respect to mystery-solving dogs and their human sidekicks, when you talk with real users of psychedelics, the topic expands well beyond “Duuuuude.”  People are eager to talk about their experiences.  And it’s very rarely just “I was so fucked up” or “I partied like it was 1999.” 

Well, sometimes it is that, but those things are the jumping-off point, not the follow-through.

People who become psychedelics aficionados — those who maintain an interest after their last rave or beyond their first bad trip — don’t want to talk about shiny colors or how their house cat suddenly turned telepathic.  They want to talk about what their psychedelic experiences have taught them about themselves.

It’s a weaker punchline than “Duuuuude.”

And often a lot more confusing, long-winded, and deeply personal.

But as these are what real (i.e. non-cartoon) psychedelics users find comment-worthy about their experiences, they seem worth paying attention to.

Half-Way Between You and Them

What follows is my pet theory on why psychedelic experiences can be so transformative for people.  But first, a question:

Why do people go to psychologists — or even to friends, family members, and others who know them well — to get advice on their own lives?


First, because — whether we’re narcissists or self-haters — we’re all deeply interested in ourselves.  And it’s always fun to get other people to discuss this best-loved of topics with us.

And second, because we’re extremely biased when it comes to ourselves.  We are not good judges of our own behavior, or recognizers of our own idiosyncrasies.  We are the water we swim in — and we are thus both omnipresent and invisible in our lives.  With less freedom than Peter Pan’s shadow, we follow ourselves around 24 hours a day.

Self-Blind: All Our Mirrors Are Wacky

At some point, you’ve done the optical illusion where you stare at a high-contrast image for 30 seconds, then look at a white wall, and you can see the “burned-in” negative image of whatever you’d been previously looking at.  (If you haven’t done this, were you never a kid?)

Your brain’s optical system — even in that short time span — had constructed a sort of overlay to “balance out” the strong contrasts in your visual field.  This is similar to how a camera automatically controls for exposure, so overly-bright parts of an image don’t “white out” and lose detail.  In your own optical system, this contrast-reduction helps sensitize you to variations in your visual field.  “Variations,” in this case, meaning movement.  (Maximizing awareness of nearby moving objects probably needs no justification.  Think: predators, prey, pies-in-face, etc.)
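(If it helps to see that adaptation trick in moving parts, here’s a toy sketch in Python.  It’s purely illustrative: the tiny 4×4 “image” and the adaptation rate are made up, and it models the gist rather than any real neural wiring.  A running average of the scene gets subtracted out, so constant features fade and changes pop; when the stimulus vanishes, its negative is left behind.)

```python
import numpy as np

# Toy model of sensory adaptation: keep a running average of recent
# "frames" and subtract it, so steady features fade toward invisibility
# and only changes stand out. The leftover signal after the stimulus
# disappears is, in effect, the afterimage.

def adapt(frames, rate=0.1):
    """Yield each frame minus the slowly-adapting 'expected' scene."""
    expected = np.zeros_like(frames[0], dtype=float)
    for frame in frames:
        residual = frame - expected   # what's new or moving
        expected += rate * residual   # absorb it into the baseline
        yield residual

# "Stare" at a bright square for 30 frames, then look at a blank wall:
square = np.zeros((4, 4))
square[1:3, 1:3] = 1.0
blank_wall = np.zeros((4, 4))

for residual in adapt([square] * 30 + [blank_wall]):
    pass

# Where the square used to be, the blank wall now reads negative:
# a burned-in negative image, like the one on your white wall.
print(residual[1:3, 1:3])
```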

So here’s the analogy: The nuances of our own behavior are the constant, unchanging elements in our own experiential world.  From our point of view, that is.

That’s why someone else’s opinion — the friend, the psychologist, even the stranger who tells you that your fly is open — can be so incredibly valuable.  We are a moving, high-contrast object in the perceptual experience of that person’s life… so we show up to them with greater clarity.  We “pop off the background” in a way we can’t do for ourselves.

Just like they do for us.  And just like they can’t do for themselves.

The Things Only We Know

However, for all the upsides of getting an outside perspective, there is undeniably value in self-reflection as well.  Although it’s seldom without effort, we can identify things about ourselves that others can never tell us, because we’ve got a huge advantage over them…

We have access to a far greater data-set about our own world, our own behavior, and our own experiences than any outside observer has.  (At least, this was true in the era before smart phones.  Nowadays, my iPhone may know more about me than I do — in a facts-and-figures sort of way.)

The conjunction of these two tool-sets — the memory library we store about ourselves, and the perspective offered by someone else, watching from the outside — is ripe with possibilities for new revelations about our behavior.  What’s effective, what’s ineffective, what as-yet-untested strategies may prove to be effective, and why.

Psychedelics — in my theory — cut out the middle-man.

During a psychedelic experience, the user’s view of reality is profoundly affected, like looking through a prism, or a kaleidoscope, or (to keep with the idea of an outside perspective) someone else’s eyeglasses.

And yet — here’s where the magic happens… Looking from that outside perspective, the psychedelics user gets to page through his or her whole catalog of self-knowledge.  The smorgasbord of memories and details even close friends don’t know — whether because these things are too private to admit, or too mundane to ever come up in conversation.

This fertile blend of outside perspective plus inner knowledge is the essential recipe for the insights that psychedelics can sometimes provide.  Of course, a psychedelically-skewed perspective could also be so confusing as to be useless.  Your pet goldfish will see you with a perspective that’s even more alien than your psychologist’s — but your goldfish’s perspective is less likely to be instructive when optimizing your behavior.

Know Thyself.  One Way or Another.

“Know thyself” is a quote attributed to Socrates — and to nine or ten other long-dead thinkers.  Self-interested as humans are, there’s no reason to think that just one person, or one culture, came up with this idea.  That’s probably what makes it such good advice.


Psychedelics are one means for people to know themselves better.  Maybe not the best means, certainly not the only means, and for some people, not even a safe means.  My comparison of psychedelic insight to psychological counseling is no more or less serious than my comparison of psychedelic insight to talking with a goldfish.

To put that another way: some psychedelic “insights” might not make it past the Duuuuuude threshold.  But the same will be true of talking with a psychologist, or of any other path to self-knowledge.

If knowing one’s self were easy, everyone would be doing it.

Psychedelics aren’t easy. 

And neither are they a direct path to meaningful insight, any more than the discovery of fire was a direct path to the steam engine.  My point isn’t to evangelize, or even to recommend – it’s just to propose a mechanism behind the age-old, cross-cultural claims of the value of psychedelic “visionary” experiences.

Of course, there are also probably epochs-old, cross-cultural versions of people saying Duuuuuuude and their friends laughing at them.

But ultimately, the punchline is the process. 

It’s the mental discombobulation of psychedelic states that gives them their utility.

Somewhere on the biochemical middle-ground between sobriety and being “completely fucked up,” a psychedelics user may just find himself on an optimal cognitive plateau, offering an unexpected view toward self-discovery.

“It is an unsettling fact that we can manufacture, wholesale and out of pure nothingness, whole events and pasts that never occurred.” – Elizabeth Loftus

Elizabeth Loftus, cognitive psychologist and memory-manipulation expert, has spent the last 40 years studying long-term, episodic memory — the memories of the episodes of our lives.

Our memories, it turns out, are pretty complicated constructs. So, before we start this little walk down the memory lane in your brain, here’s a brief tutorial:

Memory 101

Long-term memory is divided into two subcategories: implicit and explicit.

Implicit memories do not require conscious thought; they allow us to do things automatically or by rote — like ride a bike, type, button a shirt, etc. This is also called procedural memory.

Episodic memories, the kind Elizabeth Loftus studies, are explicit memories. These memories require conscious thought — like remembering what you had for dinner last night or what you wore two days ago. The memories are autobiographical — any memory in which we personally play a part in the episode or scene of life is an episodic memory.

Another type of explicit memory is called semantic memory, which is the general knowledge we learn throughout our lives (math facts, the capitals of the states, etc.). These memories, like implicit memories, can be recalled without reference to any particular episode.  Even so, semantic and episodic memories are inter-dependent.  Our semantic memories provide the framework for our episodic memories.  For example, our semantic memory contains the information that the Twin Towers fell on Sept. 11, 2001, and our episodic memory contains the specific memory of where we were, what we were doing, who we were with, and how we felt when it happened.

Your Autobiographical Memory — Can You Trust It?

Most people implicitly trust their episodic memories. It’s hard not to; after all, we were there. However, Loftus’ work has revealed that human episodic memory is not nearly as robust as we’d like to believe, even in the healthiest of brains. In fact, episodic memory is both fragile and malleable.

One of the most startling findings is the ability to “implant” memories into people — so that they are convinced they have experienced events that did not, in fact, happen.

False memory implantation is not the only way people’s memories can be changed. It turns out we do it all the time, all on our own. Loftus compares the process to editing a Wikipedia page: we can make edits to our own memories, and others can make their own contributions.

This has tremendous repercussions:

From the frustrated parent who is convinced that their child is lying, to the perhaps wrongly convicted criminal whose incarceration is based on “eyewitness” testimony.

In fact, the primary cause of wrongful convictions, according to Loftus, is faulty human memory leading to eyewitness error.

Some have even speculated that NBC’s Brian Williams may have been a victim of his own “mis-remembering”  —  experiencing a sort of memory hijacking in which people develop personal memories about events they have heard about. With repetition, over time, the borrowed memories can become part of the autobiographical memory.

But Why?

Over the course of her career, Loftus has often wondered why humans are built with a memory that is so malleable. She theorizes that our memory flexibility — in spite of its fallibility — is not a bad thing. In fact, it may help us to navigate the world more effectively.

One benefit of memory distortions is that they may help us to feel better about ourselves. We may remember that we did better in school — got better grades — than we really did. If there is any merit to the self-fulfilling prophecy — and research suggests there is — then this could work to our benefit.

Sometimes, our personal memories are incomplete; they may be missing certain facts and particulars. Our flexible memory allows us to fill in the gaps — updating a specific memory with details, ideas, and suggestions from the actual event, as well as with information acquired later — creating a smooth, seamless memory… a memory which becomes a reference point to inform subsequent life experiences.

This is perhaps the most interesting facet of our memory malleability — how it enables us to plan for the future. Endel Tulving, one of the first cognitive neuroscientists to distinguish semantic from episodic memories, likens human episodic memory to a time machine that allows us to consciously re-experience the past, which we can then use to travel into the future, informing current thoughts and future actions.

The same neuronal circuitry that enables us to remember and almost viscerally re-experience past historical events also allows us to leap into the future, planning for an as-yet-unexperienced event — anticipating likely encounters based on previous experiences.

We are able to forecast — to imagine an alternative reality that is different from the way things currently are. It seems that this episodic, neuronal flexibility is not (and perhaps cannot be?) unidirectional, allowing only for forward thinking. Those pliable pathways work in both directions, and that creates opportunities for our memories to be co-opted.

Party Parlour Tricks

So how does it work? How does a false memory become implanted? Is this something we could do to our unsuspecting friends at a party?

According to Loftus, the trick is to ground the false memory in a bit of reality and then lead the individual through some suggestive questioning about another entirely false event, simultaneously implanting that event as a memory.

A person is primed for memory manipulation by being asked to recall events from the past… true events. Loftus and other researchers have conducted several studies that test memory implantation, following a similar protocol whereby three real-life past events are discussed with study participants — the facts of the events are obtained from family members. Then, another completely fictional account is introduced as an historical experience.

In one study, through suggestive questioning and other techniques like guided imagery and false feedback, Loftus reported she and her colleagues have been able to implant “rich false memories” into about a quarter of ordinary, healthy study participants. Participants were implanted with the memory of being lost at a shopping mall when they were children — an event that never actually happened.

“We can visualize things, we can draw inferences of what could [have] happened or might have happened and sometimes those visualizations can get converted into something that feels like a genuine memory.”

Other studies have had an even higher false memory implantation rate — with as many as 50 percent of participants “owning” untrue events.

But it Feels so Real…

But surely, false memories would not evoke the same sort of emotional response as real ones — providing us with a tip-off as to what is real and what isn’t… right?

Wrong.

According to Loftus, emotion associated with a memory is no guarantee that the memory is real.

Recently, some of Loftus’ graduate students examined this theory by planting false memories in subjects and comparing their emotionality to that of people who truly did experience the event. On most emotional dimensions, true and false memories were indistinguishable.

Okay, so what about people with exceptional memories? Can their robust synapses withstand the suggestive implantation? Loftus says that, yes, people who typically have lapses in memory and attention are more susceptible to manipulation.

But bizarrely, people known for having exceptionally good autobiographical memories — for being able to remember the most minute details of their everyday lives — were found to be just as susceptible to memory contamination as those with more run-of-the-mill memory recall, even when accounting for age.

So, How Can We Know For Sure?

In a world where information can go viral with a click of a button, how do we protect ourselves from “lies” — if we can’t even rely on our memories?

Loftus suggests that the first step is education. This is especially important in the judicial system, where life changing decisions are often based on eyewitness testimony.

People in general — and judges and jurors in particular — need to be aware of the fallibility of memories. Expert testimony, like that which Loftus has provided in the past, should be made available, presenting scientific information regarding memory malleability.

Currently, a debate is raging as to whether to allow police officers to review video records of criminal incidents prior to writing incident reports or providing statements and/or court testimony … in an effort to prevent mis-remembering.

Proponents suggest that just as eyewitnesses are allowed to refer to documents, notes, etc. to corroborate their statements, police officers would also benefit from viewing a video recording of the event.

Naysayers are concerned that misbehaving officers might use the video to help them create a report that benefits them or supports their case — whether by providing background information that might allow them to dupe a review board and cover up questionable actions or misdeeds, or by flat-out fabricating details of an event that the video appears to substantiate.

The video recordings themselves have several limitations: they generally provide footage of only a portion of the incident, and thus may lack context; they are two-dimensional, and so may not accurately represent distance or depth of field; and light levels as shown in the recording may differ from those experienced in the actual event.

However, just as an eyewitness is vulnerable to the malleability of memory, so is the reporting officer. Some have suggested that it would be difficult for an officer involved in a “fluid, complex, dynamic, and life-threatening encounter to remember peripheral details beyond the one [on] which he or she was focused.” Anyone — police officers included — could potentially miss a large portion of the action in a stressful event, and be completely unaware of what they did not pay attention to.

Beneficial Brainwashing with Manufactured Memories

Accuracy aside, can mis-remembering actually be used “for good”? Is it a kind of superpower that can be developed to help people overcome obstacles? Loftus suggests it may have that potential. Her recent work investigates using implanted memories to influence and modify future behavior.

She and her colleagues have been successful in implanting subjects with the memory of becoming ill after eating particular foods as children, so that later they are less likely to eat those foods. Similarly, people convinced that they once became sick drinking vodka later decline to drink as much of it. Loftus suggests it would be possible to implant “warm fuzzy” feelings toward eating healthy food.

The ethical questions associated with memory manipulation are profound. The idea that our identities — themselves so intimately entangled with our memories — are at risk of contamination is quite unsettling. But it’s a broad social conversation that should begin now, while memory-manipulation technologies remain in their infancy.

In the meantime, what should we take away from all this?

Loftus herself has become a “memory skeptic,” which she says has helped her to become more tolerant of others’ memory mistakes…

So, maybe we should cut Brian Williams some slack, after all.

Special thanks to Elizabeth Loftus, for generously sharing her “semantic” knowledge of our “episodic” memories with Smart Drug Smarts.  Wait…that Skype interview really did happen, didn’t it?  We didn’t just imagine it?

What are Nootropics?

There’s a lot of confusion surrounding nootropics and cognitive enhancers.  The two terms are often used interchangeably, but if you want to get really technical about it — and let’s face it, we do — then nootropic refers only to a narrow category of cognitive enhancing supplements.

A substance must meet five tough criteria to be considered an honest-to-goodness nootropic.

Read on for the official definition of a true nootropic, the difference between nootropics and “cognitive enhancers,” and our homegrown Smart Drug Smarts definition.  (Hat-tip to Abelard Lindsay for stating this nicely in Episode #85.)

[Infographic: What Are Nootropics?]

Psst!  Want to share the lowdown on nootropics?  Feel free to embed the infographic on your own site — just don’t forget to link back to us!

It took me a while to realize that I was the crazy guy.

There’s a saying among poker players — I assume for good reason — that goes like this: “If you can’t spot the dumbest guy at the table… It’s you.”

I’m starting to think that this may be a special case of a broader rule that goes well beyond poker.

I was flattered to wake up yesterday to a request to join a radio show panel on the nationally-syndicated “To The Point,” produced by KCRW Radio out of Los Angeles. They were doing an episode about smart drugs — specifically, “moda” — and wanted to know if I would join their expert panel, which would include three others.

The producer implied (impressively, without ever quite saying it) that I wasn’t supposed to ask who the other panelists were. The set-up would be a little like Roman gladiators at the Coliseum, not knowing in advance what would come out from behind the arena doors. This makes for a livelier show for the audience.

Needless to say, I jumped at the opportunity.

KCRW is the radio big-leagues; I hadn’t just heard of them, I’ve listened to them. They’re probably the only radio station in Los Angeles I can find on a dial. Plus, this subject was right up my alley; I’ve used Modafinil on-and-off for years.

So this morning I dialed in to KCRW and was put into their digital bullpen, where they keep call-in guests on hold until the producer signals “it’s time,” and then suddenly the host is addressing you with questions.

(If you’ve ever called in to a radio show and been queued to ask the deejay to play a song for your sweetheart or to win concert tickets – it’s exactly like that.)

The first panelist introduced was a health reporter for VICE News, Sydney Lupkin. KCRW broadcasts to a general audience, many of whom would never have heard of smart drugs — and Sydney, along with host Barbara Bogaev, did a great job of opening the topic and implying a simmering hotbed of controversy around the use of “moda.” (The half-clandestine use of this abbreviated term was presented almost as a counterculture nod, like calling marijuana “weed” or Barack Obama “Barry.”)

And before I knew it, I was up next, answering a question about “how Modafinil feels when you’re on it.” I said my piece and then passed the mic, unsure if I’d said too much or not enough — it’s tough in these audio-only situations with multiple parties and no eye contact. You never know if you’re blabbing too long or if the host is praying for you to fill space.

But in this case, they needed to move on to get from me to the real Smart Drugs Wild Man. Certainly, with the undertones of “Modafinil running amok on our campuses,” one of the remaining two guests was sure to be a strung-out 19-year-old with 500 milligrams of Modafinil in his veins, who hadn’t slept since Tuesday.

However, the next guest proved to be Professor James Giordano, from the Georgetown University Medical Center. His speech and manner and credentials were all impeccable, and I wiped sweat off my brow when he backed up some points I’d made in my earlier monologue:

  1. Smart drugs are out there.
  2. Some, like the racetams, have strong safety and efficacy records and a multi-decade pedigree.
  3. Probably the major concern for would-be users is identifying good providers in a “gray market” retail landscape.

We went to a commercial break, and for about a minute the audio went dead; I had time to google my two unveiled co-panelists, and to wonder about the third. The show had such an expectant feeling to it, an undercurrent that something shocking is happening here – prepare to be shocked! I was expecting Johnny-the-University-Kid-Who-Never-Sleeps. Or maybe Otto-the-Online-Modafinil-Retailer, coming on with a digitally-garbled voice, hinting at the value of his product while slinging accusations at “The Man” for keeping his business underground.

But soon the commercial break ended. We were back.

The next voice was a familiar one: Dr. Jeremy Martinez, from the Matrix Institute on Addictions — whom I’d interviewed previously on Episode 80 of my podcast. Dr. Martinez is a leading expert on addictions and addictive behavior, practicing in Los Angeles — which is also the big leagues, if you’re a doctor specializing in addiction. Like Professor Giordano before him, Dr. Martinez was well-spoken, straight-laced, and (befitting an addiction specialist) probably a bit conservative in his approach to the modulation of human brain chemistry.


But wait a minute… Were we at four panelists already?

Had I gotten it wrong? Had the producer whom I’d spoken with said it would be me with four other panelists?

I was pretty sure the answer was no, but it hardly made sense to have a panel-discussion where everyone on the panel seemed to be in such agreement. “To The Point” isn’t Family Feud or some faux-news fight-bait show… But still, this is American mass media; there are rules that must be obeyed.

And then I felt a sinking feeling, as the verbal baton was passed back to me for another question…

It suddenly hit me.

Just like the poker player realizing there’s no one dumber at the table…

I was Johnny-the-University-Kid-Who-Never-Sleeps. I was Otto-the-Online-Modafinil-Retailer.

I was the Cognitive-Enhancement Wild Man, the one whom the conservative members of the KCRW audience were giving dirty looks through their radios, while I waved my pom-poms for these so-called smart drugs.

But I was the weirdest guy they could find?

I was the far edge of the lunatic-fringe, pro-cognitive-enhancement spectrum?

I was — dare I put it so bluntly? — the cautionary warning of what your college kid might turn into?

I consoled myself with the thought that maybe there’d been an accident, and that Johnny-the-Non-Sleeper was unavailable on account of pan-hemispheric cognitive over-stimulation. I readied myself for the task. If someone needed to hold the line for the pro-enhancement crowd, I would do my part.

Luckily, the next question posed to me was one that’s always seemed as trivial to answer as it is amazing that it gets asked in the first place…

The “Cheating Question”

Should we be “worried” about the use of smart drugs?

Is it like “cheating in sports, with steroids”?

If there is one question where I am willing to let my freak flag fly high, this is it. I came out of the gates swinging. I probably frothed at the mouth a bit. (Mouth-froth-concealment is one great upside of both radio and podcasting over television.) My answer — constrained for the radio — was necessarily bite-sized, but I’d like to riff on it at greater length here, because this is the question that won’t die.


It seems to me so absurdly misapplied, and yet it’s an entrenched part of the public discussion. “Are smart drugs like steroids?” With the implication: “Is using them ‘unfair’ to the other ‘competitors,’ irrespective of the risks to the user himself?”

But to pretend that this analogy holds is to pretend that we live in a society where muscles are more than a mating display, or where intelligence is only a nifty parlor trick — essentially no big deal.

This could not be further from the truth.

If a Barry Bonds type takes steroids and balloons his athletic ability, maybe he hits a few more home runs. Records are broken; next year’s baseball cards and tonight’s ESPN highlight reel will look slightly different. But real effects on people’s lives? Zilch. Nada. With all due respect to physical performance, we no longer live in a world of blacksmiths and rickshaw operators. Physical musculature is of great use to the individual, but none to society.

Now let’s look at the corresponding situation in intelligence. If the intellectual equivalent of Barry Bonds — maybe Stephen Hawking, Elon Musk, or Ray Kurzweil (pick your favorite genius) — is able to boost his or her cognitive performance by the equivalent of “a few home runs,” this translates into a greater chance of a Unified Theory of Physics, or of colonizing Mars sooner, or of getting closer to mind-uploading. This isn’t about baseball cards; these are outcomes that fundamentally alter the trajectory of our entire species and its possibilities in the universe.

To equate this with “cheating, like steroids” is not in the same ideological ballpark.

It’s not in the same league.

It’s not even the same sport.

No, we should categorically not question the ethics of people voluntarily using cognitive enhancement to “get ahead.”

Not any more than we should question the ethics of a woman who uses perfume to smell better, or a man who squints on the golf course so he can see a little better. We all use the best tools available to us, constantly — and for good reason.

Life is not a zero-sum game, and the first people to adopt an effective new tool may indeed gain an advantage that later adopters resent… But in the end, the leaders in a field push the whole field forward. Barry Bonds, like it or not, made baseball better. He pushed the envelope, and even if it was cheating, he established new horizons.

But as I said: The horizons of baseball, they don’t matter that much.

The horizons of human cognition, though… They matter as much as anything we know about, or could even conceive of. From our current vantage point as the sole thinking species on the only known inhabited planet in the universe, the horizons of human cognition are unmatched in importance.

So yeah, okay…

Maybe I am the Lunatic Fringe.

If you are reading this at an inopportune time, you need to keep reading.

It might be the middle of the night.

You might be procrastinating while at work.

But either way, the last thing you should be doing right now is reading a completely optional blog post that you clicked on.  (Despite the relative awesomeness of the blog.  But I digress…)

Maybe you’re reading this at an appropriate moment for you to be killing some time online.  Sunday afternoon on your iPad, for example.  Or maybe you’re bored on a subway commute.  If so, this article is not for you.  You have an appropriate relationship with Internet time management.

But this post is for people like me.

It’s for people who default to online.  Internet Addiction, they call it.  You’ve probably heard of this condition.  And even if you haven’t, the name kind of says it all.

I am not a textbook-case Internet Addict.  I don’t even have a Facebook account.  (This is partially because I know that having a Facebook account would turn me from a functional addict into the Internet’s version of a wakes-up-in-the-gutter-with-needles-sticking-out-of-his-arms addict.)

The things I do online are not necessarily representative of most Internet Addicts.  But despite that, I do share one defining characteristic with my addicted brothers and sisters…

I keep coming back to the online world.  By default.

Sometimes even when the physical world dangles very worthwhile carrots.

The Lost Continent of My To-Do List

As an Internet Person, I’ve got my obligatory to-do list.  In fact, a couple different to-do lists, in different formats.  (For me, it’s Asana, Trello, and Workflowy — dependent upon the project.)

And one thing I’ve noticed with increasing regularity for the past few months is that when I’m organizing my days, the to-do’s involving the physical world…  They tend to get shucked to the “optional” section at the end of the to-do list.

Meaning that they’ll bounce to tomorrow.  And then the next tomorrow.  And then the tomorrow after that.

(Does anyone ever finish their full daily to-do list?  If so, please don’t answer.  I hate you.)


So it turns out, I’m ignoring physical reality.

Exactly what kind of to-do’s are these things?  Nothing all that fancy.  Some of them would be easy kills.  Trips to the grocery store.  Scanning physical papers that would be so easy to digitize if I’d take the 15 minutes and just be done with it.  Going to my storage locker and pulling stuff out of boxes that I’ve wanted-but-not-really-needed for going on three months now.

The digital world is just so friggin’ convenient.  And getting more so.  Amazon Prime is the ultimate enabler.  TaskRabbit doesn’t help either.

The things I find myself actually doing in the physical world are — this is embarrassing — the bare minimum requirements of human physicality.

Eating.  Sleeping.  Bathing.  Exercising.  Sex.  Full stop.

If you think I’m exaggerating, let me stress:  I’m writing this blog post instead of doing the physical-world to-do’s on my list for today.

Hash-tag: #iSuck

Starting Next Week, I’ve Got A New Strategy

I’m calling it…

(Yes, it’s got a catchy name…)

Physical World Phriday

Fridays will be my day off-the-laptop.  All those never-quite-gotten-to to-do’s in the Physical World… Friday will be their day to rise front-and-center, and get the attention they deserve.

And hopefully, to get mercilessly done, like the virtual to-do’s on my list eventually are.

I anticipate that the laptop-less-ness of next Friday will be brutally difficult.  I’m so strapped to it, normally, that I rarely use my smart phone as an Internet device, which will make me a bit more digitally isolated than most people nowadays.

But that’s the idea, isn’t it?

#PhysicalWorldPhriday

I’ll be hash-tagging it on Twitter at 11:59 on Thursday.  And then…

I’ll be gone.

On November 28th, 2012, I published the very first episode of Smart Drug Smarts, interviewing Dr. Ward Dean — a doctor who had literally written the original book on Smart Drugs. I figured then — and looking back, I can’t really fault my logic — that as a computer programmer with no particular medical background, if I was going to do a podcast about smart drugs, I’d better have some unimpeachable guests come on as experts.

In the time since then, over two-and-a-half years, I’ve been lucky enough to conduct over 80 interviews with some of the world’s top experts on some of the world’s coolest stuff.

And sometime in the past year — I never really stopped to notice when it happened, but by now it’s definitely true — Smart Drug Smarts has become the longest-running single project I’ve ever worked on, period.

I’ve got to say, I’m very proud of that… and I have every intention to continue building from here.

One question I’ve gotten asked a lot is…

“So why did you start the podcast?”

I feel like people expect one overriding answer, but it was more a smorgasbord of semi-related upsides…

  1. I love media production and was looking for a creative outlet.
  2. I’ll take any excuse to talk with smart folks.
  3. I’ve had a lifelong interest in brains, physical health, and psychology.
  4. This felt like a way for me to participate in science fiction.

Smart Drug Smarts has ticked all of these boxes for me.

And of course: I was, and I am, a fan of cognitive enhancers.

More broadly, I’m a fan of cognition.

That’s either an obvious or a profound statement, depending on how charitable you’re feeling.  But I’ve personally found that the moments in my life I’ve enjoyed the most — contrary to what we’re taught to expect — weren’t often moments of public praise or physical pleasure…

They were instead moments of intellectual insight.

  • Wow, is that really true?
  • I think I figured it out!
  • Wait, this changes everything…

I wrote about this in my post The Physical Sensation of Epiphany — and these types of internal thrills are still the primary carrots I find myself chasing.

It’s funny, because I’d pay good money for a moment of new insight.  But what actually happens is — when I have a moment of insight, that’s often something people pay me for.  Talk about having your cake and eating it too.

For me, smart drugs are a booster rocket along that course.

They’re a multiplier on my odds-of-insight on a given day.

There are those who will tell you that such-and-such chemical will triple your IQ, allow you to see through walls, or rewire your hippocampus with a direct feed to Google while you sleep. I’m not that guy.  And I haven’t yet seen, or taken, such a drug.

What I have experienced are a variety of chemicals that allow me to fine-tune my state of mind… to consistently direct myself into ways of thinking, seeing, feeling, and behaving in line with what I’m trying to accomplish.  Sometimes that is enhanced focus.  Sometimes it’s expanded creativity.  Sometimes it’s a solid night’s sleep.


I have learned so much since starting the podcast.

Given that I’ve been rubbing shoulders and sharing conversations with an amazing group of bright, curious, and deep-thinking people, this should come as no surprise.

And here’s the fun part: I’m not just talking about the show’s guests.

I’m also talking about the listeners.

Podcasters don’t know exactly how many listeners they’ve got.  People come in from iTunes, from YouTube, from random web-searches…  Some press Play and might jet after they decide they don’t like the intro music; others go back to the first episode and listen to everything you’ve ever done to get caught up.  I never know from week to week how many people will be listening, and whether those people are first-timers or long-timers…

But what I do know is that of the people I’ve been lucky enough to meet — on email, on Twitter, and in a few dozen cases, in person — the level of amazing-ness among the people who have elected to become part of the Smart Drug Smarts community is truly phenomenal.

It’s a group I feel privileged to be part of…

  • Neuroscientists
  • Biochemists
  • Academic researchers
  • Man-machine interface do-it-yourselfers
  • Highly competitive business professionals
  • and a new generation of bright, vigorous university and grad students

All of us united by a deep curiosity to know where the cutting edge lies.

So What’s Next?

As our community has grown, people from the retail end of the cognitive enhancement world have taken notice, and we’ve had more than a few offers to promote products on the podcast, on the web, etc.

And as you know if you’ve been listening for a while, we’ve passed on those offers.  Some seemed overtly sketchy.  Some probably weren’t sketchy, but I didn’t have the time or resources to feel 100% sure about going to bat for them.

And of course, a major concern has always been maintaining the trust the podcast has earned as an honest broker of information about cognitive enhancement: what works, what doesn’t, what’s safe, what isn’t, and what we just don’t know yet.

By late 2014, I’d decided a few things:

  • I loved the podcast.  I loved doing it.  And I wanted to put even more time and focus into doing it.
  • Doing that was going to incur more hard costs, in addition to my own time, and I ought to find a way to make Smart Drug Smarts profitable.
  • I didn’t want to be like a TV channel with 300 commercials for 300 different products, some of which might be great, but many of which are crap.

I decided that I wanted Smart Drug Smarts to create products of its own — things that I wanted, I would use, I would trust, and I could fully endorse — both from the standpoint of sound science, and also of safe, rigorously-tested manufacturing processes.

I also knew there was a lot that I didn’t know.

I knew the effects I was hungry for, and I knew the chemicals I was interested in, but I didn’t know a whole lot about supplement manufacturing, pill-pressing, shipping and fulfillment, or the logistics and legwork involved in setting up a nutraceutical business.  It sounded like a lot of work then — and I can confirm now that it is.

So I did the same thing I’d done back when I created the podcast and needed my first interview guests…  I began chasing down experts.

On Episode #21, I interviewed Roy Krebs and Abelard Lindsay.  Abelard did most of the talking, and this was appropriate; he was the citizen-scientist of the two, the biohacker and self-experimentalist who had devised and refined the two-compound cognitive enhancer now known as CILTEP.

But it was Roy — the quiet one, who didn’t really talk much during the episode — who, I realized late last year, was another kind of expert I’d soon be needing.  Because what Roy had done, in the time following Episode #21, was turn CILTEP from a mix-it-in-your-kitchen recipe for do-it-yourselfers into the flagship product of a successful company.  One with manufacturing, purity-testing, bottling, shipping, and customer service running like clockwork.

I knew Roy and his partner Ben Hebert.  I knew that they knew their stuff when it came to running a supplement company.  They know how to get things done, and how to keep customers happy and supported.

And also — I knew they were a little bit hamstrung.

Their company’s name is Natural Stacks, and they take the “Natural” seriously.  Products under their brand don’t contain any man-made ingredients.

And as you might have guessed, this restriction cuts out a lot of “the good stuff.”

Axon Labs is born.

Early this year, Roy and Ben and I began talking about forming a new company based around cognitive enhancement.  A “house brand” for Smart Drug Smarts — one where man-made compounds are A-okay, but where we would hold ourselves to the standards that matter: science-backed efficacy in our products, safety and purity-testing, and a great customer experience.

And once again, we reached out to Abelard Lindsay — whose enthusiasm for diving into the medical literature and looking for compounds with unrecognized complementary benefits was undiminished.  We told him the handcuffs were now off: man-made chemicals were on the table.

By the time you hear or read this, Axon Labs will be unveiling its first products.

It’s taken almost half a year to get the first batch ready, but all of us involved would agree it’s really been much longer than that.  Cooked into the mix are two-and-a-half years of my study of cognitive enhancement through Smart Drug Smarts, almost as many years on the business end of nutraceuticals from Roy and Ben, and nearly a decade of study and self-experimentation by Abelard.

We’re immensely proud of what we’ve put together.  It wasn’t easy.  Biochemistry, bureaucracy, multiple time zones, and very busy people.  But nothing worth doing is easy, right?

I’ll be talking all about it in an episode soon.

And yet, it’s important for me to emphasize: I don’t want the fact that Smart Drug Smarts will have a product line to impact what got people listening in the first place.  My initial goal and the show’s de facto slogan remains unchanged: To help you improve your brain, by any and all means at your disposal.

Axon Labs is just going to be one new set of means.  🙂

Jesse

PS:  Now, with all that as preamble, it is my pleasure to present…

Axon Labs


I’m almost sure that my last haircut improved my health.

Not in the ways one might expect.  I wasn’t harboring lice or vermin.  It wasn’t a profoundly dangerous hairstyle, likely to get caught in industrial equipment and drag me down with it.

But it made me look like the me I was used to.

And whacking it down to the scalp — which I did, in a slight fit of “oh, hell with it” — was more of a change than I at first expected.

Face-Blindness for the Rest of Us

There’s a condition called prosopagnosia, which some scientists estimate affects almost one in forty people.  (I find this hard to believe, but it’s a “spectrum disorder,” much worse for some people than others.)  You know the people who say “I’m not so good with names, but I never forget a face?”

Well, people with prosopagnosia do not say that.  They do forget faces.  In fact, they never really recognize them in the first place.

For most of us, faces are a very special part of our visual reality, pulled from our vast data-stream of visual inputs and given preferential treatment by an area of the brain known as the fusiform gyrus.  You know how your smartphone’s facial recognition software puts a little box around people’s faces and adjusts focus and lighting to protect and emphasize them, relative to other parts of the image?

Well, your brain — in particular, your fusiform gyrus — is constantly doing the same thing.
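(For the programmers in the audience: here’s a minimal sketch of that smartphone-style face detection, using OpenCV’s stock Haar-cascade detector.  The image filenames are just placeholders.)

    import cv2

    # Load OpenCV's bundled frontal-face detector (a Haar cascade -- the same
    # basic idea behind the face boxes in older camera apps).
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_detector = cv2.CascadeClassifier(cascade_path)

    # "photo.jpg" is a placeholder; any image containing faces will do.
    image = cv2.imread("photo.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Scan the image at multiple scales; returns one bounding box per face.
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        # Draw a box around each detected face -- a camera app would use these
        # regions to prioritize focus and exposure, much as the fusiform gyrus
        # prioritizes faces in your visual field.
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("photo_with_faces.jpg", image)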

Unless, that is, you have prosopagnosia — which can be congenital (the fusiform gyrus never adequately learns to do its job) or acquired (brain damage bangs it up, and afterwards facial recognition takes a dive).  Prosopagnosics, as they’re called, have brains that function much more like an old school camera with no on-board computer, treating all parts of the visual field the same, not playing favorites with faces at all.

This is generally a bad thing.  Egalitarian ideals like “all visual elements are created equal” don’t really work so well in practice.  Not with vision.

Prosopagnosics, depending on the severity of their condition, range from having a bad memory for faces, to literally being unable to recognize themselves in the mirror.  They compensate by identifying friends and loved ones by secondary cues, like their manner of dress, their voice, or how they move.

Now, it should be mentioned — I don’t have prosopagnosia.

We’re All Icons

If you’re not a prosopagnosic, when you first meet someone, you’re aggressively cataloging details about their face, taking notes for later (unconsciously, at least), and drawing inferences about what you might expect about them, based on their facial idiosyncrasies.

Like all stereotypes, these guesses might not be borne out by further real-world data, but think about what comes to mind if the face of someone you meet is characterized by…

  • Ruddy-colored cheeks with visible capillaries
  • A deep, caramel-colored tan
  • Strong vertical creasing in the forehead, above the nose
  • Orange lipstick

In each case, you’ll probably take these as personality-clues as to what you might expect from a person.  (This is especially true in cases where the clues seemingly disagree with each other and imply a conscious choice — like a friend I have who is in his late 40s, but dyes his hair almost a canary-yellow “blonde.”)

But as we get to know individuals better, personal experience trumps facially-derived guesswork, and (again, for non-prosopagnosics) the faces of people we know come to represent our body of knowledge about that individual rather than the type of person we’d expect, based on their looks.

In other words, we recognize people’s faces as icons for the people we know, rather than as advertisements for the kind of person we might expect.

The Mirror Works Both Ways

The statement above is true even when the face in the mirror is us.

I was so used to seeing myself looking, well… the way I normally look, that a massive hairstyle change* was enough to momentarily shatter the visual iconography I had for myself.

* Full hair-eradication, more accurately.  Think Kobe Bryant or Bruce Willis.

This isn’t to say that I had any “Who am I?” identity crisis following my haircut.  Very much the opposite.  It was a “Who is he?” moment.

Later in the afternoon on the day of my haircut (after the initial shock had worn off), I was doing a workout.  I had a mirror nearby and caught a glimpse of myself — shirtless and now completely bald — and for a moment I didn’t recognize myself.  I knew it was a mirror, but it looked like not-me.

Honestly, it was reminiscent of all the prison movies where the hero gets captured and has his head shaved and then is hosed down to de-louse him.  When those scenes happen in the movies, we’re always struck with the thought “wow, they’ve stripped him down to his animal self.”

And sure enough, with my visual icon-of-self disrupted, that’s what I saw in the mirror: the animal chassis of me, not my well-worn identity.

And that is why I think the haircut improved my health.  Or will, anyway…

It’s Good To Think Of Yourself As Meat, Sometimes.

Western society has a long and confused history with the Mind-Body Problem.

I’m not going to dive into the details here (but if you’re interested, there are about 10,000 books on the subject), except to say that as a rule, people tend to fall into two opposing camps:

  • Those who exult in the mind (often abstracted into the “ego” or “identity” or “immortal soul”) and view the body as unfortunate-but-necessary baggage.
  • Those who reject the artificial, illusory mind/body distinction and encourage us to think of the two holistically, for the improvement of each — er, it.  (See?  Everyday language gets tricky when you commit yourself to this stance.)

Normally I find myself siding with the second camp.  The “it’s all a closed loop; physiology affects the mind; and the mind’s choices feed back into our physiology, and so on” position.

This makes good, solid sense to me.

And yet…

I can see where the fusiform gyrus — so marvelous in its function — creates a built-in logical fallacy for us.

We see ourselves (using our objective visual system) and because of our tendency to iconize the people we know, what comes to mind is our self (either our identity/soul, or our “holistic self” — either of which amounts to the same thing, in practice).

We look in the mirror and see the psychosocial aspects…

  • Do I look sexy for so-and-so?
  • Will this suit make me look impressive for such-and-such occasion?
  • Do I look older than the me from last year?

…and 99 times out of 100, the identity-considerations leap front-and-center and distract us from thinking about the hundred-odd pounds of primate staring back at us.

If we thought about that primate, we might ask…

  • How is this specimen?
  • If I were an alien, going to the galactic pet store to buy a human pet for my alien kid, would I pick this one?
  • Is he going to be fun to play with?  Strong for work?  Lively?  Tasty?

Catching that unrecognized me in the mirror, I had a flashing moment where I didn’t see my identity, I saw the body I inhabit — and that brief instant was a powerful reminder.

Pour Your Foundation.

Whichever end of the Mind-Body Problem you find yourself siding with, it’s the body that’s the physical substrate of our existence.

To put that less nerdily:

“If you don’t take care of your body, where will you live?”*

* Somebody said this before me, but the speaker’s name is lost to history.

I’m like everyone else; 99.9% of the time I’m caught up in ego-related concerns — the things I want to do, be, see, experience.  And maintenance of the meat-package I come in — things like brushing my teeth — mostly seems like an annoying imposition on my goals.

How many more inventions might have come from Edison if he hadn’t had to brush his teeth twice a day?

Could posterity have a few more Shakespeare plays if the Bard hadn’t had to use the loo?

And yet, it’s probably the opposite that’s true.  Maintenance work on our physical selves is a short-term loss, long-term gain.  (Absurd but true: If Shakespeare had never gone to the restroom, he’d have been in too much pain to do any writing.)

What resulted for me from my moment of non-self-recognition is this:  The thinking me is going to give a little more time, effort, and attention to the care and feeding of his animal chassis.

Sure, the animal-you is easy to forget about.  You can ignore him for a long, long time with little consequence; he’s slow to complain.  But eventually it will be he who is the primary determinant of how far you can go.

And that is a fact worth recognizing.

The correlation between being intelligent and being correct is, unfortunately, not as strong as we’d like it to be.

If smart people were as right as they are smart, knowing what to do all the time would be a lot simpler than it actually is.  But, alas.

A case in point is an article entitled “The New Normal,” published recently in Georgia State University Magazine, highlighting the thinking of uncontested smart person (and Smart Drug Smarts podcast alumna) Nicole Vincent, associate professor of philosophy and associate neuroscience faculty member at GSU.

Unfortunately, the key idea of this article is just plain wrong.

The article envisions a future where society has to deal with the nasty, unintended consequences of ever-more-effective cognition-enhancing drugs.  In this hypothetical dystopia, health/safety and efficacy concerns have all been addressed; the problems presented are purely social ones.

The title – “The New Normal” – refers to the social expectation that everyone will be using these drugs, for fear of underperforming and not keeping up with the cognitively-enhanced Joneses.

Citing high-responsibility professions like surgeons and airline pilots, Vincent warns of creeping public pressure for individuals to use the best-available cognitive enhancers to maximize their performance.  “You’re performing a job that many people’s lives depend on,” she says.  “If you mess up and people die when you could have just taken this [performance-enhancing] pill, people will see that as negligence.”

Why yes, I daresay they would.

Let me step back for a moment and say that I agree with most of the premises that the article’s “doomsday scenario” of changing cultural norms is based on.

  • I agree that cognitive enhancement technologies (including, but not limited to, “smart drugs”) will continue to improve.
  • I agree that early-adopters and more competitive members of society will use these things, and change our collective expectations — first of what is “acceptable,” next of what is “normal,” and finally what is “required” (either legally, or by overwhelming social pressure).
  • I agree that we’ll release these technologies into our society without having a clear understanding of their eventual consequences.*

* Humans have a bad track record when it comes to keeping genies in bottles.  If there are any technological genies that haven’t been un-bottled, I can’t think of them.  (Of course, this could be because their inventors kept them so darned secret we just don’t know such genies have been invented — and if so, kudos to those inventors.)  But as a rule — from atomic weapons to boy bands — if we invent things, we tend to use them and only afterwards consider what we’ve wrought on ourselves.

So if I agree with almost every premise presented by Vincent, what is she wrong about, exactly?

Her thesis fails the So-What Test.

Cognitive Enhancement will become the new normal.  So what.

As these technologies move from the Early Adopters to the Early Majority and eventually to everyone else, even the kicking, screaming Laggards will be pressured along (see the Diffusion of Innovations for this fun, cocktail-party terminology).

But… so what?

Let me provide some examples of other ideas that have failed the So-What Test:

  • “If access to basic education continues to expand… people will have to be literate to effectively participate in society.”
  • “If air travel becomes commonplace… businesses may expect workers to travel for hours at a time, at extreme heights, with absolutely nothing underneath them.”
  • “If medicine further reduces infant mortality… manufacturers of child coffins will be put out of business — or else suffer the ignominy of re-marketing their products for small household pets.”

So freaking what, in all cases.

I could come up with more examples — a lot more.  All these if-thens are 100% correct.  And all are absurd in a way that is self-evident to pretty much everyone except… philosophers.

I don’t want to put words in anyone’s mouth (or over-speculate about someone else’s writing), but Vincent’s stance seems to be “we haven’t figured out all the ramifications of these technologies yet, so we should maintain the status quo until we do.”

But we can’t.  

And I don’t just mean we shouldn’t, I mean we can’t.

With apologies to Nostradamus and Miss Cleo, most of our track records for predicting the future are just plain rotten.  And that includes really smart people — even professional think-tanks full of really smart people.

Accurately predicting the future requires access to enormous data sets, solid estimates of rates-of-change, an inherently counterintuitive understanding of exponential growth, and effective models of how various simultaneously-moving metrics interact with each other.

In fact, I’m just speculating that this recipe — if it could be pulled off — could accurately predict the future.  We don’t know.  But I find it hard to imagine that any of these tent-pole prerequisites wouldn’t be necessary.
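(To see why exponential growth trips people up, here’s a toy back-of-the-envelope calculation — the numbers are made up purely for illustration.)

    # Toy illustration with made-up numbers: linear intuition vs. compounding.
    # Suppose some capability improves 10% per year, compounding annually.
    capability = 1.0
    for year in range(1, 51):
        capability *= 1.10
        if year in (10, 25, 50):
            print(f"Year {year}: {capability:.1f}x the starting level")

    # Linear intuition says fifty years of "10% per year" adds up to ~6x.
    # Compounding actually yields ~117x -- off by a factor of nearly twenty.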

It was Abraham Lincoln who supposedly said: “The best way to predict your future is to create it.”  I’ve been reading Team of Rivals: The Political Genius of Abraham Lincoln, and one thing is easy to forget now, 150 years later, though it was an enormous hurdle for Lincoln and his fellow abolitionists:

There were many of Lincoln’s contemporaries — even those who morally opposed slavery — who thought that the Law of Unintended Consequences, when applied to a societal change as massive as the 13th Amendment (which made slaves’ wartime emancipation permanent), was just too risky.  What righteous babies might be thrown out with the slavery-colored bathwater?  Heck, what about the disaster inflicted on the federal government’s Strategic Mule Supply, if each of the freed slaves really got “40 acres and a mule”?

(Please refer back to the So-What Test, mentioned above.)

Rhetorical Bag of Dirty Tricks #47 and #48:  If you want to sound good, align your ideas with those of Abraham Lincoln.  To demonize your opposition, reference their ideas alongside Hitler’s.  I do both, although I’m leaving Hitler out of this post.

“The only constant is change.”

Trying to game out the future before it arrives, as we’ve discussed, is a fool’s errand.

And attempting to stop the future from arriving — to stop time in its tracks — is as close as history gives us to a recipe for a lost cause.  There are plenty of examples of losing battles fought in the name of such causes; the cultural annihilation of the Native Americans and of the samurai of feudal Japan both come to mind.

Looking at these long-ago-settled battles from the winners’ side of history — knowing who triumphed and why — we now see the romance under the dust.  The American Indians, the samurai — both were fighting technologically superior forces in doomed, all-or-nothing conflicts.  The winners’ superior firepower and superior numbers both feel a lot like cheating as we look back on those conflicts now.

The “noble savages” didn’t stand a chance, but boy-oh-boy, did they have heart.

The position taken in the GSU article — against the creeping use of cognitive enhancement technologies — would try to paint baseline Homo Sapiens (circa 2015) as a noble savage race.

It’s an argument that packs emotional appeal.

You, me, and everyone we know fall into the “us” that is under this impending, theoretical threat.  Even those of us who are using cognitive enhancers (those currently available) are still part of the “home team,” compared to those upgraded rascals from 2020, or 2030, or 2045, and whatever brain-enhancers they’ll be using to one-up, two-up, and eventually disenfranchise the biological “normals.”

What Part of “Progress” Don’t You Like?

I’m a sucker for historical romance.  I don’t mean boy-meets-girl kissy-kissy stuff where the girl wears a corset; I mean the broad, sweeping emotionality of individual humans struggling amidst great forces.

And the Tide of History is among the greatest of forces — less tangible than a natural disaster, but no less powerful.

I watch a movie like The Last Samurai and see the doomed samurai charge, and I get misty-eyed like everyone else.  But I recognize that those noble samurai are, however unwittingly, the bad guys.

Unbeknownst to them, they were fighting against a world that cured polio.

They were fighting against a world that explores space.

They were fighting against a world where run-of-the-mill consumer technology allows me to research samurai while listening to Icelandic music (created on synthetic instruments, and presented in Surround-Sound) as I sip African coffee and wait for a transcontinental flight that will be faster, cheaper, and safer than travel between neighboring villages used to be.

Of course, the samurai didn’t know they were fighting against this stuff.

They just weren’t sure about this whole modernization thing, and what sort of “new normals” might emerge.

Bob Dylan was right: The times, they are a-changin’.

You won’t be forced to keep up.

Cultural tides may pull you along, but you’ll be free to swim against the current if you really want to.  There are examples of that, too.  The Amish are one.

The Amish are still here, in 2015.  So far as I know, they’re not under any particular threat.  They’re doing okay.  They decided to pull the cultural emergency-brake in 1830, or whatever, and well…

They continue to exist.  Why?  Because we live in a peaceful-enough, prosperous-enough culture that no one has decided it’s necessary to overrun, assimilate, or eradicate them and harvest their resources.  

It should be pointed out that societies like ours — this peaceful, this prosperous — are something of a historical anomaly.  But the good news is:  We live in an era of unprecedented positive historical anomalies.

If you want to opt out of further technological progress and rely on the goodwill of your fellow man (or, eventually, the Homo Sapiens-successors you’ll be opting out of becoming), there’s never been a safer time to do so.  We can’t predict the future, but the trend-lines do seem promising.

But for me, personally…

I don’t want to rely on the goodness of my fellow man.

That sort of reliance is something you do in a pinch, not as a general strategy.

Do you think the Amish would have made it through the Cold War without the more technologically-minded Americans picking up their cultural slack?  No sir, not at all.  Heck, they’d have been steamrolled in the Spanish-American War, generations earlier.

I didn’t start off this post intending to disparage the Amish, but dammit, now I will.  The fact is, they’re not going to read this anyway.

There is a word for people who have every opportunity to be effective, but choose not to be, and instead rely on others to be effective on their behalf.

That word is Freeloaders.

The Amish, I put it to you, are freeloaders.

GSU’s “New Normal” article posits a future where effective, cheap, safe, non-prescription “smart drugs” have become commonplace.

In that future, when it arrives, people who have the opportunity to use these drugs to improve themselves, and choose not to, will also be freeloaders.

I won’t be one of them.
