Author: Jesse Lawler

Jesse Lawler is a technologist, health nut, entrepreneur, and "one whose power switch defaults to On."  He created Smart Drug Smarts to learn how to make his brain do even more, and is greatly pleased to now see his little baby Frankenstein toddling around and helping others.  Jesse tweets about personal optimization, tech, and other stuff he finds interesting at @Lawlerpalooza.

Had I just woken up?

Or had I been awake?

I was sure I was awake — although I couldn’t actually see anything.

I also couldn’t feel anything.  That is, I literally couldn’t feel anything.  (And here I mean literally from back when literally meant “literally,” before literally started meaning “not literally, but figuratively.”)

I wasn’t scared.  Because I knew if I wanted to feel something, I could do so.  I just had to move.

But I hadn’t twitched a muscle in…well, I wasn’t sure how long.

What time was it, anyway?

Senselessness and Sensibility

Have you ever woken up in bed, and before you open your eyes, before you move, you don’t know where your limbs are?  You’re sure they’re still there, but whether your arm is down at your side or splayed over your head, you really have no idea.

Those motor neurons haven’t fired in a long time.  The part of your mind that monitors your body is like an air traffic controller who comes back from a coffee break to find no blips on the screen.

Then, with the faintest internal whisper of “wiggle my pinky finger,” the position of your finger, your elbow, arm, shoulder — it all flashes back to life.  You’ve refreshed the buffers and your input streams are back online.

But if you don’t send an outbound signal to your body, you can maintain a sort of proprioceptive silence.  It’s a state that’s easily broken.  If a fly lands on you, or a breeze blows across your skin, or a kid tickles you… your bodily sensations fire up, the same as if you sent a move impulse.

But if you wait, you can sometimes feel a strange sense of spacelessness.  It’s as if you’ve entered a large, empty room that may or may not have borders.

Unfortunately, you can’t explore this space the way you normally would — by moving around inside it.  Because the moment you move, the illusion shatters.  Physical sensations implode the imagined environment, putting you firmly back in the driver’s seat of your own body.

The only way to search the boundaries of this space is with thought.  Pure, inwardly-directed thought.  Rumination.  Decision.  Amazing capabilities of our brains — but apparently secondary ones, given the persuasive argument made by neuroscientist Daniel Wolpert that the brain’s raison d’être is to direct physical movements.

When we accidentally find ourselves in this “hey, my physical senses are offline” state, we’re operating on a tight time frame.  We’re like a secret agent in the movies who has snuck into the highly guarded enemy stronghold, racing against time to get what we’re after before our cover is blown and we’re swarmed by guards.

In this case, the “guards” are just the dumb-luck odds of physical reality interrupting our reverie.  A bug will land on you.  A breeze will blow.  The person in bed next to you will roll over.  The real world just doesn’t stay still for very long — and unless you’ve been mainlining novocaine, you’re going to notice.

This is one of the prime advantages of a sensory deprivation tank.

Alone With Yourself

Alone in a tank is just where I found myself (or rather, where I couldn’t find myself) when I realized I wasn’t sure if I’d been asleep or not.

Inside a sensory deprivation tank there are no bugs, no breezes, no bed-mates.

There’s also no light, no sound, and no temperature differential.

There’s not a lot of anything — except for water and dissolved Epsom salts (about 140 pounds of salt, from what I hear).  And also time.

Time is really the commodity you’re buying when you buy a “float.”  It’s time when you can’t be bothered.  The forces that can’t bother you range from your boss to your kids to CNN to gravity.

The forces that still can bother you include your own mind.

In fact, that’s about the only thing on the list.

Stripping away the distractions of the physical world and lying in a pool of skin-temperature salt water, you’re naked, alone, buoyant, safe, and temporarily devoid of both obligations and opportunities.

Of course, you can stand up and walk out — there aren’t locks on the doors, just a suction-seal to keep out sound — but if you choose to stay in the pool, you’re choosing to be with yourself in a way that’s almost impossible to match in terms of experiential purity.

There is nothing to distract you from you.

The ancient Greeks’ Temple of Apollo at Delphi said many things through its human oracles, but the one they chose to carve into the rock was: “Know Thyself.”

If getting to know yourself is normally like a game of “Where’s Waldo,” getting to know yourself in a floatation tank is more like a blank white page with Waldo just standing there with no crowd to fade into.

If you hate being in a float tank — and some people do hate it — the inescapable conclusion (for those who choose to confront it) is that whatever is bothering you is you.  There’s a lack of plausible suspects when you’re in a dark, silent room with imperceptible temperature and neutral buoyancy.

I Love Nothing (With a Capital “N”)

Some of us love float tanks.  We love them for the opportunity they provide to grope around in our inner perception, reaching for the walls like a blind person (literally, in this case, as well as figuratively).

Where can our minds go when all distraction is removed?

What bothers us when there is nothing there to bother us?

What are we capable of when the cognitive crutch of physical reality is removed?

Can I remember things when I can’t jot on a Post-It or save to my Google Calendar?

Can I stick to a decision when the only accountability is to my own mind?

The simplest questions — things like “Am I awake?” — become legitimately confusing.  Questions that we would never, ever ask in the normal hubbub of perceptual reality.

What time is it, anyway?  How can you measure the passage of time with no clock?  Has it been 30 minutes since the last time you wondered, or 90?  Maybe your session has ended already, but they forgot to tell you?  What if you’ve been here for 17 hours?  (These are the sorts of quackish speculations that bubble up when your brain has gone without input for a while.)

Ganzfeld Effects

“Ganzfeld Effects” is the name given to the hallucinatory sensations the brain produces when, deprived of sensory input, it strains to find signals in the “noise” of a silent stream.

Think about your ability to follow a spoken conversation in a loud room.  Lots of people might be talking, glasses clinking, dogs barking… But once you lock onto a speaker’s voice, you ignore the extra sounds and follow the conversation effortlessly.  The human brain has been described as a “pattern recognition machine” — but a big part of pattern recognition is pattern amplification.  The brain splits a promising subset of the total stimuli off from the perceptual firehose and amplifies it as if to say “How about this?  Is there something here worth focusing on?”

In familiar environments, we quickly latch onto the correct data-slice and proceed with our lives.

But in the blackness of a floatation tank — or the undifferentiated white-out of a blizzard, for example — the brain over-amplifies meaningless sensory information, straining to find a something in the nothing.  Sane, sober people wind up hallucinating without madness, without drugs.  It’s just the brain doing its best in unfamiliar conditions.

Whispers and Reminders

I find that my hallucinations in the tank are mostly auditory.  Sometimes I “see” blooms or ripples of color — especially in the peripheral areas of my visual field.  But nothing that ever coalesces into an image of anything in particular.  Never an armadillo, or a tractor-trailer, or a muppet.  This happens for some people, but apparently not for me.

Many times, however, I will “hear” voices.  Something that was just said — words in English, the right rhythms of speech — but too quiet to hear clearly.  Like hearing words through a wall with only the vowels coming through, the consonants muffled and lost.

Other times I can hear the words — or actually, recall them — because the voices stop talking as soon as I shift my attention to listen, like criminals caught discussing a plan.  But still, even in cases where I can “overhear,” the words make no sense.  They’re sham sentences, with syntax but no meaning.  Like a self-licking ice cream cone.

All this is very, very strange.  And it’s even stranger because the induction process of a floatation tank is so mundane.  Boring, even.

Maybe in today’s hyper-stimulated world, boring-ness is the greatest novelty we can find?

Ultimately though, I think the magic of a tank is its ability to disassemble our normal view of ourselves, allowing us to see in isolation the inner workings that — when combined with our normal physical surroundings — add up to what we think of as “us.”

It’s like the face of an old mechanical clock, which reliably tells the time and which we barely think of otherwise…

But open the face and inside is a mysterious cosmos of interlocking gears and springs and who-knows-whats.  Each one is fascinating, complex, delicate, and the obvious product of intense refinement and craftsmanship.  Considering that the finished clock is the combinatorial result of all these microcosms, suddenly the familiar becomes awesome again.

Now… what time do you think it is?

“How do smart people breed stupid people?”

It’s a question from one population geneticist to another.

The second geneticist can tell from the goofy grin on his colleague’s face that the answer is going to be a joke.  He thinks about it and finally shrugs.

“By repeatedly screwing their sisters!” the first one squeals.

(Population geneticists aren’t selected for their senses of humor.)

Today, kids, we’re going to be talking about incest.  Now I’ll grant you up-front that incest is pretty much universally reviled.  But here’s the thing: Incest is defined culturally.  (More on this below.)

The less morally-loaded term is “inbreeding,” defined as follows:

“…to breed by the continued mating of closely related individuals.”

Animal breeders have used inbreeding as a technique from time immemorial to achieve many useful results.  Everything from creating cows that give more milk, to dogs that snuggle with kids instead of eating them.

Despite its long pedigree, inbreeding brings with it well-known dangers, the root cause of which is the accumulation of genetic goofs.  Such goofs would normally be masked by a working spare on the matching chromosome received from the animal’s other parent.  But this masking assumes a respectable genetic distance between mom and dad.

In animal breeding, it’s often “worth it” (from our perspective) to risk a genetic misfire by breeding close relatives — because the worst-case scenario is just an underperforming cow.

Do I care so much if the milk on my breakfast cereal comes from a less-than-stellar bovine?

Nope, not really.

We humans get a lot more persnickety when it comes to the reproductive strategies of our own species.  And for good reason.  For one thing, the “worst-case” scenario I mentioned above isn’t really the biggest biological misfire possible.

The true disaster scenario from inbreeding is an irretrievably rotten stretch of genetic material, with no spare, and a fatal flaw thus literally written into the resulting embryo or fetus.  The likely result is a “spontaneous abortion” (miscarriage).  This doesn’t cause much emotional strain if you’re a rancher and it’s happening to a cow; it does if you’re human and it’s happening in your immediate family.

Note:  Severe genetic maladies also occur for dozens of reasons that have nothing at all to do with inbreeding — but inbreeding significantly boosts the likelihood.

Human cultures worldwide — along with most animals — share an automatic revulsion to close inbreeding.  And interesting studies have shown that our degree of revulsion tracks closely with our genetic “distance” from the relative under consideration.

(i.e. Sociologist asks student: “Would you rather screw your first-cousin or your half-sister?” “Um, can I choose ‘None of the above’?” “Nope.  You’ve gotta pick one.” “Your study sucks.”)

You share half your genetic material with your mom, your dad, and any full siblings.

With half-siblings and grandparents, this overlap drops to 25%.

Corollary:  If you’re ever playing “Would You Rather…?” and some sick bastard asks if you’d rather sleep with your mom or your grandma — from a genetic perspective, the “correct answer” is your grandma.

For first cousins, the percentage of full genetic overlap drops to “only” 12.5%.  That’s just one out of every eight genes.
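If you want to check that arithmetic yourself, it all falls out of one rule: every parent-child link in the family tree halves the expected fraction of shared genes, and you add up the contribution of each path running through a common ancestor.  Here’s a minimal Python sketch (the path counts are the standard textbook ones, nothing particular to this essay):

```python
# Expected genetic overlap (Wright's "coefficient of relationship"):
# each parent-child link halves the expected shared fraction; sum the
# contribution of every path that runs through a common ancestor.

def relatedness(path_lengths):
    """path_lengths: one entry per connecting path, counting the
    parent-child links along that path."""
    return sum(0.5 ** links for links in path_lengths)

print(relatedness([1]))     # parent/child: one direct link       -> 0.5
print(relatedness([2, 2]))  # full siblings: paths via mom & dad  -> 0.5
print(relatedness([2]))     # half-sibling or grandparent         -> 0.25
print(relatedness([4, 4]))  # first cousins: two shared
                            #   grandparents, 4 links per path    -> 0.125
```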

And that’s where things get messy.

So what was incest, again?

As described above, inbreeding is simply a strategy for managed reproduction, with no judgment implied.

Incest, by contrast, is inbreeding plus moral editorializing.

Incest is inbreeding when we think it’s gross.

Nowadays, in most of the world, our definitional umbrella for incest extends pretty far.  The majority of you reading this will not have given serious romantic consideration to your first cousins.  I contend that this is a good thing.

This broad definition of incest is far from a universal norm, though.  Until modern times, cousin-marriages — known to sociologists as “consanguineous” marriages — were widely practiced in much of the world.

In many places, they still are.

The reasons for consanguinity’s popularity in the olden days were pragmatic.  Until recently, most people eked out a rural existence with few potential mates living within a day’s walk.  Of those available nearby, many were often blood relatives.

The other reasons had to do with family loyalty.  Marriages inside a kin-group keep wealth and property consolidated.  And while divorce was less common in the past, the early death of a spouse was much more common.  Consanguineous marriages reduce the number of competing interests when settling estate claims.

So — this still happens?

It doesn’t just happen.  In some parts of the world, consanguinity is more popular than mini-skirts.

(Okay, that’s a bad joke — because consanguinity is most prevalent in the Arab world, which never really embraced mini-skirts.)

According to a 2009 report in the journal Reproductive Health, Pakistan holds the dubious honor of having the world’s highest rate of consanguineous marriage — a whopping 70% of marriages join blood-relative brides and grooms.  In Saudi Arabia, the number is 66.7%.  In Iraq, it’s 60%.

All in all, it is estimated that 1.1 billion people are either married to cousins, or the children of consanguineous unions.

Maybe it’s not so bad?

Albert Einstein married his first cousin.

So did Charles Darwin.

Consanguinity is not a practice limited to the Arab world, or the Amish, or people who shipwreck on desert islands while vacationing with their cousins.

In 2003, Discover Magazine published an article offering up a defense of close-but-not-too-close levels of inbreeding.  The authors pointed out that while the odds of serious genetic disorders do rise, they may not rise enough to justify the bans in, say, the 31 out of 50 US states that have outlawed first-cousin marriage.

One point the article emphasized was that inbreeding’s negative effects are inversely proportional to the genetic health of the original breeding population.

In other words, if your family has a healthy genetic makeup with comparatively few defects, you may be able to (choke down your disgust and) safely inbreed for a few generations without any bad results.  If your family is not so genetically well-endowed, you won’t have to wait multiple generations to see problems.

The point made by Discover was that although the chances of congenital defects increase, the increase is still to a comparatively small number.

“Tripling the risk” sounds bad.

“Becoming 2% more likely” sounds a lot more palatable.

But if the base rate of a certain problem is 1%, then “tripling the risk” and “becoming 2% more likely” are the same thing — both get you to 3%.  Savvy statisticians and science writers can spin facts like this to suit their own agendas.
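To see the spin laid bare, here’s the same arithmetic in a few lines of Python (the 1% base rate is just the illustrative figure from the paragraph above, not a measured statistic):

```python
# One risk increase, two framings.
base_rate = 0.01            # illustrative 1% background rate
new_rate = base_rate * 3    # "tripling the risk"

print(f"relative framing: tripled, {base_rate:.0%} -> {new_rate:.0%}")
print(f"absolute framing: {new_rate - base_rate:.0%} more likely")
# Both sentences describe the identical move from 1% to 3%.
```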

(Was it the hidden agenda of the Discover writer to seduce his own cousin? I can only say that the evidence does not discount such a possibility.)

I admit it.  I’m biased.

But I’ll admit my bias up-front: I’m pro-smarts.

I think our global society will succeed or fail based on the careful marshaling and increase of our collective cognitive resources.

I agree with Einstein (the cousin-f**ker mentioned earlier), who famously said:

“We cannot solve our problems with the same level of thinking that created them.”

If one accepts this idea, and adds the follow-on premise that smart people will inevitably create new problems, the implication is an ongoing intellectual arms race, with humanity continually bailing itself out of some dodgy jam it just recently got itself into.

Note:  This is exactly like every single TV sitcom’s plot, only with global catastrophe hanging in the balance.

So let’s grant Discover its point that serious congenital defects — while more likely in consanguineous marriages — are still not thaaat likely.

But now, let’s leave aside birth defects and official diseases with scary-sounding names.  Because…

Inbreeding makes people dumb.

Yes, exceptions exist.  Dumb is relative.  And everything I’m about to say is based on averages and likelihoods and “Normal Distribution Curves.”

A Normal Distribution, you may recall from science class, is the famous chart of a “bell-shaped curve” that can be used to predict everything from household power bills to the distance of the spitball you just threw at your teacher from the previous spitball you threw at your teacher.

In 1965, a Japanese study of cousin-marriages showed an average IQ deficit of 7 points in the resulting children.  (See more here.)

A 1993 Indian study showed an even larger drop: 11.2 points of IQ.  (Among India’s 140 million Muslims, it’s estimated that 22% of marriages are consanguineous — meaning tens of millions of people.)

Intelligence, like other multi-variable traits, exhibits a “normal distribution” in any reasonably large population.  If consanguineous marriages reduce children’s IQ by somewhere between 7 and 11.2 points (we’ll round to 10 to keep the math easy), we can visually imagine taking the IQ-curve — with its peak normally at 100 — and sliding it to the left by 10 points, so its hump sits centered at 90.

To be fair, a 90 IQ isn’t very dumb.  Someone with 90 IQ is smarter than one out of every four people he meets.  This is not someone who wears a drool-bucket and is baffled by door handles.

But the problem isn’t what consanguinity does to the middle of the IQ curve; it’s what it does at the edges.

Amputating our allotment of geniuses.

Genius, if you go by the numbers, is defined as IQ 160 and above.

Normally, you’ll get six folks this smart in every 100,000 people.  That’s the straight Vegas odds if you’re betting on genius.

If you slice 10 points off the average IQ to accommodate consanguineous marriages, then to find how many geniuses you’re left with, you have to look at the number of people who would normally have had a 170+ IQ.  These are the only ones who will still be left at genius-level after the 10-point decrement.

The bell-curve has tapered down to super-skinny at this point.  As likely as not, you’ll have nobody with a 170+ IQ in a 100,000-person population.  Just 0.38 people per 100,000, to be exact.

So if you’re really hell-bent on doing it, consanguinity will cost your society almost 95% of the geniuses that random-ass luck would have given you for free.

Numbers Geeks:  You can see my calculations here.

Meanwhile, by sliding the bell-shaped curve left, you’ve pushed a much fatter slice into the dangerously low IQ territory.  An IQ of 70 used to be the cut-off for “borderline mental retardation” (back when that term was in vogue).  The term is no longer used — and the numbers-only designation wasn’t a good one — but this remains a level of measured intelligence at which teachers and social workers start making additional assessments to see, “Can this person really take care of himself?”

At straight Vegas odds, 70-and-below IQ “should” be just about 2.5% of the population.

But applying the 10-point penalty drops the entire 80-IQ-and-below population into the 70-and-below range.  Doing so quadruples the size of this group.  (A full 10% of the population on the standard IQ curve sits at 80 and below.)
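For those who want to re-run the bell-curve arithmetic, here’s a minimal Python sketch.  It assumes the conventional IQ standard deviation of 15 (my linked calculations use slightly different parameters, so the absolute per-100,000 counts won’t match exactly), but the relative conclusions hold up: roughly 95% of the geniuses vanish, and the 70-and-below group roughly quadruples.

```python
from math import erfc, sqrt

# Normal-curve tail counts for IQ, before and after a 10-point shift.
# Mean 100 is standard; SD = 15 is the modern convention (an assumption:
# other values change the absolute counts, not the relative story).
MEAN, SD, SHIFT, POP = 100, 15, 10, 100_000

def upper_tail(cutoff, mean):
    """P(IQ >= cutoff) under a normal(mean, SD) curve."""
    return 0.5 * erfc((cutoff - mean) / (SD * sqrt(2)))

# Geniuses (160+): a curve shifted down 10 points is equivalent to
# asking who would have scored 170+ on the unshifted curve.
before = upper_tail(160, MEAN) * POP
after = upper_tail(160, MEAN - SHIFT) * POP
print(f"IQ 160+ per {POP:,}: {before:.2f} -> {after:.2f} "
      f"({1 - after / before:.0%} of geniuses lost)")

# Low tail (70 and below): equivalent to 80-and-below, unshifted.
low_before = (1 - upper_tail(70, MEAN)) * POP
low_after = (1 - upper_tail(70, MEAN - SHIFT)) * POP
print(f"IQ 70 and below per {POP:,}: {low_before:,.0f} -> {low_after:,.0f}")
```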

Sometimes “tradition” is just plain dumb.

Can anything justify defoliating our limited supply of geniuses and simultaneously quadrupling the number of cognitive hard-luck cases?

Maybe.

But whatever it is would have to be pretty damn compelling.

Consanguinity doesn’t cut it.  Trading away all those IQ points for easier probate law and a convenient reduction in the number of in-laws… That’s just a bad bargain.

These days, more than half of the world’s population lives in cities — cities complete with dance halls, Internet dating sites, and busybody spinsters with nothing better to do than help you hook up.  There’s no longer any geographical imperative for us to boink our relatives.

Of course, I realize the main motive force behind consanguineous marriages is not rational decision-making; it’s cultural inertia.  (Yes, that self-same bugbear who mandates that British barristers wear powdered wigs, and affixes Confederate Flag bumper stickers to the occasional pick-up.)

But culture, tradition… Are these really sufficient excuses for people to make their next generation dumb?

I am not trying to denigrate the Arab world, or the Amish.*

* Actually, nix that.  I’ve started with the letter “A” and will be denigrating all sociopolitical groups in alphabetical order.  Baathists and Belgians — you’re next!

I’m saying that sometimes studies like those cited above give us a clear signal that such-and-such cultural norm is demonstrably wrong.  When these signals come, we should count them as lucky breaks — even when they require us to break with tradition.

By ditching consanguineous marriage, cultures get what biology owes them anyway: a fresh shuffle to the genetic deck.  And the implementation is as simple as encouraging people to date outside their immediate gene pool.

That shouldn’t be too tough a sell, right?

A Conciliatory PS: 

I don’t expect I’ve got too many readers in consanguineous relationships, or who are the children of such relationships.  But I could be wrong.  With the overall number being about one person in eight worldwide, I could be very wrong.

If what I’ve written here offends you, or has left you wistful for additional IQ points that you might have had under different parentage, let me offer the following:

According to family lore, when I was a baby, my dad dropped me square on my head from a not-insignificant height.  Who knows how many IQ points this might have cost me, but if they ever do a large-scale study on head-dropping babies (they won’t), the results won’t be good.

But still, if they did — and if my family had a long tradition of head-dropping babies (we don’t) — I would still be eager to be among the first generation to formally shit-can that tradition.

The world loves coffee.

To say “people love coffee” is a little like saying “people love sex.” In fact, going by the straight numbers — two billion cups of coffee drunk per day vs. slightly fewer fornications — people may like coffee more than sex.

Muslims, Christians, Jews, Atheists, Europeans, Africans, Americans, Flat-earthers, scientists, terrorists, and Red Cross volunteers… Coffee-drinking cuts across all social divisions and unites us all in one big human family.

Homo sapiens caffeinophilus, you might call us.

Coffee is everywhere. The thought of caffeine-abstinence for the 90% of adults worldwide who partake daily is headache-inducing — both figuratively and often literally.

And so drastically reducing the world’s coffee supply — as unpopular as that would surely be — sounds less like reality than it does a bad plot device by a villain in a James Bond movie financed by Starbucks.

The coffee bean just doesn’t fit well with our preconceptions about endangered species.

But this grim prospect may be closer to reality than we’d like to think — and without a mustache-twirling mastermind as a convenient scapegoat. If we lose coffee, the perpetrator will be climate change.

And that means that ultimately, the villain will be us.

When Canaries Aren’t Enough.

We all know the canary in a coal mine expression, right?

Just in case, here goes: In the bad ol’ days, in deep underground mines, noxious gases would sometimes seep up and kill miners. Canaries breathe fast and are apparently more cardiovascularly fragile than your average coal miner. So in the days before high-tech early warning systems, a dead canary — assuming you’d had the presence of mind to enter your coal mine carrying a live one — was a helpful clue that you should get the hell out of there.

Canaries (and similar warning bells) work well in cases of acknowledged and immediate mortal peril.

But they don’t work so well in situations where the thing that kills you takes a while to do so. You’ve never noticed canaries smoking Marlboros or eating McDonald’s Extra Value Meals alongside consumers of those products, and there’s a good reason for this. It’s not that canaries wouldn’t die from these things, given enough exposure; it’s that they wouldn’t die fast enough to merit the annoyances of long-term bird ownership.

Canary systems are great when the terror groundwork has already been laid. For nineteenth-century coal miners, the inciting fear wasn’t about the dead canaries; the birds were just a disposable commodity.

What made them care in the first place were graveyards full of dead coal miners.

Sometimes, It’s Gotta Hurt.

It is said there are two ways people can learn not to do things.

  1. The first way is to put your hand on a hot stove.
  2. The second way is to watch someone else put his hand on a hot stove.

The latter method is considered preferable.

The problem is, these two options are only for learning by individuals. When it comes to whole societies, there seems to be only one of the options available… And it’s not the preferred one.

Societies learn by hands-on experience (pardon the pun). It’s not that history fails to provide national-level cautionary tales — very much the opposite. It’s just that regardless of relevant historical warning signs, nations and their leaders always seem able to cook up plausible-sounding dismissals that “things were different then/there” and sweep aside the warnings.

After all, history is endlessly debatable, and the lead-ups to most grand-scale disasters aren’t as clear-cut as a hand on a hot stove.

Today, global climate change is probably the most looming, and least acknowledged, societal train wreck in progress. And true to form, humanity seems determined to burn its own damned hand on its own damned stove.

Environmental changes have spelled doom for human societies before — at limited, regional levels. (Jared Diamond’s 2005 book Collapse contains sufficiently chilling examples for those who want to ruin a few nights’ sleep.)

But these stories aren’t well-known enough, they aren’t scary enough, and they aren’t connected enough in the popular imagination to our species-wide bad habits. They’re not an immediate, whack-you-in-the-face threat like the dead bodies of your coal mining buddies.

And so we continue to reach toward the environmental stove with our own damned hand. (In fact, it’s a better analogy to say both hands. We’ve got exactly one planet; there is no back-up hand.)

Luckily, there is a middle-ground method of learning, which societies do seem capable of. It’s the inoculation model — akin to burning your fingertip, but recoiling in time to save your hand.

“That Which Does Not Kill Us…”

…had still better scare the bejeezus out of us.

Getting these historical fingertip-burns right is a tricky thing. Societies are fickle, diffuse in their attention, and easily distracted. Death and dismemberment are almost always part of the recipe necessary to catch a whole society’s interest to the point where its self-defense instincts are triggered.

Horrible as each of them was to the victims, I would cite both the bombings of Hiroshima and Nagasaki and the recent Ebola scare as successful historical inoculations.

With Hiroshima and Nagasaki, the world saw immediately just how serious these atomic weapons technologies were. The smoky silhouettes of incinerated civilians on still-standing walls were as motivating to strangers half a world away as dead coal miners had been to their small-town coworkers three-quarters of a century before.

And despite some close calls, this terrifying debut has kept our collective hand off the nuclear stove for over 70 years.

With the recent Ebola virus outbreaks — though some have criticized the health care community’s response as “panic-mongering” — the fearful prospect of a widespread epidemic has marshaled public-safety preparedness to a far higher level than it would otherwise have reached. “False alarms,” when it comes to biological pandemics, are something we should be cheering for, not complaining about.

(By contrast, the resources brought to bear against a distinctly regional threat like ISIS are wildly disproportionate to the risk. ISIS’ methods are designed to be newsworthy, but the organization is not a threat to people living in New Brunswick, or Johannesburg, or Detroit. For a pandemic virus, the opposite is true. And global climate change, in this sense, is a lot more like Ebola than it is like ISIS.)

“Remember the Cappuccino!”

The sinking of the Maine, a U.S. warship parked in Havana harbor, sparked the Spanish-American War (Cuba, back then, was a Spanish colony) and led to an unmitigated American ass-kicking of the formerly first-rate global power that was Spain.

The Maine exploded and sank under debatable circumstances — it may have been an honest-to-goodness technical accident rather than foul play — but nevertheless, “Remember the Maine!” was trumpeted in newspaper headlines as a rallying cry for American patriotism.

It met the 1890s standards for a fingertip-burn that merited an immediate, dramatic response.

All the examples that I’ve given so far — Hiroshima, Ebola, the sinking of the Maine — had a human body count. Maybe that’s a prerequisite for societies to wake up and smell the coffee.

But maybe, just maybe, times have changed?

Could the tragic death of something non-human (but well-loved) cross the fingertip-burn threshold?

The Bitter End.

If you’re not a coffee snob, chances are good that you know one. And any self-respecting coffee snob will tell you there are two primary strains of commercial coffee:

  • Coffea arabica — More aromatic and more popular, accounting for over 70% of global consumption.
  • Coffea robusta — A more bitter runner-up variety.

If you’re drinking coffee as you read this, odds are good that it’s Arabica.

This also means that your cup-of-joe has been bred from a very distinct lineage, originating in the mountains of Ethiopia. These recent, confined origins mean that our commercial coffee has very little genetic diversity and is particularly vulnerable to climate change. Put simply — a disruption that kills one of ’em is likely to kill all of ’em.

Arabica plants grow best in a narrow temperature range between 18 and 22 degrees Celsius, and they require gentle, regular rainfall.

With global weather patterns destabilizing, Arabica is in jeopardy. Researchers predict that agricultural lands capable of supporting Arabica could fall by half in the coming decades. To make matters worse, short-sighted efforts to improve coffee yields from the diminishing acreage could speed soil depletion and further accelerate shortfalls in production.

As Arabica’s availability dips, will we switch to the bitter Robusta alternative? Will we grit our teeth and just keep paying more for the dwindling supply, until we long for the days of free refills and coffees that cost only $3.50?

Worse still, some climatologists predict that the world’s prime coffee-producing regions — places such as Vietnam, India, and Central America — will be among those hardest-hit by climate change.

Are coffee’s days numbered?

Ghosts of Bananas Past

To some, all this may sound like panicked fear-mongering. After all, the world drinks three-quarters of a trillion cups of coffee annually. We couldn’t just run out…

It’s happened before.

Not with coffee, but with something almost as unthinkable.

Bananas.

I can hear you scoffing. “You’re not fooling me,” you say. “We’ve still got bananas.”

Well, yes and no.

The bananas we eat today are not, it turns out, our great-grandparents’ bananas. Bananas became globally popular not long before the aforementioned Spanish-American War. In fact, a worldwide banana craze spurred the initial development of refrigerated shipping. But I digress.

At the time, the world was hooked on a strain of banana known as the Gros Michel. (There are many, many strains of bananas — thousands of them. Most, you wouldn’t want anything to do with.)

But in the 1950s, a banana-blight began ravaging the Gros Michel strain. It got worse, and worse, and, well… worse.

Devoted horticultural scientists, bullet-sweating banana plantation owners, and all the financial might of companies like Chiquita were unable to save the Gros Michel.

Season after season, the world watched helplessly as the beloved banana strain went commercially extinct. We were powerless to stop it.

The stopgap was a second strain, the Cavendish. It’s the one that we know today, simply, as “bananas.” The Cavendish is somewhat smaller than the Gros Michel. Cavendishes rot faster, they bruise more easily, and according to those who lived at a time when Gros Michels were available to taste-test and compare, they taste worse. Gros Michels were sweeter, with a creamier texture.

They were, apparently by unanimous consent, a better banana.

But it was our modern Cavendish that was immune to the banana blight. And so, like the rat-ancestors who inherited the earth when the dinosaurs died out, by the early 1960s the Cavendish had inherited the title of “banana” in the eyes of a dispirited but option-less global consumer public.

The Gros Michel banana blight was bad. If you owned a banana plantation back then, it was downright disastrous.

But it was an isolated disease, affecting a solitary crop. An important crop, sure — but it wasn’t a domino poised to tumble an entire ecosystem or undercut global food production.

In short, it wasn’t worth freaking out about.

It didn’t qualify as a fingertip-burn.

“Never let a good crisis go to waste.”

These words are usually attributed to Winston Churchill, but they encapsulate an idea as old as politics.

Bad news is a great motivator. A society amped up on adrenaline, fear, and righteous indignation is a society ready to get stuff done.

So imagine the crisis of a Coffee-less Future.

We are a world of addicts. (Let’s be honest, okay?)

And Climate Change — the red-handed culprit should this happen — is a villain who won’t stop at one crop, or one industry, or one continent…

The loss of coffee wouldn’t just be a disaster for breakfast, an existential threat to baristas, or a stock sell-off for SBUX…

It could be a fresh batch of dead coal miners.

A culinary mushroom cloud.

I like coffee. I drink it. I’ve even been known to photograph it on occasion. But if a few billion caffeine-withdrawal headaches could snap us all to attention on the inadvisability of playing chicken with the global environment…

I’ll be happy to switch to tea.

If we can keep our collective hand off the stove for the comparatively low price of an extinct beverage, we should count ourselves lucky.

Maybe a tomorrow without coffee is exactly the rude awakening we all need.

There’s something it’s hard not to notice when you speak with people about psychedelics.

Most pop culture portrayals of psychedelics have discussions that begin (and all too often, end) with the word “Duuuuuuuude.”  This may originate with the character “Shaggy” from Scooby-Doo, the cultural progenitor of cartoon druggies.  Something about Shaggy clearly struck a chord; he’s been spliced and cloned into dozens of equivalent, well-meaning imbeciles across all media ever since.

But with all due respect to mystery-solving dogs and their human sidekicks, when you talk with real users of psychedelics, the topic expands well beyond “Duuuuude.”  People are eager to talk about their experiences.  And it’s very rarely just “I was so fucked up” or “I partied like it was 1999.” 

Well, sometimes it is that, but those things are the jumping-off point, not the follow-through.

People who become psychedelics aficionados — those who maintain an interest after their last rave or beyond their first bad trip — don’t want to talk about shiny colors or how their house cat suddenly turned telepathic.  They want to talk about what their psychedelic experiences have taught them about themselves.

It’s a weaker punchline than “Duuuuude.”

And often a lot more confusing, long-winded, and deeply personal.

But as these are what real (i.e. non-cartoon) psychedelics users find comment-worthy about their experiences, they seem worth paying attention to.

Half-Way Between You and Them

What follows is my pet theory on why psychedelic experiences can be so transformative for people.  But first, a question:

Why do people go to psychologists — or even to friends, family members, and others who know them well — to get advice on their own lives?

First, because — whether we’re narcissists or self-haters — we’re all deeply interested in ourselves.  And it’s always fun to get other people to discuss this best-loved of topics with us.

And second, because we’re extremely biased when it comes to ourselves.  We are not good judges of our own behavior, or recognizers of our own idiosyncrasies.  We are the water we swim in — and we are thus both omnipresent and invisible in our lives.  With less freedom than Peter Pan’s shadow, we follow ourselves around 24 hours a day.

Self-Blind: All Our Mirrors Are Wacky

At some point, you’ve done the optical illusion where you stare at a high-contrast image for 30 seconds, then look at a white wall, and you can see the “burned-in” negative image of whatever you’d been previously looking at.  (If you haven’t done this, were you never a kid?)

Your brain’s optical system — even in that short time span — had constructed a sort of overlay to “balance out” the strong contrasts in your visual field.  This is similar to how a camera automatically controls for exposure, so overly-bright parts of an image don’t “white out” and lose detail.  In your own optical system, this contrast-reduction helps sensitize you to variations in your visual field.  “Variations,” in this case, meaning movement.  (Maximizing awareness of nearby moving objects probably needs no justification.  Think: predators, prey, pies-in-face, etc.)

So here’s the analogy: The nuances of our own behavior are the constant, unchanging elements in our own experiential world.  From our point of view, that is.

That’s why someone else’s opinion — the friend, the psychologist, even the stranger who tells you that your fly is open — can be so incredibly valuable.  We are a moving, high-contrast object in the perceptual experience of that person’s life… so we show up to them with greater clarity.  We “pop off the background” in a way we can’t do for ourselves.

Just like they do for us.  And just like they can’t do for themselves.

The Things Only We Know

However, for all the upsides of getting an outside perspective, there is undeniably value in self-reflection as well.  Although it’s seldom without effort, we can identify things about ourselves that others can never tell us, because we’ve got a huge advantage over them…

We have access to a far greater data-set about our own world, our own behavior, and our own experiences than any outside observer has.  (At least, this was true in the era before smart phones.  Nowadays, my iPhone may know more about me than I do — in a facts-and-figures sort of way.)

The conjunction of these two tool-sets — the memory library we store about ourselves, and the perspective offered by someone else, watching from the outside — is ripe with possibilities for new revelations about our behavior.  What’s effective, what’s ineffective, what as-yet-untested strategies may prove to be effective, and why.

Psychedelics — in my theory — cut out the middle-man.

During a psychedelic experience, the user’s view of reality is profoundly affected, like looking through a prism, or a kaleidoscope, or (to keep with the idea of an outside perspective) someone else’s eyeglasses.

And yet — here’s where the magic happens… Looking from that outside perspective, the psychedelics user gets to page through his or her whole catalog of self-knowledge.  The smorgasbord of memories and details even close friends don’t know — whether because these things are too private to admit, or too mundane to ever come up in conversation.

This fertile blend of outside perspective plus inner knowledge is the essential recipe for the insights that psychedelics can sometimes provide.  Of course, a psychedelically-skewed perspective could also be so confusing as to be useless.  Your pet goldfish will see you with a perspective that’s even more alien than your psychologist’s — but your goldfish’s perspective is less likely to be instructive when optimizing your behavior.

Know Thyself.  One Way or Another.

“Know thyself” is a quote attributed to Socrates — and to nine or ten other long-dead thinkers.  Self-interested as humans are, there’s no reason to think that just one person, or one culture, came up with this idea.  That’s probably what makes it such good advice.

Psychedelics are one means for people to know themselves better.  Maybe not the best means, certainly not the only means, and for some people, not even a safe means.  My comparison of psychedelic insight to psychological counseling is no more or less serious than my comparison of psychedelic insight to talking with a goldfish.

To put that another way: some psychedelic “insights” might not make it past the Duuuuuude threshold.  But the same will be true of talking with a psychologist, or of any other path to self-knowledge.

If knowing one’s self were easy, everyone would be doing it.

Psychedelics aren’t easy. 

And neither are they a direct path to meaningful insight, any more than the discovery of fire was a direct path to the steam engine.  My point isn’t to evangelize, or even to recommend — it’s just to propose a mechanism behind the age-old, cross-cultural claims of the value of psychedelic “visionary” experiences.

Of course, there are also probably epochs-old, cross-cultural versions of people saying Duuuuuuude and their friends laughing at them.

But ultimately, the punchline is the process. 

It’s the mental discombobulation of psychedelic states that gives them their utility.

Somewhere on the biochemical middle-ground between sobriety and being “completely fucked up,” a psychedelics user may just find himself on an optimal cognitive plateau, offering an unexpected view toward self-discovery.

It took me a while to realize that I was the crazy guy.

There’s a saying among poker players — I assume for good reason — that goes like this: “If you can’t spot the dumbest guy at the table… It’s you.”

I’m starting to think that this may be a special case of a broader rule that goes well beyond poker.

I was flattered to wake up yesterday to a request to join a radio show panel, the nationally-syndicated “To The Point,” produced by KCRW Radio out of Los Angeles. They were doing an episode about smart drugs — specifically, “moda” — and wanted to know if I would join their expert panel, which would include three others.

The producer implied (impressively, without ever quite saying it) that I wasn’t supposed to ask who the other panelists were. The set-up would be a little like Roman gladiators at the Coliseum, not knowing in advance what would come out from behind the arena doors. This makes for a livelier show for the audience.

Needless to say, I jumped at the opportunity.

KCRW is the radio big-leagues; I hadn’t just heard of them, I’d listened to them. They’re probably the only radio station in Los Angeles I can find on a dial. Plus, this subject was right up my alley; I’ve used Modafinil on-and-off for years.

So this morning I dialed in to KCRW and was put into their digital bullpen, where they keep call-in guests on hold until the producer signals “it’s time,” and then suddenly the host is addressing you with questions.

(If you’ve ever called in to a radio show and been queued to ask the deejay to play a song for your sweetheart or to win concert tickets — it’s exactly like that.)

The first panelist introduced was a health reporter for VICE News, Sydney Lupkin. KCRW broadcasts to a general audience, many of whom would never have heard of smart drugs — and Sydney, along with host Barbara Bogaev, did a great job of opening the topic and implying a simmering hotbed of controversy around the use of “moda.” (The half-clandestine use of this abbreviated term was presented almost as a counterculture nod, like calling marijuana “weed” or Barack Obama “Barry.”)

And before I knew it, I was up next, answering a question about “how Modafinil feels when you’re on it.” I said my piece and then passed the mic, unsure if I’d said too much or not enough — it’s tough in these audio-only situations with multiple parties and no eye contact. You never know if you’re blabbing too long or if the host is praying for you to fill space.

But in this case, they needed to move on to get from me to the real Smart Drugs Wild Man. Certainly, with the undertones of “Modafinil running amok on our campuses,” one of the remaining two guests was sure to be a strung-out 19-year-old with 500 milligrams of Modafinil in his veins, who hadn’t slept since Tuesday.

However, the next guest proved to be Professor James Giordano, from the Georgetown University Medical Center. His speech and manner and credentials were all impeccable, and I wiped sweat off my brow when he backed up some points I’d made in my earlier monologue: 1. Smart drugs are out there. 2. Some, like the racetams, have strong safety and efficacy records and a multi-decade pedigree. 3. Probably the major concern for would-be users is identifying good providers in a “gray market” retail landscape.

We went to a commercial break, and for about a minute the audio went dead; I had time to google my two unveiled co-panelists, and to wonder about the third. The show had such an expectant feeling to it, an undercurrent that something shocking is happening here — prepare to be shocked! I was expecting Johnny-the-University-Kid-Who-Never-Sleeps. Or maybe Otto-the-Online-Modafinil-Retailer, coming on with a digitally-garbled voice, hinting at the value of his product while slinging accusations at “The Man” for keeping his business underground.

But soon the commercial break ended. We were back.

The next voice was a familiar one: Dr. Jeremy Martinez, from the Matrix Institute on Addictions — whom I’d interviewed previously on Episode 80 of my podcast. Dr. Martinez is a leading expert on addictions and addictive behavior, practicing in Los Angeles — which is also the big leagues, if you’re a doctor specializing in addiction. Like Professor Giordano before him, Dr. Martinez was well-spoken, straight-laced, and (befitting an addiction specialist) probably a bit conservative in his approach to the modulation of human brain chemistry.

But wait a minute… Were we at four panelists already?

Had I gotten it wrong? Had the producer whom I’d spoken with said it would be me with four other panelists?

I was pretty sure the answer was no, but it hardly made sense to have a panel discussion where everyone on the panel seemed to be in such agreement. “To The Point” isn’t Family Feud or some faux-news fight-bait show… But still, this is American mass media; there are rules that must be obeyed.

And then I felt a sinking feeling, as the verbal baton was passed back to me for another question…

It suddenly hit me.

Just like the poker player realizing there’s no one dumber at the table…

I was Johnny-the-University-Kid-Who-Never-Sleeps. I was Otto-the-Online-Modafinil-Retailer.

I was the Cognitive-Enhancement Wild Man, the one whom the conservative members of the KCRW audience were giving dirty looks through their radios, while I waved my pom-poms for these so-called smart drugs.

But I was the weirdest guy they could find?

I was the far, lunatic-fringe edge of the pro-cognitive-enhancement spectrum?

I was — dare I put it so bluntly? — the cautionary warning of what your college kid might turn into?

I consoled myself with the thought that maybe there’d been an accident, and that Johnny-the-Non-Sleeper was unavailable on account of pan-hemispheric cognitive over-stimulation. I readied myself for the task. If someone needed to hold the line for the pro-enhancement crowd, I would do my part.

Luckily, the next question posed to me was one that’s always seemed as trivial to answer as it is amazing that it gets asked in the first place…

The “Cheating Question”

Should we be “worried” about the use of smart drugs?

Is it like “cheating in sports, with steroids”?

If there is one question where I am willing to let my freak flag fly high, this is it. I came out of the gates swinging. I probably frothed at the mouth a bit. (Mouth-froth-concealment is one great upside of both radio and podcasting over television.) My answer — constrained for the radio — was necessarily bite-sized, but I’d like to riff on it at greater length here, because this is the question that won’t die.

It seems to me so absurdly misapplied, and yet it’s an entrenched part of the public discussion. “Are smart drugs like steroids?” With the implications: “Is using them ‘unfair’ to the other ‘competitors,’ irrespective of the risks to the user himself?”

But to pretend that this analogy holds is to pretend that we live in a society where muscles are more than a mating display or where intelligence is only a nifty parlor trick, essentially no big deal.

This could not be further from the truth.

If a Barry Bonds type takes steroids and balloons his athletic ability, maybe he hits a few more home runs. Records are broken; next year’s baseball cards and tonight’s ESPN highlight reel will look slightly different. But real effects on people’s lives? Zilch. Nada. With all due respect to physical performance, we no longer live in a world of blacksmiths and rickshaw operators. Physical musculature is of great use to the individual, but none to society.

Now let’s look at the corresponding situation in intelligence. If the intellectual equivalent of Barry Bonds — maybe this is Stephen Hawking, Elon Musk, or Ray Kurzweil (pick your favorite genius) — if he or she is able to boost his or her cognitive performance by the equivalent of “a few home runs,” this translates into a greater chance of a Unified Theory of Physics, or of colonizing Mars sooner, or of getting closer to mind-uploading. This isn’t about baseball cards; these are outcomes that fundamentally alter the trajectory of our entire species and its possibilities in the universe.

To equate this with “cheating, like steroids” is not in the same ideological ballpark.

It’s not in the same league.

It’s not even the same sport.

No, we should categorically not question the ethics of people voluntarily using cognitive enhancement to “get ahead.”

Not any more than we should question the ethics of a woman who uses perfume to smell better, or a man who squints on the golf course so he can see a little better. We all use the best tools available to us, constantly — and for good reason.

Life is not a zero-sum game, and the first people to adopt an effective new tool may indeed gain an advantage that later adopters resent… But in the end, the leaders in a field push the whole field forward. Barry Bonds, like it or not, made baseball better. He pushed the envelope, and even if it was cheating, he established new horizons.

But as I said: The horizons of baseball, they don’t matter that much.

The horizons of human cognition, though… They matter as much as anything we know about, or could even conceive of. From our current vantage point as the sole thinking species on the only known inhabited planet in the universe, the horizons of human cognition are of literally unsurpassable importance.

So yeah, okay…

Maybe I am the Lunatic Fringe.

If you are reading this at an inopportune time, you need to keep reading.

It might be the middle of the night.

You might be procrastinating while at work.

But either way, the last thing you should be doing now is having clicked on a completely optional blog post and started reading.  (Yes, this blog is relatively awesome.  But I digress…)

Maybe you’re reading this at an appropriate moment for you to be killing some time online.  Sunday afternoon on your iPad, for example.  Or maybe you’re bored on a subway commute.  If so, this article is not for you.  You have an appropriate relationship with Internet time management.

But this post is for people like me.

It’s for people who default to online.  Internet Addiction, they call it.  You’ve probably heard of this condition.  And even if you haven’t, the name kind of says it all.

I am not a textbook-case Internet Addict.  I don’t even have a Facebook account.  (This is partially because I know that having a Facebook account would turn me from a functional addict into the Internet’s version of a wakes-up-in-the-gutter-with-needles-sticking-out-of-his-arms addict.)

The things I do online are not necessarily representative of most Internet Addicts.  But despite that, I do share one defining characteristic with my addicted brothers and sisters…

I keep coming back to the online world.  By default.

Sometimes even when the physical world dangles very worthwhile carrots.

The Lost Continent of My To-Do List

As an Internet Person, I’ve got my obligatory to-do list.  In fact, a couple different to-do lists, in different formats.  (For me, it’s Asana, Trello, and Workflowy — dependent upon the project.)

And one thing I’ve noticed with increasing regularity for the past few months is that when I’m organizing my days, the to-do’s involving the physical world…  They tend to get shucked to the “optional” section at the end of the to-do list.

Meaning that they’ll bounce to tomorrow.  And then the next tomorrow.  And then the tomorrow after that.

(Does anyone ever finish their full daily to-do list?  If so, please don’t answer.  I hate you.)

So it turns out, I’m ignoring physical reality.

Exactly what kind of to-do’s are these things?  Nothing all that fancy.  Some of them would be easy kills.  Trips to the grocery store.  Scanning physical papers that would be so easy to digitize if I’d take the 15 minutes and just be done with it.  Going to my storage locker and pulling stuff out of boxes that I’ve wanted-but-not-really-needed for going on three months now.

The digital world is just so friggin’ convenient.  And getting more so.  Amazon Prime is the ultimate enabler.  TaskRabbit doesn’t help either.

The things I find myself actually doing in the physical world are — this is embarrassing — the bare minimum requirements of human physicality.

Eating.  Sleeping.  Bathing.  Exercising.  Sex.  Full stop.

If you think I’m exaggerating, let me stress:  I’m writing this blog post instead of doing the physical-world to-do’s on my list for today.

Hash-tag: #iSuck

Starting Next Week, I’ve Got A New Strategy

I’m calling it…

(Yes, it’s got a catchy name…)

Physical World Phriday

Fridays will be my day off the laptop.  All those never-quite-gotten-to to-do’s in the Physical World… Friday will be their day to rise front-and-center, and get the attention they deserve.

And hopefully, to get mercilessly done, like the virtual to-do’s on my list eventually are.

I anticipate that the laptop-less-ness of next Friday will be brutally difficult.  I’m normally so strapped to my laptop that I rarely use my smart phone as an Internet device; going offline will leave me a bit more digitally isolated than it would most people nowadays.

But that’s the idea, isn’t it?

#PhysicalWorldPhriday

I’ll be hash-tagging it on Twitter at 11:59 on Thursday.  And then…

I’ll be gone.

On November 28th, 2012, I published the very first episode of Smart Drug Smarts, interviewing Dr. Ward Dean — a doctor who had literally written the original book on Smart Drugs. I figured then — and looking back, I can’t really fault my logic — that as a computer programmer with no particular medical background, if I was going to do a podcast about smart drugs, I’d better have some unimpeachable guests come on as experts.

In the time since then, over two-and-a-half years, I’ve been lucky enough to conduct over 80 interviews with some of the world’s top experts on some of the world’s coolest stuff.

And sometime in the past year — I never really stopped to notice when it happened, but by now it’s definitely true — Smart Drug Smarts has become the longest-running single project I’ve ever worked on, period.

I’ve got to say, I’m very proud of that… and I have every intention to continue building from here.

One question I’ve gotten asked a lot is…

“So why did you start the podcast?”

I feel like people expect one overriding answer, but it was more a smorgasbord of semi-related upsides…

  1. I love media production and was looking for a creative outlet.
  2. I’ll take any excuse to talk with smart folks.
  3. I’ve had a lifelong interest in brains, physical health, and psychology.
  4. This felt like a way for me to participate in science fiction.

Smart Drug Smarts has ticked all of these boxes for me.

And of course: I was, and I am, a fan of cognitive enhancers.

More broadly, I’m a fan of cognition.

That’s either an obvious or a profound statement, depending on how charitable you’re feeling.  But I’ve personally found that the moments in my life I’ve enjoyed the most — contrary to what we’re taught to expect — weren’t often moments of public praise or physical pleasure…

They were instead moments of intellectual insight.

  • Wow, is that really true?
  • I think I figured it out!
  • Wait, this changes everything…

I wrote about this in my post The Physical Sensation of Epiphany — and these types of internal thrills are still the primary carrots I find myself chasing.

It’s funny, because I’d pay good money for a moment of new insight.  But what actually happens is — when I have a moment of insight, that’s often something people pay me for.  Talk about having your cake and eating it too.

For me, smart drugs are a booster rocket along that course.

They’re a multiplier on my odds-of-insight on a given day.

There are those who will tell you that such-and-such chemical will triple your IQ, allow you to see through walls, or rewire your hippocampus with a direct feed to Google while you sleep. I’m not that guy.  And I haven’t yet seen, or taken, such a drug.

What I have experienced are a variety of chemicals that allow me to fine-tune my state of mind… to consistently direct myself into ways of thinking, seeing, feeling, and behaving in line with what I’m trying to accomplish.  Sometimes that is enhanced focus.  Sometimes it’s expanded creativity.  Sometimes it’s a solid night’s sleep.


I have learned so much since starting the podcast.

Given that I’ve been rubbing shoulders and sharing conversations with an amazing group of bright, curious, and deep-thinking people, this should come as no surprise.

And here’s the fun part: I’m not just talking about the show’s guests.

I’m also talking about the listeners.

Podcasters don’t know exactly how many listeners they’ve got.  People come in from iTunes, from YouTube, from random web-searches…  Some press Play and might jet after they decide they don’t like the intro music; others go back to the first episode and listen to everything you’ve ever done to get caught up.  I never know from week to week how many people will be listening, and whether those people are first-timers or long-timers…

But what I do know is that of the people I’ve been lucky enough to meet — over email, on Twitter, and in a few dozen cases, in person — the level of amazing-ness among those who have elected to become part of the Smart Drug Smarts community is truly phenomenal.

It’s a group I feel privileged to be part of…

  • Neuroscientists
  • Biochemists
  • Academic researchers
  • Man-machine interface do-it-yourselfers
  • Highly competitive business professionals
  • and a new generation of bright, vigorous university and grad students

All of us united by a deep curiosity to know where the cutting edge lies.

So What’s Next?

As our community has grown, people from the retail end of the cognitive enhancement world have taken notice, and we’ve had more than a few offers to promote products on the podcast, on the web, etc.

And as you know if you’ve been listening for a while, we’ve demurred on those offers.  Some seemed overtly sketchy.  Some probably weren’t sketchy, but I didn’t have the time or resources to feel 100% sure about going to bat for them.

And of course, a major concern has always been maintaining the trust the podcast has earned as an honest broker of information about cognitive enhancement: what works, what doesn’t, what’s safe, what isn’t, and what we just don’t know yet.

By late 2014, I’d decided a few things:

  • I loved the podcast.  I loved doing it.  And I wanted to put even more time and focus into doing it.
  • Doing that was going to incur more hard costs, in addition to my own time, and I ought to find a way to make Smart Drug Smarts profitable.
  • I didn’t want to be like a TV channel with 300 commercials for 300 different products, some of which might be great, but many of which are crap.

I decided that I wanted Smart Drug Smarts to create products of its own — things that I wanted, I would use, I would trust, and I could fully endorse — both from the standpoint of sound science, and also of safe, rigorously-tested manufacturing processes.

I also knew there was a lot that I didn’t know.

I knew the effects I was hungry for, and I knew the chemicals I was interested in, but I didn’t know a whole lot about supplement manufacturing, pill-pressing, shipping and fulfillment, or the logistics and legwork involved in setting up a nutraceutical business.  It sounded like a lot of work then — and I can confirm now that it is.

So I did the same thing I’d done back when I created the podcast and needed my first interview guests…  I began chasing down experts.

On Episode #21, I interviewed Roy Krebs and Abelard Lindsay.  Abelard did most of the talking, and this was appropriate; he was the citizen-scientist of the two, the biohacker and self-experimentalist who had devised and refined the two-compound cognitive enhancer now known as CILTEP.

But it was Roy — the quiet one, who didn’t really talk much during the episode — whom I realized late last year was another kind of expert I’d soon be needing.  Because what Roy had done, in the time following Episode #21, was turn CILTEP from a mix-it-in-your-kitchen recipe for do-it-yourselfers into the flagship product of a successful company.  One with manufacturing, purity-testing, bottling, shipping, and customer service running like clockwork.

I knew Roy and his partner Ben Hebert, and I knew they knew their stuff when it came to running a supplement company: how to get things done, and how to keep customers happy and supported.

And also — I knew they were a little bit hamstrung.

Their company’s name is Natural Stacks, and they take the “Natural” seriously.  Products under their brand don’t contain any man-made ingredients.

And as you might have guessed, this restriction cuts out a lot of “the good stuff.”

Axon Labs is born.

Early this year, Roy and Ben and I began talking about forming a new company based around cognitive enhancement.  A “house brand” for Smart Drug Smarts — one where man-made compounds are A-okay, but where we would hold ourselves to the standards that matter: science-backed efficacy in our products, safety and purity-testing, and a great customer experience.

And once again, we reached out to Abelard Lindsay — whose enthusiasm for diving into the medical literature and looking for compounds with unrecognized complementary benefits was undiminished.  We told him now the handcuffs were off – man-made chemicals were on the table.

By the time you hear or read this, Axon Labs will be unveiling its first products.

It’s taken almost half a year to get the first batch ready, but all of us involved would agree it’s really been much longer than that.  Cooked into the mix are two-and-a-half years of my study into cognitive enhancement through Smart Drug Smarts, almost as much time on the business end of nutraceuticals by Roy and Ben, and nearly a decade of study and self-experimentation by Abelard.

We’re immensely proud of what we’ve put together.  It wasn’t easy.  Biochemistry, bureaucracy, multiple time zones, and very busy people.  But nothing worth doing is easy, right?

I’ll be talking all about it in an episode soon.

And yet, it’s important for me to emphasize: I don’t want the fact that Smart Drug Smarts will have a product line to change what got people listening in the first place.  My initial goal and the show’s de facto slogan remain unchanged: to help you improve your brain, by any and all means at your disposal.

Axon Labs is just going to be one new set of means.  🙂

Jesse

PS:  Now, with all that as preamble, it is my pleasure to present…

Axon Labs

Would you rather hear this as audio?  Listen on Soundcloud.


I’m almost sure that my last haircut improved my health.

Not in the ways one might expect.  I wasn’t nesting lice or vermin.  It wasn’t a profoundly dangerous hairstyle, likely to get caught in industrial equipment and drag me down with it.

But it made me look like the me I was used to.

And whacking it down to the scalp — which I did, in a slight fit of “oh, hell with it” — was more of a change than I at first expected.

Face-Blindness for the Rest of Us

There’s a condition called prosopagnosia, which some scientists estimate affects almost one in forty people.  (I find this hard to believe, but it’s a “spectrum disorder,” much worse for some people than others.)  You know the people who say “I’m not so good with names, but I never forget a face?”

Well, people with prosopagnosia do not say that.  They do forget faces.  In fact, they never really recognize them in the first place.

For most of us, faces are a very special part of our visual reality, pulled from our vast data-stream of visual inputs and given preferential treatment by an area of the brain known as the fusiform gyrus.  You know how your smart phone has facial recognition software that puts a little box around people’s faces and makes sure to adjust focus and lighting to protect and emphasize them, versus other parts of the image?

Well, your brain — in particular, your fusiform gyrus — is constantly doing the same thing.

Unless, that is, you have prosopagnosia — which can be congenital (the fusiform gyrus never adequately learns to do its job) or acquired (brain damage bangs it up, and afterwards facial recognition takes a dive).  Prosopagnosics, as they’re called, have brains that function much more like an old-school camera with no on-board computer, treating all parts of the visual field the same, not playing favorites with faces at all.

This is generally a bad thing.  Egalitarian ideals like “all visual elements are created equal” don’t really work so well in practice.  Not with vision.

Prosopagnosics, depending on the severity of their condition, range from having a bad memory for faces, to literally being unable to recognize themselves in the mirror.  They compensate by identifying friends and loved ones by secondary cues, like their manner of dress, their voice, or how they move.
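(A nerdy aside: if you want to see the smartphone version of this trick in action, it takes only a few lines of code.  Here’s a toy sketch using OpenCV’s stock face detector; the filenames are placeholders, and this is a cartoon of what the fusiform gyrus does, not a model of it.)

```python
# Toy sketch: pick faces out of a photo and give them special treatment,
# the way phone cameras (and, loosely, the fusiform gyrus) do.
import cv2

img = cv2.imread("photo.jpg")                 # placeholder: any photo with faces
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale

# OpenCV ships with a pre-trained Haar-cascade face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # The familiar little box: these regions get priority handling,
    # while the rest of the scene is treated generically.
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_faces.jpg", img)
```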

Now, it should be mentioned — I don’t have prosopagnosia.

We’re All Icons

If you’re not a prosopagnosic, when you first meet someone, you’re aggressively cataloging details about their face, taking notes for later (unconsciously, at least), and drawing inferences about what you might expect from them, based on their facial idiosyncrasies.

Like all stereotypes, these guesses might not be borne out by further real-world data, but think about what comes to mind if the face of someone you meet is characterized by…

  • Ruddy-colored cheeks with visible capillaries
  • A deep, caramel-colored tan
  • Strong vertical creasing in the forehead, above the nose
  • Orange lipstick

In each case, you’ll probably take these as personality-clues as to what you might expect from a person.  (This is especially true in cases where the clues seemingly disagree with each other and imply a conscious choice — like a friend I have who is in his late 40s, but dyes his hair almost a canary-yellow “blonde.”)

But as we get to know individuals better, personal experience trumps facially-derived guesswork, and (again, for non-prosopagnosics) the faces of people we know come to represent our body of knowledge about that individual rather than the type of person we’d expect, based on their looks.

In other words, we recognize people’s faces as icons for the people we know, rather than as advertisements for the kind of person we might expect.

The Mirror Works Both Ways

The statement above is true even when the face in the mirror is us.

I was so used to seeing myself looking, well… the way I normally look, that a massive hairstyle change* was enough to momentarily shatter the visual iconography I had for myself.

* Full hair-eradication, more accurately.  Think Kobe Bryant or Bruce Willis.

This isn’t to say that I had any “Who am I?” identity crisis following my haircut.  Very much the opposite.  It was a “Who is he?” moment.

Later in the afternoon on the day of my haircut (after the initial shock had worn off), I was doing a workout.  I had a mirror nearby and caught a glimpse of myself — shirtless and now completely bald — and for a moment I didn’t recognize myself.  I knew it was a mirror, but it looked like not me.

Honestly, it was reminiscent of all the prison movies where the hero gets captured and has his head shaved and then is hosed down to de-louse him.  When those scenes happen in the movies, we’re always struck with the thought “wow, they’ve stripped him down to his animal self.”

And sure enough, with my visual icon-of-self disrupted, that’s what I saw in the mirror: the animal chassis of me, not my well-worn identity.

And that is why I think the haircut improved my health.  Or will, anyway…

It’s Good To Think Of Yourself As Meat, Sometimes.

Western society has a long and confused history with the Mind-Body Problem.

I’m not going to dive into the details here (but if you’re interested, there are about 10,000 books on the subject), except to say that as a rule, people tend to fall into two opposing camps:

  • Those who exult in the mind (often abstracted into the “ego” or “identity” or “immortal soul”) and view the body as unfortunate-but-necessary baggage.
  • Those who reject the artificial, illusory mind/body distinction and encourage us to think of the two holistically, for the improvement of each — er, it.  (See?  Everyday language gets tricky when you commit yourself to this stance.)

Normally I find myself siding with the second camp.  The “it’s all a closed loop; physiology affects the mind; and the mind’s choices feed back into our physiology, and so on” position.

This makes good, solid sense to me.

And yet…

I can see where the fusiform gyrus — so marvelous in its function — creates a built-in logical fallacy for us.

We see ourselves (using our objective visual system), and because of our tendency to iconize the people we know, what comes to mind is our self (either our identity/soul or our “holistic self” — either of which amounts to the same thing, in practice).

We look in the mirror and see the psychosocial aspects…

  • Do I look sexy for so-and-so?
  • Will this suit make me look impressive for such-and-such occasion?
  • Do I look older than the me from last year?

…and 99 times out of 100, the identity-considerations leap front-and-center and distract us from thinking about the hundred-odd pounds of primate staring back at us.

If we thought about that primate, we might ask…

  • How is this specimen?
  • If I were an alien, going to the galactic pet store to buy a human pet for my alien kid, would I pick this one?
  • Is he going to be fun to play with?  Strong for work?  Lively?  Tasty?

Catching that unrecognized me in the mirror, I had a flashing moment where I didn’t see my identity, I saw the body I inhabit — and that brief instant was a powerful reminder.

Pour Your Foundation.

Whichever side of the Mind-Body Problem you find yourself on, it’s the body that’s the physical substrate of our existence.

To put that less nerdily:

“If you don’t take care of your body, where will you live?”*

* Somebody said this before me, but the speaker’s name is lost to history.

I’m like everyone else; 99.9% of the time I’m caught up in ego-related concerns — the things I want to do, be, see, experience.  And the maintenance of the meat-package I come in — things like brushing my teeth — mostly seems like an annoying imposition on my goals.

How many more inventions might have come from Edison if he hadn’t had to brush his teeth twice a day?

Could posterity have a few more Shakespeare plays if the Bard hadn’t had to use the loo?

And yet, it’s probably the opposite that’s true.  Maintenance work on our physical selves is a short-term loss, long-term gain.  (Absurd but true: If Shakespeare had never gone to the restroom, he’d have been in too much pain to do any writing.)

What resulted for me from my moment of non-self-recognition is this:  The thinking me is going to give a little more time, effort, and attention to the care and feeding of his animal chassis.

Sure, the animal-you is easy to forget about.  You can ignore him for a long, long time with little consequence; he’s slow to complain.  But eventually it will be he who is the primary determinant of how far you can go.

And that is a fact worth recognizing.

The correlation between being intelligent and being correct is, unfortunately, not as strong as we’d like it to be.

If smart people were as right as they are smart, knowing what to do all the time would be a lot simpler than it actually is.  But, alas.

A case-in-point is an article entitled “The New Normal,” published recently in Georgia State University Magazine, highlighting the thinking of uncontested smart person (and Smart Drug Smarts podcast alumnus) Nicole Vincent, associate professor of philosophy and associate neuroscience faculty member at GSU.

Unfortunately, the key idea of this article is just plain wrong.

The article presages a future where society has to deal with the nasty, unintended consequences of ever-more-effective cognition-enhancing drugs.  In this hypothetical dystopia, health/safety and efficacy concerns have all been addressed; the problems presented are purely social ones.

The title – “The New Normal” – refers to the social expectation that everyone will be using these drugs, for fear of underperforming and not keeping up with the cognitively-enhanced Joneses.

Citing high-responsibility professions like surgeons and airline pilots, Vincent warns of creeping public pressure for individuals to use the best-available cognitive enhancers to maximize their performance.  “You’re performing a job that many people’s lives depend on,” she says.  “If you mess up and people die when you could have just taken this [performance-enhancing] pill, people will see that as negligence.”

Why yes, I daresay they would.

Let me step back for a moment and say that I agree with most of the premises that the article’s “doomsday scenario” of changing cultural norms is based on.

  • I agree that cognitive enhancement technologies (including, but not limited to, “smart drugs”) will continue to improve.
  • I agree that early-adopters and more competitive members of society will use these things, and change our collective expectations — first of what is “acceptable,” next of what is “normal,” and finally what is “required” (either legally, or by overwhelming social pressure).
  • I agree that we’ll release these technologies into our society without having a clear understanding of their eventual consequences.*

* Humans have a bad track record when it comes to keeping genies in bottles.  If there are any technological genies that haven’t been un-bottled, I can’t think of them.  (Of course, this could be because their inventors kept them so darned secret we just don’t know such genies have been invented — and if so, kudos to those inventors.)  But as a rule — from atomic weapons to boy bands — if we invent things, we tend to use them and only afterwards consider what we’ve wrought on ourselves.

So if I agree with almost every premise presented by Vincent, what is she wrong about, exactly?

Her thesis fails the So-What Test.

Cognitive Enhancement will become the new normal.  So what.

As these technologies move from the Early Adopters to the Early Majority and eventually to everyone else, even the kicking, screaming Laggards will be pressured along (see the Diffusion of Innovations for this fun, cocktail-party terminology).

But… so what?

Let me provide some examples of other ideas that have failed the So-What Test:

  • “If access to basic education continues to expand… people will have to be literate to effectively participate in society.”
  • “If air travel becomes commonplace… businesses may expect workers to travel for hours at a time, at extreme heights, with absolutely nothing underneath them.”
  • “If medicine further reduces infant mortality… manufacturers of child coffins will be put out of business — or else suffer the ignominy of re-marketing their products for small household pets.”

So freaking what, in all cases.

I could come up with more examples — a lot more.  All these if-thens are 100% correct.  And all are absurd in a way that is self-evident to pretty much everyone except… philosophers.

I don’t want to put words in anyone’s mouth (or over-speculate about someone else’s writing), but Vincent’s stance seems to be “we haven’t figured out all the ramifications of these technologies yet, so we should maintain the status quo until we do.”

But we can’t.  

And I don’t just mean we shouldn’t, I mean we can’t.

With apologies to Nostradamus and Madame Cleo, most of our track-records for predicting the future are just plain rotten.  And that includes really smart people — even professional think-tanks full of really smart people.

Accurately predicting the future requires access to enormous data sets, solid estimates of rates-of-change, an inherently counterintuitive understanding of exponential growth, and effective models of how various simultaneously-moving metrics interact with each other.

In fact, I’m just speculating that this recipe — if it could be pulled off — could accurately predict the future.  We don’t know.  But I find it hard to imagine that any of these tent-pole prerequisites wouldn’t be necessary.


It was Abraham Lincoln who said: “The best way to predict your future is to create it.”  I’ve been reading Team of Rivals: The Political Genius of Abraham Lincoln, and one thing is easy for us to forget now, 150 years later, but was an enormous hurdle for Lincoln and other slavery-abolitionists:

There were many of Lincoln’s contemporaries — even those who morally opposed slavery — who thought that the Law of Unintended Consequences, when applied to a societal change as massive as the 13th Amendment (which made slaves’ wartime emancipation permanent), was just too risky.  What righteous babies might be thrown out with the slavery-colored bathwater?  Heck, what about the disaster inflicted on the federal government’s Strategic Mule Supply, if each of the freed slaves really got “40 acres and a mule”?

(Please refer back to the So-What Test, mentioned above.)

Rhetorical Bag of Dirty Tricks #47 and #48:  If you want to sound good, align your ideas with those of Abraham Lincoln.  To demonize your opposition, reference their ideas alongside Hitler’s.  I do both, although I’m leaving Hitler out of this post.

“The only constant is change.”

Trying to game out the future before it arrives, as we’ve discussed, is a fool’s errand.

And attempting to stop the future from arriving — to stop time in its tracks — is as close as history gives us to a recipe for a lost cause.  There are so many examples of losing battles fought in the name of such causes; the cultural annihilation of the Native Americans and of the samurai of Imperial Japan both come to mind.

Looking at these long-ago-settled battles from the winners’ side of history — knowing who triumphed and why — we now see the romance under the dust.  The American Indians, the samurai — both were fighting technologically superior forces in doomed, all-or-nothing conflicts.  The winners’ superior firepower, their superior numbers — both feel a lot like cheating as we look back on those conflicts now.

The “noble savages” didn’t stand a chance, but boy-oh-boy, did they have heart.

The position taken in the GSU article — against the creeping use of cognitive enhancement technologies — would try to paint baseline Homo Sapiens (circa 2015) as a noble savage race.

It’s an argument that packs emotional appeal.

You, me, and everyone we know fall into the “us” that is under this impending, theoretical threat.  Even those of us who are using cognitive enhancers (those currently available) — we’re still a part of the “home team,” compared to those upgraded rascals from 2020, or 2030, or 2045, and whatever brain-enhancers they’ll be using to one-up, two-up, and eventually disenfranchise the biological “normals.”

What Part of “Progress” Don’t You Like?

I’m a sucker for historical romance.  I don’t mean boy-meets-girl kissy-kissy stuff where the girl wears a corset; I mean the broad, sweeping emotionality of individual humans struggling amidst great forces.

And the Tide of History is among the greatest of forces — less tangible but equally powerful as any natural disaster.

I watch a movie like The Last Samurai and see the doomed samurai charge, and I get misty-eyed like everyone else.  But I recognize that those noble samurai are, however unwittingly, the bad guys.

Unbeknownst to them, they were fighting against a world that cured Polio.

They were fighting against a world that explores space.

They were fighting against a world where run-of-the-mill consumer technology allows me to research samurai while listening to Icelandic music (created on synthetic instruments, and presented in Surround-Sound) as I sip African coffee and wait for a transcontinental flight that will be faster, cheaper, and safer than it used to be to travel between nearby villages.

Of course, the samurai didn’t know they were fighting against this stuff.

They just weren’t sure about this whole modernization thing, and what sort of “new normals” might emerge.

Bob Dylan was right: The times, they are a-changin’.

You won’t be forced to keep up.

Cultural tides may pull you along, but you’ll be free to swim against the current if you really want to.  There are examples of that, too.  The Amish are one.

The Amish are still here, in 2015.  So far as I know, they’re not under any particular threat.  They’re doing okay.  They decided to pull the cultural emergency-brake in 1830, or whatever, and well…

They continue to exist.  Why?  Because we live in a peaceful-enough, prosperous-enough culture that no one has decided it’s necessary to overrun, assimilate, or eradicate them and harvest their resources.  

It should be pointed out that societies like ours — this peaceful, this prosperous — are somewhat of an historical anomaly.  But the good news is:  We live in an era of unprecedented positive historical anomalies.


If you want to opt out of further technological progress and rely on the goodwill of your fellow man (or, eventually, the Homo Sapiens-successors you’ll be opting out of becoming), there’s never been a safer time to do so.  We can’t predict the future, but the trend-lines do seem promising.

But for me, personally…

I don’t want to rely on the goodness of my fellow man.

That sort of reliance is something you do in a pinch, not as a general strategy.

Do you think the Amish would have made it through the Cold War without the more technologically-minded Americans picking up their cultural slack?  No sir, not at all.  Heck, they’d have been steamrolled in the Spanish-American War, generations earlier.

I didn’t start off this post intending to disparage the Amish, but dammit, now I will.  The fact is, they’re not going to read this anyway.

There is a word for people who have every opportunity to be effective, but choose not to be, and instead rely on others to be effective on their behalf.

That word is Freeloaders.

The Amish, I put it to you, are freeloaders.

GSU’s New Normal article posits a future where effective, cheap, safe, non-prescription “smart drugs” have become commonplace.

In that future, when it arrives, people who have the opportunity to use these drugs to improve themselves, and choose not to, will also be freeloaders.

I won’t be one of them.

I recently read an article about those baddest of bad guys, Nazi Germany, and how their toolkit for perpetrating war contained quite a bit of chemical help.

Pervitin — something we now call by the street name speed — was doled out like candy to soldiers in the Wehrmacht, the Germans’ invading force that conquered Europe during 1939-1940.  This methamphetamine was prized for its fight-all-night qualities — increased vitality, speed, and motivation, and reduced need to rest while you’re mid-blitzkrieg.  (Later in the war they would add cocaine to the mix.  Seriously.)

The Wehrmacht also encouraged the use of more alcohol than you’d think military discipline would allow — because of alcohol’s propensity for reducing moral hang-ups about extreme behavior.  And let’s face it: When you’re the Nazis, morality is just sand in your gears.

But the Nazis are far from the only military to encourage, or even mandate, the use of psychotropic drugs by personnel.

It’s a downright common practice.

If you sign up for the U.S. military today, you’re contractually obligated to allow Uncle Sam to inject you with… well, pretty much whatever he wants, whenever he wants, without telling you any more than he wants to about what you’re being injected with.

I’m not a big fan of the “not telling you what you’re being injected with” part, but the fact that injections are sometimes a job requirement… that strikes me as reasonable.

If a soldier is going up against an enemy known to use certain chemical agents, mandating the use of a prophylactic antiserum makes good sense. This could be true even if the antiserum has known, limited downsides. The wear-n-tear on an individual soldier’s body, in a utilitarian sense, may be more than justified when held up against the downsides to the soldier and his team, should he succumb to a chemical attack.

And militaries aren’t alone.

Many professions, implicitly or explicitly, require taking drugs.

  • Third-world doctors need vaccinations.
  • Lifeguards unwittingly but unavoidably take in daily transdermal cocktails from sunscreens and pool-cleaning agents.
  • Sommeliers and people who lead wine-tasting tours… well, you get the point.

But the usual pros-and-cons pragmatism of public opinion regarding professional drug use gets complicated when the drugs involved affect people’s minds.

Caffeine is the one substance that society gives a free pass.  No one seems up in arms about people making a Starbucks-stop on the way to work, or (gasp!) going for a second cup of joe in the staff kitchen.

All other psychoactive drugs, though, raise eyebrows.


An easy example: Despite the staggering numbers of Americans taking antidepressants, there’s a sort of society-wide “don’t ask, don’t tell” policy.  We know that some of our staff, co-workers, and bosses are using these things — but we’d prefer not to think about it.

I’m about to go off the rails and get all crazy now.

If you’re easily shocked, please brace yourself.

The fact is, there are situations where people are better at their jobs with their mental states chemically altered.

As a boss, I like my employees to be perked-up from caffeine.  (I’ve openly encouraged Caffeine Naps in my office.)

It may be that Sarah in Accounting is a lot more effective on her antidepressant meds than off them.

And if Bill in IT happens to maintain a Ritalin prescription that he doesn’t technically need — but it helps him to focus better — who am I to complain?

Now that I’ve revealed myself as the stray kid who slipped through Nancy Reagan’s “Just Say No” thought-net, and doesn’t believe that all drugs are always bad, always, let’s continue…

I want to talk about a class of professions where the professionals’ psychological states really, really matter: Those who are authorized and empowered to use violence.  The men and women who carry guns.

This is pure self-interest on my part: Someone’s thoughts and mood matter a heck of a lot more to me if he or she is potentially authorized to hurt me, and has the means and training to do so.

Today is a dark day for American law enforcement.

“To Protect and Serve” seems increasingly like a euphemism for “To Bully, Beat Down, and Skip the Consequences.”  Recent Hall of Shame examples are never hard to find.

The number, severity, and “you’ve got to be kidding me!?” nature of these stories make police aggression seem like a systemic problem.  All sorts of solutions should be explored (and, to be fair, probably are being explored): Changes to hiring practices.  Increased oversight.  Stronger carrot-and-stick incentives for good and bad behavior.

What about a chemical intervention?

How would you feel if Pfizer or Dow Chemicals or Merck invented a substance that could chill out the police a bit?  Not impair them functionally, but change their minds, maybe change the way they see the world…  And reduce their impulse toward violence.

I’m not talking “Don’t pull out your gun when you’re in danger”; I don’t want to endanger our police any more than I want them to endanger the rest of us.  I’m talking about “Don’t continue clubbing the guy who’s already collapsed on the ground” or “Don’t apply the Taser to the grandmother.”

If such a drug were theoretically available, wouldn’t it be worth a field-test?  A trial program in a few precincts, to see if excess police violence is damped down a bit?

I hope you’re nodding.

What if such a drug already exists?

What if it is MDMA?

Yeah, it’s an illegal drug.  A rave drug.  The main ingredient in Ecstasy*, the serotonin-dumping, dance-all-night-in-laser-light pill that flooded America in the 1990s and has been a Schedule-1 narcotic — both highly illegal and highly popular — ever since.   That drug.

* Ecstasy often contains speed and other additives, and is not pure MDMA.


Just humor me for a moment and try to forget that MDMA is an illegal, recreational substance.

Let’s look at the demonstrated positive effects on its users:

  • MDMA increases the release of oxytocin and prolactin (hormones associated with trust and bonding).
  • MDMA significantly decreases activity in the left amygdala, associated with fear and traumatic memory.
  • Animal studies have shown MDMA to dose-dependently decrease aggressive behavior.
  • Users often report ongoing improvements to their mood, and to feelings of trust and fellowship with others — long after the drug has dropped to physiologically undetectable levels.

I’m not proposing cops get high and go out on patrol.  I’m proposing cops get high, feel the love that MDMA seems to reliably bestow… and then sleep it off, and go to work a day or two later.

Am I crazy to suspect that the psychic nudge this drug might give would make police violence a little less likely?  Isn’t that what we’re after?

Okay.  I realize there are some “yes, buts” that I’ve got to address now…

“Yes, But… Will It Work?”

First off, that’s not the right question.  We should test this crazy idea, not assume I’m right based on a blog post.

I’m not proposing a policy.  I’m proposing a study.  

I’m making a testable hypothesis, and trying to convince you that it’s worth investigating.

“Okay, So… Could It Work?”

Now you’re talking.  I think yes, and here’s why:

What horrifies us about our increasingly militarized, overly-aggressive police force isn’t that it has the capacity for violence, but that this capacity is being too liberally applied.

Let’s assume we’re okay with bad guys getting a billy-club in the face or a firm tasing every now and then.  The important thing is to reduce the number of billy-clubs-to-the-face for everyone else.

It’s the duty of law enforcement personnel to make tough, real-world, real-time decisions on “does this situation merit violence?”

If you are a non-military U.S. civilian, you’ve got a 20 times greater chance of being killed by a cop than by a terrorist.

Now please permit me to interrupt with a quick diversion into statistics, so we can talk about something important called a “false positive.”  We’ll keep the math simple and this whole thing quick…

A “false positive” is when you’re looking for something — and you think you find it — but you’re wrong.

You’re separating out green M&M’s, and you mis-identify a brown M&M as green and add it to the green pile.  That brown M&M is a false positive.  (A green M&M that you miss, and doesn’t wind up in the green pile, would be a false negative.)

False positives, it turns out, are exactly what society hates, when it comes to cops and violence.

Let’s look at an example with simple numbers:

Officer Jones has 1000 interactions with civilians over the course of a year.  In each interaction, he’s got to do some mental calculus and decide “does this situation merit violence?”

And let’s say we’re the Jiminy Cricket of Public Conscience, and we know the correct answer is 10.  In 10 of these interactions, the person needs some billy-clubbing; everyone else should leave Officer Jones’ presence unscathed.  This would be the perfect-world scenario.

But the real world has error rates.  Officer Jones is not perfect; he mis-reads the situation 1% of the time.  In those cases, he either billy-clubs someone he shouldn’t, or fails to billy-club someone he should.

So the 10 times over the course of the year when he runs into an actual violence-deserver, with only a 1% error rate, chances are good that all 10 of them will get the club-treatment.  (9.9 is what statistics would predict, so pretty close.)


The problem is, that same 1% error rate, applied to the 990 people who don’t deserve clubbing, means that 10 people (990 x 1% = 9.9) are going to get thwacked, also.  Yikes.

So Officer Jones will beat down 20 people during the year, and half of them won’t deserve it.

What started as an innocuous-sounding 1% error rate has resulted in a 50% mis-application of violence, with 10 officer-delivered assaults on undeserving civilians.

The disparity between that 1% and the 50%, both of which are “true”, is why Mark Twain famously quipped: “There are three kinds of lies: lies, damned lies, and statistics.”
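(For the code-inclined, here’s the same arithmetic as a few lines of Python, using the made-up numbers from the Officer Jones example.)

```python
# The Officer Jones example, in code.  All numbers are illustrative.
interactions = 1000   # civilian interactions per year
deserving = 10        # cases where force is genuinely warranted
error_rate = 0.01     # Jones mis-reads a situation 1% of the time

innocent = interactions - deserving  # 990

# Expected outcomes over the year:
true_positives = deserving * (1 - error_rate)  # deserving people clubbed: 9.9
false_positives = innocent * error_rate        # innocent people clubbed: 9.9
total_clubbed = true_positives + false_positives

print(f"Innocent people clubbed: {false_positives:.1f}")
print(f"Share of clubbings that are wrongful: {false_positives / total_clubbed:.0%}")
# Prints ~50%: a 1% error rate becomes a 50% mis-application rate,
# because the innocent vastly outnumber the deserving.
```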

Thanks for bearing with me on that detour.

I needed to do that, so we can understand why an MDMA-induced tweak in cops’ instinct-to-violence might matter so much.

If MDMA could theoretically make a cop’s move for the billy-club 50% less likely, we’d be cutting our innocent-civilian beatings from 10 down to 5.  Not perfect, but a great start.

But wait — we’d also be cutting our righteous manhandling of violence-deserving criminals from 10 down to 5, wouldn’t we?  Well yes, we would — but there’s something important to consider here:

The only situation when cops should apply violence is when doing so will protect themselves or others from physical danger.  If a cop is dealing with someone, and that person moves from being a possible threat to being a definitive threat — that’s generally a pretty unambiguous move.  A person goes from yelling and waving his arms around, to throwing punches, etc.

So in nerdy terms, a false negative (a cop not using violence, when he should) tends to be a self-correcting situation — because no cop is going to ignore violence right in front of him — whereas a false positive (a cop using violence, when he shouldn’t) isn’t self-correcting, because it’s the cop who has prematurely upped the ante.

So what we’d be hoping for with MDMA, is a general de-itching of cops’ trigger fingers.  Making the pause a little longer, the hesitation a little greater, before Johnny Law commits to the use of force.

This approach works because the number of times violence shouldn’t be used dwarfs the number of times violence should be used.  This will always be true in civil society.   (In fact, in any non-zombie-apocalypse scenario.)
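To put numbers on that asymmetry, here’s a tiny extension of the sketch above.  The “restraint factor” is purely hypothetical; it’s exactly the effect size a real study would need to measure.

```python
# Extending the Officer Jones sketch with a hypothetical "restraint factor":
# suppose a calmer officer initiates force only half as often.
interactions, deserving, error_rate = 1000, 10, 0.01
restraint = 0.5   # hypothetical drug effect; the thing a study would measure
innocent = interactions - deserving

wrongful_before = innocent * error_rate       # 9.9
wrongful_after = wrongful_before * restraint  # ~5.0
hesitations = deserving * (1 - restraint)     # ~5 genuine threats met with an extra pause

print(f"Wrongful beatings: {wrongful_before:.1f} -> {wrongful_after:.1f}")
print(f"Genuine threats initially met with restraint: {hesitations:.1f}")
# Per the argument above, the hesitations tend to self-correct (a real threat
# escalates unambiguously), while wrongful beatings never do.
```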

So if we accept the premise that MDMA may reduce cops’ inclination to violence, then the answer to “Could It Work?” (or at least “Could It Help?”) seems to be a resounding yes.

“Yes, But… Tweaking With Cops’ Minds Is Unethical.”

Is it?  Because… we do this already.

A cop’s psychological state is society’s business.  (And we may soon decide the same about other professions like airline pilots, where professionals carry the lives of many civilians in their hands.)

We’ve all seen TV shows where cops — often griping about it — are forced to meet with a psychologist and “talk about their feelings,” etc.  Script-writers love this as an easy way to layer in character development, but there’s good reason why these characters’ real-world equivalents exist.  Police psychologists are representatives for us tax-paying civilians who want our peace officers mentally well-calibrated.  (Too frequently nowadays, we have reason to wonder.)

Normally when this “tweaking with people’s minds is unethical” objection comes up, those making the objection are not opposed to the general concept (tweaking), but to the specific methodology (in this case, psychoactive compounds).  Objections to “skillfully presented verbal arguments,” for example, don’t hold much weight with anyone — although such arguments can tweak people’s minds as effectively as any drug.

Let’s accept that we influence other people’s minds constantly.  Pleasant colors in hospital waiting rooms.  Soothing music in the dentist’s office.  Perfumes to attract romantic partners.  As social animals, it is our constant endeavor to manipulate the mental states of our fellows.

So let’s overrule this objection and move on.

“Yes, But… What About the Cops’ Physical Health?”

MDMA has physical downsides.

That said, MDMA seems to be not all that physically detrimental — dangerous, but manageable.  In a UK study published in The Lancet (one of the world’s oldest medical journals), Ecstasy ranked only 16th out of 20 on a list of dangerous drugs, based on harm to the user and harm to others.

A Personal Note…

Just in case you think I’m writing this piece as a recreational user who thinks the world would be a better place if MDMA were in every public drinking fountain, let me offer full disclosure:

I’ve never tried the stuff.

The truth is, despite ample opportunities, I’ve always been a bit unnerved by MDMA’s reputation for “serotonin recuperation hangovers.”  I’m not eager to do anything that could undercut my body’s natural production of serotonin (a “feel good” neurotransmitter).  So, at least for the moment, it’s not for me.

But then, I don’t carry a gun.  I’m not the one tasing septuagenarians or beating civilians to death while “taking them into custody.”

Modest physical downsides to someone like me — an unarmed, not-particularly-dangerous civilian — might not be worth the benefit of damping down my instinct towards violence…

But for a member of an increasingly dangerous police force, maybe it’s time to bite the psychopharmacological bullet and do the science to learn whether MDMA’s speculative benefits are worth its downsides.

I’m completely ignoring an elephant in the room: MDMA is the primary ingredient in something called “Ecstasy” — it’s reputed to be intensely pleasurable, and many cops might jump at the chance to take it.

I Am Not Anti-Police.

Not even a little.

I’m fully aware that most cops don’t do this terrible stuff.

The ones we hear about are ugly statistical anomalies.  But in a nation of 300 million people, including hundreds of thousands of cops, statistical anomalies will happen predictably, year-in and year-out.

This proposal is about strategically reducing those violent anomalies.

So, why not run a pilot program?

Take a few precincts across the country, and make the program strictly voluntary.  Cops who want to fool around with some MDMA, maybe even occasionally micro-dosing while on the beat, are free to do so.  Cops who want to abstain, can.

Run the test programs for 2-3 years.  See what happens to police violence during that time.   See what happens to police-community relations during that time.  If there are violent incidents, see how many of them are from the MDMA users vs. everyone else in the “control group.”

This is what science is about, right?

Make a hypothesis, test it, review the results, and make decisions based on accumulated evidence.
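And since “review the results” is where these proposals usually get hand-wavy, here’s a minimal sketch of one way the pilot data could be analyzed: a garden-variety two-proportion z-test comparing incident rates between the opt-in group and the control group.  Every number below is invented.

```python
# Minimal sketch: did the opt-in group have a lower rate of excessive-force
# incidents than the control group?  (All numbers below are invented.)
from math import sqrt, erfc

def two_proportion_ztest(x1, n1, x2, n2):
    """z-test for H0: both groups have the same underlying incident rate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical pilot results: officers with at least one incident, per group.
z, p = two_proportion_ztest(x1=12, n1=400,   # opt-in (MDMA) officers
                            x2=25, n2=400)   # control officers
print(f"z = {z:.2f}, p = {p:.3f}")  # small p: the difference is unlikely to be chance
```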

Hitler wanted his Wehrmacht to be energetic, assertive, and morally compromised.  He used a chemical cocktail of methamphetamine, booze and cocaine to accomplish that.  His goal was despicable, but his logic was sound.

I would like to see America’s police force calmer, less hostile, and more cognizant of the overall Brotherhood of Man.

If MDMA could edge our cops in that direction, isn’t it worth an honest-to-goodness social experiment?

Or are we so poisoned by Nancy Reagan Just Say No dogmatism — and afraid of finding a legitimate use for a “party drug” — that we’re willing to continue getting our asses beat by our peace officers?

Let’s grow up, get serious, and do some damned science.


Acknowledgment to this excellent article by ex-police-officer Redditt Hudson, on America’s problems with violence and institutionalized racism within the police community.
