The world loves coffee.
To say “people love coffee” is a little like saying “people love sex.” In fact, going by the straight numbers — two billion cups of coffee drunk per day vs. slightly fewer fornications — people may like coffee more than sex.
Muslims, Christians, Jews, Atheists, Europeans, Africans, Americans, Flat-earthers, scientists, terrorists, and Red Cross volunteers… Coffee-drinking cuts across all social divisions and unites us all in one big human family.
Homo sapiens caffeinophilus, you might call us.
Coffee is everywhere. The thought of caffeine-abstinence for the 90% of adults worldwide who partake daily is headache-inducing — both figuratively and often literally.
And so drastically reducing the world’s coffee supply — as unpopular as that would surely be — sounds less like reality than it does a bad plot device by a villain in a James Bond movie financed by Starbucks.
The coffee bean just doesn’t fit well with our preconceptions about endangered species.
But this grim prospect may be closer to reality than we’d like to think — and without a mustache-twirling mastermind as a convenient scapegoat. If we lose coffee, the perpetrator will be climate change.
And that means that ultimately, the villain will be us.
We all know the canary in a coal mine expression, right?
Just in case, here it goes: In the bad ol’ days, in deep underground mines, sometimes noxious gases would seep up and kill miners. Canaries breathe fast and are apparently more cardiovascularly fragile than your average coal miner. So in the days before high-tech early warning systems, a dead canary — assuming you’d had the presence of mind to enter your coal mine carrying a live one — was a helpful clue that you should get the hell out of there.
Canaries (and similar warning bells) work well in cases of acknowledged and immediate mortal peril.
But they don’t work so well in situations where the thing that kills you takes a while to do so. You’ve never noticed canaries smoking Marlboros or eating McDonald’s Extra Value Meals alongside consumers of those products, and there’s a good reason for this. It’s not that canaries wouldn’t die from these things, given enough exposure — they probably would; it’s that they wouldn’t die fast enough to merit the annoyances of long-term bird ownership.
Canary systems are great when the terror groundwork has already been laid. For nineteenth-century coal miners, the inciting fear wasn’t about the dead canaries; the birds were just a disposable commodity.
What made them care in the first place were graveyards full of dead coal miners.
It is said there are two ways people can learn not to do things: by burning their own hand on the stove, or by heeding the warnings of those who already have.
The latter method is considered preferable.
The problem is, these two options apply only to individuals. When it comes to whole societies, only one of them seems to be available… And it’s not the preferred one.
Societies learn by hands-on experience (pardon the pun). It’s not that history fails to provide national-level cautionary tales — very much the opposite. It’s just that regardless of relevant historical warning signs, nations and their leaders always seem able to cook up plausible-sounding dismissals that “things were different then/there” and sweep aside the warnings.
After all, history is endlessly debatable, and the lead-ups to most grand-scale disasters aren’t as clear-cut as a hand on a hot stove.
Today, global climate change is probably the largest, and least acknowledged, societal train wreck in progress. And true to form, humanity seems determined to burn its own damned hand on its own damned stove.
Environmental changes have spelled doom for human societies before — at limited, regional levels. (Jared Diamond’s 2005 book Collapse contains sufficiently chilling examples for those who want to ruin a few nights’ sleep.)
But these stories aren’t well-known enough, they aren’t scary enough, and they aren’t connected enough in the popular imagination to our species-wide bad habits. They’re not an immediate, whack-you-in-the-face threat like the dead bodies of your coal mining buddies.
And so we continue to reach toward the environmental stove with our own damned hand. (In fact, it’s a better analogy to say both hands. We’ve got exactly one planet; there is no back-up hand.)
Luckily, there is a middle-ground method of learning, which societies do seem capable of. It’s the inoculation model — akin to burning your fingertip, but recoiling in time to save your hand.
Getting these historical fingertip-burns right is a tricky thing. Societies are fickle, disperse in their attention, and easily distracted. Death and dismemberment are almost always part of the recipe necessary to catch a whole society’s interest to the point where its self-defense instincts are triggered.
Horrible as each of them was to the victims, I would cite both the bombings of Hiroshima and Nagasaki and the recent Ebola scare as successful historical inoculations.
With Hiroshima and Nagasaki, the world saw immediately just how serious these atomic weapons technologies were. The smoky silhouettes of incinerated civilians on still-standing walls were as motivating to strangers half a world away as dead coal miners had been to their small-town coworkers three-quarters of a century before.
And despite some close calls, this terrifying debut has kept our collective hand off the nuclear stove for over 70 years since.
With the recent Ebola virus outbreaks — though some have criticized the health care community’s response as “panic-mongering” — the fearful prospect of a widespread epidemic has raised public-safety preparedness far above where it would otherwise be. “False alarms,” when it comes to biological pandemics, are something we should be cheering for, not complaining about.
(By contrast, the resources brought to bear against a distinctly regional threat like ISIS are wildly disproportionate to the risk. ISIS’s methods are designed to be newsworthy, but the organization is not a threat to people living in New Brunswick, or Johannesburg, or Detroit. For a pandemic virus, the opposite is true. And Global Climate Change, in this sense, is a lot more like Ebola than it is like ISIS.)
The sinking of the Maine, a U.S. warship anchored in Havana harbor, sparked the Spanish-American War (Cuba, back then, was a Spanish colony) and led to an unmitigated American ass-kicking of the formerly first-rate global power that was Spain.
The Maine exploded and sank under debatable circumstances — it may have been an honest-to-goodness technical accident rather than foul play — but nevertheless, “Remember the Maine!” was trumpeted in newspaper headlines as a rallying cry for American patriotism.
It met the 1890s standards for a fingertip-burn that merited an immediate, dramatic response.
All the examples that I’ve given so far — Hiroshima, Ebola, the sinking of the Maine — had a human body count. Maybe that’s a prerequisite for societies to wake up and smell the coffee.
But maybe, just maybe, times have changed?
Could the tragic death of something non-human (but well-loved) cross the fingertip-burn threshold?
If you’re not a coffee snob, chances are good that you know one. And any self-respecting coffee snob will tell you there are two primary strains of commercial coffee: Arabica and Robusta.
If you’re drinking coffee as you read this, odds are good that it’s Arabica.
This also means that your cup of joe has been bred from a very distinct lineage, originating in the mountains of Ethiopia. These recent, confined origins mean that our commercial coffee has very little genetic diversity and is particularly vulnerable to climate change. Put simply — a disruption that kills one of ’em is likely to kill all of ’em.
Arabica plants grow best in a narrow temperature range between 18 and 22 degrees Celsius, and they require gentle, regular rainfall.
With global weather patterns destabilizing, Arabica is in jeopardy. Researchers predict that agricultural lands capable of supporting Arabica could fall by half in the coming decades. To make matters worse, short-sighted efforts to improve coffee yields from the diminishing acreage could speed soil depletion and further accelerate shortfalls in production.
As Arabica’s availability dips, will we switch to the bitter Robusta alternative? Will we grit our teeth and just keep paying more for the dwindling supply, until we long for the days of free refills and coffees that cost only $3.50?
Worse still, some climatologists predict that the world’s prime coffee-producing regions — places such as Vietnam, India and Central America — will be among those hardest-hit by climate change.
Are coffee’s days numbered?
To some, all this may sound like panicked fear-mongering. After all, the world drinks three-quarters of a trillion cups of coffee annually. We couldn’t just run out…
It’s happened before.
Not with coffee, but with something almost as unthinkable.
I can hear you scoffing. “You’re not fooling me,” you say. “We’ve still got bananas.”
Well, yes and no.
The bananas we eat today are not, it turns out, our great-grandparents’ bananas. Bananas became globally popular not long before the aforementioned Spanish-American War. In fact, a worldwide banana craze led to the initial development of refrigerated shipping. But I digress.
At the time, the world was hooked on a strain of banana known as the Gros Michel. (There are many, many strains of bananas — thousands of them. Most, you wouldn’t want anything to do with.)
But in the 1950s, a banana-blight began ravaging the Gros Michel strain. It got worse, and worse, and, well… worse.
Devoted horticultural scientists, bullet-sweating banana plantation owners, and all the financial might of companies like Chiquita were unable to save the Gros Michel.
Season after season, the world watched helplessly as the beloved banana strain went extinct. We were powerless to stop it.
The stopgap was a second strain, the Cavendish. It’s the one that we know today, simply, as “bananas.” The Cavendish is somewhat smaller than the Gros Michel. It rots faster, it bruises more easily, and according to those who lived at a time when Gros Michels were available to taste-test and compare, our modern Cavendishes taste worse. Gros Michels were sweeter, with a creamier texture.
They were, apparently by unanimous consent, a better banana.
But it was our modern Cavendish that was immune to the banana blight. And so, like the rat-like ancestors of mammals who inherited the earth when the dinosaurs died out, by the early 1960s the Cavendish had assumed the title of “banana” for a dispirited but option-less global consumer public.
The Gros Michel banana blight was bad. If you owned a banana plantation back then, it was downright disastrous.
But it was an isolated disease, affecting a solitary crop. An important crop, sure — but it wasn’t a domino poised to tumble an entire ecosystem or undercut global food production.
In short, it wasn’t worth freaking out about.
It didn’t qualify as a fingertip-burn.
“Never let a good crisis go to waste.” These were the words of Winston Churchill, but they encapsulate an idea as old as politics.
Bad news is a great motivator. A society amped up on adrenaline, fear, righteous indignation — is a society ready to get stuff done.
So imagine the crisis of a Coffee-less Future.
We are a world of addicts. (Let’s be honest, okay?)
And Climate Change — the red-handed culprit should this happen — is a villain who won’t stop at one crop, or one industry, or one continent…
The loss of coffee wouldn’t just be a disaster to breakfast, an existential threat to baristas, or a stock sell-off for SBUX…
It could be a fresh batch of dead coal miners.
A culinary mushroom cloud.
I like coffee. I drink it. I’ve even been known to photograph it on occasion. But if a few billion caffeine-withdrawal headaches could snap us all to attention on the inadvisability of playing chicken with the global environment…
I’ll be happy to switch to tea.
If we can keep our collective hand off the stove for the comparatively low price of an extinct beverage, we should count ourselves lucky.
Maybe a tomorrow without coffee is exactly the rude awakening we all need.
There’s something it’s hard not to notice when you speak with people about psychedelics.
Most pop culture portrayals of psychedelics have discussions that begin (and all too often, end) with the word “Duuuuuuuude.” This may originate with the character “Shaggy” from Scooby-Doo, the cultural progenitor of cartoon druggies. Something about Shaggy clearly struck a chord; he’s been spliced and cloned into dozens of equivalent, well-meaning imbeciles across all media ever since.
But with all due respect to mystery-solving dogs and their human sidekicks, when you talk with real users of psychedelics, the topic expands well beyond “Duuuuude.” People are eager to talk about their experiences. And it’s very rarely just “I was so fucked up” or “I partied like it was 1999.”
Well, sometimes it is that, but those things are the jumping-off point, not the follow-through.
People who become psychedelics aficionados — those who maintain an interest after their last rave or beyond their first bad trip — don’t want to talk about shiny colors or how their house cat suddenly turned telepathic. They want to talk about what their psychedelic experiences have taught them about themselves.
It’s a weaker punchline than “Duuuuude.”
And often a lot more confusing, long-winded, and deeply personal.
But as these are what real (i.e. non-cartoon) psychedelics users find comment-worthy about their experiences, it seems worth paying attention to.
What follows is my pet theory on why psychedelic experiences can be so transformative for people. But first, a question:
Why do people go to psychologists — or even to friends, family members, and others who know them well — to get advice on their own lives?
First, because — whether we’re narcissists or self-haters — we’re all deeply interested in ourselves. And it’s always fun to get other people to discuss this best-loved of topics with us.
And second, because we’re extremely biased when it comes to ourselves. We are not good judges of our own behavior, or recognizers of our own idiosyncrasies. We are the water we swim in — and we are thus both omnipresent and invisible in our own lives. With less freedom than Peter Pan’s shadow, we follow ourselves around 24 hours a day.
At some point, you’ve done the optical illusion where you stare at a high-contrast image for 30 seconds, then look at a white wall, and you can see the “burned-in” negative image of whatever you’d been previously looking at. (If you haven’t done this, were you never a kid?)
Your brain’s optical system — even in that short time span — had constructed a sort of overlay to “balance out” the strong contrasts in your visual field. This is similar to how a camera automatically controls for exposure, so overly-bright parts of an image don’t “white out” and lose detail. In your own optical system, this contrast-reduction helps sensitize you to variations in your visual field. “Variations,” in this case, meaning movement. (Maximizing awareness of nearby moving objects probably needs no justification. Think: predators, prey, pies-in-face, etc.)
So here’s the analogy: The nuances of our own behavior are the constant, unchanging elements in our own experiential world. From our point of view, that is.
That’s why someone else’s opinion — the friend, the psychologist, even the stranger who tells you that your fly is open — can be so incredibly valuable. We are a moving, high-contrast object in the perceptual experience of that person’s life… so we show up to them with greater clarity. We “pop off the background” in a way we can’t do for ourselves.
Just like they do for us. And just like they can’t do for themselves.
However, for all the upsides of getting an outside perspective, there is undeniably value in self-reflection as well. Although it’s seldom without effort, we can identify things about ourselves that others can never tell us, because we’ve got a huge advantage over them…
We have access to a far greater data-set about our own world, our own behavior, and our own experiences than any outside observer has. (At least, this was true in the era before smart phones. Nowadays, my iPhone may know more about me than I do — in a facts-and-figures sort of way.)
The conjunction of these two tool-sets — the memory library we store about ourselves, and the perspective offered by someone else, watching from the outside — is ripe with possibilities for new revelations about our behavior. What’s effective, what’s ineffective, what as-yet-untested strategies may prove to be effective, and why.
Psychedelics — in my theory — cut out the middleman.
During a psychedelic experience, the user’s view of reality is profoundly affected, like looking through a prism, or a kaleidoscope, or (to keep with the idea of an outside perspective) someone else’s eyeglasses.
And yet — here’s where the magic happens… Looking from that outside perspective, the psychedelics user gets to page through his or her whole catalog of self-knowledge. The smorgasbord of memories and details even close friends don’t know — whether because these things are too private to admit, or too mundane to ever come up in conversation.
This fertile blend of outside perspective plus inner knowledge is the essential recipe for the insights that psychedelics can sometimes provide. Of course, a psychedelically-skewed perspective could also be so confusing as to be useless. Your pet goldfish will see you with a perspective that’s even more alien than your psychologist’s — but your goldfish’s perspective is less likely to be instructive when optimizing your behavior.
“Know thyself” is a quote attributed to Socrates — and to nine or ten other long-dead thinkers. Self-interested as humans are, there’s no reason to think that just one person, or one culture, came up with this idea. That’s probably what makes it such good advice.
Psychedelics are one means for people to know themselves better. Maybe not the best means, certainly not the only means, and for some people, not even a safe means. My comparison of psychedelic insight to psychological counseling is no more or less serious than my comparison of psychedelic insight to talking with a goldfish.
To put that another way: some psychedelic “insights” might not make it past the Duuuuuude threshold. But the same will be true of talking with a psychologist, or of any other path to self-knowledge.
If knowing one’s self were easy, everyone would be doing it.
Psychedelics aren’t easy.
And neither are they a direct path to meaningful insight, any more than the discovery of fire was a direct path to the steam engine. My point isn’t to evangelize, or even to recommend – it’s just to propose a mechanism behind the age-old, cross-cultural claims of the value of psychedelic “visionary” experiences.
Of course, there are also probably epochs-old, cross-cultural versions of people saying Duuuuuuude and their friends laughing at them.
But ultimately, the punchline is the process.
It’s the mental discombobulation of psychedelic states that gives them their utility.
Somewhere on the biochemical middle-ground between sobriety and being “completely fucked up,” a psychedelics user may just find himself on an optimal cognitive plateau, offering an unexpected view toward self-discovery.
It took me a while to realize that I was the crazy guy.
There’s a saying among poker players — I assume for good reason — that goes like this: “If you can’t spot the dumbest guy at the table… It’s you.”
I’m starting to think that this may be a special case of a broader rule that goes well beyond poker.
I was flattered to wake up yesterday to a request to join a radio show panel on the nationally syndicated “To The Point,” produced by KCRW Radio out of Los Angeles. They were doing an episode about smart drugs — specifically, “moda” — and wanted to know whether I would join their expert panel alongside three other guests.
The producer implied (impressively, without ever quite saying it) that I wasn’t supposed to ask who the other panelists were. The set-up would be a little like Roman gladiators at the Coliseum, not knowing in advance what would come out from behind the arena doors. This makes for a livelier show for the audience.
KCRW is the radio big-leagues; I hadn’t just heard of them, I’ve listened to them. They’re probably the only radio station in Los Angeles I can find on a dial. Plus, this subject was right up my alley; I’ve used Modafinil on-and-off for years.
So this morning I dialed in to KCRW and was put into their digital bullpen, where they keep call-in guests on hold until the producer signals “it’s time,” and then suddenly the host is addressing you with questions.
(If you’ve ever called in to a radio show and been queued to ask the deejay to play a song for your sweetheart or to win concert tickets – it’s exactly like that.)
The first panelist introduced was a health reporter for VICE News, Sydney Lupkin. KCRW broadcasts to a general audience, many of whom would never have heard of smart drugs — and Sydney, along with host Barbara Bogaev, did a great job of opening the topic and implying a simmering hotbed of controversy around the use of “moda.” (The half-clandestine use of this abbreviated term was presented almost as a counterculture nod, like calling marijuana “weed” or Barack Obama “Barry.”)
And before I knew it, I was up next, answering a question about “how Modafinil feels when you’re on it.” I said my piece and then passed the mic, unsure if I’d said too much or not enough — it’s tough in these audio-only situations with multiple parties and no eye contact. You never know if you’re blabbing too long or if the host is praying for you to fill space.
But in this case, they needed to move on to get from me to the real Smart Drugs Wild Man. Certainly, with the undertones of “Modafinil running amok on our campuses,” one of the remaining two guests was sure to be a strung-out 19-year-old with 500 milligrams of Modafinil in his veins, who hadn’t slept since Tuesday.
However, the next guest proved to be Professor James Giordano, from the Georgetown University Medical Center. His speech and manner and credentials were all impeccable, and I wiped sweat off my brow when he backed up some points I’d made in my earlier monologue: 1. Smart drugs are out there. 2. Some, like the racetams, have strong safety and efficacy records and a multi-decade pedigree. 3. Probably the major concern for would-be users is identifying good providers in a “gray market” retail landscape.
We went to a commercial break, and for about a minute the audio went dead; I had time to google my two unveiled co-panelists, and to wonder about the third. The show had such an expectant feeling to it, an undercurrent that something shocking is happening here – prepare to be shocked! I was expecting Johnny-the-University-Kid-Who-Never-Sleeps. Or maybe Otto-the-Online-Modafinil-Retailer, coming on with a digitally-garbled voice, hinting at the value of his product while slinging accusations at “The Man” for keeping his business underground.
But soon the commercial break ended. We were back.
The next voice was a familiar one: Dr. Jeremy Martinez, from the Matrix Institute on Addictions — whom I’d interviewed previously on Episode 80 of my podcast. Dr. Martinez is a leading expert on addictions and addictive behavior, practicing in Los Angeles — which is also the big leagues, if you’re a doctor specializing in addiction. Like Professor Giordano before him, Dr. Martinez was well-spoken, straight-laced, and (befitting an addiction specialist) probably a bit conservative in his approach to the modulation of human brain chemistry.
But wait a minute… Were we at four panelists already?
Had I gotten it wrong? Had the producer whom I’d spoken with said it would be me with four other panelists?
I was pretty sure the answer was no, but it hardly made sense to have a panel-discussion where everyone on the panel seemed to be in such agreement. “To The Point” isn’t Family Feud or some faux-news fight-bait show… But still, this is American mass media; there are rules that must be obeyed.
And then I felt a sinking feeling, as the verbal baton was passed back to me for another question…
Just like the poker player realizing there’s no one dumber at the table…
I was Johnny-the-University-Kid-Who-Never-Sleeps. I was Otto-the-Online-Modafinil-Retailer.
I was the Cognitive-Enhancement Wild Man, the one whom the conservative members of the KCRW audience were giving dirty looks through their radios, while I waved my pom-poms for these so-called smart drugs.
But I was the weirdest guy they could find?
I was the far, lunatic-fringe edge of the pro-cognitive-enhancement spectrum?
I was — dare I put it so bluntly? — the cautionary warning of what your college kid might turn into?
I consoled myself with the thought that maybe there’d been an accident, and that Johnny-the-Non-Sleeper was unavailable on account of pan-hemispheric cognitive over-stimulation. I readied myself for the task. If someone needed to hold the line for the pro-enhancement crowd, I would do my part.
Luckily, the next question posed to me was one that’s always seemed as trivial to answer as it is amazing that it gets asked in the first place…
Should we be “worried” about the use of smart drugs?
Is it like “cheating in sports, with steroids”?
If there is one question where I am willing to let my freak flag fly high, this is it. I came out of the gates swinging. I probably frothed at the mouth a bit. (Mouth-froth-concealment is one great upside of both radio and podcasting over television.) My answer — constrained for the radio — was necessarily bite-sized, but I’d like to riff on it at greater length here, because this is the question that won’t die.
It seems to me so absurdly mis-applied, and yet it’s an entrenched part of the public discussion. “Are smart drugs like steroids?” With the implications: “Is using them ‘unfair’ to the other ‘competitors,’ irrespective of the risks to the user himself?”
But to pretend that this analogy holds is to pretend that we live in a society where muscles are more than a mating display, or where intelligence is only a nifty parlor trick — essentially no big deal.
This could not be further from the truth.
If a Barry Bonds type takes steroids and balloons his athletic ability, maybe he hits a few more home runs. Records are broken; next year’s baseball cards and tonight’s ESPN highlight reel will look slightly different. But real effects on people’s lives? Zilch. Nada. With all due respect to physical performance, we no longer live in a world of blacksmiths and rickshaw operators. Physical musculature is of great use to the individual, but none to society.
Now let’s look at the corresponding situation in intelligence. If the intellectual equivalent of Barry Bonds — maybe this is Stephen Hawking, Elon Musk, or Ray Kurzweil (pick your favorite genius) — were able to boost his or her cognitive performance by the equivalent of “a few home runs,” it would translate into a greater chance of a Unified Theory of Physics, or of colonizing Mars sooner, or of getting closer to mind-uploading. This isn’t about baseball cards; these are outcomes that fundamentally alter the trajectory of our entire species and its possibilities in the universe.
To equate this with “cheating, like steroids” is not in the same ideological ballpark.
It’s not in the same league.
It’s not even the same sport.
No, we should categorically not question the ethics of people voluntarily using cognitive enhancement to “get ahead.”
Not any more than we should question the ethics of a woman who uses perfume to smell better, or a man who squints on the golf course so he can see a little better. We all use the best tools available to us, constantly — and for good reason.
Life is not a zero-sum game, and the first people to adopt an effective new tool may indeed gain an advantage that later adopters resent… But in the end, the leaders in a field push the whole field forward. Barry Bonds, like it or not, made baseball better. He pushed the envelope, and even if it was cheating, he established new horizons.
But as I said: The horizons of baseball, they don’t matter that much.
The horizons of human cognition, though… They matter as much as anything we know about, or could even conceive of. From our current vantage point as the sole thinking species on the only known inhabited planet in the universe, the horizons of human cognition are literally unsurpassed in importance.
So yeah, okay…
Maybe I am the Lunatic Fringe.