
If you are reading this at an inopportune time, you need to keep reading.

It might be the middle of the night.

You might be procrastinating while at work.

But either way, the last thing you should be doing right now is reading a completely optional blog post you just clicked on.  (Despite the relative awesomeness of the blog.  But I digress…)

Maybe you’re reading this at an appropriate moment for you to be killing some time online.  Sunday afternoon on your iPad, for example.  Or maybe you’re bored on a subway commute.  If so, this article is not for you.  You have an appropriate relationship with Internet time management.

But this post is for people like me.

It’s for people who default to online.  Internet Addiction, they call it.  You’ve probably heard of this condition.  And even if you haven’t, the name kind of says it all.

I am not a textbook-case Internet Addict.  I don’t even have a Facebook account.  (This is partially because I know that having a Facebook account would turn me from a functional addict into the Internet’s version of a wakes-up-in-the-gutter-with-needles-sticking-out-of-his-arms addict.)

The things I do online are not necessarily representative of most Internet Addicts.  But despite that, I do share one defining characteristic with my addicted brothers and sisters…

I keep coming back to the online world.  By default.

Sometimes even when the physical world dangles very worthwhile carrots.

The Lost Continent of My To-Do List

As an Internet Person, I’ve got my obligatory to-do list.  In fact, a couple different to-do lists, in different formats.  (For me, it’s Asana, Trello, and Workflowy — dependent upon the project.)

And one thing I’ve noticed with increasing regularity for the past few months is that when I’m organizing my days, the to-do’s involving the physical world…  They tend to get chucked to the “optional” section at the end of the to-do list.

Meaning that they’ll bounce to tomorrow.  And then the next tomorrow.  And then the tomorrow after that.

(Does anyone ever finish their full daily to-do list?  If so, please don’t answer.  I hate you.)


So it turns out, I’m ignoring physical reality.

Exactly what kind of to-do’s are these?  Nothing all that fancy.  Some of them would be easy kills.  Trips to the grocery store.  Scanning physical papers that would be so easy to digitize if I’d take the 15 minutes and just be done with it.  Going to my storage locker and pulling stuff out of boxes that I’ve wanted-but-not-really-needed for going on three months now.

The digital world is just so friggin’ convenient.  And getting more so.  Amazon Prime is the ultimate enabler.  TaskRabbit doesn’t help either.

The things I find myself actually doing in the physical world are — this is embarrassing — the bare minimum requirements of human physicality.

Eating.  Sleeping.  Bathing.  Exercising.  Sex.  Full stop.

If you think I’m exaggerating, let me stress:  I’m writing this blog post instead of doing the physical-world to-do’s on my list for today.

Hash-tag: #iSuck

Starting Next Week, I’ve Got A New Strategy

I’m calling it…

(Yes, it’s got a catchy name…)

Physical World Phriday

Fridays will be my day off-the-laptop.  All those never-quite-gotten-to to-do’s in the Physical World… Friday will be their day to rise front-and-center, and get the attention they deserve.

And hopefully, to get mercilessly done, like the virtual to-do’s on my list eventually are.

I anticipate that the laptop-less-ness of next Friday will be brutally difficult.  I’m so strapped to it, normally, that I rarely use my smart phone as an Internet device, which will make me a bit more digitally isolated than most people nowadays.

But that’s the idea, isn’t it?

#PhysicalWorldPhriday

I’ll be hash-tagging it on Twitter at 11:59 on Thursday.  And then…

I’ll be gone.



I’m almost sure that my last haircut improved my health.

Not in the ways one might expect.  I wasn’t harboring lice or vermin.  It wasn’t a profoundly dangerous hairstyle, likely to get caught in industrial equipment and drag me down with it.

But it made me look like the me I was used to.

And whacking it down to the scalp — which I did, in a slight fit of “oh, hell with it” — was more of a change than I at first expected.

Face-Blindness for the Rest of Us

There’s a condition called prosopagnosia, which some scientists estimate affects almost one in forty people.  (I find this hard to believe, but it’s a “spectrum disorder,” much worse for some people than others.)  You know the people who say “I’m not so good with names, but I never forget a face?”

Well, people with prosopagnosia do not say that.  They do forget faces.  In fact, they never really recognize them in the first place.

For most of us, faces are a very special part of our visual reality, pulled from our vast data-stream of visual inputs and given preferential treatment by an area of the brain known as the fusiform gyrus.  You know how your smart phone has facial recognition software that puts a little box around people’s faces and makes sure to adjust focus and lighting to protect and emphasize them, versus other parts of the image?

Well, your brain — in particular, your fusiform gyrus — is constantly doing the same thing.

Unless, that is, you have prosopagnosia — which can be congenital (the fusiform gyrus never adequately learns to do its job) or acquired (brain damage bangs it up, and afterwards facial recognition takes a dive).  Prosopagnosics, as they’re called, have brains that function much more like an old school camera with no on-board computer, treating all parts of the visual field the same, not playing favorites with faces at all.

This is generally a bad thing.  Egalitarian ideals like “all visual elements are created equal” don’t really work so well in practice.  Not with vision.

Prosopagnosics, depending on the severity of their condition, range from having a bad memory for faces, to literally being unable to recognize themselves in the mirror.  They compensate by identifying friends and loved ones by secondary cues, like their manner of dress, their voice, or how they move.
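(A quick aside for the technically inclined: the camera analogy above is easy to make concrete.  Below is a minimal sketch of software-side face detection using OpenCV’s bundled Haar-cascade detector — the detector choice and file names are illustrative assumptions, not what any particular phone actually runs.)

```python
# Minimal, illustrative face-detection sketch (assumes: pip install opencv-python).
# "photo.jpg" and "photo_with_faces.jpg" are hypothetical file names.
import cv2

# Haar-cascade face model that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale

# Returns one (x, y, w, h) bounding box per detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Draw the little box around each face, just like the camera app does.
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_faces.jpg", image)
```

The point of the sketch: the software singles out faces for special treatment before doing anything else with the image — which is, roughly, the fusiform gyrus’s job description.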

Now, it should be mentioned — I don’t have prosopagnosia.

We’re All Icons

If you’re not a prosopagnosic, when you first meet someone, you’re aggressively cataloging details about their face, taking notes for later (unconsciously, at least), and drawing inferences about what you might expect about them, based on their facial idiosyncrasies.

Like all stereotypes, these guesses might not be borne out by further real-world data, but think about what comes to mind if the face of someone you meet is characterized by…

  • Ruddy-colored cheeks with visible capillaries
  • A deep, caramel-colored tan
  • Strong vertical creasing in the forehead, above the nose
  • Orange lipstick

In each case, you’ll probably take these as personality-clues as to what you might expect from a person.  (This is especially true in cases where the clues seemingly disagree with each other and imply a conscious choice — like a friend I have who is in his late 40s, but dyes his hair almost a canary-yellow “blonde.”)

But as we get to know individuals better, personal experience trumps facially-derived guesswork, and (again, for non-prosopagnosics) the faces of people we know come to represent our body of knowledge about that individual rather than the type of person we’d expect, based on their looks.

In other words, we recognize people’s faces as icons for the people we know, rather than as advertisements for the type of person we might expect.

The Mirror Works Both Ways

The statement above is true even when the face in the mirror is us.

I was so used to seeing myself looking, well… the way I normally look, that a massive hairstyle change* was enough to momentarily shatter the visual iconography I had for myself.

* Full hair-eradication, more accurately.  Think Kobe Bryant or Bruce Willis.

This isn’t to say that I had any “Who am I?” identity crisis following my haircut.  Very much the opposite.  It was a “Who is he?” moment.

Later in the afternoon on the day of my haircut (after the initial shock had worn off), I was doing a workout.  I had a mirror nearby and caught a glimpse of myself — shirtless and now completely bald — and for a moment I didn’t recognize myself.  I knew it was a mirror, but it looked like not-me.

Honestly, it was reminiscent of all the prison movies where the hero gets captured and has his head shaved and then is hosed down to de-louse him.  When those scenes happen in the movies, we’re always struck with the thought “wow, they’ve stripped him down to his animal self.”

And sure enough, with my visual icon-of-self disrupted, that’s what I saw in the mirror: the animal chassis of me, not my well-worn identity.

And that is why I think the haircut improved my health.  Or will, anyway…

It’s Good To Think Of Yourself As Meat, Sometimes.

Western society has a long and confused history with the Mind-Body Problem.

I’m not going to dive into the details here (but if you’re interested, there are about 10,000 books on the subject), except to say that as a rule, people tend to fall into two opposing camps:

  • Those who exult in the mind (often abstracted into the “ego” or “identity” or “immortal soul”) and view the body as unfortunate-but-necessary baggage.
  • Those who reject the artificial, illusory mind/body distinction and encourage us to think of the two holistically, for the improvement of each — er, it.  (See?  Everyday language gets tricky when you commit yourself to this stance.)

Normally I find myself siding with the second camp.  The “it’s all a closed loop; physiology affects the mind; and the mind’s choices feed back into our physiology, and so on” position.

This makes good, solid sense to me.

And yet…

I can see where the fusiform gyrus — so marvelous in its function — creates a built-in logical fallacy for us.

We see ourselves (using our objective visual system) and because of our tendency to iconize the people we know, what comes to mind is our self (either our identity/soul, or our “holistic self” — either of which amounts to the same thing, in practice).

We look in the mirror and see the psychosocial aspects…

  • Do I look sexy for so-and-so?
  • Will this suit make me look impressive for such-and-such occasion?
  • Do I look older than the me from last year?

…and 99 times out of 100, the identity-considerations leap front-and-center and distract us from thinking about the hundred-odd pounds of primate staring back at us.

If we thought about that primate, we might ask…

  • How is this specimen?
  • If I were an alien, going to the galactic pet store to buy a human pet for my alien kid, would I pick this one?
  • Is he going to be fun to play with?  Strong for work?  Lively?  Tasty?

Catching that unrecognized me in the mirror, I had a flashing moment where I didn’t see my identity, I saw the body I inhabit — and that brief instant was a powerful reminder.

Pour Your Foundation.

Whichever end of the Mind-Body Problem you find yourself siding with, it’s the body that’s the physical substrate of our existence.

To put that less nerdily:

“If you don’t take care of your body, where will you live?”*

* Somebody said this before me, but the speaker’s name is lost to history.

I’m like everyone else; 99.9% of the time I’m caught up in ego-related concerns — the things I want to do, be, see, experience.  And the maintenance of the meat-package that I come in — things like brushing my teeth — mostly seem like annoying impositions on my goals.

How many more inventions might have come from Edison if he hadn’t had to brush his teeth twice a day?

Could posterity have a few more Shakespeare plays if the Bard hadn’t had to use the loo?

And yet, it’s probably the opposite that’s true.  Maintenance work on our physical selves is a short-term loss, long-term gain.  (Absurd but true: If Shakespeare had never gone to the restroom, he’d have been in too much pain to do any writing.)

What resulted for me from my moment of non-self-recognition is this:  The thinking me is going to give a little more time, effort, and attention to the care and feeding of his animal chassis.

Sure, the animal-you is easy to forget about.  You can ignore him for a long, long time with little consequence; he’s slow to complain.  But eventually it will be he who is the primary determinant of how far you can go.

And that is a fact worth recognizing.

The correlation between being intelligent and being correct is, unfortunately, not as strong as we’d like it to be.

If smart people were as right as they are smart, knowing what to do all the time would be a lot simpler than it actually is.  But, alas.

A case in point is an article entitled “The New Normal,” published recently in Georgia State University Magazine, highlighting the thinking of uncontested smart person (and Smart Drug Smarts podcast alumna) Nicole Vincent, associate professor of philosophy and associate neuroscience faculty member at GSU.

Unfortunately, the key idea of this article is just plain wrong.

The article presages a future where society has to deal with the nasty, unintended consequences of ever-more-effective cognition-enhancing drugs.  In this hypothetical dystopia, health/safety and efficacy concerns have all been addressed; the problems presented are purely social ones.

The title – “The New Normal” – refers to the social expectation that everyone will be using these drugs, for fear of underperforming and not keeping up with the cognitively-enhanced Joneses.

Citing high-responsibility professions like surgeons and airline pilots, Vincent warns of creeping public pressure for individuals to use the best-available cognitive enhancers to maximize their performance.  “You’re performing a job that many people’s lives depend on,” she says.  “If you mess up and people die when you could have just taken this [performance-enhancing] pill, people will see that as negligence.”

Why yes, I daresay they would.

Let me step back for a moment and say that I agree with most of the premises that the article’s “doomsday scenario” of changing cultural norms is based on.

  • I agree that cognitive enhancement technologies (including, but not limited to, “smart drugs”) will continue to improve.
  • I agree that early-adopters and more competitive members of society will use these things, and change our collective expectations — first of what is “acceptable,” next of what is “normal,” and finally what is “required” (either legally, or by overwhelming social pressure).
  • I agree that we’ll release these technologies into our society without having a clear understanding of their eventual consequences.*

* Humans have a bad track record when it comes to keeping genies in bottles.  If there are any technological genies that haven’t been un-bottled, I can’t think of them.  (Of course, this could be because their inventors kept them so darned secret we just don’t know such genies have been invented — and if so, kudos to those inventors.)  But as a rule — from atomic weapons to boy bands — if we invent things, we tend to use them and only afterwards consider what we’ve wrought on ourselves.

So if I agree with almost every premise presented by Vincent, what is she wrong about, exactly?

Her thesis fails the So-What Test.

Cognitive Enhancement will become the new normal.  So what.

As these technologies move from the Early Adopters to the Early Majority and eventually to everyone else, even the kicking, screaming Laggards will be pressured along (see the Diffusion of Innovations for this fun, cocktail-party terminology).

But… so what?

Let me provide some examples of other ideas that have failed the So-What Test:

  • “If access to basic education continues to expand… people will have to be literate to effectively participate in society.”
  • “If air travel becomes commonplace… businesses may expect workers to travel for hours at a time, at extreme heights, with absolutely nothing underneath them.”
  • “If medicine further reduces infant mortality… manufacturers of child coffins will be put out of business — or else suffer the ignominy of re-marketing their products for small household pets.”

So freaking what, in all cases.

I could come up with more examples — a lot more.  All these if-thens are 100% correct.  And all are absurd in a way that is self-evident to pretty much everyone except… philosophers.

I don’t want to put words in anyone’s mouth (or over-speculate about someone else’s writing), but Vincent’s stance seems to be “we haven’t figured out all the ramifications of these technologies yet, so we should maintain the status quo until we do.”

But we can’t.  

And I don’t just mean we shouldn’t, I mean we can’t.

With apologies to Nostradamus and Miss Cleo, most of our track records for predicting the future are just plain rotten.  And that includes really smart people — even professional think-tanks full of really smart people.

Accurately predicting the future requires access to enormous data sets, solid estimates of rates-of-change, an inherently counterintuitive understanding of exponential growth, and effective models of how various simultaneously-moving metrics interact with each other.

In fact, I’m just speculating that this recipe — if it could be pulled off — could accurately predict the future.  We don’t know.  But I find it hard to imagine that any of these tent-pole prerequisites wouldn’t be necessary.


It was Abraham Lincoln who said: “The best way to predict your future is to create it.”  I’ve been reading Team of Rivals: The Political Genius of Abraham Lincoln, and one thing is easy for us to forget now, 150 years later, but was an enormous hurdle for Lincoln and other slavery-abolitionists:

There were many of Lincoln’s contemporaries — even those who morally opposed slavery — who thought that the Law of Unintended Consequences, when applied to a societal change as massive as the 13th Amendment (which made slaves’ wartime emancipation permanent), was just too risky.  What righteous babies might be thrown out with the slavery-colored bathwater?  Heck, what about the disaster inflicted on the federal government’s Strategic Mule Supply, if each of the freed slaves really got “40 acres and a mule”?

(Please refer back to the So-What Test, mentioned above.)

Rhetorical Bag of Dirty Tricks #47 and #48:  If you want to sound good, align your ideas with those of Abraham Lincoln.  To demonize your opposition, reference their ideas alongside Hitler’s.  I do both, although I’m leaving Hitler out of this post.

“The only constant is change.”

Trying to game out the future before it arrives, as we’ve discussed, is a fool’s errand.

And attempting to stop the future from arriving — to stop time in its tracks — is as close as history gives us to a recipe for a lost cause.  There are so many examples of losing battles fought in the name of such causes; the cultural annihilation of the Native Americans and of the samurai of Imperial Japan both come to mind.

Looking at these long-ago-settled battles from the winners’ side of history — knowing who triumphed, and why — we now see the romance under the dust.  The American Indians, the samurai — both were fighting technologically superior forces in doomed, all-or-nothing conflicts.  The winners’ superior firepower, their superior numbers — both feel a lot like cheating as we look back on those conflicts now.

The “noble savages” didn’t stand a chance, but boy-oh-boy, did they have heart.

The position taken in the GSU article — against the creeping use of cognitive enhancement technologies — would try to paint baseline Homo sapiens (circa 2015) as a noble savage race.

It’s an argument that packs emotional appeal.

You, me, and everyone we know fall into the “us” that is under this impending, theoretical threat.  Even those of us who are using cognitive enhancers (those currently available) — we’re still a part of the “home team,” compared to those upgraded rascals from 2020, or 2030, or 2045, and whatever brain-enhancers they’re using to one-up, two-up, and eventually disenfranchise the biological “normals.”

What Part of “Progress” Don’t You Like?

I’m a sucker for historical romance.  I don’t mean boy-meets-girl kissy-kissy stuff where the girl wears a corset; I mean the broad, sweeping emotionality of individual humans struggling amidst great forces.

And the Tide of History is among the greatest of forces — less tangible than, but every bit as powerful as, any natural disaster.

I watch a movie like The Last Samurai and see the doomed samurai charge, and I get misty-eyed like everyone else.  But I recognize that those noble samurai are, however unwittingly, the bad guys.

Unbeknownst to them, they were fighting against a world that cured polio.

They were fighting against a world that explores space.

They were fighting against a world where run-of-the-mill consumer technology allows me to research samurai while listening to Icelandic music (created on synthetic instruments, and presented in Surround-Sound) as I sip African coffee and wait for a transcontinental flight that will be faster, cheaper, and safer than it used to be to travel between nearby villages.

Of course, the samurai didn’t know they were fighting against this stuff.

They just weren’t sure about this whole modernization thing, and what sort of “new normals” might emerge.

Bob Dylan was right: The times, they are a-changin’.

You won’t be forced to keep up.

Cultural tides may pull you along, but you’ll be free to swim against the current if you really want to.  There are examples of that, too.  The Amish are one.

The Amish are still here, in 2015.  So far as I know, they’re not under any particular threat.  They’re doing okay.  They decided to pull the cultural emergency-brake in 1830, or whatever, and well…

They continue to exist.  Why?  Because we live in a peaceful-enough, prosperous-enough culture that no one has decided it’s necessary to overrun, assimilate, or eradicate them and harvest their resources.  

It should be pointed out that societies like ours — this peaceful, this prosperous — are somewhat of an historical anomaly.  But the good news is:  We live in an era of unprecedented positive historical anomalies.


If you want to opt out of further technological progress and rely on the goodwill of your fellow man (or, eventually, the Homo sapiens-successors you’ll be opting out of becoming), there’s never been a safer time to do so.  We can’t predict the future, but the trend-lines do seem promising.

But for me, personally…

I don’t want to rely on the goodness of my fellow man.

That sort of reliance is something you do in a pinch, not as a general strategy.

Do you think the Amish would have made it through the Cold War without the more technologically-minded Americans picking up their cultural slack?  No sir, not at all.  Heck, they’d have been steamrolled in the Spanish-American War, generations earlier.

I didn’t start off this post intending to disparage the Amish, but dammit, now I will.  The fact is, they’re not going to read this anyway.

There is a word for people who have every opportunity to be effective, but choose not to be, and instead rely on others to be effective on their behalf.

That word is Freeloaders.

The Amish, I put it to you, are freeloaders.

GSU’s “New Normal” article posits a future where effective, cheap, safe, non-prescription “smart drugs” have become commonplace.

In that future, when it arrives, people who have the opportunity to use these drugs to improve themselves, and choose not to, will also be freeloaders.

I won’t be one of them.

Hey there, Performance Hackers!

Our friends at Quantified Self are putting on an exposition on the San Francisco waterfront on June 20th — and Jesse will be paying them a visit to check out all the newest developments in the self-tracking world.  The expo is for everyone interested in understanding where technology is going and how it’s affecting our lives.

QS’s Press Coordinator Ernesto Ramirez joins us for this micro-edition to give some background on the Quantified Self movement and offer a sneak-peek of what will be happening at the event.  Ernesto covers the most popular ways to self-assess, the crossover between do-it-yourself and personal tracking, and how much time actually goes into recording and analysis of activities.

Come and Say Hi

Join Jesse and the other optimization-fiends interested in how sensors, data, and “very personal computing” can be used to understand ourselves and the world around us.  Try out the new wearable devices and apps that can give you intimate and direct feedback about yourself: from how you sleep, eat, and exercise — to what triggers your fear and joy.

Any Smart Drug Smarts listeners in the San Francisco area who’d like to attend can register here using the discount code smartpod to get $10 off the regular $20 ticket price.

We hope to see you there!

Where & When

June 20
10am – 4pm
Herbst Pavilion, Fort Mason

I recently read an article about those baddest of bad guys, Nazi Germany, and how their toolkit for perpetrating war contained quite a bit of chemical help.

Pervitin — something we now call by the street name speed — was doled out like candy to soldiers in the Wehrmacht, the German invading force that conquered much of Europe in 1939-1940.  This methamphetamine was prized for its fight-all-night qualities — increased vitality, speed, and motivation, and a reduced need to rest while you’re mid-blitzkrieg.  (Later in the war they would add cocaine to the mix.  Seriously.)

The Wehrmacht also encouraged the use of more alcohol than you’d think military discipline would allow — because of alcohol’s propensity for reducing moral hang-ups about extreme behavior.  And let’s face it: When you’re the Nazis, morality is just sand in your gears.

But the Nazis are far from the only military to encourage, or even mandate, the use of psychotropic drugs by personnel.

It’s a downright common practice.

If you sign up for the U.S. military today, you’re contractually obligated to allow Uncle Sam to inject you with… well, pretty much whatever he wants, whenever he wants, without telling you any more than he wants to about what you’re being injected with.

I’m not a big fan of the “not telling you what you’re being injected with” part, but the fact that injections are sometimes a job requirement… that strikes me as reasonable.

If a soldier is going up against an enemy known to use certain chemical agents, mandating the use of a prophylactic antiserum makes good sense. This could be true even if the antiserum has known, limited downsides. The wear-n-tear on an individual soldier’s body, in a utilitarian sense, may be more than justified when held up against the downsides to the soldier and his team, should he succumb to a chemical attack.

And militaries aren’t alone.

Many professions, implicitly or explicitly, require taking drugs.

  • Third-world doctors need vaccinations.
  • Lifeguards unwittingly but unavoidably take in daily transdermal cocktails from sunscreens and pool-cleaning agents.
  • Sommeliers and people who lead wine-tasting tours… well, you get the point.

But the usual pros-and-cons pragmatism of public opinion regarding professional drug use gets complicated when the drugs involved affect people’s minds.

Caffeine is the one substance that society gives a free pass.  No one seems up in arms about people making a Starbucks-stop on the way to work, or (gasp!) going for a second cup of joe in the staff kitchen.

All other psychoactive drugs, though, raise eyebrows.


An easy example: Despite the staggering numbers of Americans taking antidepressants, there’s a sort of society-wide “don’t ask, don’t tell” policy.  We know that some of our staff, co-workers, and bosses are using these things — but we’d prefer not to think about it.

I’m about to go off the rails and get all crazy now.

If you’re easily shocked, please brace yourself.

The fact is, there are situations where people are better at their jobs with their mental states chemically altered.

As a boss, I like my employees to be perked-up from caffeine.  (I’ve openly encouraged Caffeine Naps in my office.)

It may be that Sarah in Accounting is a lot more effective on her antidepressant meds than off them.

And if Bill in IT happens to maintain a Ritalin prescription that he doesn’t technically need — but it helps him to focus better — who am I to complain?

Now that I’ve revealed myself as the stray kid who slipped through Nancy Reagan’s “Just Say No” thought-net, and doesn’t believe that all drugs are always bad, always, let’s continue…

I want to talk about a class of professions where the professionals’ psychological states really, really matter: Those who are authorized and empowered to use violence.  The men and women who carry guns.

This is pure self-interest on my part: Someone’s thoughts and mood matter a heck of a lot more to me if he or she is potentially authorized to hurt me, and has the means and training to do so.

Today is a dark day for American law enforcement.

“To Protect and Serve” seems increasingly like a euphemism for “To Bully, Beat Down, and Skip the Consequences,” judging by the recent parade of Hall of Shame examples.

The number, severity, and “you’ve got to be kidding me!?” nature of these stories make police aggression seem like a systemic problem.  All sorts of solutions should be explored (and, to be fair, probably are being explored): Changes to hiring practices.  Increased oversight.  Stronger carrot-and-stick incentives for good and bad behavior.

What about a chemical intervention?

How would you feel if Pfizer or Dow Chemical or Merck invented a substance that could chill out the police a bit?  Not impair them functionally, but change their minds, maybe change the way they see the world…  And reduce their impulse toward violence.

I’m not talking “Don’t pull out your gun when you’re in danger”; I don’t want to endanger our police any more than I want them to endanger the rest of us.  I’m talking about “Don’t continue clubbing the guy who’s already collapsed on the ground” or “Don’t apply the Taser to the grandmother.”

If such a drug were theoretically available, wouldn’t it be worth a field-test?  A trial program in a few precincts, to see if excess police violence is damped down a bit?

I hope you’re nodding.

What if such a drug already exists?

What if it is MDMA?

Yeah, it’s an illegal drug.  A rave drug.  The main ingredient in Ecstasy*, the serotonin-dumping, dance-all-night-in-laser-light pill that flooded America in the 1990s and has been a Schedule I drug — both highly illegal and highly popular — ever since.  That drug.

* Ecstasy often contains speed and other additives, and is not pure MDMA.


Just humor me for a moment and try to forget that MDMA is an illegal, recreational substance.

Let’s look at the demonstrated positive effects on its users:

  • MDMA increases the release of oxytocin and prolactin (hormones associated with trust and bonding).
  • MDMA significantly decreases activity in the left amygdala, associated with fear and traumatic memory.
  • Animal studies have shown MDMA to dose-dependently decrease aggressive behavior.
  • Users often report ongoing improvements to their mood, and to feelings of trust and fellowship with others — long after the drug has dropped to physiologically undetectable levels.

I’m not proposing cops get high and go out on patrol.  I’m proposing cops get high, feel the love that MDMA seems to reliably bestow… and then sleep it off, and go to work a day or two later.

Am I crazy to suspect that the psychic nudge this drug might give would make police violence a little less likely?  Isn’t that what we’re after?

Okay.  I realize there are some “yes, buts” that I’ve got to address now…

“Yes, But… Will It Work?”

First off, that’s not the right question.  We should test this crazy idea.  Not assume I’m right based on a blog post.

I’m not proposing a policy.  I’m proposing a study.  

I’m making a testable hypothesis, and trying to convince you that it’s worth investigating.

“Okay, So… Could It Work?”

Now you’re talking.  I think yes, and here’s why:

What horrifies us about our increasingly militarized, overly-aggressive police force isn’t that it has the capacity for violence, but that this capacity is being too liberally applied.

Let’s assume we’re okay with bad guys getting a billy-club in the face or a firm tasing every now and then.  The important thing is to reduce the number of billy-clubs-to-the-face for everyone else.

It’s the duty of law enforcement personnel to make tough, real-world, real-time decisions on “does this situation merit violence?”

If you are a non-military U.S. civilian, you’ve got a 20-times-greater chance of being killed by a cop than by a terrorist.

Now please permit me to interrupt with a quick diversion into statistics, so we can talk about something important called a “false positive.”  We’ll keep the math simple and this whole thing quick…

A “false positive” is when you’re looking for something — and you think you find it — but you’re wrong.

You’re separating out green M&M’s, and you mis-identify a brown M&M as green and add it to the green pile.  That brown M&M is a false positive.  (A green M&M that you miss, and doesn’t wind up in the green pile, would be a false negative.)

False positives, it turns out, are exactly what society hates, when it comes to cops and violence.

Let’s look at an example with simple numbers:

Officer Jones has 1000 interactions with civilians over the course of a year.  In each interaction, he’s got to do some mental calculus and decide “does this situation merit violence?”

And let’s say we’re the Jiminy Cricket of Public Conscience, and we know the correct answer is 10.  In 10 of these interactions, the person needs some billy-clubbing; everyone else should leave Officer Jones’ presence unscathed.  This would be the perfect-world scenario.

But the real world has error rates.  Officer Jones is not perfect, and he mis-reads the situation 1% of the time.  In these cases, he will either billy-club someone who doesn’t deserve it, or fail to billy-club someone who does.

So the 10 times over the course of the year when he runs into an actual violence-deserver, with only a 1% error rate, chances are good that all 10 of them will get the club-treatment.  (9.9 is what statistics would predict, so pretty close.)


The problem is, that same 1% error rate, applied to the 990 people who don’t deserve clubbing, means that 10 people (990 x 1% = 9.9) are going to get thwacked, also.  Yikes.

So Officer Jones will beat down 20 people during the year, and half of them won’t deserve it.

What started as an innocuous-sounding 1% error rate has resulted in a 50% mis-application of violence, with 10 officer-delivered assaults on undeserving civilians.

The disparity between that 1% and the 50%, both of which are “true”, is why Mark Twain famously quipped: “There are three kinds of lies: lies, damned lies, and statistics.”
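For the numerically inclined, the whole detour compresses to a few lines.  Here’s a minimal sketch in Python, using the made-up numbers from the Officer Jones example above:

```python
# Officer Jones, by the numbers (all values are the illustrative ones from above).
interactions = 1000   # civilian encounters per year
deserving = 10        # encounters where violence is actually warranted
error_rate = 0.01     # he mis-reads the situation 1% of the time

innocent = interactions - deserving              # 990 people
false_positives = innocent * error_rate          # ~9.9 undeserved beatings
true_positives = deserving * (1 - error_rate)    # ~9.9 deserved ones

misapplied = false_positives / (false_positives + true_positives)
print(f"Undeserved beatings: {false_positives:.1f}")      # 9.9
print(f"Deserved beatings:   {true_positives:.1f}")       # 9.9
print(f"Share of violence misapplied: {misapplied:.0%}")  # 50%
```

Halve the overall inclination to reach for the club — the hypothetical MDMA effect discussed below — and both counts drop to roughly 5, which is exactly the trade-off we’re about to examine.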

Thanks for bearing with me on that detour.

I needed to do that, so we can understand why an MDMA-induced tweak in cops’ instinct-to-violence might matter so much.

If MDMA could theoretically make a cop’s move for the billy-club 50% less likely, we’d be cutting our innocent-civilian beatings from 10 down to 5.  Not perfect, but a great start.

But wait — we’d also be cutting our righteous manhandling of violence-deserving criminals from 10 down to 5, wouldn’t we?  Well yes, we would — but there’s something important to consider here:

The only situation when cops should apply violence is when doing so will protect themselves or others from physical danger.  If a cop is dealing with someone, and that person moves from being a possible threat to being a definitive threat — that’s generally a pretty unambiguous move.  A person goes from yelling and waving his arms around, to throwing punches, etc.

So in nerdy terms, a false negative (a cop not using violence, when he should) tends to be a self-correcting situation — because no cop is going to ignore violence right in front of him — whereas a false positive (a cop using violence, when he shouldn’t) isn’t self-correcting, because it’s the cop who has prematurely upped the ante.

So what we’d be hoping for with MDMA, is a general de-itching of cops’ trigger fingers.  Making the pause a little longer, the hesitation a little greater, before Johnny Law commits to the use of force.

This approach works because the number of times violence shouldn’t be used dwarfs the number of times violence should be used.  This will always be true in civil society.   (In fact, in any non-zombie-apocalypse scenario.)

So if we accept the premise that MDMA may reduce cops’ inclination to violence, then the answer to “Could It Work?” (or at least “Could It Help?”) seems to be a resounding yes.

“Yes, But… Tweaking With Cops’ Minds Is Unethical.”

Is it?  Because… we do this already.

A cop’s psychological state is society’s business.  (And we may soon decide the same about other professions like airline pilots, where professionals carry the lives of many civilians in their hands.)

We’ve all seen TV shows where cops — often griping about it — are forced to meet with a psychologist and “talk about their feelings,” etc.  Script-writers love this as an easy way to layer in character development, but there’s good reason why these characters’ real-world equivalents exist.  Police psychologists are representatives for us tax-paying civilians who want our peace officers mentally well-calibrated.  (Too frequently nowadays, we have reason to wonder.)

Normally, when this “tweaking with people’s minds is unethical” objection comes up, those making the objection are not opposed to the general concept (tweaking), but to the specific methodology (in this case, with psychoactive compounds).  Objections to “skillfully presented verbal arguments,” for example, don’t hold much weight with anyone — although such arguments can tweak people’s minds as effectively as any drug.

Let’s accept that we influence other people’s minds constantly.  Pleasant colors in hospital waiting rooms.  Soothing music in the dentist’s office.  Perfumes to attract romantic partners.  As social animals, it is our constant endeavor to manipulate the mental states of our fellows.

So let’s overrule this objection and move on.

“Yes, But… What About the Cops’ Physical Health?”

MDMA has physical downsides.

That said, it seems to be not all that physically detrimental.  It’s dangerous, but manageable.  In a UK study published in The Lancet (one of the world’s oldest medical journals), Ecstasy ranked only 16th out of 20 on a list of dangerous drugs, based on harm to the user and harm to others.

A Personal Note…

Just in case you think I’m writing this piece as a recreational user who thinks the world would be a better place if MDMA were in every public drinking fountain, let me offer full disclosure:

I’ve never tried the stuff.

The truth is, despite ample opportunities, I’ve always been a bit unnerved by MDMA’s reputation for “serotonin recuperation hangovers.”  I’m not eager to do anything that could undercut my body’s natural production of serotonin (a “feel good” neurotransmitter).  So, at least for the moment, it’s not for me.

But then, I don’t carry a gun.  I’m not the one tasing septuagenarians or beating civilians to death while “taking them into custody.”

Modest physical downsides to someone like me — an unarmed, not-particularly-dangerous civilian — might not be worth the benefit of damping down my instinct towards violence…

But for a member of an increasingly dangerous police force, maybe it’s time to bite the psychopharmacological bullet and do the science to learn whether MDMA’s speculative benefits are worth its downsides.

I’m completely ignoring an elephant in the room: MDMA is the primary ingredient in something called “Ecstasy” — it’s reputed to be intensely pleasurable, and many cops might jump at the chance to take it.

I Am Not Anti-Police.

Not even a little.

I’m fully aware that most cops don’t do this terrible stuff.

The ones we hear about are ugly statistical anomalies.  But in a nation of 300 million people, including hundreds of thousands of cops, statistical anomalies will happen predictably, year-in and year-out.

This proposal is about strategically reducing those violent anomalies.

So, why not run a pilot program?

Take a few precincts across the country, and make the program strictly voluntary.  Cops who want to fool around with some MDMA, maybe even occasionally micro-dosing while on the beat, are free to do so.  Cops who want to abstain, can.

Run the test programs for 2-3 years.  See what happens to police violence during that time.   See what happens to police-community relations during that time.  If there are violent incidents, see how many of them are from the MDMA users vs. everyone else in the “control group.”

This is what science is about, right?

Make a hypothesis, test it, review the results, and make decisions based on accumulated evidence.

Hitler wanted his Wehrmacht to be energetic, assertive, and morally compromised.  He used a chemical cocktail of methamphetamine, booze, and cocaine to accomplish that.  His goal was despicable, but his logic was sound.

I would like to see America’s police force calmer, less hostile, and more cognizant of the overall Brotherhood of Man.

If MDMA could edge our cops in that direction, isn’t it worth an honest-to-goodness social experiment?

Or are we so poisoned by Nancy Reagan Just Say No dogmatism — and afraid of finding a legitimate use for a “party drug” — that we’re willing to continue getting our asses beat by our peace officers?

Let’s grow up, get serious, and do some damned science.


Acknowledgment to this excellent article by ex-police-officer Redditt Hudson, on America’s problems with violence and institutionalized racism within the police community.
