Author: Jesse Lawler

Jesse Lawler is a technologist, health nut, entrepreneur, and "one whose power switch defaults to On."  He created Smart Drug Smarts to learn how to make his brain do even more, and is greatly pleased to now see his little baby Frankenstein toddling around and helping others.  Jesse tweets about personal optimization, tech, and other stuff he finds interesting at @Lawlerpalooza.

If you’ve been listening from the beginning – and I’m consistently delighted to hear how many people stumbled onto Smart Drug Smarts recently and then went back and listened to the old ones…

Well, you know it took a good, long time before we were able to get to the regular weekly schedule we’ve been managing the past few months.  There are some excuses for this (a few of which are slightly valid…), but the good news is: Excuses are no longer necessary.

We’re now an every-Friday show.

But We Are Not Content.

Now that weekly-ness has been achieved, we achievement-addicts are wondering…  What more can we do?

And so we looked at our “Things We’d Love To Do If We Get Overly Ambitious” list, and realized we should find a place for some of the extra stuff we get access to.  You see, every now and then, we have the opportunity to do interviews outside our normal range of content.  We’ve even said no to stuff — cool stuff — because it didn’t seem to “fit” with what we’re doing.

Bonus content for those of you who like the show enough to carry it around in your pocket…

Also – sometimes – we’ve had guests who were gun-shy about talking about certain things to an everyone-with-an-iTunes-account audience.  (Remember the episode with the distorted, Speak & Spell voice guest?)

But I think maybe I can coax such guests to talk to a more limited audience, behind a just-slightly-ajar Internet door…

And So We’ve Created “Overdose Editions.”

Overdose Editions are going to live outside the normal RSS feed that distributes the podcast.  These episodes won’t be on iTunes, Stitcher, or even this website — not as audio, anyway.

We’ll be releasing them exclusively through our mobile app, Axon.

They’ll be bonus content for those of you who like the show enough to carry it around in your pocket, and allocate some precious icon-space on your smart phone screen.

Overdose Editions won’t be coming out on any fixed schedule — just when something appropriately weird or awesome comes up.

Want to Know Every Time a New Overdose Edition Lands?

No problem.  Two ways to do it — both work, and they’re complementary.

  1. Download Axon, now available for iPhone + iPad, with an Android version (long-delayed, but…) in the works.
  2. Sign up for our Mailing List.  If you haven’t signed up already, you should see a prompt at the top of your browser right now… and maybe another one poking up from the lower-right corner of the page, too.  Signing up won’t hurt and will expose your brain to more synapse-stimulating Smart Drug Smarts goodness.  And every time a new Overdose Edition appears, Rhiannan will be writing about it… just a tantalizing taste to remind you to listen.

So that’s that.  A little something new.  We’re really excited… And the first Overdose Edition will be out very soon — the first week in June, 2015*.

* If you’re reading this and it’s later than that, the good news is: You’ve already got content ready-n-waiting for you.

If you are an Android user and feel left out – we’re here to help.  Sign up for the mailing list and drop an email to Rhiannan and we will absolutely, positively get you hooked up – as a stopgap until Axon for Android is available.

I’m going to keep this short, because I don’t want to get all moralistic.

This is more a “just a little something I noticed” piece than a real, serious exhortation…

And before I start, let me openly admit that what I’m about to point out may have struck me only because it supports something I already believed.  So this entire post has Confirmation Bias written all over it, but, all that notwithstanding…

You (yes, you!) should stop drinking.

Why?  Because Alexander the Great couldn’t handle alcohol.

And he was Alexander the Great.  And you’re not.

I just read John Maxwell O’Brien’s excellent biography, Alexander the Great: The Invisible Enemy.  The book is a fairly straightforward piece of historical scholarship about the excessively-biographied Macedonian one-time-ruler-of-all-he-surveyed… but with one distinctive angle.

Most previous biographers have spent a lot of explanatory time on the contradictory elements of Alexander’s character: Philosopher and butcher, dreamer and tyrant, charming and polite but also someone who occasionally stabbed his friends.  Incongruous elements in the warlord’s personality were explained with appeals to his complex relationship with his dead father Philip II, or a megalomania that set in after he took over the larger part of the world, or [insert more unverifiable psychobabble here]…

O’Brien’s more straightforward take is this: Alexander the Great was a really, really bad drunk.

As his drinking worsened, so did his statesmanship, and his health, until he died at 32 from alcohol-related illness.  Full stop.

There was no conspiracy that poisoned Alexander in his prime.  There were no “puzzling contradictions endemic in this many-faceted leader’s personality.”  There was just an amazing guy who truncated his own epoch-making career for the love of one too many tankards of unwatered Grecian wine.

What am I saying?  That “alcohol is bad”?

Boo, hiss!  “Get a life, write a blog post worth a damn.”  I hear you.

But bear with me for a moment.

As his drinking worsened, so did his statesmanship, and his health, until he died at 32 from alcohol-related illness.  Full stop.

If O’Brien’s propositions are correct – and he paints a darned compelling picture – isn’t it worth considering the moral of the life story of one of the most venerated humans in all world history?

So who was Alexander?

Some bullet-points from his historical resume:

  • He was born a prince, and was tutored as a child by Aristotle (yes, that Aristotle), so he had some advantages.
  • He was insatiably ambitious; not content merely to outdo the deeds of his kingly father, he set out to exceed those of mythical figures (like Hercules).
  • He was laugh-in-the-face-of-death brave, leading his armies literally as well as figuratively.
  • He unified the Greeks, destroyed the Persian Empire (at the time, the big kids on the block), and conquered lands from Greece to Egypt to Pakistan and into modern India.  (Check out this map, and remember this was over 300 years B.C., when technology was more-or-less limited to chariots and harsh language.)
  • He founded over 70 cities.
  • Military historians class him among the greatest tactical generals of all time.

Despite all this, the guy had character flaws.  He was also a butchering mass-murderer on more than one occasion, but the point I’m trying to make is that he was undeniably, extraordinarily gifted.  A world-class bad-ass.

And yet, when it came to a one-on-one, winner-takes-all fight between Alexander and Wine*, Wine won.

* Grecian wine in Alexander’s time was different from what we think of as wine now.  The “un-watered” wine Alexander and his Macedonian boozing-buddies drank would be closer to hard liquor in today’s terms.

It’s not like Alexander didn’t know the powers or the dangers of drink.  As a youth, he’d courted danger by mocking his kingly father’s inability to, well, balance upright while publicly drunk.  And as king, he occasionally sponsored drinking contests with double-digit body counts (yes, as in, dead bodies) resulting from alcohol poisoning.  So he knew he was playing with fire.

Isn’t it worth considering the moral of the life story of one of the most venerated humans in all world history?

And what Alexander knew anecdotally then, we know a lot more scientifically now.

Thanks to its prevalence in all societies (and despite an alcohol industry that would rather keep such information corked), the effects of alcohol on the brain and body are among the most-studied of any psychoactive substance.

Some Low-Marks on Alcohol’s Wall of Shame

  • 65% of suicides have been linked to excessive drinking.  (Mental Health Foundation, Understanding the Relationship Between Alcohol and Mental Health, London: Mental Health Foundation, 2006.)
  • “[Brain damage from alcohol] occurs as a function of quantity and exposure; the more you drink, the greater the damage to key structures of the brain, such as the inferior frontal gyrus, in particular. This part of the brain mediates inhibitory control and decision-making, so tragically, it appears that some of the areas of the brain that are most affected by alcohol are important for self-control and judgment, the very things needed to recover from misuse of alcohol.”  (from this 2014 study)
  • Mouse studies show that alcohol drunk in early pregnancy (during the period in which human mothers would typically not even know they are pregnant) changes the way genes function in the brains of their offspring – changes apparent in the brain structure of the offspring even in adulthood.

These three citations are the smallest tip of an alcohol-iceberg.  The research is out there, if you look for it; and I’ve read such stuff before – so why is it that a book about ancient history, not the reams of recent studies, prompted me to write this post?

What struck me as I read this book was the issue of scale.

Most of us are not “problem drinkers.”

Most of us will not drink ourselves to death.

Equally few of us will ever let a drunken rage engulf us and murder our friends.

But then, very few of us will conquer Asia Minor, either.

Alexander was a genius, an amazing physical specimen, legendarily determined, and incalculably brave.

All of this was not enough to avoid being bested by drink.

If you’ve read this far (and aren’t on your second drink yet), then your drinking is not on the scale of Alexander’s.  And that’s good.

But your ambitions – and likely, your advantages – probably aren’t on the scale of Alexander’s, either.

So then: How is your less-than-biography-worthy alcohol consumption subtly undercutting you?

Maybe it isn’t.

But maybe it is.

It’s worth thinking about just a little, isn’t it?

Nobody ever said being a mom was easy.

It used to be that you had to worry quite a bit about dying in childbirth. 

Also about your kids not living long enough to toddle, much less to adulthood.  Back in the Dark Ages, child mortality was so high, kids often weren’t even given names until they were 3 or 4 years old — because hey, what’s the point of cluttering the family tree with offspring that might not reach puberty, anyway?

Those days are (thankfully) past.

But being a modern mom has its own set of problems.

And I’m not talking about being a new mom, either, with a bouncing baby hot out the womb in 2015.  I’m talking about my mom, now in her mid-60s, dealing with some distinctly modern problems that are going to be increasingly par-for-the-course as maternal difficulties go.

We probably all remember the movie The Lion King, and the emblematic “Circle of Life” song by Elton John and a cast of cartoon animals.  Popular song, and a popular concept: Life, with all its ups and downs, trials and tribulations and individual triumphs and defeats, is still essentially repetitive.

Birth, growth, maturity, wisdom, aging, death, and so on… ad infinitum.  Each of us gets one spin around the circle, and when the upswing of our own transient life seems past, we can rest easier knowing we’ve passed the existential baton to a progeny or two – kids we’ve brought into the world, loved, raised, and shaped.  If not in the strictly biological sense, then at least ideologically.

Of course, not every human does this, but it’s a well-established strategy with some proven, winning examples.

Life, with all its ups and downs, trials and tribulations and individual triumphs and defeats, is still essentially repetitive.

We’ve got quite a lot of old people who have reported significant satisfaction in watching their offspring arcing around the same Circle of Life, just a partial-revolution behind mom and dad, chronologically.

The truth is, up until now the Circle of Life could just as well have been called the Handcuffs of Life.

If you were a cave-mom, your kids were going to be – you guessed it – cavemen or cavewomen.  End of Story.  No upward mobility, no aspirations, nothing but moving up the seniority-rungs of cave-society, until it’s their turn in a shallow grave with some bead necklaces.  And that’s if they’re lucky.

Agriculture didn’t change things much.  Serf farmer moms had serf farmer kids, and so on.

More recently, in the past couple centuries, we’ve had an explosion in the number of available professions in many parts of the world.  Depending on a mom’s perspective, she could be thrilled to see her child rise beyond the family’s traditional station – or maybe upset to see a kid “leave the family business,” if she took a more negative slant.

But the Circle of Life was still on firm footing.  Jaunty new professional options didn’t do much to interfere with the growth-marriage-work-aging-death thing.

But now, things are getting weird.

Especially if a mom’s kids are among a technology-adoring, futurist set – and she is perhaps a bit more Luddite in her leanings.

My mom was born in 1950.  Bless her heart, she still considers vacuum cleaners to be exciting technology.  Microwave ovens are somewhat suspect.  Modern car navigation systems that talk are “creepy.”  And the idea that her grown son “wants to change his brain with drugs”* is something she really prefers not to think about.

* “Why can’t you call them food supplements?” is a conversation we revisit now and again.

But we’re not breaking any new ground in family dynamics, are we?

There’s always been a generation gap.  Older folks always find the music too loud, the skirts too short, the newest gadgets too damned complicated.

But then, there was always the Circle of Life to fall back on.  Curmudgeonly moms and stick-in-the-mud dads still knew that come Hell or high water, the passing of years would one day inexorably place their offspring into a carousel-seat roughly identical to the one they’d sat in.  “As you torment me, so my grandchildren shall one day torment you, sucka.” *

* Said with more or less ironic relish, depending on the sense of humor of the parents in question.  I’m happy to say both my parents are on the merrier end of the spectrum.

But the Future has arrived.

And people are taking notice.  Everyone is interested.  Some people are curious.  And some few of us are actually making major life decisions on the assumption that the Circle of Life, long coiled like a packed spring, is about to warp out at the end, arcing off into uncharted territory.

If those people are right who believe that, in a decade or two, longevity technologies may extend our average lifespan by more than one year per year… What does that mean for your personal life plans?

Up until now the Circle of Life could just as well have been called the Handcuffs of Life.

If you watch the Paralympic Games, and find your reaction changing from “Isn’t it nice that person is still able to function?” to “Jesus Christ, did you just see that!?”… What does that do to your contentment with your biologically-inherited physicality?

If you’re persuaded by the Extended Mind Hypothesis, and agree that our devices are becoming important adjuncts to our thinking selves… Do you want to have a baby with your sweetheart, or engage in asexual co-evolution with your laptop?

I’m not supplying answers to these questions; I’m just pointing out that these weren’t questions sane people could even ask, until right about now.

But these questions are getting saner.

Incredible technologies are moving the Lunatic Fringe onto center-stage.

Maybe it hasn’t been a Circle for a while now…

During my mom’s generation, the newly-available birth control pill upset the Natural Order of Things and ignited a public controversy that persists even today.  The Circle of Life, one could say, got dented.  Humans had put a manual screw-nozzle on the spigot of fertility.

And it was a change that mattered.  The average age at which American moms had their first child rose significantly: from 21 years of age in 1970 to over 25 in 2008.  Predictable markers on the Circle of Life are suddenly shifting around.

Do you really want to have a baby with your sweetheart… Or engage in asexual co-evolution with your laptop?

But the availability of the Pill was one relatively well-defined choke-point on an otherwise untouched Human Condition.

What technology is about to give us, though, isn’t choke-points. 

It’s branches.  It’s springboards.  It’s lots of bizarre-looking options.  The Circle of Life, unchanged in aeons, suddenly has off-ramps under construction.

My mom hates this idea.

And I, of course, love my mom.

I want her to be okay with the idea of an amorphous future.  One that’s less predictable.  One where the progress of life creeps like a vine, curls like a paisley, expands like an ink drop in water…

I understand where she’s coming from.

She wants to understand where I’m going.

But I don’t think any of us can lay claim to too much certainty when we look at the future.  It seems dishonest to me.  (And, Mom, you also taught me to always be honest.)

The Circle of Life may be breaking, it’s true.

But circles are not the only beautiful shapes in nature.

Spirals are nice, too.

Like this post but you’re not my mom? 
That’s okay too.  You may want to sign up for our email list, and we’ll let you know when we’ve got new writing, new podcast episodes, and new ideas on how to maintain and improve your brain for our big, weird future.  Look for the fill-in-the-blank email box at the top of this page.

About three months ago, I started down an intellectual rabbit hole that made me stop eating, as of midnight last Sunday night.

It’s now late afternoon Saturday as I write this. I’m closing in on six days with nothing but water going down my gullet.

This hasn’t been easy, but it has been memorable. More on that in a minute. But first, the “why,” for those of you still understandably hung up on the “why is this guy starving” part.

Why Would I Do This?

This podcast episode details the compelling science behind massive carbohydrate-restriction as a cancer ameliorator, and (possibly) a nip-it-in-the-bud prophylactic, if undertaken while a person is still largely cancer-free.  It’s worth a listen, and I won’t detail it point-for-point here.

So my initial motivation, as someone without cancer who prefers to stay that way, was just that:  If inconveniencing myself in a painful-but-not-overtly-dangerous way can, very possibly, keep me from dying from something that a whopping proportion of the population dies of, that’s pretty great.  My diet and exercise regimen should do well at protecting me against heart disease, that other über-killer, so if I can nix both heart disease and cancer from my list-of-demises, that leaves me with the less-likely and more-interesting array of Far Side deaths:  Falling grand pianos, shark attacks, assassins with too much time on their hands.

Dr. Thomas Seyfried’s specific recommendation for this potential “anti-cancer prophylactic practice” was a once-a-year, 7-day water-only fast.

So I decided that at some point in 2015, I’d tick that off my to-do list.  And if it wasn’t gallingly difficult (and assuming the scientific presumptions aren’t overturned in the next year), I’d make it an annual ritual.  I mentioned on several podcast episodes that I’d be doing this, and wound up with a head-count of 15 participants who wanted in on the starvation en masse.

My Ulterior Motive

I’ve got a deep, dark secret I’m going to put out here in public.

It may be that anti-cancer health stuff wasn’t my primary motive in doing this.  It may be that it’s just because it’s weird.

When Dr. Seyfried wowed me with the still-stunning fact that a person with average body fat can live on that fat alone (plus adequate water) for 60-70 days before succumbing to death-by-starvation, I was flabbergasted.
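That 60-70 day figure actually survives a back-of-envelope sanity check.  Here’s a minimal sketch in Python; the energy density of fat, the daily calorie burn, the body-fat percentage, and the usable-reserve fraction are all my own illustrative assumptions, not figures from Dr. Seyfried or the podcast:

```python
# Back-of-envelope check of the "60-70 days on body fat" claim.
# All numbers below are illustrative assumptions for the sketch.

FAT_KCAL_PER_LB = 3500    # rough energy content of a pound of body fat
DAILY_BURN_KCAL = 1800    # assumed low-activity daily energy expenditure

def fasting_days(weight_lb, body_fat_frac, reserve_frac=0.8):
    """Days of energy available from fat stores, assuming only a
    fraction of total fat (reserve_frac) is usable before the body
    gets into serious trouble."""
    usable_kcal = weight_lb * body_fat_frac * reserve_frac * FAT_KCAL_PER_LB
    return usable_kcal / DAILY_BURN_KCAL

# A 179-pound person at an assumed ~25% body fat:
print(round(fasting_days(179, 0.25)))  # roughly 70 days
```

Nudge any of those assumptions around and you land comfortably inside the 60-70 day window, which is what makes the claim so plausible, and so startling.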

If inconveniencing myself in a painful-but-not-overtly-dangerous way can keep me from dying from something that a whopping proportion of the population dies of, that’s pretty great.

I thought about it, and the longest I’d ever gone sans-calories was maybe 20 hours (and that’s a top-end estimate) to prepare for a blood test.  Kinda pathetic.

And not eating sounded so weird. Like smacking our biology in the face.  The idea that we could get away with it and even benefit from it fascinated me.

Now believe me, I’m no anorexic.  In one day, seven hours, and 24 minutes, when my week is over and I can eat again, you’d best believe I’ll be chowing down.   But there do seem to be several verifiable benefits to short and medium-term non-caloric fasting.  (See this and this and this.)

I did not go in expecting this to be a joyride.  I eat a lot.

My diet is made up of natural, unprocessed foods, and no sweeteners — so the sheer volume of the food I eat is considerable.  Downshifting to nothing at the stroke of an arbitrary midnight was going to make my body wonder what the hell was going on.

I read up on this before starting, of course. But the research was a little tough, because most of the people who have written about fasting seem to have come from one of three camps, none of which I fell into:

  • Sick people, seeking to cure or alleviate some specific malady.
  • Obese people, seeking to lose weight.
  • “Spiritually-oriented” people, looking to ritualize the experience into having some deeper meaning than I found plausible.

As for me, I wanted in and I wanted out.  I like food.  I anticipated this would be a pain-in-the-ass, but one that I could draw some benefit from.  But short of being pretty sure I wouldn’t die in the endeavor, I didn’t really know how hard it would be.

The warning flags based on my reading were about:

  • Food cravings (duh)
  • Flu-like symptoms around Days 3-5 as one’s body runs out of its last sugar stores, and starts “burning fat” for energy
  • Muscle aches and pains from the release of formerly blood-borne toxins that are sometimes stored in a person’s fat
  • Lethargy
  • Insomnia

Sum total: it sounded like a pretty shitty week.  But I don’t want cancer.  And I hadn’t done anything all that weird in a while – nothing that would make the stick-in-the-muds among my friends and family cringe, or gawk at me like an exotic species.

So I strategically halted my grocery shopping, and (with the fridge almost perfectly empty) last Sunday night, I ate one final guava…

Then I called an end to that whole “intake of calories” thing.

Believe me, I’m no anorexic.  In one day, seven hours, and 24 minutes – when I can eat again – you’d best believe I’ll be chowing down.

Results May Vary

If there’s one thing that doing the Smart Drug Smarts podcast has taught me, it’s that person-to-person subjective results on almost anything can be all over the map.  Short of sex on the good end, and hand-on-hot-stove on the bad end, individual responses to any given stimulus are less predictable than we’d like in an orderly universe.

With that disclaimer in place, here’s what my fasting experience has been like:


Hunger

It turns out that hunger is a multi-headed beast.  There’s the feeling of hunger in your stomach, that “we could use some food down here” feeling.  Then there’s the actual rumbling, grinding stomach pain of hunger – which is a different thing.  And then there’s the joy of picking out your next meal, deciding what you’ll eat where, with whom, and in what order.  All of these are distinctly different things, when you suss them apart (as I’ve had occasion to this week).

I was lucky to experience almost no physical hunger pangs.  A few growls for a few seconds, primarily on Days 1 and 2.

The psychological aspects were (and, as of Day 6, still are) a ton more challenging.  I apparently take a lot of pleasure in anticipating my meals, and the let-down when I have to remind myself that the meal I’ve just started anticipating ain’t gonna happen — that really sucks.


Energy

I was forewarned that from Day 3 onward, your stamina starts fading.  This was true, and on Day 5, I felt my strength start fading too.  I haven’t tried any pull-ups in the past few days, but I’ll bet I’m way off my max.

That said, I’ve by no means been bedridden, which is what some reading had led me to semi-expect.  (Admission:  I am now lying in bed as I type this, but I just took a 30-minute walk with no fatigue whatsoever.)  It may be that since my body has now switched on the fat-burning engines, and isn’t fruitlessly scrounging for blood-sugar-that-ain’t-there, I could actually have more energy accessible than I did on Days 2 and 3, but it hasn’t felt that way yet.

Overall, there has been a pretty predictable pattern starting on Day 3:  Peppy mornings, lethargic afternoons and evenings.

Only once (on Day 5) has my vision dimmed as I stood up quickly.  This is a common thing among fasters — but still not a terribly pleasant experience.


Sleep

You might think that late-afternoon lethargy and sleepiness are birds-of-a-feather.  But not exactly.  The physical depletion I’ve felt hasn’t necessarily made me want to sleep.  And my biorhythms since Day 1 have been all over the place.

My second and third mornings after beginning the fast (after about 28 and 52 hours) saw me wake up at 4:30am and then 3:30am, respectively — obviously without an alarm.  I had no intention of getting up so miserably early, but sleep just wasn’t happening.  Then a few nights ago, I pulled a 10.5 hour sleep-night, which isn’t normal either.

My policy while fasting has been to cut myself slack on everything beyond not-eating and diligently hydrating, so I’ve been rolling with these sleep/wake punches.  But anyone attempting a fast should be warned about the difficulties in maintaining a predictable schedule.


Cognitive Effects

All my life, my brain cells have been powered by glucose.

As I write these words, and for the past few days, my brain cells are being powered by ketone bodies, the once-thought-to-be-toxic breakdown products from the metabolism of fat.

Since I might be “impaired” thereby, and not even realize it, I won’t judge the before/after quality at this early juncture (after all, writers will tell you that first drafts always suck) — but I frankly find it nothing short of amazing that our physiology has an entire redundant backup power system that works just fine, and that most of us never “turn on” to see how it works.

I have definitely felt cognitively “off” this week — although at times very “on,” too.

If I were to generalize, I’d say:

  • Mornings I’ve been clearer-headed than afternoons.
  • My afternoons/evenings haven’t been characterized by dumbness, but by “distractibility.”  I’ve lacked the willpower to keep myself effortfully focused on one task.
  • This same “distractibility” has often brought with it a slight euphoria.
  • I’m not completely sold on the Willpower is a Depletable Resource idea, but I have certainly felt my ability to push myself has been radically diminished this week, maybe to 30-40 percent of normal levels.  Is that because my willpower has been depleted by resisting food?  Or just because my body and brain have been discombobulated and, as my grandpa said, “you can’t push a rope”?  I leave that answer to future scientists and a double-blind, placebo-controlled study.

Weight Loss

First, losing weight wasn’t part of my goal here.  Very much the opposite; I was concerned about losing muscle mass as well as fat, as my body sought things-internal to snack on.  (Although, over a comparatively short fast, that’s not too much of a danger.)

But I lost weight precipitously.  Mostly “water weight,” I’m told.  And all of us normally carry 2-3 pounds of poop or proto-poop in our gastrointestinal systems, and I’ve shed all that by now as well.

I find it nothing short of amazing that our physiology has an entire redundant backup power system that works just fine, and that most of us never “turn on” to see how it works.

But the numbers are still shocking:  On Sunday I was 179 pounds.  Today, six days later, I’m 166 pounds.  And dropping.

Walking down the street today, I noticed my pants sagging; my never-gave-it-much-thought posterior has diminished in size enough to matter functionally.

(Despite these dramatic changes, I’m told to expect an equally-fast regaining of the weight once I start feeding again.)

Odds and Ends

Prior to the fast, I thought that seeing food while I was fasting, and the unencumbered people eating it, would cause me rage, despair, envy, etc.

The truth is very much the opposite.  Seeing food, smelling it — rather than being torturous — is actually a real pleasure.  I’ve been seeking opportunities to sit side-saddle at my friends’ meals so I can waft their food-smells.  I find this way better than the monotony of total sensory abstinence.

The water fast has also changed my beliefs about farting.  I assumed that no eating meant no pooping and, relatedly, no farting.  The pooping part was true.  (Final poop, and a very small one: Day 3.)  But I farted no less than 3 hours ago, after five-and-a-half days.  A modest fart, but a fart nonetheless.  How (and why?) one’s body makes these things, I’m now perversely curious about.

Things I Missed Out On

My personal response to fasting is just a sliver of what I saw in our self-selected group of test monkeys.  Overall, I lucked out.  Some folks had a far rougher time than me.

Among the physiological and perceptual reactions we saw were:

  • Heat waves or flashes felt throughout the body.
  • Inability to fall asleep.
  • Inability to keep warm.
  • All-over aches and pains akin to the flu.
  • Heart palpitations.
  • And this rather poetic explanation: “My thoughts were like the clearest chaos ever. Do you ever get chaotic thoughts when you have a fever? It was like that, but I could focus. And no fever.”

I noticed my pants sagging; my never-gave-it-much-thought posterior has diminished in size enough to matter functionally.

Would I Recommend This?

I guess that’s the same as asking, “Will I do this again next year?”

For me, I think the answer is yes.

Next year’s fast will lack the wonder of a first-timer’s view of a new experience — a definite downside.  But I’m hoping that it will be psychologically easier.  And I may well do a few shorter fasts (1-3 days) throughout this year to keep my system acclimated to such deprivations.

If any study comes out refuting Dr. Seyfried’s idea that “deep ketosis” (that is, living off fat stores with flatlined, minimal blood sugar) may be an effective cancer prophylactic, I’ll certainly look at that carefully.  Seven days of deprivation is too much to incur just for the novelty.  But as of now, my feeling is:  Even if there’s just a 50% chance that this is effective, isn’t a week-long inconvenience once per year, for a 50% reduction in the odds of ever getting cancer, a pretty great gamble?

But for me right now, the main question on my mind isn’t “Will I do this next year?”…

It’s “What am I going to eat in 26 hours and 29 minutes?”*

* It is advised that food be re-introduced slowly to any stomach that’s gone food-free for more than 5 days. I may or may not heed this advice. Currently, being a good little boy in this regard feels unlikely.

This post is unrelated to the normal subjects discussed at Smart Drug Smarts.  It’s just something I wanted to write.  – Jesse

Aside from a few areas of interest peculiar to me, I don’t really follow the news closely most of the time.

Which isn’t to say that bad things are happening in the world and I prefer to hide from them; it’s more that I don’t trust or enjoy the way in which our world is typically reported.  The hybridization between slasher-movie shock spectacle and over-emoted OMG hyperbole doesn’t do much for my fandom of the species.

That said, I still get major news by way of cultural osmosis, and every now and then a story gets under my skin.

The Charlie Hebdo disaster this past week was such a story.

A ton has been said already and will continue to be said as this brutal story and its cops-and-bad-guys aftermath wind to their conclusion.

It’s terrible for all the reasons we already know intuitively — and that the 24-hour news media remind us like a terror metronome.

I, no less than anyone, am sickened by this crime and feel an obligatory sense of loss, despite the attack being half-a-world away on a small number of people I’d never heard of before.

But why does this story still chew on me after the initial “Damn, that’s awful” washed over me? It’s sad to say, but horrible things happen daily in our multi-billion-passenger world — from time immemorial, and likely for quite some time to come.

I puzzled about it, and I think my ongoing stomach-churn is a devil’s brew of revulsion both at the crime itself and the way it will be distilled in the mainstream discourse.

We’ll hear the grisly story of the slain journalists, and their killers.  The background of the comics that prompted the attack, the details of the firefight with the French police.  We’ll see the justifiable grief and outrage from mourners and protesters, and finally segue to the latest details of the manhunt for the killers…  And when that invariably concludes, to the masterminds and organizations who trained and fomented them.

Who, What, When, Where…

I remember, drilled into my head at some very early age, my father’s bullet-points for how a “news article” could and should be written.  Five Ws and an H.

Who, What, When, Where, Why, and How?

“Answer all these questions, Jesse, and you’ve got yourself an article.”  A clear sentence or two responding to each would almost certainly fill the two-thirds of a lined notebook page that my teacher would expect.  Mind my spelling and punctuation, don’t smudge the cursive, and an A could be mine – a B at worst.

I can only assume that our major news media’s fathers and mothers imbued them with similar how-to lessons.  And I hear the echoes of this pattern in the Charlie Hebdo news coverage.

Who were these killers?

What sort of people are they, that they could do this?

When and Where did they commit their crimes?

The problem is, it’s a fading echo that I hear, of my grade-school bullet-list.  The first few questions are answered loudly, repeatedly, all-consumingly…  But we barely approach the last couple.

Why did the killers think this act was the best thing they could spend their lives on?

How were they so profoundly misguided?

And How come this will inevitably happen again?  And again and again.

These Hows are the most interesting — and ultimately, the most consequential — of all of the questions we could pose.  And they’re barely addressed in the majority of the pre-digested “we did your thinking for you, and by the way, here’s your emotional response; no need to thank us” media.

If we’re lucky enough to get a How, it’s: “How can we prevent future attacks by Islamic extremists on journalists?”  A legitimate question, sure.  But legitimate in the same way that swatting a mosquito is a legitimate reflex, but of little result if you built your house in a subtropical swamp.

I’m on a plane as I type this, with nearly two hundred other people, all of whom endured the obnoxious ritual of having to de-shoe and be frisked by everything-detectors (our tax dollars paying for the privilege), in commemoration of an airplane-related event some 14 years ago.  This same proliferation of minor, maybe-effective-maybe-not inconveniences happens constantly, innumerable times, worldwide, daily.

Are we to now construct similar protective dikes around the political satire industry as we have around airlines?  And when the next disaster strikes, born of extremist insanity, will we mount ad hoc defenses on yet another element of society, then another, then another?

What We “Learn” From the Mainstream Coverage

It’s a sad irony that we’re mostly given cartoonish coverage of the deaths of these cartoonists.

The stories, maybe by necessity (since it’s what the public has been trained to hear), play like an Islamic extremism Mad-Lib.  Our TVs sparkle with scary mugshots, the killers’ nationalities, and repetitious details of the crime itself.

And if/when we finally “zoom out” to the broader implications, the topics are “Who else might be on the fatwa list?”  Or even “How can the West help the Islamic world reform and self-police its believers, without (gulp) offending anyone as it does so?” 

But mostly, it’s Who, What, When, Where, Rinse, Repeat.  Until the story flames out and the next news cycle begins.

Imagine for a moment that you are my friend, and you sit me down and tell me: “Jesse, I just met the most amazing woman, and I’m deeply in love.”  And I respond: “What’s her hair color?  Please show me several photos of her in provocative poses, and how big are her boobs?”

I would be completely, despicably missing the point.  Whatever has you deeply in love, her physical topography is a side note, at best.  What makes this particular woman amazing?

I submit to you that this absurd missing-of-the-point is what the mainstream Charlie Hebdo coverage is doing.

Like your theoretical love interest, there is something amazing about the main characters in this story — the killers.  It’s a bad amazing.  But it’s what really deserves pondering over, not the case’s superficial details.  Let’s leave that stuff to the French cops.  What society needs to think about, long and hard, is this tragic incident’s Whys and Hows.

Why were these people so threatened by cartoonists that they were willing to take lives (and very likely sacrifice their own) to “solve” a perceived problem?

How do human beings get so remarkably misguided?

I find the latter question absolutely fascinating. 

And to be fair, the mainstream always does address it — sort of — but the answers are as superficial as my interest in your new girlfriend’s hair color.

“The killers are misguided because fundamentalist Islam is a lousy religion.”

Yeah, okay.  I’ll give you that, but can we go a little deeper?

“They’re misguided because Islam, like it or not, is inherently violent and what we politely call ‘fundamentalism’ is shirking the fact that the whole religion is rotten to the core.”

(At this point, the news anchors generally bring on rival experts to debate the merits of this stance, and we never delve further.)

Or a militant atheist might say: “They’re misguided because of Islam, sure — but are they any more misguided than Christian bombers of abortion clinics, whose actions are a pretty close parallel to what happened in Paris?”

This is a fair point.

But I still feel that fundamentalism-bashing, Islam-bashing, even religion-bashing, are all kind of looking at symptoms rather than an underlying problem.

I try to put myself into the heads of these killers and ask “what would it take to make me think, out of all the things I could do with my life, that killing some cartoonists is my best next move?”

The Story of a Sickeningly Awful Overreaction

I think we can all agree that the victims at Charlie Hebdo were never going to up the ante beyond drawing cartoons.  Certainly their killers were under no direct threat — other than to be, I guess, laughingstocks among the Charlie Hebdo readership.

So why bring guns to a pencil-fight?

The answer to that question is the seed of all overreactions:  Wounded pride.

Every one of us has had our pride attacked, and we know what it feels like.  We’ve all been held up for ridicule.  But few, probably none of us, have emptied Kalashnikovs into our ridiculers.  I think there are two general reasons for this:

1)  Revenge wasn’t worth it to us.  Either the punishment of society, or our own self-flagellation for venturing into such black moral territory, kept us cowed into a lesser response.

Or, that failing:

2)  Despite our pride in our beliefs, we also had confidence in them.  We were so sure we were right, that someone else’s mockery just didn’t matter that much.  Complete certitude has a way of deflecting ego-jabs.  Copernicus got mocked plenty for his wacky “beliefs” about the arrangement of the solar system.  But his pride had the psychic safety-net of confidence backing it.

Mock me all you want, Copernicus could think to himself.  I’ve got an internally-consistent framework for my ideas, matching all the available evidence.  I’m as right as I can be, given my limitations.  And I’m certainly righter than you.

I know that some televised experts examining the Charlie Hebdo story will lean on Islamic doctrine, and say that (maybe) these killers were Quranic literalists, and were thus compelled to act in a certain way — that their own emotions were essentially irrelevant.

To this I say: hooey.

It’s our emotions, largely, that make our decisions for us — and if emotions didn’t compel us, no one would choose to be, for example, a Quranic literalist.  (People who make this choice certainly aren’t doing so on the strength of demonstrable physical evidence.)


I think what we have here — pathetically — is a group of killers with an abundance of pride, and a complete lack of confidence.

If they believed — really, really believed — in the truth of their religion, and that the cartoonists were misguided and idiotic heretics, would they have felt so threatened that they’d choose to sacrifice their own lives just to snuff out Charlie Hebdo?

I’m no psychologist.

And I’m not a religious scholar either.

But I am a guy who has been mocked, and has wanted to lash out.  And I know that my compulsion to anger is profoundly more powerful when I sort of suspect that what I’m being mocked for is actually kind of accurate.

You see what I’m getting at?

What if the killers’ almost-certain “martyrdom” (i.e. suicide-by-cop) is a psychic escape hatch to avoid confronting the profoundly unconvincing worldview their religion dictates?

I don’t think the so-called fundamentalists believed their own schtick.

European cartoonists mocked them, and the best they could come back with was “Shut up, or I’ll hurt you.”

[Mockery continues…]

“Okay, hurting you now.”

While this kind of response is emotionally satisfying in a really juvenile way, it’s never intellectually persuasive.

Not even to the person committing the violence.

We’ve all been kids.  We’ve all been in arguments where the best comeback we could muster was “shut up.”  And we all know, when “shut up” is all we’re left with, that we’ve lost the argument — unless we choose a weapon other than words, and take it to blows.

The horror at the root of this story isn’t that the Parisian killers “took it to blows.”

It’s that adult human beings, even today, live trapped in world-views so unconvincing that when challenged, the only retort they’ve got is “shut up.”  Violence like we saw this past week is the ugliest, and most newsworthy, response to this sort of anger — but anger isn’t even a necessary emotion when a person’s beliefs are backed by true confidence.

We all owe it to ourselves, and our fellow humans, to constantly challenge, and re-challenge, our beliefs.  Ideas that cannot withstand scrutiny — indeed, that cannot withstand open satire and mockery — do not deserve to be embraced.

We should fear and distrust any institutions — religious, cultural, or otherwise — that try to insulate or exempt themselves from the public acid-test of humor.

Failure to do so will lead to more Charlie Hebdos, of a greater or lesser degree, again and again and again.

We don’t let blind people drive cars. Or people who are bad at math program missile guidance systems.

If we did, it’d be “interesting” to see what happened.

But interesting in the “consequential, and most likely disastrous” sense. One that few of us would opt for.

Luckily, we’ve each got a little internal critic – a Jiminy Cricket of Logic, if you will – who aggressively pulls the e-brake on ideas that don’t measure up to his standards of logical prudence. No matter how interesting those ideas might be.

This probably saves our skins numerous times daily. These illogical ideas need someone whack-a-mole-ing them.

There is an exception, though: That class of ideas that is both decoupled from logic, and utterly inconsequential – since they can’t inflict themselves on the real world.

I’m talking about dreams.

Specifically, I’m talking about hypnagogic imagery – those dream-like images and ideas that flit through our minds as we down-spin from consciousness into sleep.

They’re like the thought equivalents of a Rube Goldberg Machine – where a normal idea as you lie down to sleep reminds you of something else… that was sort of like that time… when that song was playing… and what was that one lyric? It always made you think of…

And then you’re asleep.

You’ll probably recall hypnagogic imagery from your own life, because sometimes when you’re in this state, something will jolt you awake – a noise, an errant cat – and after you’ve got your bearings, you’ll realize that moments before, your mind was filled with absolute nonsense.

This past year, I’ve made a habit of trying to harvest ideas from this nonsense.

A Silver Lining on the Sleep You Don’t Get

I’m going to play my “I’m Not A Doctor” card and write something that if I were a medical professional, would be downright irresponsible.

There are certain upsides to being sleep-deprived.

Yeah, it’s not a popular position, and I know the counterarguments. They are logical, weighty, and relevant. But the Rationalization Engine that is my mind thinks of it this way:

If circumstances are going to sleep-deprive me anyway, can I find a hidden upside?

And I do, in the form of creativity.

Allow me to elaborate.

In an earlier phase of my life, I did a lot of creative writing. I was one of “those people.” And I found, consistently, that I did my best first-draft writing late at night.

Editing was a different story.  At night I didn’t have the logical cojones to hold a double-handful of plot-threads and character arcs and prose flourishes all at once. My logical side, Editor-Jesse, was pretty good at that stuff. But he kept strict 9-to-5 hours. That guy wouldn’t stay up late.

But the other half of my writer-self, Creative-Jesse, could stay up until the wee hours. And though his command of logic was marginal, he came up with interesting ideas. It took time for him to articulate them, though. The initial burp of an idea-in-progress would sound ridiculous, headed straight for the waste-paper basket. But if he had some time to gum around with it, there was sometimes a worthwhile kernel in there.

(Think of how ugly babies are when they’re first born – but after a week they’re pretty darn cute. Am I wrong?)

Here’s the thing: If Editor-Jesse had the physiological staying-power of Creative-Jesse, and had been present for the late-night sessions, he would have stopped Creative-Jesse’s newborn ideas dead in their tracks.  In fact, this is what happened when I tried writing during the day. I could do it procedurally, but I never got the never-saw-that-one-coming creative breakthroughs that would happen at night.

Your Train of Thought, Minus the Rails

When we’re awake, we’re always thinking about something.  (Except you prodigious meditators, I know, but bear with me here.) We talk about our “Train of Thought” – the one that we lose when we try to remember what we were just talking about, and realize it has jumped the tracks.

All through the day, the Train chugs along… And at night when we sleep, it dissolves into the black of unconsciousness.

But this doesn’t quite happen all at once. The Rails of Logic disintegrate before the Train itself disappears. And unbounded by logic, the train can careen to some interesting destinations in those pre-sleep moments.

We’ve all slept before, so I’ll assume you’re familiar with this. (If you haven’t slept, stop reading right now and try it. You’ll like it.)

While asleep, the brain behaves quite differently.  The motor cortex goes on standby, so you don’t physically act out your dreams. And your hippocampus shuts off your long-term memory’s writing systems, so you don’t remember your dreams, either.¹

Thus, we rarely remember the Train of Thought’s itinerary during these off-the-rails, logic-free detours as we enter into a night’s sleep.

Which is a pity – because, unconstrained by logic, I’m at my creative best.

So I’ve been developing a little hack to capture this hypnagogic creativity-uptick…

I’ve become an expert napper.

Naps, You See, Are Hypnagogic Prime-Time

As opposed to night-time sleep, when you generally have to move through all the major sleep-phases before getting to REM (dream-state)… naps can short-cut you in straightaway. (There’s science behind this, but I won’t go into it here.)

I find that the shorter-duration, less-deep sleep of naps makes me better able to remember the content of my dreams, as well as pre-sleep hypnagogic imagery.

But for usefulness, it’s the hypnagogic imagery, not the dreams, where I get the real value.

By the time I’m dreaming, there is no conductor on the Train. The chances that my dreams will be applicable to anything in the real world are next-to-nil. But the hypnagogic state still has vestiges of whatever I was thinking about when I lay down to nap… So if I’m conscientious about it, I can “seed” my hypnagogia with the ideas I want to explore.

Since I’ve gotten good at this, I’ve started taking two, sometimes three naps a day.

I find my hit-rate on half-decent ideas is not bad – maybe something legitimately useful that I might not have otherwise come up with, one day out of two.

What Are Some Examples?

  • I’ve thought of solutions to business problems.
  • I’ve thought of approaches to difficult conversations.
  • I’ve thought of catchy names for boring things I needed catchy names for.
  • I haven’t cured cancer yet, but I’ve only been doing this systematically for about six months.

(And, to be fair, I rarely think about cancer as I’m taking my naps.)

Was I Going to Put a Pro-Sleep-Deprivation Spin on This?

Yes, I was. Here it goes:

I know that many people complain “I can’t nap.”

To which I reply: “If you cut your night-time sleep short, you’ll find daytime napping a heck of a lot easier.” And a few naps throughout the day are great for your alertness and your neurochemistry – whether you’re sleep-deprived or not.

A Hypnagogic How-To

Even if you’re not a fan of sleep deprivation (and I can’t blame you), the fruits of hypnagogia can still be yours.

Here are some shortcuts:

  1. Use the Nap Pose.  My ability to nap has been revolutionized since I discovered the “Nap Pose.” Flat on your back, arms just a little out from your sides, palms facing inward or downward.  If you feel the urge to switch positions, don’t.  Stick with it. The idea is to give your brain time to settle into hypnagogia.  Don’t make it about your body and “trying to find a comfortable position.” Just watch the show going on in your head. Keep watching. It’ll get interesting.
  2. Put a dark blanket over your eyes. Don’t cover your mouth or nose – breathing should stay easy. But black out your vision. Not a wimpy blanket, a thick blanket. Keeping stray photons from penetrating your eyelids will make getting sleepy much easier. It also, I believe, makes the hallucinatory images stronger, since they’re not competing with real signals from your optic nerves.
  3. Look deep into the Nothing… I’m going to sound so damned Californian as I write this; please forgive me.  But remember being a kid and lying on your back, looking up at the clouds, trying to decide which ones looked like what animals, or whatever?  For me, a sure-fire way into hypnagogia is to do the same thing with the blackness in front of my closed eyes.  Look for the imperfections in the blackness.  What does that look like…?  What does that remind you of…?  Stir it a little bit, try to add an element, be a movie director…  I find that focusing on my sense of vision, while simultaneously depriving myself of visual input, is a great way of forcing entry into this state.  (This is similar to the idea behind a sensory deprivation tank, minus the auditory and tactile deprivation.)

Currently, I’m only a one-man study.  But I’m guessing that the combination of the above techniques will work for most people, to achieve a harvestable hypnagogic slideshow on a regular basis.

Needless to say, sometimes your hypnagogia will reveal to you nothing of consequence.

Sometimes it might be good, but you’ll forget it anyway.  After all, you’re falling asleep.

But remember, the little logical-you falls asleep first.

That pesky, dogmatic internal critic – who censors ideas he deems unfit for the waking world – falls asleep faster than your creative self.  And if you can catch him napping, you just might be able to smuggle some creative genius across the borders of sleep, back into reality.

¹ Wouldn’t it be cool to be able to disable this feature in your brain’s Settings panel?

There’s a word I’ve always felt to be missing from the English language.  It’s a bizarre omission, I think — because if it existed, it would describe pretty much my favorite thing.

You know that “pins and needles” feeling when something really excites you?  Your skin feels electric, like the hairs on your legs and arms are little lightning-rods, pulling in energy from the air around you.

It moves in a slow wave, tingling up your body toward your head and neck.

You can feel the blood in your face.  Maybe you feel compelled to take a breath — like a slow-motion gasp, in recognition of whatever triggered the sensation.

Why is there not a word for this?

The closest thing English offers is the phrase “a sense of wonder” — but that seems too abstract, and misses the very physical, very transient nature of what I’m talking about.

Despite its namelessness, this one sensation is the biochemical carrot that keeps this particular rabbit running.

A Chemical Harpoon

Something is going on for me physically, internally, when I feel this.  I’m sure dopamine — the brain’s “reward” neurotransmitter — is involved, because the sensation is both pleasurable and an automatic inducement to feel it again.

In fact, at the risk of venturing into PG-13 territory, I think there’s a strong analogy between this sensation and an orgasm.  If an orgasm is the physical endgame of sexual arousal, the sensation-of-which-I-speak is the physical culmination of intellectual arousal.  When a sense of understanding gets so big it spills out of your brain into your body’s epidermal nerve endings… well, that’s my layman’s-science sense of what’s going on here.

For me, it’s triggered in a few ways…

  • Sometimes by really “epic” music, almost always timed to some audible crescendo.
  • Sometimes through emotional voyeurism, when I see someone experiencing an emotion, and for a moment I empathize so clearly that the walls of identity between us become semi-permeable.  (“If I were you, I’d feel that way too”; in a profound way.)
  • And lastly, by dawning comprehension.  A feeling not of “I know,” but of “Now I know.  I have figured it out.”

It’s the third of these that feels like the heart of the matter; maybe the first two are just specific instances of this general case.  So — so that we have something to call it — I’m going to call this feeling PSoE: the Physical Sensation of Epiphany.  (If you’ve got a better name, email me.)

The Relentless Search for Novelty

Let’s divert for a moment with a hard-drugs parallel and some thoughts about novelty — then we’ll tie all this back to nootropics.

I’ve never been a heroin user, and I certainly hope you haven’t been one either.  But I’ve seen enough movies to be familiar with the idea that long-term junkies are always “chasing their first high,” and the only way to re-approach their initial euphoria is to up their dosage.  (In heroin’s case, with predictably disastrous consequences.)

So it is with many forms of pleasure.  A measure of novelty — either a new experience, or a new amount of a familiar experience — is required just to maintain a flat-lined level of joy.  Somewhat frustrating, until you think about how adaptive this is.

Think of the joy a little baby feels when he masters the pronunciation of his first syllable, or a toddler feels when he’s able to consistently not crap himself.  If those levels of self-satisfaction didn’t quickly subside, there wouldn’t be much motivation to move past not-crapping-one’s-self to, say, discovering the Laws of Thermodynamics.

So, luckily for the pragmatists among us, good feelings don’t last.  They fade, they fade fast, and they challenge us to re-earn them.  Continually.

So for me, as I chase my drug-of-choice — PSoE — I’m distinctly aware that I won’t feel it by merely revisiting my old Calculus homework and tinkering with stuff that I’ve already figured out.  It’s the figuring it out that brings the rush, not the knowing it.

And this forces me to put myself into circumstances that are conceptually challenging.  As often as I can.  Day in, day out.  Because this is the forest where my game is hunted.

I’m pretty disciplined about this.  My life is no smoothly-oiled machine — not by a long shot — but one thing I will say for it is this: I have effectively banished boredom.  I’m never bored.  I’ve spent maybe 10-15 hours bored in the past two decades.

This is largely by design.

At a fairly young age, I realized I dislike boredom more than misery.  (And believe me, I’m no fan of misery.)  PSoE had something to do with this.  I could feel PSoE when empathizing with loss — with an epic, shattering defeat.  (Think William Wallace being drawn and quartered at the end of Braveheart.)  But I’d never felt even a twinge of PSoE while empathizing with boredom.

Boredom is essentially un-empathizable.  You can’t share the emotion because boredom is a lack of emotion resulting from a lack of engagement.  And epiphany, realization, dawning understanding — whatever you want to call it that sits at the root of PSoE — it comes only in times of intellectual engagement.

So engagement — to put it in math-nerd terms — is a necessary-but-not-sufficient condition.  You may be fully engaged and not have an epiphany.  But you’re never going to have an epiphany if you’re shirking intellectual engagement.

And Finally, Nootropics

Nootropics help me to maximize my time spent in full intellectual engagement.

If PSoE is the game I’m hunting, and intellectual engagement is the forest where I hunt, then nootropics are a predictable shortcut deeper into the heart of that forest.

I like the focus I feel when I strap on noise-canceling headphones, crank up music with lyrics in a language I don’t speak, drink a black coffee, pop 100mg of Modafinil, and immerse myself in something.  This ritual is like a prizefighter lacing up his gloves, or a concert pianist cracking his knuckles — an intersection between the brass-tacks physicality of the discipline, and a Pavlovian trigger that here we go again, this is what we train for.

Many days I’ll work without the rush of PSoE ever coming.  And that’s okay, because I know it’s a now-and-again thing.

Many days I’ll work without nootropics in my system.  And that’s necessary.  I don’t want to build up a tolerance to my favorite substances any more than a samurai wants to let his katana-blade get rusty.

Many days Mozart would sit down at the piano and compose something unremarkable, and toss it out by mid-afternoon.  This is part of the discipline, maybe the largest part.

But sometimes, a new domino will fall.

Sometimes a hard problem will be solved, and we’ll understand why.

Sometimes the universe will de-riddle itself, just a little.

And when it does, the hairs will go electric.  The blood will rise in our cheeks. Pupils will dilate.  Dopamine will release.  The intermittent, unpredictable burst of pleasure will strengthen our addiction.

And the hunt for the next PSoE rush will have already begun.

I live in a house with a flushing toilet.

This does not amaze me, and it probably does not amaze you.  It does, however, amaze my cat.  To my cat, a flushing toilet (we have three!) is about the most ceaselessly amazing thing in the universe.

At the risk of seeming like a bad pet owner, I admit the following:  It’s sometimes a struggle, while peeing, to keep my cat from sticking her head into the urine stream so she can get a better look at the soon-to-flush toilet bowl.

Funny anecdote – cats, bodily functions, etc. – but what’s my point?  The point is: My cat is stupid.

She hasn’t caught on to how a toilet works, and she’s in no danger of doing so.  I could teach her that the silver handle makes the toilet-flush happen, but even if she memorized that relationship in a Pavlovian sort of way, she’d never really “get it.”  She’ll never have an ah ha! moment and recognize what that floating ball in the water tank is for, or what the chain attached to the handle does, or any of that “complicated toilet stuff.”

To a certain extent, we are all my cat – how many of us could explain a transistor, or a six-cylinder engine, or tell you the most efficient algorithm for a 2-elevator building to keep people on different floors from waiting any longer than is necessary?  Yet these things are all around us, and we use them every day.  We’re vaguely aware that we owe a lot to prior geniuses within our species, but basically we just expect stuff to work.

And yet, on the other hand…  We are not at all like my cat.  With proper instruction and some intellectual effort, you could figure out my toilet.  You could learn how a transistor works, or an engine.  Excepting those people with real shortcomings, be they genetic, nutritional, or maybe due to some brain trauma, most Homo Sapiens are capable of figuring out these complicated-but-not-intractable systems.

So while you might not exactly understand how your toaster works, you’re not threatened by this. Because you can honestly tell yourself, “Hey, if I ever put my mind to it, intellectual mastery of my toaster is my biological birthright.”  And off you proudly go.

But I’m nervous about a future where this will no longer be true.

It used to be, back in the Enlightenment, if you wanted to be a world-level expert on most realms of human knowledge, it’d take some effort and access to the best books then available, and probably a few years of time…  But with those ingredients, you could essentially know everything there was to know about a broad subject.  Like, for example, “Biology.”

This clearly isn’t the case any more.  The envelope of human knowledge has radically expanded in the past few centuries.   No one with any sense claims to be a world-class expert on a domain like “Biology” or “Engineering.”  Our species’ best are experts within domains now, not on domains.   A career can easily be spent just developing incomplete understanding of a single molecule — like, for example, nicotine.

(Quoth the Enlightenment-Era biologist: “What’s a molecule?”)

Of course, nature left us ill-equipped for studying distant pulsars or tiny microbes or weather-system models with our built-in tool set.  We needed telescopes and microscopes and computer mainframes first.  One could argue that we needed caffeine first.

Advancing technology has always been necessary to push forward our understanding of the world.  Without it, we’d be in the predicament of my cat — caught under a low biological ceiling limiting comprehension of our everyday environment.

Nowadays, the tools available to intellectual envelope-pushers aren’t just tools of enhanced perception (telescopes, microscopes, etc.), they’re tools of expanded cognition.  From a spreadsheet auto-calculating results tens of thousands of times faster than any human, to a biochemical booster-shot like caffeine, nicotine, or a Racetam, thinking tools are helping discoverers get further and further beyond old-school biological constraints on understanding.

This makes the things they're learning not only profound — but sometimes profoundly counterintuitive.

Luckily, metaphor and analogy help us out a lot here.  “This is like that.”   I explain SMS messaging to my dad as “It’s like email, only on your phone.”  Someone in the 1980s could have explained a PC as “It’s like an electric typewriter that allows you to edit words before they’re actually typed.”  These sorts of gross oversimplifications allow the cerebral superstars who actually figure things out to bring back intellectual meat for the rest of the tribe, and cook it up in a way we can digest.

But now, as boundary knowledge is increasingly generated not just by smart people, but by smart people amplified by thinking technologies, the things they figure out are going to be tougher and tougher to wrap mere-human brains around.

Relativity is a great example.  It’s a fantastically complex realization Einstein had, and in 1905, armed with only a pen and a notebook, he changed the bedrock of our understanding of the universe.  Yet now, over 100 years later, most humans can’t explain what relativity “means” in any more than an obscene, cartoonish bastardization of Einstein’s idea.

In other words, while Homo Sapiens can generally lay claim to intellectual mastery over their toasters… relativity is a different story.

With Einstein-level insights, the majority of us are more like my cat with the toilet:  Permanently baffled.

Here’s the good news/bad news:  We’re going to be seeing an upswing of relativity-like discoveries.  But the far edges of human knowledge are getting so fantastically complex that it will no longer just be a matter of instruction and initiation to be able to follow along; a person will have to be pretty damned smart just to understand the dumbed-down analogy.

This, in large part, is why I’m interested in smartening technologies — smart drugs, brain stimulators, man/machine interfaces.  I want to maintain an intellectual foothold in the world being built around us.

I want to maintain my relationship with my toilet.  I never want to have my cat’s relationship with my toilet.

Fair Warning:  This is not a neuroscience article.

But I’m assuming that those of us who are into nootropics and brain optimization are probably interested because we want to maximize our least renewable resource: time.

So I wanted to share something I’m doing this year — my New Year’s Resolution of sorts — both because it will probably have an impact on Smart Drug Smarts, and because the idea itself may have value to other people.

I call them “Barbell Weeks,” and I’m gambling approximately 25% of my 2014 on them.

Last year, I read Antifragile, a book by Nassim Nicholas Taleb that’s too idea-rich to easily synopsize.  To grossly oversimplify things: it explores how some systems contain feedback loops that make disorder and unpredictability a good thing rather than a bad or neutral thing.

One of several terms Taleb introduces in his book is “barbell” upsides and downsides, derived from the idea of looking at a graph where results are flat-lined in middle-ground likelihoods, then shoot up on a nearly-straight trajectory at some point on the x-axis — much like a barbell if viewed head-on.  Taleb, a former derivatives trader who got wealthy following his own advice, says to look for “barbell upside” opportunities — situations where the worst-case is small-to-middling, but the best-case goes straight up.

(His thesis is: On a long enough time-frame, or with enough total repetitions or chances, you will eventually land on the outlandishly-winning end of this curve, while the other, far more common areas of the curve won’t cost you that much as you land on them for months, years, or decades.)
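Taleb’s claim above — that many capped small losses plus a rare outsized win can net out positive over enough repetitions — can be sketched as a tiny simulation.  (The probabilities and payoffs here are made-up numbers purely for illustration, not anything from Taleb’s book.)

```python
import random

def barbell_trials(n_weeks, p_win=0.02, win_payoff=200, loss_cost=1, seed=42):
    """Simulate n_weeks of independent 'barbell' bets.

    Each week risks a small, capped loss (loss_cost) for a tiny
    chance (p_win) of an outsized payoff (win_payoff).  Returns the
    cumulative result across all weeks.
    """
    rng = random.Random(seed)  # seeded so results are repeatable
    total = 0
    for _ in range(n_weeks):
        if rng.random() < p_win:
            total += win_payoff   # the rare, outlandishly-winning end of the curve
        else:
            total -= loss_cost    # the common, acceptable-loss end
    return total
```

With these illustrative numbers, the expected value per week is 0.02 × 200 − 0.98 × 1 ≈ +3: each individual week is almost certainly a small loss, yet the long-run total drifts upward — which is the whole barbell argument in miniature.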

Needless to say, not all opportunities have barbell upsides.

But here’s one that I think might: my plan for…

Barbell Weeks: The Recipe

  • One Week per Month — a full 7-day stretch — I will sweep my normal day-to-day obligations aside and force them to sit and wait.  A full clearing of both the mental and physical desktop.
  • Dedicated Single-Project Focus.  I’ll be working on something, to the exclusion of everything else, for all 7 days.  The “something” I’m not putting any particular parameters on — in some cases it may even be a traditional vacation — but only one something per Barbell Week.  No split-focus, and any multi-tasking will be multiple tasks within the overall project.
  • If Something Isn’t Worth A Week, It’s A “No.”   There are 12 months in a year, so I’ll have 12 chances to say “yes.”  But aside from that, I’ll be Mr. No.  This is where I expect major benefits in the rest of my life (the non-Barbell weeks).  I tend to get distracted by a lot of mini-projects that nibble up an hour here, 3-4 hours there.  In 2014, I’ll be saying no to these things.  If an endeavor doesn’t merit a Barbell Week commitment, it’s not weaseling into my calendar.  Period.

What does any of this have to do with Taleb’s “barbell” lingo?  Here’s my thinking: Most of these weeks will probably have no long-term upside for me.  I’ll be finger-painting or learning Esperanto or something that will seem inane or ill-advised.  But in all cases, I’ll have only lost a week, and probably gotten a nice recharge by shaking off my normal daily schedule.  A week, by my reckoning, is a low-impact, acceptable loss on the “bad end” of Taleb’s results-graph.

But if even one of these weeks yields a project that is a real success (in business, learning, adventure, or whatever), the potential upside is essentially unbounded.  Thus, the “barbell” name.

My first Barbell Week will land at the end of January, and the project focus is going to be a tech start-up gambit that I can’t reveal just yet.  But the name includes “jetpack,” so that’s fun.

Also, a public hat-tip to my friend Marcus, whose offhand statement “You should take a week off every now and then” spurred this whole idea.

Finally, if you read this and get inspired to implement Barbell Weeks yourself, let me know.  I’d be curious to hear what other people would devote a week to.  And maybe we could set up some sort of post-board or forum to watch for the low-frequency, high-upside barbell win that, on a long enough timeline, one of us is sure to get.



PS:  Negative barbell graphs are possible too, and thanks to Murphy’s Law, more common.  So watch out for those.

2013 was the first full year of Smart Drug Smarts.

The podcast toddled, I think it’s fair to say.  In 2014, I intend for it to walk.  And by 2015, hopefully it will be entirely out of diapers and entering its first spelling bees, but that’s looking too far ahead.

Anyhow, being that it’s the end of the year now (2013), I’m feeling, as many of us are, in a “looking back / looking ahead” sort of mood.  I’m also away from family during this year’s holidays, which gives me a rare bushel of free time.

So what can you expect from Smart Drug Smarts in 2014?

I might as well hang some things out there to hold myself to, both as an accountability metric, and so listeners and readers (that means you!) can chime in if I’m way off the mark on what I think might be valuable for you.  The first one is obvious to any repeat-listener: I need to get on a more consistent production schedule.  The podcast was always intended to be weekly, and in 2014 I’ll be redoubling my efforts to make that happen.  But that’s not all…

Nootropic Stack Data Collection + Sharing

Who’s taking what, how much, and how often?  This is a big question on a lot of minds, including mine.  I’m guessing the answers are going to vary really, really widely, but that’s just based on my informal straw polls.  I’m excited to get some online data-collection happening in the really near future.  As in, it’s being coded even as you read these words.

A Quickie Guide to Nootropics

What you can expect from different smart drugs as far as…

  1. Subjective Experience
  2. Evidence-Backed Effectiveness
  3. Safety
  4. Time Period of Effects
  5. Ballpark Cost-per-Dosage

Putting a simple resource like this together on the website is something that’s long overdue.

An App

Yep, an app.  In my life outside Smart Drug Smarts, I’m a software developer, and why the heck haven’t I made an app for Smart Drug Smarts yet?

No good reason, that’s why.

I’ll be changing that soon.  Probably nothing too fancy, but a way to make sure you’ve always got the newest podcast episode in your hip-pocket, plus a way to manage your nootropic stack and what you’re taking on what days, for those of you who like to cycle your supplements.  This will be iOS first, Android second, because I’m an Apple guy, so apologies to the ‘droid fans out there.

More Awesome Guests, on a More Regular Schedule

So why can I now predict this with conviction when I’ve tried and failed at this for a year now?

One very good reason, and his name is Ben.

As of about 3 weeks ago, I’ve got a bona fide producer running things on the show: Ben Pomeroy (a man whose nootropic credentials include accidentally hyper-dosing himself on Modafinil).  Ben will be the engine behind the vast majority of the changes and improvements you see in Smart Drug Smarts over the next 3-6 months, and he’s already been directly responsible for the uptick in publication over the past couple weeks.  So far as that goes, if you have any requests for episode content, drop an email directly to Ben and cut out the now-extraneous middle man (me).

More Futurism

I always thought I was into sci-fi.  Well, that’s true, I am, but I think I’m really just into what-if scenarios, and the future is the ultimate what-if.  Not to mention, some version of it (whether we see it coming or not) is real.  One of my favorite episodes thus far, though it touched only lightly on smart drugs, was with philosopher David Pearce on transhumanism and really broad “where are we going as a species” topics.  It’s something I find incredibly intriguing because inevitably, something amazing is going to happen — good, bad, or just weird — and there are too many variables involved to do much more than speculate.  Anyway, expect more episodes with themes along these lines.

That’s five things already, probably enough to stop before my typing fingers start writing checks that my schedule can’t cash.  But rest assured that Smart Drug Smarts is moving way up the priority-pole around here, and I really thank all of you who have been listening and participating up until this point.

This nerd party is just getting started.

Happy Preemptive New Year.
