In Episode #43, we speak to Dr. Nicole A. Vincent and Dr. Emma A. Jane, co-authors of the article “Put Down the Smart Drugs,” discussing the ethics of cognitive enhancement. Jesse weighs in with his own views on the speculative rights and wrongs of nootropics use, and the friendly debate highlights important issues for us to consider in our ever-changing environment.
Key Terms Mentioned
- The Conversation: “Put Down the Smart Drugs”
- This Week in Neuroscience: The Secret to Staying Young Forever?
- Cognitive Enhancement Conference – Delft, The Netherlands
Dr. Jane: Your introduction of yourself as someone who makes decisions as an individual, using particular medications or interventions without necessarily thinking about the broader ramifications of many, many individual decisions, is exactly the kind of point we were trying to make in this article. While it may seem like a really good idea to take a particular cognitive enhancement medication on an individual basis, what we are trying to do is direct attention to some of the consequences that could flow from many, many of those individual decisions: consequences that, as individuals, you may not readily arrive at in your own thinking.
Jesse: The second- and third-order consequences are things we arrive at incrementally. But any decision people make can have long-term ramifications if you scale that decision up to the societal level. That kind of thing happens all the time.
Dr. Vincent: It can, but here's one of the differences. One of the features we're trying to draw attention to is that it's not just that the choices we make right now may have long-term consequences, but that they may change the kind of decisions that will be rational for us to make in the future. So for instance, suppose all of us engage in this individual taking of smart drugs to increase our productivity, right? Because we want to make sure that we do better, become more competitive. What that does is change the future environment in which we then have to make future decisions. So for instance, it is no longer the case that in the future I'll make exactly this decision, because the environment will have changed. Right? Everybody else is also doing this, so that just ups the ante. The effect that our current decisions have on the kind of environment in which we may later have to make decisions, and the fact that certain decisions we would have liked to make in the future may now seem irrational simply because we've changed the environment - that's the feature we're trying to highlight.
Jesse: Couldn't we make the same argument, though, about something like public education? I mean, by making attendance at primary school or university a standard part of most cultures, we've certainly changed the intellectual environment in which people are expected to make decisions and choose their own education options. I don't think many people view that as a problem. I mean, would it be better to go back to a time when everybody was raised in a log cabin by their parents, based on whatever their parents knew?
Dr. Jane: Log cabins are great. They're very educational.
Jesse: Nothing against log cabins.
Dr. Jane: It's not to say that change in itself is necessarily bad. Just to go back to an earlier point: if you're using smart drugs to get an advantage and you're not necessarily interested in the kind of big-picture ethical concerns that, say, I have - right now, with lots of people not taking them, you have the option to take one of these medications and you might gain an advantage. But further down the track, if, as we're speculating, the use of such medications and interventions becomes the new normal, then you no longer have the option of taking them to get ahead. You'll need to be taking them just to keep up. So even if you don't buy into the ethical concerns that we have, we're looking at a changing landscape that would really affect the rationale for an individual's use of these medications and techniques, as well as the end result of taking them.
Dr. Vincent: Let me just add another point. The way you phrased the question makes it a question about what the difference is: we already accept public education, and it's not like we want to go back to the log cabin; we would want to keep public education going. I think that's the wrong way to approach the problem we're trying to gesture at, Jesse. Think of it in terms of coffee. Somebody recently said to me, "But look, we already use coffee, so what's the big deal? Surely there's no great distinction here. If this stuff is more effective than coffee and it has fewer side effects, what are you on about?" And I think the real question is: do we want to end up in a certain kind of society? Which is the thing that Emma is again drawing attention to. It's not that there is necessarily going to be a huge difference between particular kinds of enhancements, but there may be differences between particular kinds of societies, and particular kinds of relations that we end up creating for ourselves. So at the individual level, do I necessarily think there's a difference between coffee and enhancers? Well, no. But I do find it a little bit coercive that in the morning I just do have to have my coffee in order to do the job that I do.
Jesse: But nobody is enforcing that; it's coming from your own feeling that you need to have some coffee. Right now there's no employer giving you a breathalyzer test to ensure that you've had your coffee in the morning, or anything like that.
Dr. Jane: I've always passed that particular test with flying colors.
Dr. Vincent: Let me use another example: painkillers, and also medications for cold and flu. There is this medication called Codral cold and flu tablets. It's a combination of paracetamol (acetaminophen), codeine and pseudoephedrine. Now, somewhat unsurprisingly, when people take this medication when they have the flu, they suddenly feel fine. As the good old commercial used to go, "soldier on with Codral." What was the idea there? Well, you too can go to work; it doesn't matter that you've got a cold. It doesn't matter that in a past life you would've just lain down for a couple of days and gotten better. What you can do, and actually what you ought to do, because this is what everyone does, right, is take this particular painkiller or this particular medication. Now, the very fact that this was sung to a happy little jingle - "soldier on with Codral," with images of people who wake up feeling awful but who, because of this medication, can now do "the responsible thing" - to me, that sounds like an irresponsible thing to be promoting. I mean, do you see this or not?
Jesse: I can see arguments on both sides. It depends on the extent of their responsibilities. If it's somebody who needs to get up and take care of their children because there's no one else to do it, or, you know, it's the Cuban missile crisis and JFK has to go to work whether he has a cold or not - there are instances where soldiering on might be exactly the right thing to do. But I think that's very much a case-by-case basis.
Dr. Vincent: Absolutely. The important thing to draw attention to is what happens if in every case I become expected to take Codral. So for instance the garden-variety "No, Nicole, just go into work" - it's not that something particularly important is at stake, but the boss has come to expect, maybe not formally but informally, that if you've got a cold you'll pop this tablet. I mean, come on, there are these medications, so why the hell aren't you taking them? We should be able to distinguish the cases where it is actually important to take them - the precious, important occasions that we would like to save these medications for - rather than it just becoming an expectation that in every context you'll do this.
Jesse: Could you give me an example we can see in the real world now of explicit or implicit coercion by society to do something like this? Not smart drugs, as we all agree that problem is not there yet, but a similar analogy we can see in our present world?
Dr. Jane: Nicole perhaps we can speak about the health document that we looked at for the article?
Dr. Vincent: Yes.
Dr. Jane: Well, coercion is a very strong word, and I think it might be worth unpacking our use of the word 'coercion', because we're not talking about a big bad employer standing over their employees with a big stick saying you must take this. For instance, we looked at some guidelines in a department of health document from one of the Australian states here, and it was encouraging health professionals who were doing very long night shifts to take really quite large amounts of caffeine via coffee. Once again, I don't know if everyone would say that is coercion, but it is certainly pressure to take a substance in a way that makes them better placed to stay awake during their shift, and it may not be the best thing for them individually or the best thing for humane workplace cultures.
Jesse: I can think of so many examples, though, that everybody would think are completely reasonable, where certain jobs make physical requirements of their employees. If you're going to be a police officer, at least when you go through your initial training, you're expected to be in pretty good shape. If you're a fireman, same thing. I'm not sure about the exact legality of this, or if what I'm about to say is true, but one of the things they say is that Modafinil, which is a smart drug a lot of people are aware of, is used by the U.S. military for pilots doing very long flight missions. I'm not sure if they're required per se to take Modafinil, but I know it's pretty much expected of them. And if you're going to be flying an airplane for 20 hours or so, that's a fairly reasonable job expectation. How would you feel about things like that?
Dr. Jane: But you see, these things are unfolding together. One of my concerns is about how we decide what is a reasonable thing for a pilot to do. If those decisions are being made alongside an increasing use of drugs that can keep people awake for longer, it will become more reasonable to say, "Hey, the average flight we required you to stay awake for was 8 hours, but let's make it 16, or maybe 32." These expectations about what regular humans are capable of doing are going to change as these drugs are being used.
Dr. Vincent: An important point here is to ask whether or not there exists a better social arrangement. Of course, if the situation is such that we're constrained and we only have this one pilot, and either they can fly this whole long stretch unaided or they can aid themselves and make sure that what we have is a safe flight, then of course take the Modafinil. I can see the arguments for it. But what's important is to step back and ask why we are allowing this kind of situation in the first place, where what we have to do is tell the pilot to take Modafinil, rather than saying, from an industrial relations perspective, that what's important is maybe to put on two pilots. Telling the pilot to take Modafinil may not be the best solution.
Dr. Jane: That would be a radical move, wouldn't it? Employ enough regular humans to do the job unaided.
Jesse: We've got enough regular humans to go around.
Dr. Vincent: This is the kind of thinking that we would like to foster: to step back from a situation and recognize that there may be a very legitimate place in society for Modafinil, for other drugs, or for transcranial direct current stimulation, but to think first about whether we're simply legitimizing exploitative work practices in an exploitative industrial relations setup rather than promoting a better life, a better way of human flourishing.
Jesse: Interesting! I think I'm beginning to understand the crux of the argument a little bit more.
Dr. Vincent: So could I return to the question of garden-variety examples? I brought up the example of coffee and medical professionals. There are a couple of points here. First, when a public authority comes out and makes statements like this - well, people turn to public authorities in negligence cases, or at least alleged negligence cases. They will ask: what do the public authorities say is the responsible thing to do? The responsible thing for the surgeon who knows they feel fatigued is either to a) take naps, or b) drink up to six cups of coffee. That sets a precedent which says that if you haven't drunk your six cups of coffee, you're being negligent. So the first point is that here's this garden-variety example of caffeine being peddled by health authorities as a reasonable solution to a problem. That's really asinine. The reasonable solution to the problem is to put on more medical staff, or at least to try to put on more medical staff in the first place, rather than jumping to the chemical solution. The other point is just a short vignette. In the philosophy department where I work at Georgia State University, which is an awesome place to work, by the way, I open the cupboard in the coffee room and there's coffee, and next to the coffee there's this huge jar of painkillers. And I thought, wow, this is kind of interesting: what are the painkillers there for? One way to look at it is: how nice, the employer is providing me with something to take away the pain. But another way to look at it is that they're saying, "Nicole, don't be pressured; if you've got a headache, just pop one of these and you can keep working."
Dr. Vincent: So is that the right solution? Or would the right solution be to say: no, actually, a much more pleasant, much more humane society to live in, one that I would actually like to live in, would be one where there's a rest room so that I can lie down when I have a headache; one which didn't treat the headache as just an annoyance that the body generates, but maybe as a symptom that I need to slow down and not inflict as much punishment on my body.
Jesse: You're making some assumptions about what those painkillers are there for, though. Is this an Orwellian employer trying to make sure that you can work through anything, or does someone legitimately think, "Oh, my employees are in pain; maybe I can do something to help them"? It might be a Pollyannaish, nice gesture, and it's hard to jump to conclusions one way or another.
Dr. Vincent: So, Emma's just written a book about conspiracy theories, and the last thing I wish to do is peddle a conspiracy theory about an employer, right? But the effect that all of these medications, or this particular thing, have is to say, "Here's this medication, here's this solution." By presenting people with this particular solution rather than another, we make it harder for people to say no. We make it harder for people to say, "Actually, boss, is it okay if I just close my office door, lay my mattress down underneath my desk where it's nice and dark, and take a bit of a rest, a bit of a nap?" It's the way that our choice-making gets weighted from different directions.
Jesse: Let me take a different example that doesn't necessarily involve employment. Consider people with bad vision who need to wear eyeglasses to get a driving license in order to drive around. In the potentially more ethical future that you're talking about - you know, there are always plenty of people around who can drive, who have perfectly good vision and don't require contact lenses or glasses or anything like that. In theory you could probably find one of those people to drive you where you need to go. But does that make it unethical that you're required by law, if you have a certain level of bad vision, to wear glasses or contact lenses before you jump in your car and drive?
Dr. Jane: Well, I hope not, because I need to put on my glasses before I'm allowed to get into my car and drive, and I don't feel like I'm driving unethically. But I know what you mean. One of my fixations with this particular issue is that I think we need to settle on a kind of baseline that we call normal. If we're using interventions like my very attractive pair of red glasses that I use for driving, then for me it's not giving me an unfair advantage; it's bringing me up to a baseline normal level. And that's how I feel about a lot of cognitive enhancement medications: they exist in other contexts for the treatment of other conditions, and they are being repurposed in these new ways. I feel much more comfortable using these medications to assist people who may need some help to get up to whatever we say is the baseline normal than helping people who are already doing well get further and further ahead, potentially thereby leaving behind those people who are not able to function at the "normal level". My concern is that it would actually increase those levels of inequality that already exist.
Jesse: But there is a real danger in establishing what normal is. Establishing what normal is has some bad historical associations with the people who liked that idea. To use the horrible example that everyone always uses in arguments - and yes, "Aah, that's cheating, you can't use the Nazis" - but the Nazis did try to establish what normal should be, and just saying that this is a baseline to which things should stick has bad precedents.
Dr. Jane: It does have bad precedents, but I think it is more dangerous if we don't draw a line at all - so that it becomes completely normal to expect people to stay up indefinitely to do their job, because that's just what is normal these days. I think there are indeed risks in drawing a line in the sand, and that needs to be done very carefully, but I think it's pretty obvious with driving. I'm comfortable, for instance, when I go to get my license and they pull up that chart with the letters on it which get smaller and smaller. That's what we decided: that being able to read down to those bottom lines is the baseline for being able to drive safely. If I can't do that, then I need some extra help in the form of my cute red glasses. I'm comfortable with that. These are difficult conversations, particularly when it comes to intellectual capacities, but I think they are the ones we need to have.
Jesse: 150 years ago, though, wasn't it perfectly normal to be illiterate? I mean, if we were having this conversation back then and you guys had been in charge, it would have been reasonable to establish one out of three people having what we now consider normal levels of literacy as the "normal". And as we tended towards a more literate society, we would've been hewing away from what you guys were proposing.
Dr. Vincent: So I really want to get back to this question about how we settle on the new normal. Take the Hitler example, right? The problem isn't that we settled on a normal, or that somebody settled on a normal. It's that somebody settled on a normal using a pretty bad set of political and moral values. That particular normal embodied so many horrible, obnoxious norms - that was the problem. What we're trying to do is pull focus onto the various ethical and political considerations that should inform us in setting a new normal. So we are trying to address this problem that a lot of your listeners are concerned about, because after all, who is to say how we should settle on a new normal? We're suggesting we do it by paying attention to these political, ethical, social, and interpersonal considerations rather than being blinkered to them.
Dr. Jane: Let's also not forget that once we've settled on what we consider a reasonable normal for 2014, that isn't fixed forever; it's something we'll need to revisit further down the track. It's not that we should decide now and never think of it or speak of it again as times change. Car driving is an excellent example: the norms around what we think is acceptable and safe for driving vehicles have only existed as long as there have been vehicles.
Jesse: That's a really interesting point, because before cars were a thing there was this false belief that if something traveled at over 45 miles per hour, people's blood vessels would rupture. It was a completely false belief; we all drive over 45 miles an hour all the time. It was kind of like the idea of the sound barrier - that there was a physics-based physical barrier beyond which we would not be able to travel safely. Which is of course totally false.
Dr. Jane: There is a long history of hysterical reactions to new technologies, and that's a great example. Even things like novels: when they became widely available, there was an outcry about how they would corrupt young women and cause prostitution and all sorts of dreadful things.
Dr. Jane: That is a true story. The trouble is that there is then a temptation to dismiss any kind of urging that we slow down and reflect on new technologies as yet more of that techno-phobic hysteria. The fact that new technologies come along, and that there have been what in hindsight seem like over-the-top hysterical reactions, doesn't mean we should then assume that everything from a new technology will be just wonderful.
Dr. Vincent: What's important to understand, as Emma just said, is that we don't want to suggest that people should just say no, and we don't want to slow things down just for the sake of slowing things down. We both think that there are some amazing possibilities that can emerge from technologies. But what we're urging is that it's important to slow down enough to consider the kinds of futures that we'll end up creating, and then to plan ahead: to think about what strategies we can use to avoid the bad consequences, and even to push things in a positive direction. Does that make sense?
Jesse: Yeah. I'm just trying to think of examples of when that has been effective in the past; I want to keep pulling out historical examples. So I was thinking, what about telephones? Right now there is technically no law requiring anybody to have a telephone, but there's great social pressure to do so. As an employer - you know, I employ people - I pretty much on general principle wouldn't hire somebody who didn't own a phone, because how would I reach them? If something goes wrong during off hours and I need to get in touch with them, I need to be able to call them. If somebody said, "Jesse, I don't have a phone," that would be the end of the interview. I think that seems like a pretty reasonable stance to most people in this day and age, but in 1920 that wouldn't have been the case at all.
Dr. Vincent: Okay. So what you're pointing to there is that we all have to do certain things, for instance, for our jobs. So is it reasonable to expect people to own telephones, or is it reasonable for people to say, "Actually, I would like this job, but I choose not to have a mobile phone"? A lot of such questions hinge on stances that people disagree about, and the disagreements change all the time. Back in the 80's, when phones were huge one-kilogram devices that you could bludgeon somebody to death with, and they were very expensive - well, at that point in time it wasn't reasonable to expect people to have phones. Is it reasonable to expect people to have them now? Well, possibly so, for all the reasons that you cite. But note the way in which our norms have changed over time: partly in response to the evolution of the technology itself, but also partly in response to the evolution of the sorts of practices that we now have - practices that were only created as a consequence of us starting to use these technologies.
Dr. Vincent: When we start using certain technologies, that itself bootstraps new practices into existence. Those particular practices may come to be very beneficial, and further promoting the use of the technology may then become a norm. So I fully agree with you on this.
Jesse: I think you said something really interesting there, or at least we sort of touched on it, which is that a lot of things within society are serendipitous offshoots. Think about things like cellphone holders and text messaging, and the fact that Twitter, which is this hugely popular platform, is based on 140 characters - which is this weird thing, because it comes from the character limit of old text messages. It was this seemingly arbitrary number that now has this value in a completely different platform. Anyway, things like that are very common: things get used for purposes other than their originally intended ones, and a lot of good results from that. And I feel that if we start establishing normals, then we'll be throwing out these serendipitous babies with the bathwater.
Dr. Jane: This is a particular area of interest for us. Unintended or unexpected consequences can in some cases be just fantastic; in other cases they can be relatively neutral; and in still other cases they can be really problematic. And you're right, it is very difficult to anticipate these kinds of ramifications early on. But I think it's important for us to look at some of the patterns that have emerged with these technologies - the good, the bad and the neutral; the odd and beautiful consequences and offshoots that have flowed out of them - and think about what we can learn from those as we're thinking about new technologies that are just emerging.
Dr. Vincent: Right. Because we're not suggesting that we should put in place a whole bunch of norms which restrict people in how they use these technologies - which would restrict the possibilities of some of these amazing uses that people have invented. Does that make sense?
Jesse: Say it one more time, because it does sound like that's what you're proposing: to restrict these things.
Dr. Vincent: Oh, okay. It's good that you hung onto this. It's not that we're suggesting that we should restrict people's use or experimentation, but rather that we should think about the ways in which people might experiment with various technologies, to try and predict some of the more obvious ways in which some experimentation may lead to bad consequences. The suggestion is that we look at the interaction of other technologies; that we look at history, which is something that you've been doing throughout this interview, at how other technologies have been introduced in the past and how they've had, on some occasions, bad outcomes. If we look at that, use it as inspiration, and try to think through how emerging technologies might impact the evolution of society, we can try to forecast some of the problems. And only on those occasions where we see that a particular problem may arise, only then engage in a little bit of normativity, or maybe engage in developing artifacts and technologies that will be immune to generating those particular bad outcomes.
Jesse: Let's highlight a technology that has had an overtly bad outcome. Obviously there are examples, though I think most of us would agree that there are good and bad offshoots from any particular technological development. But maybe you could give me a favorite bad-technology horror story, so that we have a specific example to tie some of these ideas to.
Dr. Jane: Well, Jesse, I just tend to pull away from the all-or-nothing position that a technology is all good or all bad, or that regulating technology and its use is all good or all bad. We've been talking about mobile phones. Text messaging, for me, is one of those astoundingly unexpected and surprising consequences: that we would use these devices, which were primarily designed for us to speak to each other with our voices, and that people would become obsessed with returning to this old-school text approach. It's kind of delightful. But at the same time, we have some recent research in the States showing that teenagers who use mobile phones in vehicles for calling or texting are having accidents, including fatal accidents, on par with the number of accidents caused when people have been drinking. Surely that is an example where regulation is necessary.
Dr. Vincent: And furthermore, that's what we are talking about, Jesse. You asked for a concrete example, so let me use cognitive enhancement as one. There's a really nice article written by researchers from the University of Queensland where they drew comparisons between old-school enhancers and what we've currently got. The examples they used were cocaine and amphetamines. The argument they ran was: if you look at the history of what happened with cocaine and amphetamines, people said, "Wow! These things can help us do things," and society responded by making them freely available. What happened then were large public health disasters, and then we started regulating things more strongly. The conclusion they draw is: let's take heed of this and say, "No, let's not make cognitive enhancers available to everyone." I think they jumped to that conclusion much too quickly, in part because it was this all-or-nothing approach which Emma and I wish to step away from, because we don't think that's the right approach. But just as important, the way the problems arose in those particular cases was that we had two particular drugs and we didn't monitor how people were going to administer them - whether they were going to inject them, snort them or drop them. We didn't monitor the quantities in which they were self-administering. We didn't monitor their health; we didn't ask people to come back so we could check that the medications they'd been prescribed or purchased were actually working for them. Now, what's the moral of this story? The moral is that I think we ought to design these medications in such a way that we can monitor people, simply to make sure that they don't harm themselves. Then we can also say, "Maybe Modafinil doesn't work for you. Maybe you're one of those people who get that all-over body rash when you use it." Let's make sure that people can go to a doctor and find the best thing for them. Now, in this particular case I'm talking about medical side effects and how we might be able to avoid them: namely, by creating drugs, and an environment, in which there is monitoring both of how and how much people administer, and of the health effects. All we're suggesting is that if we can foresee social problems, let's not just say no to these medications; let's think about how we can redesign the regulations we currently have, or something else, in such a manner that we don't get the social problems occurring.
Jesse: I think you have a tremendous amount of faith in smart people's ability to look at present circumstances and make very accurate analyses of the future, based on second- or third-order consequences that we're never going to notice. Emma made the point a moment ago about the use of text messaging and how that's become the new norm globally. But who would've predicted that based on the early cellphones of the late 80's and early 90's? You could have been as smart as you wanted and never necessarily have seen what an enormous trend that was going to be. I just don't think our ability, even as smart people, to accurately predict our multi-variable society and the directions it can skew off in is nearly what I think you're implicitly saying it is.
Dr. Vincent: The last thing that I would want to do is make "the perfect" into the enemy of "the good". What I mean by that is: yes, our ability to predict is going to be limited. I mean, we're pretty limited creatures - unless we take Modafinil or other drugs.
Jesse: But even then because I do take Modafinil and other drugs.
Dr. Vincent: I couldn't resist the urge to say that. So, we are limited creatures, but the fact that we're limited doesn't give us an excuse not to try a little bit harder, to at least try to forecast some of these things. One of the reasons it is so important to look at history is to see the way in which the interactions of new technologies have had these unexpected consequences - some of which have been wonderful, some of which have been horrible, and some of which have been, "Wow! What? I don't even know what sense to make of this." Just strange. Looking at history is precisely how we get some inspiration about what sorts of things might go wrong with a technology. So you're right, our ability to forecast this stuff is limited, but I don't think that excuses us from investigating, from engaging our imagination.
This Week in Neuroscience: The Secret to Staying Young Forever?
iTunes Listener Review thank-yous
Intros to Nicole and Emma and an invitation to join the public forum at the Cognitive Enhancement Conference in Delft, The Netherlands
Decisions of the individual and effects on the broader environment
Keeping up with “the new normal”
Coffee drinking as a comparison to normalizing a larger range of nootropics
Pain-killers and cold medicine: “Soldier on with Codral”
Pressure to perform in the workplace trumping individual well-being
Military use of Modafinil and the existence of a better social arrangement?
Thoughts on the definition of workplace negligence and perceived expectations placed on employees
Bad vision, driving, and bringing everyone up to a baseline of full functionality
The dangers of establishing what normal should be
Deciding on a “cognitive baseline” and how guidelines change with the times
Hysterical reactions to new technology, and planning ahead for a positive future
Placing reasonable expectations on individuals
Allowing for serendipitous offshoots of new inventions
Considering how different types of experimentation might lead to negative results
Mobile phones: the positives and the negatives
Comparison between “old-school” and modern enhancers and the benefits of regulating use
Our ability to make realistic decisions based on our predictions
An argument for further investigation: Don’t make “the perfect” the enemy of “the good”
Ruthless Listener-Retention Gimmick: Blueberries and neurogenesis