Inflammatory Behavior: Stanford Researchers Turn to Brain Inflammation in Fight against Alzheimer’s Disease

The protein amyloid-beta (Aβ) was long considered the primary “bad guy” of Alzheimer’s disease, accumulating into plaques and causing plenty of trouble for nearby cells. These days, though, we know that Aβ doesn’t act alone. One of its possible co-conspirators? A malfunctioning brain immune system.

To safeguard the brain from invading microbes, the body tightly restricts passage from the bloodstream into the brain; even cells of the immune system don’t generally go there. Instead, the brain employs a local system that serves as both police force and sanitation department: the microglial cells. These cells rove throughout the brain maintaining order. They respond to threats, manage inflammation, and, when necessary, clear out toxic materials. Recent work suggests that microglia also play an important role in the brain’s response to, and destruction of, Aβ. Long-term exposure to Aβ, however, makes microglia less effective. But what mediates this microglial response, and is there any way to harness it for treatment?

Enter Dr. Jenny Johansson and colleagues of Stanford University, authors of a study published this January in the Journal of Clinical Investigation. The researchers were inspired by population studies suggesting that NSAIDs – a class of anti-inflammatory drugs that includes ibuprofen and aspirin – protect against the development of Alzheimer’s disease. Other studies indicate that NSAIDs block production of prostaglandin, a lipid signaling molecule that can interact with microglia through a receptor protein called EP2. EP2 sits on the outside surface of microglial cells, waiting for prostaglandin to bind to it and thereby initiate a signaling cascade – imagine prostaglandin as a snowball that starts an avalanche. The authors hypothesized that EP2 signaling suppresses normal microglial responses to threats like Aβ, and that if one could turn EP2 off, the microglia would become protective again.
As it turns out, when we’re older, our microglia don’t work as well. When the researchers introduced Aβ into cells from both old and young mice, the EP2 response was more robust in cells from the older mice, and was associated with an inflammatory response. Cells from the older mice also produced fewer of the proteins that break down Aβ. In other words, the older a mouse was, the less capable its microglia were of managing threats and the more violently they responded – not so good for the brain.

And what if one gets rid of EP2? The researchers looked next at mice genetically engineered to develop symptoms of Alzheimer’s disease; when these mice lacked EP2 from birth, the plaques in their brains were smaller, and around each plaque were more microglia than in mice with EP2. Johansson et al. also examined the acute response to Aβ by injecting the protein directly into the brain of a non-Alzheimer’s mouse. In normal mice, this procedure increased inflammatory signaling. When the researchers repeated the procedure in mice lacking EP2, they found that not only were many of those inflammatory signals reduced, but that several factors that promote cell survival after brain injury were increased. EP2 deletion didn’t just block inflammation: it supported a more constructive cellular response to Aβ.

The kicker, of course, is that it didn’t matter when the mice lost EP2, nor how they were exposed to Aβ: EP2 deletion improved the mice’s performance in memory tests on which EP2-positive mice exposed to Aβ normally do poorly. Now that’s a powerful little receptor protein! Could modifying EP2 be a quick fix for Alzheimer’s disease? It seems reasonable to suppose, doesn’t it?

If you’re wondering why this magic anti-inflammatory bullet hasn’t cured Alzheimer’s yet, well, you’re right to be skeptical. Despite the abundance of evidence from epidemiological studies suggesting that NSAIDs prevent Alzheimer’s disease, not one clinical trial has borne out the promise of such treatments. And there have been multiple trials.

Though the Stanford findings are novel and exciting, a notorious challenge of Alzheimer’s research is that drugs that succeed in mice often fail in human trials, in part because we can’t treat humans in time. Current studies suggest that, by the time someone’s decline merits a trip to the neurologist, disease progression has been underway for decades. While many scientists are working to develop reliable pre-clinical markers, this skewed treatment window may help explain why epidemiological studies suggest benefits where clinical trials find none. Perhaps at-risk subjects in these studies are, quite by accident, pre-treating themselves well before disease onset. And if that’s true, then once we optimize an early-detection system, perhaps anti-inflammatory drugs deserve another chance in the clinic.

Until then, an aspirin a day probably won’t keep the doctor away – at least not in Alzheimer’s disease.



A Very Merry Erin Go Brain Christmas

Ah, the winter holiday season: a time of carols and cheer, of love and friendship, a time characterized by multicultural potlucks, one too many cookies (if that’s possible) and, for graduate students, hammering out who’s going to come in to feed the cells over the break (see you New Year’s Eve, little buddies!). Now, if you’re a last-minute gift-giver or struggling with what to get your quirky-but-lovable nerdy friends, you’re probably delighted that Buzzfeed published its annual list of best gifts for science geeks. Gizmodo has another one, and if you’re really having trouble, you can’t go wrong with pretty much anything on ThinkGeek. The Internet’s got your back, friend of nerds! Never fear! And so you breathe a sigh of relief, and trust that your gift will arrive by your holiday of choice.

But what if you’re looking for a more focused gift? What if you want to find a special something for your favorite neuroscientist, clinical neurologist, or enthusiastic neuro-nerd? Despite the apparent absence of a list detailing such gifts – I even Googled to check – there exists a multitude of smaller artists and designers making beautiful, fun, neuroscience-themed work, and these folks are worth supporting. Some of them presented at the Society for Neuroscience Art Show this year, while others I’ve simply stumbled across while browsing.

Promote oxytocin. Give neuroscience.

  1. Neurocomic. This lovely little graphic novel is the product of two neuroscientists: Dr. Hana Ros, a neuroscience research fellow at University College London, and Dr. Matteo Farinella, a neuroscientist-turned-scientific illustrator also based in London. Together, they narrate (and illustrate) a journey through the brain in which the reader is guided by Nobel laureates, Pavlov and his dog, and a host of other neuroscientists, psychologists, and even model organisms. This delightful little tome, while targeted at a lay audience, would make a lovely addition to a neuroscientist’s coffee table. $18.12 on Amazon.com.

  2. Neuron bottle openers. A key accessory for any graduate student or aspiring host(ess), these magnetic bottle openers from Etsy retailer MyWifeYourWife feature a print of a pyramidal neuron, complete with axon(s) and dendrites. Attach it to your fridge and never forget where your bottle opener has gone! $5.62 on Etsy.

  3. The neuroscience tie. As long as we’re talking neuroscience accessories, let’s get swanky and sartorial! And if we’re getting sartorial, don’t miss Etsy retailer Cyberoptix. Feel free to drool a little over the wonderfully geek chic scarves and ties in their online storefront, and then check out the Nervous Energy and Insomnia ties; respectively, the ties feature silkscreen prints of neurons (complete with nodes of Ranvier) and of the brain rhythms recorded in different sleep stages by electroencephalography (EEG). Perfect for a night out on the town… or a thesis defense. And if you’re more of a bow tie guy (or gal!), then don’t worry: you can get a sulcus ’round your neck! $30 for Nervous Energy/$40 for Brainstorm bow tie/$45 for silk Insomnia tie on Etsy.

  4. Anything at all from Backyard Brains. ANYTHING.


    If you don’t know Backyard Brains yet, then my goodness, you should. Started by graduate students at the University of Michigan in 2009 with the goal of making neuroscience learning more accessible, the company has since expanded internationally, run a successful Kickstarter campaign to build the RoboRoach, and exhibited at the annual Society for Neuroscience conference to increasing acclaim. With their various kits, you can control the movements of a cockroach, measure electrical responses in your muscles, observe action potentials in insects and other invertebrates, and more. Their tagline is “Neuroscience for Everyone,” and they certainly mean it; their products (as well as suggested experiments) are now available for purchase by classrooms, museums, and individual learners. Prices vary (but start at $99.99).

  5. Caffeine necklace. Caffeine is the world’s most widely used psychoactive drug, and here it’s been beautifully worked into necklace form by Etsy artist Delftia. Witness America’s favorite alkaloid in sterling silver, sitting slightly off-center in a little coffee cup. I’m ready for my next cup already! Check out the rest of her store for other neat scientific accessories, including a gold-plated Golden Ratio necklace, a horizontal section of the human brain, the tree of life, and more. $48 on Etsy.

  6. 3-D printed neurons. Got a 3-D printer, or a friend you could bribe with beer or baked goods and who might let you use her printer? Then head on over to Yale SenseLab’s 3DModelDB and download a plethora of printable neuronal models. The models are available as an extension of the ModelDB project, and feature the proper code to print a variety of neuronal types and morphologies. Maybe spray-paint them gold, if you’re into that. Or electric blue! Or use them as ornaments on your favorite multicultural holiday shrubbery. FREE(!) from Yale’s SenseLab project, unless you feel the need to purchase a 3-D printer to be able to print these – in which case, VERY EXPENSIVE.

  7. Advice for a Young Investigator, by Santiago Ramon y Cajal. Santiago Ramon y Cajal was a Spanish neuroscientist and Nobel laureate, and was (and is) considered the father of modern neuroanatomy. Advice for a Young Investigator, first published in 1897, endeavors to educate the new scientist in how he should practice his art, and is full of practical advice and no small amount of humor. While some of the material is a little dated (for example, the description of the ideal wife), the book itself is a quick and enjoyable read, and a perfect pick-me-up from one of the original greats of neuroscience. Trust yourself and your own abilities, and question everything, RyC advises; we promise that we shall do so. $14.39 on Amazon.com.

  8. Phosphorescent knit neuron. If you’re the artsy-craftsy type, then you might want to check out this quick little knitting pattern on Ravelry, the knitting and crochet database, from Gabrielle Thériault. Knit your very own neuron for friends, for yourself, or perhaps just to hang in your window to keep out pseudoscience. That’s how that works, right? The pattern is free on Ravelry; the only cost is in knitting materials. As the neuron can be made from yarn of any weight desired, this project would be a great use for leftover yarn that you might have lying around from other projects.

  9. Rice krispie brains. Because no gift-giving occasion is complete without a few delicious snacks here and there, your next sweet treat is a no-brainer: rice krispie brains from Etsy retailer CrazyBrainChocolate. These cute little guys may not be totally anatomically accurate – krispie matter versus pink matter isn’t quite how it works – but don’t let the neuroanatomy get in the way of your enjoyment. Stuff them in a stocking, extract them from their box as though you’re a ravenous zombie… whatever makes you happy! Also available as Nutella-walnut brains, cake brains, and more! Prices vary for the various brain sweets sold by the shop, but start at $12 on Etsy.

  10. A piece of Greg Dunn artwork. As far as I’m concerned, Dr. Greg Dunn’s phenomenal work is the apex of neuroscience-inspired science/art.

    He’s a neuroscientist-turned-artist and has been commissioned to create pieces for research institutes, universities, non-profit organizations, and more. His latest forays into the beautifully complicated technique of microetching need to be seen in person to be believed (trust me – it’s a totally different experience than seeing it online). If you can’t afford to shell out the $4000 for a gold leaf painting, or the approximately $35,000 for a microetching, then head over to his prints and, if you’re a graduate student, ask for the graduate student discount. You won’t regret it.

Happy holidays, all! -EGB

Wednesday Open Reading Frame: “Category Fail”


Today’s Wednesday ORF comes to us from science journalist Virginia Hughes. She’s a wonderful writer with a particular focus on (among other things) neuroscience, and when the news broke recently that she’d been tapped to head the new science and health desk at BuzzFeed News, the science writing community was justifiably excited. Her blog, Only Human, is published weekly at National Geographic‘s “science salon” Phenomena.

The context.
Many modern neuroscience researchers devote themselves to unearthing the mechanisms that underlie a whole host of cognitive disorders. At the recent annual Society for Neuroscience conference in Washington, D.C., quite literally hundreds, if not thousands, of poster presentations were submitted under the general heading “Disorders of the Nervous System” – which is, itself, broken up into 162 subtopics. Mood disorders? Sure, did you want to learn about antidepressants, or are biomarkers more your style? Schizophrenia? If you like, you can learn how it presents in a specific subtype of cases. Alzheimer’s disease? Stroke? Addiction? If you can name it, there’s someone studying it. And if you look closely, you’ll see that, for almost every disorder studied, there’s at least one poster session about animal models.

But why should that be? Why should researchers, with so much technology and knowledge available to them, still need to optimize the models they use in their studies? The answer is simple:

Because studying the brain is hard.

The article.
In a recent piece titled “Category Fail,” Ms. Hughes succinctly addresses some of the challenges inherent in developing an animal model of psychiatric disease – in this case, autism (though the lessons are more widely applicable). The problem comes down, in part, to the more basic issue of diagnosis. According to the 5th Edition of the Diagnostic and Statistical Manual of Mental Disorders, or DSM – essentially the handbook of psychiatric diagnoses – autism spectrum disorders are diagnosed according to two major criteria: whether or not a person exhibits deficits in social behavior and communication, and whether or not that person also exhibits repetitive behaviors. Hughes observes that, while a variety of animal models have been developed that recapitulate these broad criteria, the human cases are typically much more varied, both in their symptoms and in the degree of those symptoms, and often present with other, non-cognitive conditions (for example, gastrointestinal distress in some individuals). In other words, there’s a two-fold challenge here: the challenge of making an accurate diagnosis, given a wide range of symptoms (and of deciding whether or not it’s socially useful to do so), and the challenge of developing an informative, accurate animal model by which to study, and perhaps address in some tangible way, the diagnostic criteria. Both of these are phenomenally difficult tasks, and unfortunately, they’re often in opposition to each other – for perfectly legitimate scientific reasons. Read on for more detail.

Consider Rett syndrome – a rare neurodevelopmental disorder that almost exclusively affects girls and often dovetails with certain autism-like behaviors. Its cause has been linked primarily to a spontaneous mutation in one gene: MECP2, which is believed to modify the way genetic material organizes into chromosomes. When the cause of a disorder can be linked to a single gene, the job is (relatively) straightforward: introduce the mutation into a mouse model and study its effects. The mice used in such experiments typically come from very inbred lineages, which reduces the chance that intrinsic genetic variability between individuals will muddy the waters and contribute to an uncertain result. While this uniformity makes it easier to determine what specific aspects of a disorder arise as a result of any one mutation, and to narrow down possible methods by which to address these pathologies, the limited genetic background necessarily can’t reflect the same degree of variability that one sees in humans.

With autism spectrum disorders, which have genetic risk factors galore but no single gene responsible, the problem of clarity versus authenticity in animal models is compounded. Researchers have bred mice that mimic the diagnostic criteria for autism – as closely as mice can, anyway – but it’s far more challenging to recapitulate the incredible variety in human autism spectrum disorders that make them so difficult to diagnose, study, and treat. Why would one person have a risk gene variant and be diagnosed, and another not? It’s most likely due to a combination of genetic complement and environmental factors (or, oversimplified, a combination of nature and nurture), but researchers can’t reliably perform basic science while also introducing such phenomenal variability. In a good scientific experiment, as many variables as possible must be controlled in order that the truth might reveal itself – and even then, only through careful experimental design.

This is one of the many reasons why, from an outside perspective, science can appear to move slowly – because it is slow, and with some exceptions, it is careful. We study a variable at a time whenever we can, and once we’ve answered the questions surrounding it in a satisfactory way, we move on to the next variable. Slowly but surely, we come toward answers – but in the meantime, we do the best we can.

Wednesday Open Reading Frame: “The never-ending Ph.D.”


Today’s Wednesday ORF comes to us from prolific life-in-science chronicler Adam Ruben, writing this week at Science Careers. Ruben authors the column “Experimental Error” with some regularity and has also written the (humor) book Surviving Your Stupid, Stupid Decision to Go to Grad School. (I can’t tell you whether or not it’s a good book; I’m waiting to read it until I have my degree in hand and see how true it rings to my overall experience.)

The context.
If you’ve spent even a little time with me or with any other end-stage Ph.D. student, you’ve probably discovered one particularly radioactive line of questioning – the one you now know to avoid at any cost. You’ve learned, perhaps by hard experience, that any inquiry on this topic is strictly verboten. If you do dare to broach the question, your subject may simply be stunned that you had the chutzpah to ask at all. I’m speaking, of course, of the doctoral equivalent of Harry Potter’s Voldemort, the Question-That-Shall-Not-Be-Named: “So when are you done?”

This question really riles a lot of graduate students. But why not? one might ask. Why can’t you know? Didn’t so-and-so’s apocryphal best friend from high school finish a Ph.D. in four years? Or was that med school? Oh, wait… you’re not that kind of doctor, right? Six years is a long time, after all. You must have at least some sense of how it’ll shake out. And while yes, we do have at least that, there are a variety of reasons why graduate students a) don’t like this question and b) don’t often give a straight answer to it.

The article.
I’ve seen a number of friends post this article recently, generally paired with sentiment to the effect of “It hurts ’cause it’s true.” And while there are aspects of Adam Ruben’s “The never-ending Ph.D.” that are a little over-the-top, a lot of it rings pretty true to the experience that I and others have had as graduate students. Namely this: there’s a surprising degree of luck involved. We all work hard. But maybe she finds the right lab, at the right time, just as they’re well-funded and about to start a line of inquiry that yields Big Results. Maybe everything just goes right for her, and she can say with confidence that she’ll be out in five years, and can start to plan her career. Maybe he’s close to the end and suddenly lands a job, forcing an earlier defense date than he’d otherwise schedule. On the other hand, maybe things don’t go well. Maybe he has an assay (experimental procedure) that he spends six months trying to optimize, and it still doesn’t quite work, and he has to scrap the whole thing. Maybe her thesis committee wants more papers from her than she thinks her data can reasonably support. Or, worst of all, maybe her boss doesn’t get tenure, or leaves the university to join an institute across the country, and she has to re-invent her project from afar. These varied outcomes are difficult to predict, and not always contingent upon how hard a graduate student works.

The principal challenge here, I think (and one about which I’ve written before), is that finishing a Ph.D. involves a lot of subjective benchmarks. Few graduate students will ever totally finish a project and be able to tie it up with a neat little bow; that’s why principal investigators have active labs for decades. The questions are never fully answered! That uncertainty, that long future, can be incredibly exciting – how much is left to discover! – while also being somewhat frustrating. At some point, all graduate students must simply agree to be done, whether or not the data are clean and complete. We don’t know when that experiment is going to work, and yet, we still strive for the best possible story, or we finally get amazing data after five or six or seven years of struggling to produce anything, and we graduate in a flurry of activity and fireworks.

The take home point is this: until you see your friendly neighborhood graduate student standing proudly in a tam and wizard’s robes, with a “Dr.” on a business card and a thesis proudly shelved in the department library, don’t make assumptions about when she should be done. She’ll tell you. Promise.

Wednesday Open Reading Frame: “Five Ways to Lie with Charts”


*EDIT- an earlier version of this article had the wrong link to get to the Nautilus article. It’s fixed now. Thanks to my friends at CauseScience for catching that! Go check them out. They post awesome science stuff AND fix broken links.

Today’s Wednesday ORF comes to us, once again, from Nautilus magazine, and homes in on the topic of data presentation. (I swear, I swear, I’m not a shill for Nautilus; it’s just actually that good of a magazine. Plus, it’s a dependable source of ORF material when I’m in post-conference recovery mode. I wasn’t presenting my research this year – I was mostly at the Society for Neuroscience conference for networking and science tourism – but it’s still pretty exhausting. Stay tuned for a couple of Neuroscience run-down posts later this week.)

The context.
Over the course of her training, one of the single most important skills that a scientist learns is how to interpret data. Now, half the time, we’re interpreting our own data. I might ask: What’s the signal-to-noise ratio of a particular reading? Are the data too variable to glean any useful information from them? Are the data significant, in the scientific, statistical sense (roughly, that there’s less than a 5% probability of seeing a result at least this extreme if chance alone were at work)? And more importantly – even if the data are statistically significant, are they real? Ideally, with each new experiment we perform, we subject our own work to rigorous skepticism. Ideally. It doesn’t always happen, because we’re human and imperfect and sometimes we’d really like to believe that our pet theories are borne out by data that just… aren’t quite there yet.
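If you want to see where that 5% threshold comes from, here’s a quick back-of-the-envelope sketch in Python. The coin-flip numbers are invented for illustration (they’re not from any article discussed here): if a coin comes up heads 15 times in 20 flips, how often would a fair coin do at least that well purely by chance?

```python
from math import comb

# Toy significance test: 15 heads in 20 flips of a possibly biased coin.
# One-sided p-value: the probability that a *fair* coin would produce
# 15 or more heads purely by chance.
flips, heads = 20, 15
p_value = sum(comb(flips, k) for k in range(heads, flips + 1)) / 2**flips

print(round(p_value, 4))  # 0.0207 – below the conventional 0.05 cutoff
```

Since roughly 2% falls under the usual 5% threshold, we’d call the result statistically significant – though, as above, “significant” and “real” are not the same thing.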

A happy side effect of learning to interpret one’s own data is learning how to read the data produced by other people, and to approach those data with similar skepticism. If a journal article claims a dramatic increase in Factor X, do the data support that claim? If a presenter claims that a particular procedure increases fluorescence in a cell, does the photographic evidence agree with the interpretation? And importantly – does the presentation of the data differ from the actual numbers? Once one learns to ask these questions, one starts to observe that, worryingly, data outside of scientific journals – and sometimes even within them – can be presented in confusing, even misleading, ways. There are spurious correlations (e.g. if X goes up and Y also goes up, then they must be related – see Tyler Vigen’s site Spurious Correlations for great examples). There are graphs with misleading, mislabeled, and even absent axes. There are alarmist reports of increased percentage risk (e.g., for a particular type of cancer, and so on) that never refer back to the original, baseline risk… I could go on. And at another time, I will. At length. The point is, data in the news, in science, and in popular culture can be confusing, and can lead us to make assumptions that may not be wholly accurate.

The article.
So how can we demystify data? Becca Cudmore, writing at Nautilus on November 6, has some ideas. Her article – “Five Ways to Lie with Charts” – gives a great overview of the various ways that graphical data can be spun to promote an interpretation favorable to the presenter. Now, I think the title itself is a little overly dramatic; technically speaking, none of these charts (presumably) is actually presenting false data – i.e., lying – for the purpose of the example. And in fact, even scientists will try to present their data – which are often messy! – in the best light, and there’s nothing wrong with that. What these graphs do, however, is present data in a very visually leading way, and if one is unused to looking at data in this format, one might easily be swayed.

As I said above – I plan to write much more about this topic in a future (non-ORF) post. But if you’re looking for a take-home, I’d recommend going into any discussion of data with two questions in mind:

  1. What do the numbers actually say? For example, a graph might show a 30% increase in the height of a bar… but if the axis doesn’t start at zero, that representation might be over-emphasizing a small change. Maybe that amount of change is relevant for this field. Maybe it’s not. Always check.
  2. How do I feel about this graph? What’s your first impression? What are you perceiving? Our brains are great at pattern recognition – that’s why we can “see” constellations! – but they are also easily fooled by visual illusions. Check your gut feeling, and see whether your initial perception of the data leads you to believe something that may not be right.
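To make the first question concrete, here’s a tiny arithmetic sketch in Python (the numbers are made up for illustration) of how a truncated axis can turn a modest 3% increase into what looks like a four-fold jump:

```python
# Hypothetical bar chart: a value grows from 100 to 103, a 3% increase.
old, new = 100.0, 103.0
actual_change = (new - old) / old  # 0.03, i.e. a 3% change

# If the y-axis starts at 99 instead of 0, each bar is drawn with height
# (value - 99), so the second bar appears four times taller than the first.
axis_start = 99.0
apparent_ratio = (new - axis_start) / (old - axis_start)

print(actual_change, apparent_ratio)  # 0.03 4.0
```

Same numbers, very different impression – which is exactly why checking where the axis starts is worth the two seconds it takes.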

More on this to come. Go be skeptical, dudes and dudettes.

Wednesday Open Reading Frame: “Scientists Outshine Arts Students with Experiments in Creative Writing”


Today’s Wednesday ORF comes to us from The Guardian, and consists of a) food for thought and b) a challenge! I’ll get to that in a minute, and I encourage my scientist friends, science blogger friends (ahem: CauseScience!) and lay reader friends alike to complete the challenge with me.

The context.
Whenever we, as a society, talk about scientists communicating, one particular misconception tends to surface time and again: that they can’t do it. That when a scientist speaks, her language is peppered with jargon and incomprehensible, and her manner is scatter-brained at best. I’ve written previously about scientific stereotypes, and I’m sure I will again. Tim Radford, the former science editor for The Guardian, agrees that this myth doesn’t really have legs; in order to get grants, write papers, and present their research, scientists actually have to be quite good at communicating their work in a focused manner – at least, to their peers. Contrary to popular belief, many practicing scientists are also quite skilled at public engagement, applying their scientific knowledge and economy of language to the popular sphere.

For those less skilled in this practice, one imagines the reason may be that a) various important people in their field discourage such engagement (even the much-beloved Carl Sagan was often criticized for his public work), or b) they’re simply unused to speaking or writing to that kind of audience. While many graduate programs offer extramural opportunities that help develop these skills, they’re rarely formalized within the curriculum. And, let’s be honest – the rhetoric, oratory, or public speaking course has fallen somewhat out of favor as a staple of the liberal arts education.

The article.
Last week, novelist and teacher Aifric Campbell wrote in The Guardian about one way in which Imperial College London has taken a multi-disciplinary approach to a scientific education. The piece, “Scientists Outshine Arts Students with Experiments in Creative Writing,” describes how ICL students – who, though still undergraduates, primarily take STEM/business courses – have the option of taking a creative writing curriculum to supplement their scientific training. Campbell observes that these students are often quite good writers, and that creative writing encourages them to think differently about scientific problems, to express themselves clearly, and to consider more deeply the societal impact that their science may have.

Now, I’ll freely admit to some personal bias when I read this article; as an aspiring writer and communicator of science, the skill set that creative writing training develops is near and dear to my heart, and generally pretty complementary to and supportive of the scientific skill set. To pursue either discipline successfully requires clarity of thought, creativity, and an excellent work ethic. As we continue to discuss ways in which to hone the communication skills of young investigators – whether through training in improvisational theater, or through writing workshops, or through public outreach, or perhaps through some combination of these – it’s worthwhile to consider creative outlets like writing as a useful tool for a scientist to have in her toolbox.

A challenge.
I have fond memories of one particular course from my undergraduate education, in which students were expected to write 300-word “themes” five out of seven days per week for the entire semester. That constant engagement made us feel more creative and better equipped as writers, and forced us to notice things in our language and in our environment that we would otherwise have processed quickly, with little awareness.

In the spirit of creativity, I challenge all readers – particularly those in analytical fields, but all are welcome to participate – to take the time this week to write a short, 300-word (or more, if you like) creative piece. It can be science-related, or entirely unrelated. I’ll post mine on the blog by next Friday and tag it with the hashtag #creativescientist. You should do the same! Feel free to leave a comment if you’d like me to give you a prompt; sometimes that structure is helpful.

Wednesday Open Reading Frame, part II: “If You Think You’re a Genius, You’re Crazy”

Part II of today’s Wednesday ORF comes to us (once again) from the excellent science magazine Nautilus. Seriously. Go subscribe. Go subscribe to the print edition. You will want to keep it on your coffee table so your guests have interesting, delightfully written science to read and gorgeous artwork to go along with it.

The context.
For some reason I can’t entirely fathom, articles on psychopathy (or on people with certain psychopathic traits) have made several hysterical rounds of the media over the past couple of years. First, there was this feature in The New York Times Magazine about a 9-year-old boy who may or may not be an actual psychopath (see The Last Psychiatrist’s excellent dissection of some of the article’s shakier claims here). It was distributed widely, and people reacted as one might expect – with fear and confusion (see: any and all comments on the original article, including shaming the parents for having the gall to procreate again). More recently, The Atlantic and other media outlets ran a feature on University of California, Irvine neuroscientist Dr. James Fallon, who now self-identifies as a “non-violent psychopath”. And yet, in shining the spotlight on Dr. Fallon, one indirectly illuminates one of his many research interests: the neural circuitry underlying creativity, a fascinating and contentious subject.

The article.
Psychologist Dr. Dean Keith Simonton of the University of California, Davis, encourages us to think a little more deeply about various types of psychopathology (for example, schizophrenia) as they relate to creativity. In an article published last week on the Nautilus website – “If You Think You’re a Genius, You’re Crazy” – Dr. Simonton explores the connection between creative genius and psychopathology, and explains that the primary link lies in the shared process of “cognitive disinhibition” – that is, in the tendency to notice things that most people might file away as unimportant, and to pay attention to the apparently irrelevant when the average brain’s attentional machinery would ignore it. Pretty interesting stuff.

I’d especially like to call attention to the Pandora’s box of neuroethics that this article opens. If we accept the premise that certain types of creativity correlate with mental illness, do we then choose to live with the illness in favor of encouraging creative production, and risk further cognitive decline? Or do we treat the illness, restore suffering individuals to health and happiness, and at the same time lose some creative potential? And more broadly, in a way that ties back to the original question of psychopathy – what value do we place on “normal” (if there even is such a thing), and on “normalizing” people who appear unlikely to harm either others or themselves? No easy answers here, unfortunately.

Got the questions, though? Good. Go chat with your neighbor about them.

Wednesday Open Reading Frame, part I: “What ISN’T Science Communication?”

Apologies for the miss last week; a hectic day of experiments kept me away from the blog. To make up for it, today’s Wednesday ORF will be a doubleheader, featuring two short but interesting articles, both of which are worth a read! This first article comes to you from the NatureJobs blog (a personal favorite of mine for career advice).

The context.
I’ve written on this blog once before about the dim career prospects facing early-career scientists, and linked to some different proposals that have been put forth for managing the apparent input-output issue we face as biomedical researchers. As (nearly) Ph.D.s, we’re presented with a potentially dizzying number of post-graduate options: industry research, perhaps, or teaching, or consulting, or editing, or any number of other applied-science fields. And often, there’s not all that much guidance from the academy for those students who may wish to pursue one of those other tracks. While that topic merits its own article in time, today we focus on one particular parallel career path: that of science communication. If you’re having trouble thinking of science communicators, consider that Carl Sagan, Neil deGrasse Tyson, and Bill Nye might all be well-described by that moniker.

The article.
“What Isn’t Science Communication?” is yet another dispatch from the popular, successful, and now-annual NatureJobs Career Expo in London (see all articles under the hashtag #NJCE14). The article offers a summary of a panel that took place during the Expo, and that featured professional science communicators from a variety of industries. As I’ve been looking into becoming a “professional communicator” myself, I find their insights both heartening and grounding. The take-home point: science communication is a lot easier to define by what it ISN’T than by what it IS. It ISN’T just print media any longer, it ISN’T a sure thing, and it ISN’T “the easy alternative” to a research career. It’s often grueling, time-consuming, and promises at least as insecure a future as academia. It’s a career with no set path, one that you may cobble together from a job here, some freelance work there, and a good portion of your own dogged determination. Sound hard? It certainly will be. I don’t doubt there will be days when I want to turn back. But what worthwhile enterprise isn’t worth a little blood, sweat, and tears?

Note from EGB: I simply adore the NatureJobs blog and wish I’d been able to attend this conference in person. If you, like me, don’t happen to live in the UK or Europe, though, don’t fret; the NatureJobs Career Expo is coming to us! The first Stateside NJCE took place late last spring, and was successful enough to merit a return to Boston on May 20, 2015. Registration isn’t open yet, but I’ll certainly keep you updated as I learn more. If you’re a graduate student or post-doc considering other paths – or even just curious what those paths are – you might want to block out a day on your calendar.

Wednesday Open Reading Frame: “Dust to dust”

Today’s Open Reading Frame comes to us from the Nature editorial pages, and serves as a good reminder to those of us who drink science with our morning coffee – scientists and laypeople alike! – that we should be especially cautious when we are met by those claims we most desperately wish to believe.

The context.
Earlier this year, a team of collaborators from a number of different research universities reported an exciting result from the South Pole: the BICEP2 experiment had produced solid evidence for the theory of cosmic inflation. Within hours, the interwebs were abuzz, as the interwebs are wont to do when Physics of Great Import makes its way into the news (less so biology – a story for another time, there, perhaps?). We (well, I) watched press footage of one of the BICEP2 researchers surprising one of the Fathers of Inflation, Dr. Andrei Linde, at his home with the findings. We celebrated the clarity of Jorge Cham’s excellent visual explanation of the research. Perhaps we even tried to pore over more rigorous science in our efforts to understand. We, the lovers of science, the writers and learners and teachers and doers, let out a collective cheer at a longstanding and beautiful theory proved right.

…but did we celebrate too quickly? The preliminary report was released amidst a media circus. The findings have not been confirmed, and now come with increasing caveats; there is even a question as to whether one of the biggest recent discoveries in cosmology may be reduced to dust. As skeptical, analytically-minded folk, as science writers, even as scientists – were we too quick to see the work as a finished product, rather than a “work in progress”?

The article.
“Dust to dust” was published as last week’s Nature editorial in advance of an upcoming meeting of the Council for the Advancement of Science Writing, at which the lessons of the overblown BICEP2 media hype will be discussed. I confess that I’m as culpable as anyone; I was just as excited about the BICEP2 data as any other universe-loving non-cosmologist. While the article is written from the perspective of science writers who were, perhaps, a little overzealous at the time the findings were reported, the caution the editors advise is more generally applicable. They remind us that, as intelligent producers and consumers of science, we must always be prepared to question those theories most dear to us; that even when we have good cause to be optimistic, we should retain that measure of practiced skepticism that characterizes a good logical thinker. Ultimately, the article recalls one of the best-known messages of beloved astrophysicist and science communicator Carl Sagan: “Extraordinary claims require extraordinary evidence.”

TL;DR – Look for the evidence.

Further reading.
This relatively high-level blog post from mathematician Dr. Peter Woit, at Columbia University, details some of the controversy surrounding the BICEP2 findings.

If you’d rather go whole hog on the physics, check out BICEP2’s interwebs home here.

The New Yorker takes on the science of Ebola

As various and sundry science writers and policy wonks, including my friends at CauseScience, have noted time and again over these last few weeks, much of the coverage of Ebola has been alarmist, politicized, and, as news anchor Shepard Smith accused recently, terrifically irresponsible. Since the Ebola virus entered the United States, the hype machine has become even more gluttonous, feeding upon people with limited understanding of the finer points of virology and epidemiology, encouraging panic where no panic should be. It’s gotten to the point where I can rarely bring myself to read news coverage on the virus’s progress (and on the progress of the CDC’s interventions).

The New Yorker this week offers a breath of fresh air amidst all the fear-mongering and political gamesmanship. This long but excellent article from Richard Preston, “Inside the Ebola Wars,” explores the chronology of the Ebola epidemic and poignantly details its human consequences; it also offers one of the best explanations I’ve seen thus far of the science of infection, genetic monitoring, and vaccine development. Preston, who also authored The Hot Zone – a thorough chronicling of Ebola’s discovery and spread on the African continent – writes with scientific clarity and with empathy for the human cost of the epidemic. If you want to understand what’s going on in Africa and how Ebola works, this article is essential reading.

Is it neuroscience-related? No. But it’s certainly a bit of science and the natural world that’s profoundly affecting global society at present, so if you have some time, it’s worth a read.

Please feel free to post in the comments or on Facebook if there are any aspects of the science Preston covers that you’d like explained in more detail or at a slightly different level.