February 1, 2013 / neurograce

The Only Constant is Brain Change: A blog series on neuroplasticity for the Dana Foundation

Hey do you know about the Dana Foundation? It’s a New York-based philanthropic organization dedicated to funding brain research, education, and outreach. They organize fun things like global Brain Awareness Week and online brain resources for kids. They also host a blog aimed at describing aspects of neuroscience to the general public, and today marks the first in a series of three monthly guest posts I’ll be doing for them!

The topic I chose for this series is neuroplasticity. Plasticity, the brain’s ability to adapt and change, is a huge area of research. Developmentally, there are questions about what inputs are required (and at what times) in order to properly prune and shape connections in the early brain. On the molecular scale, we want to know which proteins are involved in signaling and inducing changes in synapses. At the systems level, researchers are investigating how the relative timing of spikes from different cells affects synaptic weights. On the more practical side, translational researchers want to harness the beneficial aspects of the brain’s response to injury while minimizing the negative ones. Clearly the topic of plasticity provides a depth and breadth of questions for neuroscientists–and thus plenty of material for a blog.

But beyond simply stressing the variety of ways in which the brain is plastic, I’d like to go a step further. I would say that plasticity is the brain. Changing connections and synaptic weights isn’t merely something the brain can do when necessary; it is the permanent state of affairs. And thankfully so. Plasticity on some level is required for many of the things we consider crucial brain functions: memory, learning, sensory information processing, motor skill development, and even personality. The brain is always reorganizing itself, synapse by synapse, in order to best process its inputs and create appropriate outputs. This kind of adaptive behavior keeps us alive–say, by creating a memory of a food that once made us sick so that we know to avoid it in the future. And it makes us good at what we do–changes in motor cortex, for example, are why practice makes perfect in many physical activities. There is no overstating the power, importance, or complexity of neuroplasticity.

In order to cover all these majestic aspects of plasticity, I’ve divided the posts into the following topics (links will be added as they’re available):

Developmental Plasticity and the Effect of Disorders. How the brain responds to a lack of input during development, and to a disorder that impairs plasticity itself.

Plasticity in Response to Injury–a Blessing and a Curse. The brain’s attempt to fix itself after stroke, and some weird effects of losing a limb.

Short-term Plasticity and Everyday Brain Changes. Changing environmental demands call for quick and helpful changes in the brain.

Enjoy!

January 18, 2013 / neurograce

Currently Necessary Evil: A (vegan’s) view on the use of animals in neuroscience research

All research methodologies have their challenges. Molecular markers are finicky. Designing human studies is fraught with red tape. And getting neural cultures to grow can seem to require as much luck as skill. But for those of us involved in animal-based research, there is an extra dimension of difficulty: the ethical one. No matter how important the research, performing experiments on animals can stir up some conflicted feelings on the morality of such a method.  This only intensifies when studying the brain, the very seat of what these animals are and how (or how much) they think and feel. Even the staunchest believer in the rightness of animal research still has to contend with the fact that a decent portion of society finds what they do unethical. It is not a trivial issue and shouldn’t be treated as such.

But, like our use of animals for food, clothing, and a variety of other needs, experimenting on animals dates back millennia. The first recorded cases come from ancient Greece, and the practice has continued ever since. Now, while historical precedent is not sufficient evidence of ethicality, the history of animal testing does allow us to recognize the great advances that can come from it. Nearly all of our tools of modern medicine, our knowledge about learning and behavior, and our standards for food safety and nutrition would not exist without animal research. I certainly wouldn’t be writing a blog called Neurdiness, because neuroscience wouldn’t exist as a field. It is clear that animal testing has proven crucial to the advancement of our society, and perhaps even to its very survival. And that value cannot be overlooked or underappreciated when having a discussion about morality.

So, we seem to come to an impasse. The instinctive distaste for replacing freedom with suffering in animals is pitted against the knowledge that this practice will prevent suffering in humans. And how to equate the two is unclear. Is an effective treatment for Alzheimer’s or Parkinson’s worth the lives of countless rats, mice, and pigs? What about the potential for such a treatment? Does our desire to understand higher cognitive functions—which may not lead directly to medical advances, but could change our notions of who we are and how we think—justify experiments on non-human primates, the only animals from which we can glean such information?  Beliefs exist at either extreme: so-called “abolitionists” claim no exploitation of animals is ever justified, while at the other end is the feeling that animals lack moral standing and their needs are thus subordinate to those of humans.

Personally, I view the use of animals in research as something of a necessary evil. For the majority of human history, people felt free to capture, kill, or enslave animals for a variety of purposes: for food, for powering agricultural tools, for transportation, for materials. We’ve since outgrown the need to use animals for many of these things, but before modern technology their use seemed perfectly justified and even required. Indeed, civilization would be nothing like it is now without our willingness to utilize animals. The present state of neuroscience research is something like that of early humans. I’m vegan because the state of modern food production and distribution means I can be healthy without harming animals. But I’m an animal researcher because, in the present state of neuroscience, I know we cannot advance the field without them. We don’t yet have the technology to free ourselves of a dependence on animals, and our ability to reach that point requires their continued use.

Working in this moral gray zone leaves many neuroscientists feeling uneasy about discussing their methodology with the general public. Of course, divulging specific details about animal suppliers or where the animals are housed is dangerous due to the risk of that information falling into the wrong hands, but speaking openly about engaging in animal research should not be outside a researcher’s comfort zone. The right balance between ensuring safety and defending your position is clearly a hard one to strike (as the results of this Nature survey suggest). But the voices of scientists involved in the work are crucially needed in the public debate, and research institutions need to establish practices that keep fear from silencing that participation.

Of course, the best way to encourage support for the use of animals in your research is to ensure that you’re doing good science. To most people, animal research is acceptable on the grounds that it provides a significant benefit, and experimenters need to keep this mandate in mind. Importantly, doing good science also means adhering to the guidelines for proper care and treatment of laboratory animals. Those concerned about the treatment of animals in labs would be happy to know that there are a plethora of agencies overseeing how animal experiments are designed, how animals are housed, and ensuring that as little pain as possible is inflicted (an extensive list of related resources can be found here). An important tenet is the “three R’s” of animal testing. This framework, first put forward by Russell and Burch in 1959, urges reduction (use only the number of animals needed to significantly verify a finding, and use them wisely and carefully so as to reduce unneeded waste), refinement (use the most humane housing, anesthetic, and experimental techniques available, avoiding invasive and painful procedures if possible), and replacement (seek alternatives to animal use whenever possible; options include culturing tissues in a dish, computational modeling, and the use of lower animals or plant life).

Adhering to such rules and guidelines from overseeing agencies is important both for the continued operation of a lab and for maintaining a good public perception. Furthermore, the proper treatment of lab animals is not merely a means of appeasing animal rights activists; it is crucial for attaining accurate results, especially in neuroscience. Physical and psychological stress can have huge impacts on the brain, and an unhealthy animal is likely to produce unreliable results. Additionally, keeping in line with proper practices can reduce the unease that a researcher may feel about their work with animals.

In the realm of neuroscience, animal research is, in no uncertain terms, a necessity. But at the same time, we are making strides in the implementation of the three R’s, specifically with replacement. The ability to grow neural cultures is widely used when appropriate. Realizing the potential of lower animals to answer questions normally posed to higher ones is also a promising trend. For instance, social behavior in flies and decision-making in rodents are being explored to a greater extent. Computational modeling is also becoming ever more utilized, and while it is far from fully replacing an animal, it can at least predict which experiments have the most potential to be of use, thus reducing wasteful animal use. And with time, and the refinement of all these techniques, animals will continue to be used more wisely and with less frequency. So, if researchers don’t become too myopic or complacent, and continue to view their work in the larger ethical context, designing experiments to reduce moral dilemmas, then we can progress in a way that is both humane and fruitful.

Editors (2011). Animal rights and wrongs. Nature, 470(7335), 435. DOI: 10.1038/470435a

January 9, 2013 / neurograce

Sequencing the Connectome: An entirely new approach to neuroanatomy at the finest level

Neuroanatomy can happen at many scales. At the highest end, we can ask if certain areas of the brain have connections between them: for example, does the lateral geniculate nucleus (LGN) send projections to primary visual cortex? (hint: yes). Through electrical stimulation and tract-tracing methods, we’ve gotten pretty good at finding this out. We can then look at connections within these areas: which layers of visual cortex connect to each other? Cell-staining and microscopy make this investigation possible. And we can even go further and try to learn about what kind of connections exist within a single layer of cortex (an area that is only fractions of a millimeter thick). Advanced, automated imaging techniques have allowed much progress here. Not only that, we can even look across scales by investigating, for example, which layer of LGN sends connections to which layer of visual cortex. Importantly, the tight relationship between structure and function in the brain means that learning about all these connections provides functional insights in addition to purely anatomical ones.

Now, taking this connectivity quest to its logical extreme, the most we could ask to know is every connection made by every cell in the brain. This information is called the connectome, and it has been causing quite a buzz recently in the neuro-world. What that level of detail could tell us about how the brain computes, and how it differs across species and individuals, is an area of hot debate. Some people feel the effort to investigate the connectome is a waste of resources, and useless in any case because a single network with constant connectivity can still show vastly different behavior under different conditions (inputs, modulators, etc.). Others feel that much information about a network and its functions is stored in how cells connect to each other. The connectome’s most prominent proponent, Sebastian Seung, has an almost religious zeal for the connectome as the be-all and end-all in determining who we are.

For the imaging approach, programs like NeuroTrace automatically scan layers of EM images trying to trace neuronal projections.

But if Seung’s mantra of “we are our connectome” is true, then the vast majority of us are going through a major identity crisis. The fact is, there is only one creature for which we have deciphered the connectome: C. elegans. The 7000 synapses between the 302 neurons of this little worm took over 50 person-years to obtain using available imaging techniques. Advances in automated analysis of electron microscopy data are occurring rapidly and can speed up and simplify this process. But scaling this up to, say, a mouse brain (~100 billion synapses) or a human brain (~100 trillion synapses) still seems pretty impractical. Our desire to know the connectome is nowhere near our ability to obtain it.
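Just to make that arithmetic vivid, here is a quick back-of-the-envelope sketch using the numbers above. It naively assumes the per-synapse effort of the original C. elegans reconstruction stays constant (which it certainly would not, given automated analysis), but the orders of magnitude are the point:

```python
# Crude scaling estimate: how long would bigger connectomes take at the
# hand-traced C. elegans rate? Purely illustrative.
C_ELEGANS_SYNAPSES = 7_000
C_ELEGANS_PERSON_YEARS = 50

rate = C_ELEGANS_PERSON_YEARS / C_ELEGANS_SYNAPSES   # person-years per synapse

for brain, synapses in [("mouse", 100e9), ("human", 100e12)]:
    print(f"{brain}: ~{synapses * rate:.0e} person-years")

# mouse: ~7e+08 person-years
# human: ~7e+11 person-years
```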

Enter Anthony Zador, a Cold Spring Harbor researcher with a different approach. His recent paper outlines the delightfully acronymed BOINC (barcoding of individual neuronal connections) method of seeing synapses. Actually, the value of BOINC is in the fact that it takes the “seeing” out of the process. Rather than relying on imaging methods to determine the existence of a synapse, BOINC harnesses the power of genetic sequencing. As the paper points out, DNA sequencing speeds have been increasing nearly as quickly as the prices have been dropping. This powerful combination makes it ideal for such an immensely large project as connectome mapping.

So how does it work? As the name suggests, the process requires each neuron be artificially tagged with its own “barcode”, or sequence of nucleotides.  Hosting these barcodes in specific types of viral vectors inside the neuron can allow them to be transferred via synapses to other neurons. Thus, a single neuron will contain its own barcode as well as the barcodes of all the neurons with which it synapses. Next, these barcode sequences will need to be joined inside the cell so that their association can be made known later via sequencing. So, if in the process of sequencing you come across a chunk of DNA with cell A’s unique nucleotide sequence followed by cell X’s unique nucleotide sequence, then you can infer that there’s a connection between cell A and cell X. Do this for all the DNA chunks, get all the connections, and you’ve made yourself a connectome. (Fittingly, the word ‘connectome’ was actually inspired by ‘genome.’ This method should’ve been obvious!).
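To make that inference step concrete, here is a minimal toy sketch of the readout logic. Everything in it is invented for illustration: the barcodes, and the assumption that the sequencing output has already been reduced to clean (presynaptic, postsynaptic) barcode pairs. The hard part of the actual proposal is the molecular biology that would produce such pairs, not this bit of bookkeeping.

```python
# Toy readout: given joined barcode pairs recovered by sequencing, rebuild a
# directed connectivity list. Barcodes and the pair format are hypothetical.
from collections import defaultdict

# Pretend each tuple is one sequenced DNA chunk: (presynaptic barcode,
# postsynaptic barcode). Real barcodes would be much longer.
sequenced_pairs = [
    ("AAGT", "CCTG"),
    ("AAGT", "GGAC"),
    ("TTCA", "CCTG"),
    ("AAGT", "CCTG"),   # the same connection can show up many times
]

connectome = defaultdict(set)
for pre, post in sequenced_pairs:
    connectome[pre].add(post)

for pre in sorted(connectome):
    print(f"{pre} -> {sorted(connectome[pre])}")
# AAGT -> ['CCTG', 'GGAC']
# TTCA -> ['CCTG']
```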

A schematic of the new approach (BOINC)

Now of course this idea is only in its very early stages. The exact implementation is yet to be determined and plenty of questions about the specifics already abound. To start, giving each neuron a unique DNA label is not a trivial problem. The authors suggest a similar method to the combinatorial one used in creating all the different fluorescent colors for Brainbow, but replacing the florophores with DNA sequences. The next stage, the act of transferring the barcodes across synapses, is luckily not as complicated as it may seem. Viruses specialize in spreading themselves, and rabies and pseudorabies viruses have been used in conjunction with dyes and other markers to trace neural connections for years. This method has its difficulties of course, such as  ensuring that the barcode-carrying virus stops transporting itself once it reaches the post-synaptic cell (lest it replicate and invade all cells, giving a lot of false positives). And once the synapse-jumping is accomplished, there is the matter of getting the barcodes to join together in the right way to ensure that which cell is pre- and which is post-synaptic remains decipherable.

And even if all these specifics are successfully tackled, the method itself has its limitations. Essentially what it provides is a connectivity matrix: a list of cells which are defined solely by their connections. We can’t say much else about the type of cells (their location, the neurotransmitter they use), the type of synapse (excitatory vs. inhibitory, its location on the post-synaptic cell, the morphology), or activity levels. All these aspects are potentially important to our understanding of neuronal wiring. But that’s no reason to dismiss BOINC. The current methodology also has its limitations, and if BOINC works it will be a vast improvement in terms of output rates. It also has the potential to be combined with other traditional techniques to investigate the above properties, yielding a far more complete picture than we have right now.

Overall, we don’t know how important knowing the connectome is for our understanding of average brain function or differences amongst individuals. But making claims about its role when we know so little about it is illogical. Even “anti-connectomists” should recognize that getting results which validate their dismissal of such fine-level anatomical analysis requires that we have the fine-level anatomy to test in the first place. Basically, when it comes to the connectome, we don’t know enough to say that we don’t need to know it. Furthermore, the sequencing approach promises a smaller time and money commitment, quieting those who worry about the resources going into connectomism. So breakthroughs in approaches like this should be applauded by all, both for their potential to advance the field and for encouraging others to think a little outside of their methodological box.

Zador, A., Dubnau, J., Oyibo, H., Zhan, H., Cao, G., & Peikon, I. (2012). Sequencing the Connectome. PLoS Biology, 10(10). DOI: 10.1371/journal.pbio.1001411

December 31, 2012 / neurograce

Blurring the Line: A collection of advice for completing a PhD

Earning a PhD is a lot like having a baby. It’s time-consuming, messy, and can cause a lot of sleepless nights. Importantly, it also doesn’t come with an instruction manual. Grad students start their program with a list of course requirements and the charge to “do research.” But what that actually means and how to do it well is conveniently, and possibly intentionally, left out. Essentially, the process of earning your PhD is meant to teach you how to do research. But, in the spirit of New Year’s resolutions, I think it would be helpful to offer some more concrete tips that can make the process a little smoother. Now, since I have a whole one semester of grad school under my belt (hold your applause), I may not be in the best position to give advice. I am, however, perfectly-suited for pestering older grad students and PhD holders for their words of wisdom and scouring the internet for help (by the internet I mostly mean Quora. Do you Quora? I Quora. If you don’t Quora, you should really Quora). So I’ve collected some of the main themes agreed to be necessary for a successful PhD by the people who have done them and present them to you here (in no particular order):

Organize. Your notes, your lab book, your time. Organization is a fuel for productivity. It doesn’t matter if you spend all night running experiments if your lab book is so messy that the next morning you don’t even know what variables you were manipulating. And there’s not much point in frantically taking notes during a lecture or while reading a paper if you have no meaningful way of referencing them later. Also, in grad school, just like in life, time is your most valuable resource so you want to make sure you’re spending it wisely.

Luckily, in this digital age we live in, there are plenty of tools to help you organize just about everything. I like Mendeley for papers, because it can actually save the annotations you add, is accessible from anywhere, and makes it super easy to compile citations. EndNote and CiteULike have been highly recommended for paper- and note-organizing as well. I’m a big fan of Toodledo and other online to-do list sites for keeping track of tasks, and nothing beats good old Google Calendar for a visual representation of your time. Of course, all these services only work to the extent that you use them and use them well. So remember to actually import your papers to Mendeley, categorize your notes appropriately, and set realistic timetables for your tasks. And don’t get too carried away with all the technology, or you may need to get a service to organize your organizational services.

Stay Healthy. Mentally and physically. It is quite possible for graduate students to get so wrapped up in their work that they neglect important things like eating, sleeping, and moving. While this dedication seems potentially beneficial, the fact is that such habits are not sustainable and will, sooner or later, catch up to you and your PhD progress. You are unlikely to think up any significant breakthroughs when stressed, hungover, and/or depressed. If it helps, think of yourself as a piece of expensive lab equipment that needs proper care and maintenance to produce good results.

Having a few hobbies completely unrelated to your work can keep you grounded, sane, and in shape. Activities like team sports or dance lessons have the dual benefit of physical exercise, a well-known stress reliever, and some (probably much-needed) social interaction with non-scientists. Engaging in external interests can also spark your creativity within your own line of work and guarantees that your happiness isn’t completely tied to the current state of your research. Maybe you haven’t gotten an experiment to work in a month, but you did learn how to whittle an awesome spoon and planned the perfect birthday dinner for your significant other (yes, they count as hobbies). This will keep you energized and motivated enough to keep at your work. So basically, for the sake of your PhD, you need to have some occasional time away from it.

Read. Keeping up with your field is an important part of being a researcher. You don’t want to waste time working on a question that’s already been answered or developing a method that already exists. So read papers. Within your exact line of research read all kinds of papers, old and new, and especially the ones that your adviser constantly references. More broadly, know the seminal works in your field and keep up to date with current trends. Subscribe to email or RSS feeds from the main journals and actually look at them. Try to read papers completely outside your field if you think there is some aspect to them that might be relevant to you, or if you’re just interested. Basically, you should get really good at being able to read any paper, assessing its merit, and understanding its importance to your work and/or the field as a whole.

Of course, all this reading can conflict with your above-mentioned attempts at time management. So another important skill for grad students is to know when to stop. You can’t know every experiment that’s ever been done, and eventually you need to start doing your own. Pick your papers wisely, get good at reading them quickly, and learn to accept when you’ve learned enough. Attending talks can be a good way of getting a lot of information fed to you in about an hour, but they should also be chosen wisely. Since you can’t skim a talk to get to the good parts, you should avoid those with only limited potential for keeping your interest. And if you do take the time to attend them, make sure to stay attentive and awake (not that I’ve ever had any kind of problem with that whatsoever).

Write. The main output of any research is a journal article. If you can’t write about your work then you haven’t finished the job. And if you can’t write a thesis then you haven’t finished your PhD. So you need to get good at writing. And the only way to do that is practice, practice, practice. Take any opportunity to write. One great idea is to periodically create a writeup on your project as you’re working on it. Not only does this give you practice, but it keeps you focused on how what you’re doing at that time fits into your overall narrative, and lightens the work load at the end of your project when you’re looking to submit. A key component to good writing in science is, of course, to convey complicated concepts in as simple a way as possible. And the best way to practice that is to try explaining what you do to someone completely outside the field (family usually works well for this). These kinds of exercises don’t just help your writing, they help your science. As Einstein said, “If you can’t explain it simply, you don’t understand it well enough.”

Think. Possibly one of the simplest yet most profound pieces of advice for a young researcher. It can be tempting, especially as a graduate student with a weekly meeting with an adviser looming, to simply churn out whatever kind of results you can possibly attain, without giving much thought, for example, to why you’re doing a particular kind of analysis or what the results really mean. This kind of rote work rarely leads to meaningful findings. So, allow yourself time devoted to just really thinking about your work. No multi-tasking, just quiet thought. Think about what you’re doing and why, and how it fits in with your knowledge of the rest of the field. Brainstorm potential solutions to the problem you’re working on, and ways of testing each of them. The ability to entertain multiple possibilities at once and to cope with ambiguity are important traits in successful researchers. Engaging in deep thought (and writing down those thoughts) helps develop these and other meaningful skills in ways that superficial work never could.

Ask Questions. Ask questions at talks, ask questions in class, ask your adviser questions, ask questions during journal club, ask other grad students questions. Just, for the love of whatever you think is holy, ask questions whenever you have them. There is a common fear, especially amongst students, that the question you want to ask is going to betray your ignorance. The answer, you assume, is so obvious to everyone else in the room that your very asking of it is going to get you kicked right out of graduate school. The odds of this happening, however, are very small. The odds of someone else in the room having a similar question that they are equally afraid of asking are probably much higher. So be the hero and ask it. There is no way to get answers otherwise. And if graduate school isn’t the time that you get to ask unlimited (possibly stupid) questions, then when is? For younger grad students, it’s especially beneficial to get information from older students that you can’t get elsewhere: advice on classes, advisers, administrative hurdles, etc. So worry less about looking smart, and actually get smart by asking as many questions as you need.

Network. Yes, that terrible n-word. To most scientists “networking” brings up images of people in suits exchanging business cards over power lunches. To us, there is just something phony about it. But the fact is that collaborations are a huge part of science, and you can’t have them without the ability to branch out to other people in your field. It’s easy enough to get to know people you work with regularly, but it’s also important to meet people with similar interests outside of your institution. Conferences are a great opportunity for that. The key is to follow through concretely. Get the email address of the person you had a really engaging conversation with at SfN. Follow the work of the labs you like and email authors when you have a question about their paper. As difficult as it can be for a typically solitary scientist, socialization and self-promotion are necessary. You want to get your name out there for when opportunities in your field open up, and you want to know who to contact when you need someone with a specific set of skills. Take your natural desire to talk about work that interests you and push it just a little bit further into the realm of “networking.”

Overall, it’s important to remember that you are not a PhD-producing robot. You are a person doing a PhD, and so the habits you develop need to be sustainable and keep you interested while still allowing you to achieve your goals in a timely manner. You will need to work hard, since at this level intelligence is a given. But if you’ve gotten to the point of working on a PhD, the research questions themselves should drive you. As Arnold Toynbee said, “The supreme accomplishment is to blur the line between work and play.”

December 17, 2012 / neurograce

Thoughts on Thinking: What is and is not necessary for intelligence

Thought is a concept that is frequently relegated to the Stewart-esque realm of “I know it when I see it.” To think isn’t merely to churn data and spit out output for each input; we instinctively believe there is more behind it than that. When a friend pauses and stares off for a minute before answering a question, we allow them that time to think, having some notion of the process they’re going through. But we are not so empathetic when we see the colorful spinning wheel or turning hourglass after asking something of our computers.

A lack of a hard definition, however, makes studying thought a confusing endeavor.  We would ideally like to break it down into component parts, basic computations that can conceivably be implemented by groups of neurons.  We could then approach each independently, assuming that this phenomenon is equivalent to the sum of its parts. But the reductionist approach, while helpful for getting at the mechanisms of those individual computations, seems to strip the thought process of something essential, and is likely to lead to less than satisfying results.

Some of the great thinkers of cognition came together recently at Rockefeller University to both celebrate the 100th birthday of Alan Turing, and discuss this notion of what makes thought thought, how we can study it, and how (or, if) we can replicate it. The talks were wide-reaching and the crowd represented a variety of viewpoints, but common themes did emerge.  One of the most striking ones was the requirement of novelty.  That is, to have true intelligence requires the ability to answer novel questions based on currently-known information.  Recalling a fact is not intelligent.  Recalling a fact and integrating it into a new line of reasoning that was built purely from other recalled facts is.  The production of this new line of reasoning makes it possible to answer a new kind of question. And, as John Hopfield pointed out, the number of novel avenues of reasoning that can be explored and new questions an intelligent being can answer grows exponentially with the amount of information known.

This kind of extension of given information into new dimensions requires mental exploration. Intelligent beings seem to be able to walk down different paths and get a sense of which is the right one to follow. They can build hypothetical worlds and predict what will happen in them. They can abstract situations away to the highest level and use that to find deep similarities between superficially disparate things.  Josh Tenenbaum used this example from a 1944 psychological experiment to demonstrate just how instinctively we deviate from just processing the information that was given to us and start to add new layers of understanding.

The fact that most (decent) people come away from that video with a sense of ill-will towards a shape shows how wild our mental exploration routinely gets. But what is it that allows us to make these new connections and forge novel cognitive paths? And is it possible to design entities that do it as well?

Currently, much of AI research is focused on the creation of artificial neural networks that can extract meaningful information from a small amount of input and use that to categorize future input of a similar kind. For example, by discovering commonalities across the shape of letters in handwriting samples, a computer can identify letters in a new handwriting sample and turn it successfully into text, even though it had never “seen” that exact sample before. It is important to note, however, that while the computer is answering something novel in the sense that it is telling us the text of a new handwriting sample, the question itself is not novel. Identifying letters is exactly what the computer was trained to do and exactly what it does; nothing new there. Many people would argue that these kinds of networks, no matter how complex, could never end up creating the intelligence we experience in ourselves. Pattern recognition simply doesn’t capture enough of what intelligence requires. There is no deeper understanding of why the patterns exist. There aren’t abstract connections between concepts. There is no possibility for novel exploration.
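For the curious, here is how little machinery this kind of pattern recognition can involve. The sketch below uses scikit-learn’s bundled handwritten-digit images (digits rather than letters, and the library choice is mine, not anything from the work described here):

```python
# A bare-bones pattern recognizer: learn from some handwritten digits, then
# label digits it has never "seen". New inputs, but never a new question.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()   # 8x8 images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)

print("accuracy on unseen samples:", clf.score(X_test, y_test))
# Typically around 0.96 -- yet the model can only ever answer "which digit
# is this?", the one question it was trained on.
```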

But what if we made it more complicated, by, say, training networks to do pattern recognition on patterns themselves? Or combined these networks with other AI technology that is better suited for learning causal relationships, such as Bayesian inference? That combinatorial approach is what helped IBM’s Watson become a Jeopardy! whiz. Watson got the frequently ambiguous and complex clues as textual inputs and displayed amazing language analysis skills by coming up with the correct response. He was drawing connections between the words in the clue and concepts external to it. An incredibly impressive feat, but we still don’t say that Watson thought about the clues or understood the answer. Watson just did. He did as he was programmed to do. That gut response suggests that we believe automation and intelligence must be mutually exclusive.

But such a conviction is worrisome. It means that the bar we’re measuring intelligence by inches up any time a machine comes close to reaching it. Somehow, in the process of understanding thought, we seem to discredit any thought processes we can understand. Certainly, Watson wouldn’t pass the Turing test. He had a limited skill set and behavioral repertoire. He couldn’t tell a personal story, or laugh with the rest of us when he occasionally came up with an absurd response to a clue. But he was very capable in his realm, and the idea that he was not at all “thinking” seems in some way unfair. Perhaps not unfair to Watson, but unfair to our level of progress in understanding thinking. The idea that someday a robot could be built that by all appearances perfectly replicates human cognition can be unsettling to some people. We don’t want to think of ourselves as merely carbon-based computing machines. But we shouldn’t let hubris stand in the way of progress. We may miss the opportunity to explain thought because we are too afraid of explaining it away.

December 12, 2012 / neurograce

Uncharted Territory: How the processing of smell differs from other senses


Anyone who has used a map (or, as it’s more commonly called today, a smartphone) knows that it’s not so hard to turn the three-dimensional world into a useful 2-D representation. In fact, our brains do it instinctively. Spatial information enters via the retina, is carried along through the optic nerve and a few synapses, and eventually creates a wave of activity in the occipital lobe of the brain. This allows the external world to be mapped out in two dimensions across the surface of the visual cortex (with cells next to each other encoding areas of the visual field that are next to each other). The auditory system has an analogous process, although it may seem less intuitive. The cochlea, the snail-shaped organ in your inner ear, is responsible for creating a map of the frequencies of sounds. As that information gets passed along, low-frequency stimuli end up being processed at one end of the auditory cortex and high frequencies at the other. The somatosensory cortex is similarly mapped; cortical areas getting input from your hand are next to those getting input from your wrist, etc. But when it comes to smell, everything is a bit…muddled.

See, the organization of the cells that process smells isn’t quite so straightforward. There is not what we would call an “olfactory map” in the cortex, at least not one that scientists can recognize. The pipeline that gets information from the nose into the brain is actually quite an intriguing one. It starts with the olfactory receptor neurons in the nose coming in contact with whatever molecules are floating about in the air. Different olfactory receptor neurons will bind to different molecules, and when this happens these receptor cells become active. They then project to big chunks of cells in the olfactory bulb called glomeruli, and in turn activate these cells. Each glomerulus will only get inputs from olfactory receptor cells that respond to the same odor molecule. So there is a great convergence of information, with each glomerulus responding to specific odorants. But it all appears to be for naught, because at the very next stage of processing things get all tangled up again. The cells in the glomeruli project to the piriform cortex, but in a random way which destroys the order and segregation they initially had. Cells in the piriform cortex thus get input from a variety of receptor types and respond to specific odors based on that input. However, we don’t have the same kind of physical relationship amongst cells that we see in other sensory modalities. That is, one cell’s response to a stimulus may have nothing in common with the response of the cell directly next to it. Cell A can be highly active when there is a hint of vanilla in the air, while its neighbor cell B may fire in response to a whiff of vinegar.
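A toy simulation makes the contrast easy to see. The layer sizes, wiring sparsity, and “odors” below are all made up; the only point is that an ordered stage (receptor types converging onto specific glomeruli) followed by a random projection leaves neighboring piriform cells with unrelated preferences:

```python
# Toy version of the pipeline described above: ordered glomerular identities,
# then a random glomerulus -> piriform projection. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n_glomeruli, n_piriform = 50, 200

# Each odor activates a small, specific set of glomeruli (the ordered stage).
vanilla = np.zeros(n_glomeruli); vanilla[10:13] = 1.0
vinegar = np.zeros(n_glomeruli); vinegar[30:33] = 1.0

# Random, sparse glomerulus -> piriform wiring (the scrambling stage).
W = (rng.random((n_piriform, n_glomeruli)) < 0.1).astype(float)

resp_vanilla = W @ vanilla
resp_vinegar = W @ vinegar

# Adjacent piriform cells end up with unrelated odor preferences.
for cell in range(5):
    print(f"piriform cell {cell}: vanilla={resp_vanilla[cell]:.0f}, "
          f"vinegar={resp_vinegar[cell]:.0f}")
```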

This is odd because we generally view maps in cortex as serving some kind of purpose. To start, it’s energetically advantageous to have most of your connections be to nearby cells. If these connections are excitatory, then it would be nice if those nearby cells are trying to send a similar message. With this setup, local connections can be used to strengthen the signal and, with the addition of inhibitory interneurons that project farther away, weaken an opposing signal a little way down the cortex. Basically, it’s easier to enhance the response of similarly-responsive cells and suppress the activity of unrelated cells if there is some meaningful layout to the cell landscape. But from our current vantage point, olfactory cortex looks like a complete jumble with no structure to be found.
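Here is a small sketch of that logic, with invented sizes and weights: when neighboring positions stand for similar stimuli, a generic “excite nearby cells, inhibit cells farther away” weighting suppresses background noise while letting a real bump of activity through.

```python
# Why an ordered layout helps: if neighboring cells carry similar signals,
# "local excitation, longer-range inhibition" wiring flattens a noisy
# background while preserving a genuine bump of activity. All sizes and
# weights here are invented for illustration.
import numpy as np

n = 100
x = np.arange(n)
rng = np.random.default_rng(1)

# A bump of activity centered on position 50, sitting on top of noise.
activity = np.exp(-(x - 50) ** 2 / 50.0) + 0.3 * rng.random(n)

# Difference-of-Gaussians weights: narrow excitation minus broad inhibition,
# scaled so the two roughly cancel for a flat input.
dist = np.subtract.outer(x, x)
W = np.exp(-dist**2 / (2 * 2.0**2)) - 0.25 * np.exp(-dist**2 / (2 * 8.0**2))

output = np.clip(W @ activity, 0.0, None)

print(f"before: peak {activity.max():.2f}, median {np.median(activity):.2f}")
print(f"after:  peak {output.max():.2f}, median {np.median(output):.2f}")
# The background is pushed toward zero while the bump survives -- but only
# because nearby positions in the array stand for similar stimuli.
```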

But then again, what kind of structure would we expect? With visual cortex it’s easy: areas that are physically close to each other in the external world should be represented by cells that are physically close to each other in the cortex. A similar notion is true for auditory cortex: frequencies that are near each other should activate cells that are near each other. But what makes a smell “near” another smell? The molecules that activate olfactory receptors, the physical things that make up what we know as smells, can have complex molecular structures that vary in numerous dimensions. One could compare, for example, molecular size, the number of carbon atoms, or what functional groups are attached to the end of a molecule. Haddad et al. actually came up with 1,664 different metrics to describe an odorant. Given all these different scales, saying that one odor molecule is similar to another doesn’t have much practical meaning. Especially when we consider the fact that molecules which are incredibly similar physically can elicit completely different perceptual experiences. For example, carvone is a molecule that can smell like spearmint, but if its structure is switched to the mirror-opposite form, it elicits the spicy scent of caraway seeds. So the notion of the brain being able to create a map of smells presupposes the existence of a simple relationship amongst odorants, which simply doesn’t exist.

As with most mysteries of science, though, it presents a great opportunity. That is, the opportunity to be wrong. Perhaps piriform cortex does in fact have a grand organizing principle that is simply eluding our detection. The Sobel lab has produced some interesting work connecting odorant structure, receptor activity, and the perceived pleasantness of an odor. If they are able to find some kind of odor pleasure map, it would represent a new kind of topography for sensory cortex: one that is based on internal perception, not just external physical properties. This would produce a flood of new questions about how such organization develops, and about the universality of smell preferences across species and individuals. What currently looks to our electrodes like evolution’s mistake could in fact be a very well planned-out olfactory city map. And learning how to read that map could lead to a whole new way of understanding our olfactory world.

December 3, 2012 / neurograce

Brain Donation: Removing the stigma in order to advance the science

According to the 2012 National Donor Registration Report Card, 101.4 million people in the US are registered organ, eye, and tissue donors. With nearly one third of Americans exhibiting such bodily generosity, we don’t seem to have a cultural problem with the notion of tissue donation. But your standard, back-of-the-driver’s-license donation commitment doesn’t cover what happens to your brain after you die. And the number of people who do seek out a specific brain donation plan is much, much smaller.

The reasons behind this “brain drain” are probably varied. For one, since we don’t have any Frankenstein-esque brain transplant technology (yet!), brain donations don’t have a direct ability to save lives. Thus, they provide less of a feel-good incentive for participants. Also, the organizations that collect brains for medical research aren’t highly publicized. So, many people simply don’t realize it’s an option, or that they have to make a separate commitment for it. But there is also the special nature of the brain that I am inclined to believe makes brain donation a uniquely difficult concept for the general public to sign on to. Most people don’t feel a strong connection between their sense of self and their left kidney. But not so with the brain. The notion of the seat of your consciousness being cut out, shipped to a lab, and treated like any other growth on a petri dish creates some understandable discomfort. We want to believe that our identity is somehow more resilient than that, that it can’t (or at least shouldn’t) simply be dissected.

But the fact is that no matter your philosophical or religious beliefs, that three-pound mass of cells isn’t going to do much for you once you’re gone. It can, however, do a lot for scientists studying human neurological diseases. Even if you die with a completely healthy brain, your donation provides important controls against which neuroscientists can compare pathological brains. Healthy blood relatives of diseased patients are especially useful for this. In fact, the Harvard Brain Bank has previously complained of a lack of normal brains with which to compare their more ample supply of diseased brains. So pick your most-hated neurological or psychological disorder, commit your brain to the study of it, and you’ll be able to rest in peace assured that you’re contributing to its cure. Here’s a list of some of your options:
Alzheimer’s
The Taub Institute at Columbia University
Boston University Alzheimer’s Disease Center
NYU Alzheimer’s Disease Center
Penn Memory Center
Parkinson’s
Parkinson’s Disease Foundation
Queen Square Brain Bank
Progressive supranuclear palsy
CurePSP
Multiple Sclerosis
National Multiple Sclerosis Society
Mental Illness (Schizophrenia, Alcoholism, etc)
Southwest Brain Bank
Using our Brains
Autism
MIND Institute
Narcolepsy
Stanford School of Medicine
Huntington’s
PREDICT-HD
Frontotemporal Degeneration
Association for Frontotemporal Degeneration
Brain Trauma
VA CSTE
Restless Leg Syndrome (no offense to the RLS people, but I imagine they have a hard time competing for brains against these other diseases)
RLS Foundation
General Brain Banks (these repositories process and store tissue and send it to a variety of labs upon request)
Brain Endowment Bank
University of Pittsburgh Medical Center
Harvard Brain Bank
The Human Brain and Spinal Fluid Resource Center
The Brain Observatory

The Brain Observatory is my personal favorite. The name is great, they do some really nice imaging work with their specimens, and I’ve always wanted to have a professional photo shoot. Also, one year ago today they took on the charge of dissecting and imaging the brain of the famously memory-impaired patient H.M. Who wouldn’t want to be treated to the same star-quality experience?

A few procedural notes: Some of these banks are limited to only taking local donations (including the Brain Observatory, sadly). You may also be worried about how donating your brain might affect normal funeral procedures. The short answer is that it won’t. The removal process can be done at the hospital and is “minimally invasive” (although that description seems a bit generous). But it doesn’t cause any delays or prohibit an open casket. Importantly, since technically it is the next-of-kin that ultimately allows the donation to happen, your family has to be aware of your wishes. So sit them down, preferably not over dinner, to let them know your intentions. Or just write a post about it on your blog (hi family!).

Furthermore, your decision to donate is best made early, since some studies may want to collect pre-mortem data from you. As the movie Head Games discusses, the VA CSTE lab at Boston University has recruited a number of current and former NFL players to participate in cognitive testing in addition to their commitment to donate. Their repeated head injuries can cause ongoing neurological changes throughout their lives. This kind of longitudinal study can make the post-mortem contribution even more powerful.

To me, the urge to donate should be especially strong amongst fellow neuroscientists. We are the ones working to understand the brain. And we know the satisfaction of getting access to a lot of good quality data about it. We also know what is truly possible when we have that data. If you’ve spent your life donating the activity of your brain to neuroscience, why not give one last contribution to the field?
