February 11, 2013 / neurograce

The Right Tool for the Job: How the nature of the brain explains why computational neuroscience is done

Recently, I was charged with giving a presentation to a group of high schoolers preparing for the Brain Bee on the topic of computational approaches to neuroscience. Of course, in order to reach my goal of informing and exciting these kids about the subject, I had to start with the very basic questions of ‘what’ and ‘why.’ It seems like this task should be simple enough for someone in the field. But what I’ve discovered–in the course of doing computational work and in trying to explain my work to others–is that neither answer is entirely straightforward. There is the general notion that computational neuroscience is an approach to studying the brain that uses mathematics and computational modeling. But as far as what exactly falls under that umbrella and why it’s done, we are far from having a consensus. Ask anyone off the street and they’re probably unaware that computational neuroscience exists. Scientists, and even other neuroscientists, are more likely to have encountered it but don’t necessarily understand the motivation for it or see the benefits. And even the people doing computational work will come up with different definitions and claim different end goals.

So to add to that occasionally disharmonious chorus of voices, I’d like to present my own explanation of what computational neuroscience is and why we do it. And while the topic itself may be complicated and convoluted, my description, I hope, will not be. Basically, I want to stress that computational neuroscience is merely a continuation of the normal observation- and model-based approach to research that characterizes what so many other scientists do. It needn’t be more difficult to justify or explain than any other methodology. Its potential to be viewed as something qualitatively different comes from the complex and relatively abstract nature of the tools it uses. But the choice of those tools is necessitated simply by the complex and relatively abstract nature of what they’re being applied to: the brain. At its core, however, computational neuroscience follows the same basic steps common to any scientific practice: making observations, combining observations into a conceptual framework, and using that framework to explain or predict further observations.

That was, after all, the process used by two of the founding members of computational neuroscience, Hodgkin and Huxley. They used a large set of data about membrane voltage, ion concentrations, conductances, and currents associated with the squid giant axon (much of which they impressively collected themselves). They integrated the patterns that they found in this data into a model of a neural membrane, which they laid out as a set of coupled mathematical equations, each representing a different aspect of the membrane. Given the right parameters, the solutions to these equations matched what was seen experimentally. If a given amount of current injection made the squid giant axon spike, then you could put in the same amount of current as a parameter in the equations and you would see the value of the membrane potential respond the same way. Thus, this set of equations served (and still does serve) as a framework for understanding and predicting a neuron’s response under different conditions. With some understanding of what each of the parameters in the equations is meant to represent physically, this model has great explanatory power (as defined here) and provides some intuition about what is really happening at the membrane. By providing a unified explanation for a set of observations, the Hodgkin-Huxley model does exactly what any good scientific theory should do.
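For reference, in one standard modern formulation those coupled equations are a current-balance equation plus first-order kinetics for each channel gate:

```latex
C_m \frac{dV}{dt} = I_{\text{ext}}
  - \bar{g}_{\text{Na}}\, m^3 h\, (V - E_{\text{Na}})
  - \bar{g}_{\text{K}}\, n^4 (V - E_{\text{K}})
  - \bar{g}_{\text{L}}\, (V - E_{\text{L}})

\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x,
  \qquad x \in \{m, h, n\}
```

Here m, h, and n are the voltage-dependent gating variables of the sodium and potassium channels, and the ḡ’s and E’s are the maximal conductances and reversal potentials that Hodgkin and Huxley fit from their data.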

It may seem, perhaps, that the actual mathematical model is superfluous. If Hodgkin and Huxley knew enough to know how to build the model, and if knowledge of what the model means has to be applied in order to understand its results, then what is the mathematical model contributing? Two things that math is great for: precision and the ability to handle complexity. If we wanted to, say, predict what happens when we throw a ball up in the air, we could use a very simple conceptual model that says the force of gravity will counteract the throwing force, causing the ball to go up, pause at its peak height, and come back down. So we could use this to predict that more force would allow the ball to evade gravity’s pull for longer. But how much longer? Without using previous experiments to quantify the force of gravity and formalize its effect in the form of an equation, we can’t know. So, building mathematical models allows for more precise predictions. Furthermore, what if we wanted to perform this experiment on a windy day, or put a jetpack on the ball, or see what happens in the presence of a second planet’s gravitational pull, or all of the above? The more complicated a system is, and the more its component parts counteract each other, the less likely it is that simply “thinking through” a conceptual model will provide the correct results. This is especially true in the nervous system, where all the moving parts can interact with each other in frequently nonlinear ways, providing some unintuitive results. For example, the Hodgkin-Huxley model demonstrates a peculiar ability of some neurons: the post-inhibitory rebound spike. This is when a cell fires (counterintuitively) after the application of an inhibitory input. It occurs due to the reliance of the sodium channels on two different mechanisms for opening, and the fact that these mechanisms respond to voltage changes on a different timescale.
This phenomenon would not be understandable without a model that had the appropriate complexity (multiple sodium channel gates) and precision (exact timescales for each). So, building models is not a fundamentally different approach to science; we do it every time we infer some kind of functional explanation for a process. However, formalizing our models in terms of mathematics allows us to see and understand more minute and complex processes.
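To make this concrete, here is a minimal sketch of a Hodgkin-Huxley simulation showing the rebound (sometimes called anode-break excitation). It uses the standard squid-axon parameters and simple forward-Euler integration; the specific current amplitude and timings are choices of mine for illustration, not values from the original papers:

```python
import math

# Standard Hodgkin-Huxley squid-axon parameters (modern -65 mV resting convention)
C_M = 1.0                    # membrane capacitance, uF/cm^2
G_NA, E_NA = 120.0, 50.0     # max sodium conductance (mS/cm^2), reversal (mV)
G_K, E_K = 36.0, -77.0       # potassium
G_L, E_L = 0.3, -54.387      # leak

def safe_exp_ratio(x):
    """x / (1 - exp(-x)), with the x -> 0 singularity handled."""
    return 1.0 if abs(x) < 1e-7 else x / (1.0 - math.exp(-x))

def alpha_m(v): return safe_exp_ratio((v + 40.0) / 10.0)
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.1 * safe_exp_ratio((v + 55.0) / 10.0)
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_amp=-10.0, t_off=50.0, t_end=100.0, dt=0.01):
    """Inject a hyperpolarizing current i_amp (uA/cm^2) until t_off ms,
    then release it. Returns a list of (time, voltage) samples."""
    v = -65.0
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))   # gates start at steady state
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    trace, t = [], 0.0
    while t < t_end:
        i_ext = i_amp if t < t_off else 0.0
        i_ion = (G_NA * m**3 * h * (v - E_NA)
                 + G_K * n**4 * (v - E_K)
                 + G_L * (v - E_L))
        v += dt * (i_ext - i_ion) / C_M
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        trace.append((t, v))
        t += dt
    return trace

trace = simulate()
# During the inhibitory step the cell sits hyperpolarized; after release,
# sodium channels are de-inactivated (h high) and potassium is slow to
# reactivate (n low), so the membrane overshoots into a full spike.
rebound_peak = max(v for t, v in trace if t > 50.0)
```

Note that the inhibitory input alone never depolarizes the cell; the spike appears only because the two gating mechanisms recover on different timescales, which is exactly the kind of interaction that is hard to "think through" without the explicit model.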


A Hodgkin-Huxley simulation showing post-inhibitory rebound spiking.

Additionally, the act of building explicit models requires that we investigate which properties are worth modeling and in what level of detail. In this way, we discover what is crucial for a given phenomenon to occur and what is not. In many regards, this can be considered a main goal of computational modeling. The Human Brain Project seeks to use its 1 billion Euro award to model the human brain in the highest level of detail and complexity possible. But, as many detractors point out, having a complete model of the brain in equation form does little to decrease the mystery of it. The value of this simulation, I would say, then comes in seeing what parameters can be fudged, tweaked, or removed entirely and still allow the proper behavior to occur. Basically, we want to build it to learn how to break it. Furthermore, as with any hypothesis testing, the real opportunity comes when the predictions from this large-scale model don’t line up with reality. This lets us hunt for the crucial aspect that’s missing.

Computational neuroscience, however, is more than just modeling of neurons. But, in the same way that computational models are just an extension of the normal scientific practice of modeling, the rest of computational neuroscience is just an extension of other regular scientific practices as well. It is the nature of what we’re studying, however, that makes this not entirely obvious. Say you want to investigate the function of the liver. Knowing it has some role in the processing of toxins, it makes sense to measure toxin levels in the blood, presence of enzymes in the liver, etc., when trying to understand how it works. But the brain is known to have a role in processing information. So we have to try, as best we can, to quantify and measure that. This leads to some abstract concepts about how much information the activity of a population of cells contains and how that information is being transferred between populations. The fact that we don’t even know exactly what feature of the neural activity contains this information does not make the process any simpler. But the basic notion of desiring to quantify an important aspect of your system of interest is in no way novel. And much of computational neuroscience is simply trying to do that.
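As a toy illustration of that kind of quantification, one of the simplest measures in the field is the Shannon entropy of a neuron’s response distribution, an upper bound on how much information its responses could carry. The spike counts below are invented for illustration:

```python
import math
from collections import Counter

def entropy_bits(samples):
    """Shannon entropy (in bits) of an empirical response distribution."""
    counts = Counter(samples)
    n = len(samples)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# Hypothetical spike counts from two neurons over 100 repeated trials.
variable_cell = [0, 1, 2, 3] * 25   # four responses, each equally likely
reliable_cell = [2] * 100           # the same response on every trial

# A cell whose responses vary across trials can, in principle, carry more
# information about the stimulus than one that always answers the same way.
print(entropy_bits(variable_cell))  # 2.0 bits
print(entropy_bits(reliable_cell))  # 0.0 bits
```

Real analyses are far more involved (responses must be related to stimuli, and estimates corrected for limited data), but the impulse is the same: put a number on an abstract property of the system.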

So, the honest answer to the question of what computational neuroscience is, is that it is the study of the brain. We do it because we want to know how the brain works, or doesn’t work. But, as a hugely complex system with a myriad of functions (some of which are ill- or undefined), the brain is not an easy study. If we want to make progress we need to choose our tools accordingly. So we end up with a variety of approaches that rely heavily on computations as a means of managing the complexity and measuring the function. But this does not necessarily mean that computational neuroscientists belong to a separate school of thought. The fact that we can use computers and computations to understand the brain does not mean that the brain works like a computer. We merely recognize the limitations inherent in studying the brain, and we are willing to take help wherever we can get it in order to work around them. In this way, computational approaches to neuroscience simply emerge as potential solutions to the very complicated problem of understanding the brain.

Kaplan, D. (2011). Explanation and description in computational neuroscience. Synthese, 183(3), 339-373. DOI: 10.1007/s11229-011-9970-0

February 1, 2013 / neurograce

The Only Constant is Brain Change: A blog series on neuroplasticity for the Dana Foundation

Hey do you know about the Dana Foundation? It’s a New York-based philanthropic organization dedicated to funding brain research, education, and outreach. They organize fun things like global Brain Awareness Week and online brain resources for kids. They also host a blog aimed at describing aspects of neuroscience to the general public, and today marks the first in a series of three monthly guest posts I’ll be doing for them!

The topic I chose for this series is neuroplasticity. Plasticity, the brain’s ability to adapt and change, is a huge area of research. Developmentally, there are questions about what inputs are required (and at what times) in order to properly prune and shape connections in the early brain. On the molecular scale, we want to know what proteins are involved in signaling and inducing changes in synapses. At the systems level, researchers are investigating how the relative timing of spikes from different cells affects synaptic weights. On the more practical side, translational researchers want to harness the beneficial aspects of the brain’s response to injury while minimizing the negative ones. Clearly the topic of plasticity provides a depth and breadth of questions for neuroscientists–and thus plenty of material for a blog.

But beyond simply stressing the variety of ways in which the brain is plastic, I’d like to go a step further. I would say that plasticity is the brain. Changing connections and synaptic weights isn’t merely something the brain can do when necessary; it is the permanent state of affairs. And thankfully so. Plasticity on some level is required in order to do many of the things we consider crucial brain functions: memory, learning, sensory information processing, motor skill development, and even personality. The brain is always reorganizing itself, synapse by synapse, in order to best process its inputs and create appropriate outputs. This kind of adaptive behavior keeps us alive–say, by creating a memory of a food that once made us sick so that we know to avoid it in the future. And makes us good at what we do–changes in motor cortex, for example, are why practice makes perfect in many physical activities. There is no overstating the power, importance, or complexity of neuroplasticity.

In order to cover all these majestic aspects of plasticity, I’ve divided the posts into the following topics (links will be added as they’re available):

Developmental Plasticity and the Effect of Disorders. How the brain responds to a lack of input during development, and to a disorder that impairs plasticity itself.

Plasticity in Response to Injury–a Blessing and a Curse. The brain’s attempt to fix itself after stroke, and some weird effects of losing a limb.

Short-term Plasticity and Everyday Brain Changes. Changing environmental demands call for quick and helpful changes in the brain.


January 18, 2013 / neurograce

Currently Necessary Evil: A (vegan’s) view on the use of animals in neuroscience research

All research methodologies have their challenges. Molecular markers are finicky. Designing human studies is fraught with red tape. And getting neural cultures to grow can seem to require as much luck as skill. But for those of us involved in animal-based research, there is an extra dimension of difficulty: the ethical one. No matter how important the research, performing experiments on animals can stir up some conflicted feelings on the morality of such a method.  This only intensifies when studying the brain, the very seat of what these animals are and how (or how much) they think and feel. Even the staunchest believer in the rightness of animal research still has to contend with the fact that a decent portion of society finds what they do unethical. It is not a trivial issue and shouldn’t be treated as such.

But, as with our use of animals for food, clothing, and a variety of other needs, experimenting on animals dates back millennia. The first recorded cases came from ancient Greece and have continued ever since. Now while historical precedent is not sufficient evidence of ethicality, the history of animal testing does allow us to recognize the great advances that can come from it. Nearly all of our tools of modern medicine, our knowledge about learning and behavior, and our standards for food safety and nutrition would not exist without animal research. I certainly wouldn’t be writing a blog called Neurdiness, because neuroscience wouldn’t exist as a field. It is clear that animal testing has proven crucial to the advancement of our society, and perhaps even to its very survival. And that value should not be overlooked or underappreciated when having a discussion about morality.

So, we seem to come to an impasse. The instinctive distaste for replacing freedom with suffering in animals is pitted against the knowledge that this practice will prevent suffering in humans. And how to equate the two is unclear. Is an effective treatment for Alzheimer’s or Parkinson’s worth the lives of countless rats, mice, and pigs? What about the potential for such a treatment? Does our desire to understand higher cognitive functions—which may not lead directly to medical advances, but could change our notions of who we are and how we think—justify experiments on non-human primates, the only animals from which we can glean such information?  Beliefs exist at either extreme: so-called “abolitionists” claim no exploitation of animals is ever justified, while at the other end is the feeling that animals lack moral standing and their needs are thus subordinate to those of humans.

Personally, I view the use of animals in research as something of a necessary evil. For the majority of human history, people felt free to capture, kill, or enslave animals for a variety of purposes. For food, for powering agricultural tools, for transportation, for materials. We’ve since outgrown the need to use animals for many of these things, but before modern technology their use seemed perfectly justified and even required. Indeed, civilization would be nothing like it is now without our willingness to utilize animals. The present state of neuroscience research is something like that of early humans. I’m vegan because the state of modern food production and distribution means I can be healthy without harming animals. But I’m an animal researcher because in the present state of neuroscience I know we cannot advance the field without them. We don’t yet have the technology to free ourselves of a dependence on animals, and our ability to reach that point requires their continued use.

Working in this moral gray zone leaves many neuroscientists feeling uneasy about discussing their methodology with the general public. Of course divulging any specific details about animal suppliers or where the animals are housed, etc., is dangerous due to the risk of it falling into the wrong hands, but speaking openly about engaging in animal research should not be outside the realm of comfort of a researcher. The right balance of ensuring safety while still defending your position is clearly a hard one to strike (as the results of this Nature survey suggest). But the voices of scientists involved in the work are crucially needed in the public debate, so research institutions must establish practices that keep fear from preventing this participation.

Of course the best way to encourage support for the use of animals in your research is to ensure that you’re doing good science. To most people, animal research is acceptable on the grounds that it provides a significant benefit, and experimenters need to keep this mandate in mind. Importantly, doing good science also means adhering to the guidelines for proper care and treatment of laboratory animals. Those concerned about the treatment of animals in labs would be happy to know that there are a plethora of agencies overseeing the design of animal experiments and how animals are housed, and ensuring that as little pain as possible is inflicted (an extensive list of related resources can be found here). An important tenet is the “three R’s” of animal testing. This framework, first put forward by Russell and Burch in 1959, urges reduction (use only the number of animals needed to significantly verify a finding, and use them wisely and carefully so as to reduce waste), refinement (use the most humane housing, anesthetic, and experimental techniques available, avoiding invasive and painful procedures if possible), and replacement (seek alternatives to animal use whenever possible; options include culturing tissues in a dish, computational modeling, and the use of lower animals or plant life). Adhering to such rules and guidelines from overseeing agencies is important both for the continued operation of a lab and for maintaining a good public perception. Furthermore, the proper treatment of lab animals is not merely a means of appeasing animal rights activists; it is crucial for attaining accurate results, especially in neuroscience. It is known that physical and psychological stress can have huge impacts on the brain, and an unhealthy animal is likely to produce unreliable results. Additionally, keeping in line with proper practices can reduce the unease that a researcher may feel about their work with animals.

In the realm of neuroscience, animal research is, in no uncertain terms, a necessity. But at the same time, we are making strides in the implementation of the three R’s, specifically with replacement. The ability to grow neural cultures is widely used when appropriate. Realizing the potential of lower animals to answer questions normally posed to higher ones is also a promising trend. For instance, social behavior in flies and decision-making in rodents are being explored to a greater extent. Computational modeling is also becoming ever more utilized, and while it is far from fully replacing an animal, it can at least predict which experiments have the most potential to be of use, thus reducing wasteful animal use. And with time, and the refinement of all these techniques, animals will continue to be used more wisely and with less frequency. So, if researchers don’t become too myopic or complacent, and continue to view their work in the larger ethical context, designing experiments to reduce moral dilemmas, then we can progress in a way that is both humane and fruitful.
Editors (2011). Animal rights and wrongs. Nature, 470(7335), 435. DOI: 10.1038/470435a

January 9, 2013 / neurograce

Sequencing the Connectome: An entirely new approach to neuroanatomy at the finest level

Neuroanatomy can happen at many scales. At the highest end, we can ask if certain areas of the brain have connections between them: for example, does the lateral geniculate nucleus (LGN) send projections to primary visual cortex? (hint: yes). Through electrical stimulation and tract-tracing methods, we’ve gotten pretty good at finding this out. We can then look at connections within these areas: which layers of visual cortex connect to each other? Cell-staining and microscopy make this investigation possible. And we can even go further and try to learn about what kind of connections exist within a single layer of cortex (an area that is only fractions of a millimeter thick). Advanced, automated imaging techniques have allowed much progress here. Not only that, we can even look across scales by investigating, for example, which layer of LGN sends connections to which layer of visual cortex. Importantly, the tight relationship between structure and function in the brain means that learning about all these connections provides functional insights in addition to purely anatomical ones.

Now, taking this connectivity quest to its logical extreme, the most we could ask to know is every connection made by every cell in the brain. This information is called the connectome, and it has been causing quite a buzz recently in the neuro-world. What that level of detail could tell us about how the brain computes and how it differs across species and individuals is an area of hot debate. Some people feel the effort to investigate the connectome is a waste of resources, and useless in any case because a single network with constant connectivity can still show vastly different behavior under different conditions (inputs, modulators, etc.). Others feel that much information about a network and its functions is stored in how cells connect to each other. The connectome’s most prominent proponent, Sebastian Seung, has an almost religious zeal for the connectome as the be-all and end-all in determining who we are.

For the imaging approach, programs like NeuroTrace automatically scan layers of EM images trying to trace neuronal projections.

But if Seung’s mantra of “we are our connectome” is true, then the vast majority of us are going through a major identity crisis. The fact is, there is only one creature for which we have deciphered the connectome: C. elegans. The 7000 synapses between the 302 neurons of this little worm took over 50 person-years to obtain using available imaging techniques. Advances in automated analysis of electron microscopy data are occurring rapidly and can speed up and simplify this process. But scaling this up to, say, a mouse brain (~100 billion synapses) or a human brain (~100 trillion synapses) still seems pretty impractical. Our desire to know the connectome is nowhere near our ability to obtain it.

Enter Anthony Zador, a Cold Spring Harbor researcher with a different approach. His recent paper outlines the delightfully acronymed BOINC (barcoding of individual neuronal connections) method of seeing synapses. Actually, the value of BOINC is in the fact that it takes the “seeing” out of the process. Rather than relying on imaging methods to determine the existence of a synapse, BOINC harnesses the power of genetic sequencing. As the paper points out, DNA sequencing speeds have been increasing nearly as quickly as the prices have been dropping. This powerful combination makes it ideal for such an immensely large project as connectome mapping.

So how does it work? As the name suggests, the process requires each neuron be artificially tagged with its own “barcode”, or sequence of nucleotides.  Hosting these barcodes in specific types of viral vectors inside the neuron can allow them to be transferred via synapses to other neurons. Thus, a single neuron will contain its own barcode as well as the barcodes of all the neurons with which it synapses. Next, these barcode sequences will need to be joined inside the cell so that their association can be made known later via sequencing. So, if in the process of sequencing you come across a chunk of DNA with cell A’s unique nucleotide sequence followed by cell X’s unique nucleotide sequence, then you can infer that there’s a connection between cell A and cell X. Do this for all the DNA chunks, get all the connections, and you’ve made yourself a connectome. (Fittingly, the word ‘connectome’ was actually inspired by ‘genome.’ This method should’ve been obvious!).
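The bookkeeping this implies can be sketched in a few lines. Assume (hypothetically) that sequencing yields a pile of ordered barcode pairs, pre-synaptic barcode first; the cell names and reads below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical sequencing output: each read is an ordered (pre, post) barcode
# pair, with direction preserved by the joining step. Repeated reads of the
# same pair (deeper coverage of one synapse) are collapsed.
reads = [
    ("A", "X"), ("A", "X"), ("A", "Y"),
    ("B", "X"),
    ("C", "Z"),
]

def connectome_from_reads(reads):
    """Collapse barcode-pair reads into a directed connectivity map:
    pre-synaptic cell -> set of post-synaptic partners."""
    connections = defaultdict(set)
    for pre, post in reads:
        connections[pre].add(post)
    return dict(connections)

net = connectome_from_reads(reads)
# net["A"] == {"X", "Y"}: cell A synapses onto both X and Y
```

The hard biology is in generating and joining the barcodes; once that works, assembling the connectivity matrix really is this mechanical, which is exactly why sequencing throughput, rather than human tracing effort, becomes the bottleneck.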

A schematic of the new approach (BOINC)

Now of course this idea is only in its very early stages. The exact implementation is yet to be determined and plenty of questions about the specifics already abound. To start, giving each neuron a unique DNA label is not a trivial problem. The authors suggest a method similar to the combinatorial one used in creating all the different fluorescent colors for Brainbow, but replacing the fluorophores with DNA sequences. The next stage, the act of transferring the barcodes across synapses, is luckily not as complicated as it may seem. Viruses specialize in spreading themselves, and rabies and pseudorabies viruses have been used in conjunction with dyes and other markers to trace neural connections for years. This method has its difficulties of course, such as ensuring that the barcode-carrying virus stops transporting itself once it reaches the post-synaptic cell (lest it replicate and invade all cells, giving a lot of false positives). And once the synapse-jumping is accomplished, there is the matter of getting the barcodes to join together in the right way to ensure that which cell is pre- and which is post-synaptic remains decipherable.

And even if all these specifics are successfully tackled, the method itself has its limitations. Essentially what it provides is a connectivity matrix, a list of cells which are defined solely by their connections. We can’t say much else about the type of cells (their location, the neurotransmitter they use), the type of synapse (excitatory vs. inhibitory, its location on the post-synaptic cell, its morphology), or activity levels. All these aspects are potentially important to our understanding of neuronal wiring. But that’s no reason to dismiss BOINC. The current methodology also has its limitations, and if BOINC works it will be a vast improvement in terms of output rates. It also has the potential to be combined with other traditional techniques to investigate the above properties, yielding a far more complete picture than we have right now.

Overall, we don’t know how important knowing the connectome is for our understanding of average brain function or differences amongst individuals. But making claims about its role when we know so little about it is illogical. Even “anti-connectomists” should recognize that getting results which validate their dismissal of such fine-level anatomical analysis requires that we have the fine-level anatomy to test in the first place. Basically, when it comes to the connectome, we don’t know enough to say that we don’t need to know it. Furthermore, the sequencing approach promises a smaller time and money commitment, quieting those who worry about the resources going into connectomism. So breakthroughs in approaches like this should be applauded by all, both for their potential to advance the field and for encouraging others to think a little outside of their methodological box.
Zador, A., Dubnau, J., Oyibo, H., Zhan, H., Cao, G., & Peikon, I. (2012). Sequencing the Connectome. PLoS Biology, 10(10). DOI: 10.1371/journal.pbio.1001411

December 31, 2012 / neurograce

Blurring the Line: A collection of advice for completing a PhD

Earning a PhD is a lot like having a baby. It’s time-consuming, messy, and can cause a lot of sleepless nights. Importantly, it also doesn’t come with an instruction manual. Grad students start their program with a list of course requirements and the charge to “do research.” But what that actually means and how to do it well is conveniently, and possibly intentionally, left out. Essentially, the process of earning your PhD is meant to teach you how to do research. But, in the spirit of New Year’s resolutions, I think it would be helpful to offer some more concrete tips that can make the process a little smoother. Now, since I have a whole one semester of grad school under my belt (hold your applause), I may not be in the best position to give advice. I am, however, perfectly suited for pestering older grad students and PhD holders for their words of wisdom and scouring the internet for help (by the internet I mostly mean Quora. Do you Quora? I Quora. If you don’t Quora, you should really Quora). So I’ve collected the main themes that the people who have completed a PhD agree are necessary for success, and I present them to you here (in no particular order):

Organize. Your notes, your lab book, your time. Organization is a fuel for productivity. It doesn’t matter if you spend all night running experiments if your lab book is so messy that the next morning you don’t even know what variables you were manipulating. And there’s not much point in frantically taking notes during a lecture or while reading a paper if you have no meaningful way of referencing them later. Also, in grad school, just like in life, time is your most valuable resource so you want to make sure you’re spending it wisely.

Luckily, in this digital age we live in, there are plenty of tools to help you organize just about everything. I like Mendeley for papers, because it can actually save notations you add, is accessible from anywhere, and makes it super easy to compile citations. Endnote and CiteULike have been highly recommended for paper- and note-organizing as well. I’m a big fan of Toodledo and other online to-do list sites for keeping track of tasks, and nothing beats good old Google calendar for a visual representation of your time. Of course, all these services only work to the extent that you use them and use them well. So remember to actually import your papers to Mendeley, categorize your notes appropriately, and set realistic time tables for your tasks. And don’t get too carried away with all the technology, or you may need to get a service to organize your organizational services.

Stay Healthy. Mentally and physically. It is quite possible for graduate students to get so wrapped up in their work that they neglect important things like eating, sleeping, and moving. While this dedication seems potentially beneficial, the fact is that such habits are not sustainable and will, sooner or later, catch up to you and your PhD progress. You are unlikely to think up any significant breakthroughs when stressed, hungover, and/or depressed. If it helps, think of yourself as a piece of expensive lab equipment that needs proper care and maintenance to produce good results.

Having a few hobbies completely unrelated to your work can keep you grounded, sane, and in shape. Activities like team sports or dance lessons have the dual benefit of physical exercise, a well-known stress reliever, and some (probably much-needed) social interaction with non-scientists. Engaging in external interests can also spark your creativity within your own line of work and guarantees that your happiness isn’t completely tied to the current state of your research. Maybe you haven’t gotten an experiment to work in a month, but you did learn how to whittle an awesome spoon and planned the perfect birthday dinner for your significant other (yes, they count as hobbies). This will keep you energized and motivated enough to keep at your work. So basically, for the sake of your PhD, you need to have some occasional time away from it.

Read. Keeping up with your field is an important part of being a researcher. You don't want to waste time working on a question that's already been answered or developing a method that already exists. So read papers. Within your exact line of research, read all kinds of papers, old and new, and especially the ones that your adviser constantly references. More broadly, know the seminal works in your field and keep up to date with current trends. Subscribe to email or RSS feeds from the main journals and actually look at them. Try to read papers completely outside your field if you think some aspect of them might be relevant to you, or if you're just interested. Basically, you should get really good at reading any paper, assessing its merit, and understanding its importance to your work and/or the field as a whole.

Of course, all this reading can conflict with your above-mentioned attempts at time management. So another important skill for grad students is to know when to stop. You can’t know every experiment that’s ever been done, and eventually you need to start doing your own. Pick your papers wisely, get good at reading them quickly, and learn to accept when you’ve learned enough. Attending talks can be a good way of getting a lot of information fed to you in about an hour, but they should also be chosen wisely. Since you can’t skim a talk to get to the good parts, you should avoid those with only limited potential for keeping your interest. And if you do take the time to attend them, make sure to stay attentive and awake (not that I’ve ever had any kind of problem with that whatsoever).

Write. The main output of any research is a journal article. If you can't write about your work, then you haven't finished the job. And if you can't write a thesis, then you haven't finished your PhD. So you need to get good at writing. And the only way to do that is practice, practice, practice. Take any opportunity to write. One great idea is to periodically create a writeup of your project as you're working on it. Not only does this give you practice, but it keeps you focused on how what you're doing at that moment fits into your overall narrative, and it lightens the workload at the end of your project when you're looking to submit. A key component of good writing in science is, of course, conveying complicated concepts as simply as possible. And the best way to practice that is to try explaining what you do to someone completely outside the field (family usually works well for this). These kinds of exercises don't just help your writing, they help your science. As Einstein said, "If you can't explain it simply, you don't understand it well enough."

Think. Possibly the simplest yet most profound piece of advice for a young researcher. It can be tempting, especially as a graduate student with a weekly adviser meeting looming, to simply churn out whatever kind of results you can possibly attain, without giving much thought, for example, to why you're doing a particular kind of analysis or what the results really mean. This kind of rote work rarely leads to meaningful findings. So allow yourself time devoted to just really thinking about your work. No multi-tasking, just quiet thought. Think about what you're doing and why, and how it fits in with your knowledge of the rest of the field. Brainstorm potential solutions to the problem you're working on, and ways of testing each of them. Entertaining multiple possibilities at once and coping with ambiguity are important traits in successful researchers. Engaging in deep thought (and writing down those thoughts) helps develop these and other meaningful skills in ways that superficial work never could.

Ask Questions. Ask questions at talks, ask questions in class, ask questions of your adviser, ask questions during journal club, ask questions of other grad students. Just, for the love of whatever you think is holy, ask questions whenever you have them. There is a common fear, especially amongst students, that the question you want to ask will betray your ignorance. The answer, you assume, is so obvious to everyone else in the room that your very asking of it will get you kicked right out of graduate school. The odds of this happening, however, are very small. The odds of someone else in the room having a similar question that they are equally afraid of asking are probably much higher. So be the hero and ask it. There is no way to get answers otherwise. And if graduate school isn't the time when you get to ask unlimited (possibly stupid) questions, then when is? For younger grad students, it's especially beneficial to get information from older students that you can't get elsewhere: advice on classes, advisers, administrative hurdles, etc. So worry less about looking smart, and actually get smart by asking as many questions as you need.

Network. Yes, that terrible n-word. To most scientists “networking” brings up images of people in suits exchanging business cards over power lunches. To us, there is just something phony about it. But the fact is that collaborations are a huge part of science, and you can’t have them without the ability to branch out to other people in your field. It’s easy enough to get to know people you work with regularly, but it’s also important to meet people with similar interests outside of your institution. Conferences are a great opportunity for that. The key is to follow through concretely. Get the email address of the person you had a really engaging conversation with at SfN. Follow the work of the labs you like and email authors when you have a question about their paper. As difficult as it can be for a typically solitary scientist, socialization and self-promotion are necessary. You want to get your name out there for when opportunities in your field open up, and you want to know who to contact when you need someone with a specific set of skills. Take your natural desire to talk about work that interests you and push it just a little bit further into the realm of “networking.”

Overall, it’s important to remember that you are not a PhD-producing robot. You are a person doing a PhD, and so the habits you develop need to be sustainable and keep you interested while still allowing you to achieve your goals in a timely manner. You will need to work hard, since at this level intelligence is a given. But if you’ve gotten to the point of working on a PhD, the research questions themselves should drive you. As Arnold Toynbee said, “The supreme accomplishment is to blur the line between work and play.”

December 17, 2012 / neurograce

Thoughts on Thinking: What is and is not necessary for intelligence

robot-thinkerThought is a concept that is frequently relegated to the Stewart-esque realm of “I know it when I see it.” To think isn’t merely to churn data and spit out output for each input; we instinctively believe there is more behind it than that.  When a friend pauses and stares off for a minute before answering a question, we allow them that time to think, having some notion of the process they’re going through. But we are not so empathetic when we see the colorful spinning wheel or turning hourglass after asking something of our computers.

A lack of a hard definition, however, makes studying thought a confusing endeavor.  We would ideally like to break it down into component parts, basic computations that can conceivably be implemented by groups of neurons.  We could then approach each independently, assuming that this phenomenon is equivalent to the sum of its parts. But the reductionist approach, while helpful for getting at the mechanisms of those individual computations, seems to strip the thought process of something essential, and is likely to lead to less than satisfying results.

Some of the great thinkers of cognition came together recently at Rockefeller University to both celebrate the 100th birthday of Alan Turing, and discuss this notion of what makes thought thought, how we can study it, and how (or, if) we can replicate it. The talks were wide-reaching and the crowd represented a variety of viewpoints, but common themes did emerge. One of the most striking was the requirement of novelty. That is, true intelligence requires the ability to answer novel questions based on currently-known information. Recalling a fact is not intelligent. Recalling a fact and integrating it into a new line of reasoning that was built purely from other recalled facts is. The production of this new line of reasoning makes it possible to answer a new kind of question. And, as John Hopfield pointed out, the number of novel avenues of reasoning that can be explored and new questions an intelligent being can answer grows exponentially with the amount of information known.

This kind of extension of given information into new dimensions requires mental exploration. Intelligent beings seem to be able to walk down different paths and get a sense of which is the right one to follow. They can build hypothetical worlds and predict what will happen in them. They can abstract situations away to the highest level and use that to find deep similarities between superficially disparate things.  Josh Tenenbaum used this example from a 1944 psychological experiment to demonstrate just how instinctively we deviate from just processing the information that was given to us and start to add new layers of understanding.

The fact that most (decent) people come away from that video with a sense of ill-will towards a shape shows how wild our mental exploration routinely gets. But what is it that allows us to make these new connections and forge novel cognitive paths? And is it possible to design entities that do it as well?

Currently, much of AI research is focused on creating artificial neural networks that can extract meaningful information from a small amount of input and use it to categorize future input of a similar kind. For example, by discovering commonalities across the shapes of letters in handwriting samples, a computer can identify letters in a new handwriting sample and successfully turn it into text, even though it had never "seen" that exact sample before. It is important to note, however, that while the computer is answering something novel in the sense that it is telling us the text of a new handwriting sample, the question itself is not novel. Identifying letters is exactly what the computer was trained to do and exactly what it does; nothing new there. Many people would argue that these kinds of networks, no matter how complex, could never end up creating the intelligence we experience in ourselves. Pattern recognition simply doesn't capture enough of what intelligence requires. There is no deeper understanding of why the patterns exist. There are no abstract connections between concepts. There is no possibility for novel exploration.
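To make that "novel input, fixed question" point concrete, here is a minimal sketch of the kind of pattern recognition described above: a nearest-neighbor classifier. The feature vectors and labels are invented for illustration (real handwriting recognition uses far richer features and models), but the logic is the same: the sample is new, yet the only question the system can ever answer is the one it was trained on.

```python
import math

# Toy "handwriting" samples: each letter is a tiny feature vector
# (imagine stroke curvature and aspect ratio) -- values invented.
training = {
    (0.1, 0.9): "A",
    (0.2, 0.8): "A",
    (0.9, 0.1): "B",
    (0.8, 0.2): "B",
}

def classify(sample):
    """Label a new sample with the label of its nearest training example."""
    nearest = min(training, key=lambda t: math.dist(t, sample))
    return training[nearest]

# The inputs are novel, but the question ("which letter?") never is:
print(classify((0.15, 0.85)))  # -> A
print(classify((0.85, 0.15)))  # -> B
```

However cleverly the distance function or the features are chosen, nothing in this loop ever poses a new question; it only re-answers the old one for new points.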

But what if we made it more complicated, by say training networks to do pattern recognition on patterns themselves? Or combined these networks with other AI technology that is better suited for learning causal relationships, such as Bayesian inference? That combinatorial approach is what helped IBM's Watson become a Jeopardy! whiz. Watson got the frequently ambiguous and complex clues as textual input and displayed amazing language-analysis skills by coming up with the correct response. He was drawing connections between the words in the clue and concepts external to it. An incredibly impressive feat, but we still don't say that Watson thought about the clues or understood the answer. Watson just did. He did as he was programmed to do. That gut response suggests that we believe automation and intelligence must be mutually exclusive.
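The Bayesian side of that combination boils down to a one-line update. As a toy sketch (the categories, priors, and likelihoods are all invented for illustration, and bear no relation to Watson's actual internals), here is Bayes' rule picking the most probable category for a clue containing a telltale word:

```python
# Toy Bayesian inference: which category does a clue belong to?
priors = {"opera": 0.3, "physics": 0.7}          # P(category)
likelihood = {"opera": 0.8, "physics": 0.05}     # P(clue says "aria" | category)

# Bayes' rule: P(category | clue) is proportional to
# P(clue | category) * P(category), then normalize.
unnorm = {c: likelihood[c] * priors[c] for c in priors}
total = sum(unnorm.values())
posterior = {c: p / total for c, p in unnorm.items()}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))  # -> opera 0.873
```

Even with the prior weighted against it, the evidence ("aria") swings the posterior toward "opera" — a mechanical update, yet one that looks a lot like drawing a connection between a word and an external concept.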

But such a conviction is worrisome. It means that the bar we're measuring intelligence by inches up anytime a machine comes close to reaching it. Somehow in the process of understanding thought, we seem to discredit any thought processes we can understand. Certainly, Watson wouldn't pass the Turing test. He had a limited skill set and behavioral repertoire. He couldn't tell a personal story, or laugh with the rest of us when he occasionally came up with an absurd response to a clue. But he was very capable in his realm, and the idea that he was not at all "thinking" seems in some way unfair. Perhaps not unfair to Watson, but unfair to our level of progress in understanding thinking. The idea that someday a robot could be built which, by all appearances, perfectly replicates human cognition can be unsettling to some people. We don't want to think of ourselves as merely carbon-based computing machines. But we shouldn't let hubris stand in the way of progress. We may miss the opportunity to explain thought because we are too afraid of explaining it away.

December 12, 2012 / neurograce

Uncharted Territory: How the processing of smell differs from other senses


Anyone who has used a map (or as it's more commonly called today, a smart phone) knows that it's not so hard to turn the three-dimensional world into a useful 2-D representation. In fact, our brains do it instinctively. Spatial information enters via the retina, is carried along through the optic nerve and a few synapses, and eventually creates a wave of activity in the occipital lobe of the brain. This allows the external world to be mapped out in two dimensions across the surface of the visual cortex (with cells next to each other encoding areas of the visual field that are next to each other). The auditory system has an analogous process, although it may seem less intuitive. The cochlea, the snail-shaped organ in your inner ear, is responsible for creating a map of the frequencies of sounds. As that information gets passed along, low-frequency stimuli end up being processed at one end of the auditory cortex and high frequencies at the other. The somatosensory cortex is similarly mapped; cortical areas getting input from your hand are next to those getting input from your wrist, etc. But when it comes to smell, everything is a bit…muddled.

See, the organization of the cells that process smells isn't quite so straightforward. There is not what we would call an "olfactory map" in the cortex, at least not one that scientists can recognize. The pipeline for getting information from the nose into the brain is actually quite an intriguing one. It starts with the olfactory receptor neurons in the nose coming in contact with whatever molecules are floating about in the air. Different olfactory receptor neurons will bind to different molecules, and when this happens these receptor cells become active. They then project to big chunks of cells in the olfactory bulb called glomeruli, and in turn activate these cells. Each glomerulus will only get inputs from olfactory receptor cells that respond to the same odor molecule. So there is a great convergence of information, with each glomerulus responding to specific odorants. But it all appears to be for naught, because at the very next stage of processing things get all tangled up again. The cells in the glomeruli project to the piriform cortex, but in a random way that destroys the order and segregation they initially had. Cells in the piriform cortex thus get input from a variety of receptor types and respond to specific odors based on that input. However, we don't see the same kind of physical relationship amongst cells that we see in other sensory modalities. That is, one cell's response to a stimulus may have nothing in common with the response of the cell directly next to it. Cell A can be highly active when there is a hint of vanilla in the air, while its neighbor, cell B, may fire in response to a whiff of vinegar.
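That wiring scheme — orderly convergence onto glomeruli, then scrambled projections onward — can be caricatured in a few lines of Python. The cell counts and connection numbers below are invented purely for illustration; the point is only that when each "piriform cell" samples glomeruli at random, a cell's position in the sheet tells you nothing about its inputs:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

n_glomeruli = 50   # each glomerulus responds to one odorant type
n_piriform = 10    # cortical cells, imagined laid out in a row

# Each piriform cell gets input from a random subset of glomeruli --
# no relationship between a cell's position and what it listens to.
inputs = [sorted(random.sample(range(n_glomeruli), 5))
          for _ in range(n_piriform)]

# Neighboring cells typically share few or none of their inputs:
for i in range(n_piriform - 1):
    shared = set(inputs[i]) & set(inputs[i + 1])
    print(f"cell {i} vs cell {i + 1}: {len(shared)} shared glomeruli")
```

The overlap between adjacent cells hovers near chance, which is exactly the sense in which cell A can care about vanilla while its immediate neighbor cares about vinegar.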

This is odd because we generally view maps in cortex as serving some kind of purpose. To start, it's energetically advantageous to have most of your connections be to nearby cells. If these connections are excitatory, then it would be nice if those nearby cells are trying to send a similar message. With this setup, local connections can be used to strengthen the signal, and with the addition of inhibitory interneurons that project farther away, weaken an opposing signal that's a little way down the cortex. Basically, it's easier to enhance the response of similarly-responsive cells and suppress the activity of unrelated cells if there is some meaningful layout to the cell landscape. But from our current vantage point, olfactory cortex looks like a complete jumble with no structure to be found.

But then again, what kind of structure would we expect? With visual cortex it's easy: areas that are physically close to each other in the external world should be represented by cells that are physically close to each other in the cortex. A similar notion is true for auditory cortex: frequencies that are near each other should activate cells that are near each other. But what makes a smell "near" another smell? The molecules that activate olfactory receptors, the physical things that make up what we know as smells, can have complex molecular structures that vary in numerous dimensions. One could compare, for example, molecular size, the number of carbon atoms, or what functional groups are attached to the end of a molecule. Haddad et al. actually came up with 1,664 different metrics to describe an odorant. Given all these different scales, to say that one odor molecule is similar to another doesn't have much practical meaning. Especially when we consider the fact that molecules which are incredibly similar physically can elicit completely different perceptual experiences. For example, carvone is a molecule that can smell like spearmint, but if its structure is switched to the mirror-opposite form, it elicits the spicy scent of caraway seeds. So the notion of the brain being able to create a map of smells presupposes the existence of a simple relationship amongst odorants, which simply doesn't exist.
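The trouble with "nearness" can be made concrete with a toy calculation. The molecule descriptors below are invented for illustration (not real chemistry), but they show the general problem: which of two molecules counts as closer to a reference depends entirely on which features you choose to measure.

```python
# Toy odorant descriptors -- values invented for illustration.
reference = {"carbons": 6, "weight": 100.0, "polarity": 0.2}
mol_a     = {"carbons": 6, "weight": 150.0, "polarity": 0.9}
mol_b     = {"carbons": 10, "weight": 102.0, "polarity": 0.25}

def distance(x, y, features):
    """Euclidean distance restricted to the chosen features."""
    return sum((x[f] - y[f]) ** 2 for f in features) ** 0.5

# By carbon count, molecule A is identical to the reference...
print(distance(reference, mol_a, ["carbons"]))  # -> 0.0
print(distance(reference, mol_b, ["carbons"]))  # -> 4.0
# ...but by molecular weight, B is far closer than A.
print(distance(reference, mol_a, ["weight"]))   # -> 50.0
print(distance(reference, mol_b, ["weight"]))   # -> 2.0
```

With 1,664 candidate metrics on offer, every choice of feature set induces a different "map," and there is no privileged one for the cortex to lay out.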

As with most mysteries of science, though, it presents a great opportunity. That is, the opportunity to be wrong. Perhaps piriform cortex does in fact have a grand organizing principle that is simply eluding our detection. The Sobel lab has produced some interesting work connecting odorant structure, receptor activity, and the perceived pleasantness of an odor. If they are able to find some kind of odor pleasure map, it would represent a new kind of topography for sensory cortex: one that is based on internal perception, not just external physical properties. This would produce a flood of new questions about how such organization develops, and the universality of smell preferences across species and individuals. What currently looks to our electrodes like evolution's mistake could in fact be a very well-planned-out olfactory city map. And learning how to read that map could lead to a whole new way of understanding our olfactory world.
