November 27, 2017 / neurograce

Modeling the Impact of Internal State on Sensory Processing: An Introduction

Current wisdom says that–with the exception of those that go on to great scientific fame–a PhD student’s thesis will be read by at most the five or so professors on their dissertation committee. Because most of the content in a thesis is already or soon-to-be published as separate papers, this is not much of a loss. The introduction to a thesis, however, is usually written specifically for that document and rarely has another outlet for publication. These introductions offer a space for young researchers to catalogue their journey through the scientific literature and share some accumulated wisdom from years of learning to do science. So, to get a little more mileage out of my own thesis introduction and encourage others to do the same, I’ve published it here, along with links to where elements of each chapter of the thesis work are available online.

Thesis Abstract: Perception is the result of more than just the unbiased processing of sensory stimuli. At each moment in time, sensory inputs enter a circuit already impacted by signals of arousal, attention, and memory. This thesis aims to understand the impact of such internal states on the processing of sensory stimuli. To do so, computational models meant to replicate known biological circuitry and activity were built and analyzed. Part one aims to replicate the neural activity changes observed in auditory cortex when an animal is passively versus actively listening. In part two, the impact of selective visual attention on performance is probed in two models: a large-scale abstract model of the visual system and a smaller, more biologically-realistic one. Finally in part three, a simplified model of Hebbian learning is used to explore how task context comes to impact prefrontal cortical activity. While the models used in this thesis range in scale and represent diverse brain areas, they are all designed to capture the physical processes by which internal brain states come to impact sensory processing.

Thesis Introduction

This thesis takes a computational approach to the task of understanding how internal brain states and representations of sensory inputs combine. In particular, mathematical models of neural circuits are built to replicate and elucidate the physical processes by which internal state modulates sensory processing. Ultimately, this work should contribute to an understanding of how complex behavior arises from neural circuits.

Cognition is the result of internal state and external influences

While the neural mechanisms that underlie it remain an open question to this day, the observation that internal mental state causes changes in sensory perception dates back millennia [Hatfield, 1998]. One of the earliest such observations comes from Aristotle in 350 B.C.E. in his treatise On Sense and the Sensible, wherein he remarks that “…persons do not perceive what is brought before their eyes, if they are at the time deep in thought, or in a fright, or listening to some loud noise.” The notion that internal state can be purposefully controlled in order to enhance processing was also noted by Lucretius in the first century B.C.E.: “Even in things that are plainly visible, you can note that if you do not direct the mind, the things are, so to speak, far removed and remote for the whole time.” Philosophers continued to make these observations for centuries, with Descartes, for example, writing in 1649 that, “The soul can prevent itself from hearing a slight noise or feeling a slight pain by attending very closely to some other thing…” While these early documentations provide evidence that this phenomenon is universal and perceptually relevant, a more direct link to the field of experimental psychology comes in the early 18th century through the work of Gottfried Leibniz. While Leibniz’s work posits many elements considered outside the realm of today’s science, he does provide insights on the role of memory (plausibly a form of internal state) in sensory processing: “It is what we see in an animal that has a perception of something striking of which it has previously had a similar perception; the representations in its memory lead it to expect this time the same thing that happened on the previous occasion, and to have the same feelings now as it had then” [Leibniz, 2004]. He also, through the notion of “apperception,” expounded on the ways in which motivation and will influence perception. But the particular significance of Leibniz’s work for modern psychology comes through his influence on Wilhelm Wundt. Wundt, who founded what is considered the first experimental psychology lab in 1879, is explicit about the role of Leibniz’s work in his own thought. With a particular focus on the notion of “apperception,” Wundt took up the task of scientifically measuring and studying central mental control processes [Rieber and Salzinger, 2013]. Modern studies of internal state and sensory processing are direct descendants of his initial work on developing the field of experimental psychology.

Through centuries of experimental research, a myriad of ways in which internal state can impact processing have been documented. Arousal levels, for example, have been shown to impact perceptual thresholds and reaction times in an inverted-U manner [Tomporowski and Ellis, 1986]; that is, beneficial effects on perception come from moderate levels of arousal, while too low or too high arousal can impair performance. When awakened from sleep (and presumably in a state of low arousal), people are slower to respond to auditory stimuli [Wilkinson and Stretton, 1971]. Under conditions of sleep deprivation, responses to visual stimuli are slower and misses are more common [Belenky et al., 2003]. In the study of human psychology, mood and emotional state have also been related to changes in sensory processing. For example, patients with major depressive disorder showed higher thresholds for odor detection than healthy controls, but this difference went away after successful treatment [Pause et al., 2001].

Selective attention differs from arousal and mood in that it is controllable and directed to a subset of the perceptual experience. When participants expect a stimulus in a given sensory modality (e.g. a visual input), they are slower to respond to a relevant stimulus in a different modality (e.g. a tactile one) [Spence et al., 2001]. When cued to attend to a subset of the input within a sensory modality, similar benefits and costs are found for the attended and unattended stimuli, respectively [Carrasco, 2011]. In a particularly well-known example of “inattentional blindness,” subjects asked to count the number of basketball passes in a video did not report awareness of a person in a gorilla suit walking across the frame [Simons and Chabris, 1999].

Interestingly, the internal state generated by a stimulus in one sensory modality may also alter perception in another. For example, hearing animal noises prior to image presentation increases detection of animal images and lowers reaction time [Schneider et al., 2008]. In addition, certain forms of memory and stimulus history within a modality can impact sensory processing. For example, trial history has complex effects on future behavior that are at least in part due to changes in stimulus expectation as well as low-level sensory facilitation [Cho et al., 2002].

While this ability of internal state to alter perception and decision-making seems perhaps a hallmark of mammalian, or even primate, neurophysiology, it has been observed across the evolutionary tree [Lovett-Barron et al., 2017]. For example, being in a food-deprived state alters the response of C. elegans to chemical gradients [Ghosh et al., 2016].

Different types of internal state modulation are believed to have different neural underpinnings. Overall arousal levels, for example, have broad impacts on various sensory and cognitive functions. A likely candidate for such modulation is thus the brainstem, as it contains nuclei that send diffuse connections across the brain [Sara and Bouret, 2012]. The axons from these areas release a cocktail of neuromodulators that can have diverse impacts. For example, noradrenaline released from the locus coeruleus is believed to play a role in neural synchronization [Sara and Bouret, 2012]. Switching between tasks that have different goals or require information from different sensory modalities, however, requires more targeted manipulations that can impact different brain areas separably. In a study that monitored fMRI activity during switches between auditory and visual attention, activity in frontal and parietal cortices was correlated with the switch [Shomstein and Yantis, 2004]. Further along this spectrum, selective attention within a sensory modality implies a targeting of individual cell populations that represent the attended stimulus. Such fine-grained modulation by attention has been observed [Martinez-Trujillo and Treue, 2004], and is assumed to be controlled by top-down connections originating in the frontal cortex [Bichot et al., 2015].

The alteration of sensory processing by internal state is an important component of the cognitive processes that lead to adaptive behavior. Allowing perception to be influenced by context, goals, and history creates a more flexible mapping between sensory input and behavioral output. This can be viewed as a useful integration of many different information sources for the purposes of decision-making. The importance of this is made clear by the cases in which it goes wrong. An underlying cognitive deficit in schizophrenia, for example, is the inability to incorporate context into perceptual processing [Bazin et al., 2000].

Circuit modeling as an approach for connecting structure and function

The notion that structure begets function in the brain has appeared throughout the history of neuroscience. Even without any significant evidence, phrenologists proposed different anatomical foci for different cognitive functions [Parssinen, 1974], so natural is the structure-function relationship. Some of the earliest examples of observed structure being related to function came in the late 19th century from Santiago Ramón y Cajal. Through careful anatomical investigation of a variety of neural circuits, Cajal came to hypothesize—correctly—that a “nervous current” travels from the dendritic tree through the soma and out through the axon [Llinás, 2003]. This relationship between structure and function extends beyond individual neuron morphology to the structure of entire circuits. The presence of a repeating laminar circuit motif—with, for example, inputs from lower areas targeting cells in layer 4, which project to cells in layers 2/3, which send outputs to layer 5—is frequently cited as evidence that such structure is a functional unit of the brain [Douglas and Martin, 2004]. A more direct investigation of how the structure of neural connections leads to functionally-interpretable activity came from Hubel and Wiesel. In particular, they documented two different types of neurons in primary visual cortex—simple cells (which respond to on- and off-patterns of light with spatial specificity) and complex cells (which have more spatial invariance in their responses to light patterns)—and came to the conclusion that the responses of the complex cells could be understood if it is assumed that they receive input from multiple simple cells, each representing a slightly different spatial location [Hubel and Wiesel, 1962]. Thus, the connections of the neural circuit were mapped to functional properties of the neural responses. This level of understanding should ultimately be possible for all neural responses, insofar as all are the result of the neuron’s place in a circuit.
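That simple-to-complex wiring scheme is compact enough to sketch in a few lines of code. The toy example below (a minimal illustration in Python with made-up 2×2 “images” and a hand-built receptive field, not a model fit to any V1 data) builds a complex cell that pools the responses of simple cells at shifted positions, and shows that it responds to an edge regardless of position while any single simple cell does not.

```python
import numpy as np

def simple_cell(image, rf):
    """Rectified overlap between an image and the cell's receptive field (position-specific)."""
    return max(0.0, float(np.sum(image * rf)))

def complex_cell(image, rf, shifts):
    """Pool (here, take the max) over the simple-cell response at several shifted positions."""
    return max(simple_cell(np.roll(image, s, axis=1), rf) for s in shifts)

rf = np.array([[-1.0, 1.0],
               [-1.0, 1.0]])          # a tiny vertical-edge detector

edge_a = np.array([[0.0, 1.0],
                   [0.0, 1.0]])       # vertical edge in one position
edge_b = np.roll(edge_a, 1, axis=1)   # the same edge shifted over

print(simple_cell(edge_a, rf), simple_cell(edge_b, rf))                    # 2.0 0.0 (position-sensitive)
print(complex_cell(edge_a, rf, [0, 1]), complex_cell(edge_b, rf, [0, 1]))  # 2.0 2.0 (position-invariant)
```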

While experimental results have facilitated an understanding of the importance of structure in neural circuits, such descriptive approaches have limitations. To truly understand a neural circuit, as Richard Feynman would say, we must be able to build it. Mathematical models allow for the precise formulation and testing of a hypothesis. In neural circuit modeling, neurons are represented by an equation or set of equations that describe how the neuron’s inputs are combined and transformed into an output measure, such as firing rate. A weight matrix dictates the impact any given neuron’s activity has on other neurons in the network. When designed to incorporate facts about the connectivity and neural response properties of a particular brain area, circuit models can serve as powerful mechanistic explanations of neural activity. As such, they can be used to test and generate hypotheses about the relationship between structure and activity. In neuroscience, where tools for observation and manipulation are limited and/or expensive, being able to perform experiments in silico can be of immense value. Furthermore, mathematical analysis and simulation are of particular use when working with large and complex systems, which can display counterintuitive and difficult-to-predict behavior.
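As a concrete (if generic) illustration of that description, here is a minimal firing-rate network in Python. The two-unit excitatory–inhibitory structure, weights, and time constant are placeholder values chosen for the sketch, not parameters from any model in the thesis; the point is simply that each unit’s rate relaxes toward a rectified weighted sum of the other units’ rates plus an external input.

```python
import numpy as np

def simulate_rate_network(W, external_input, dt=0.001, tau=0.02, T=0.5):
    """Simulate a firing-rate network: tau * dr/dt = -r + phi(W r + input).

    W[i, j] is the weight from unit j onto unit i; phi is a rectifying nonlinearity.
    This is the generic textbook form, not the specific models used in the thesis.
    """
    n_steps = int(T / dt)
    r = np.zeros(W.shape[0])
    rates = np.zeros((n_steps, W.shape[0]))
    for t in range(n_steps):
        drive = W @ r + external_input
        r = r + (dt / tau) * (-r + np.maximum(drive, 0.0))  # forward Euler step
        rates[t] = r
    return rates

# Two-population example: an excitatory unit driving an inhibitory unit that feeds back
W = np.array([[0.0, -1.5],   # E receives inhibition from I
              [1.2,  0.0]])  # I receives excitation from E
rates = simulate_rate_network(W, external_input=np.array([1.0, 0.2]))
print(rates[-1])  # steady-state firing rates of the E and I units
```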

Circuit modeling exists within a larger set of quantitative approaches that comprise computational/theoretical neuroscience. Other approaches in this category focus on devising advanced tools for data analysis tailored to the problems of neuroscience. Another subset of methods involves more abstract mathematical analysis for the purpose of deriving statements about qualities such as optimality, stability, or memory capacity. While these other quantitative approaches have much to offer the field, circuit modeling is particularly well-suited for incorporating and explaining data. Theoretical constructs are only useful insofar as they can be related to biologically observable values, and circuit models are built to be directly comparable to existing biological structures. Therefore, the outputs of a circuit model are straightforward to interpret, and lead naturally to predictions for the data. Practically, certain predictions from circuit models may be difficult to explore experimentally due to technical limitations. However, this creates a role for theory in driving the development of tools, as circuit models make clear which components of the biology are most worth measuring.

To encourage an integration of experimental and computational work, it is important for there to be a common language and set of ideas. Practically speaking, this can be achieved by building models that have an explicit one-to-one correspondence with biological entities. It is also helpful to design models in a way that allows for the same set of analyses to be performed on the data as well as the model. In that case, even if a one-to-one correspondence isn’t possible, derivative measurements can still be compared directly. This thesis contains examples from along this spectrum. A tension that always comes with model building, however, is the desire to make a model that is both detailed and accurate while also conceptually useful. Highly detailed, complex models may be good at capturing the data, but they can be unwieldy and do not open themselves up to easy mathematical, or even informal conceptual, inspection. While simpler models can be worked with and interrogated more easily, the rich dynamics of the brain are unlikely to be captured by a simple model. Again, this thesis includes models from across this spectrum.

Thesis overview

The parts of this thesis are arranged according to the brain area studied as well as the type of internal state being explored.

In the first part, the impact of task engagement on responses in mouse auditory cortex is explored. The modeling approach used in this chapter allows for a direct comparison of the firing rates of different neuronal subtypes in the model with those found experimentally under two circumstances: during an active tone discrimination task and during passive exposure to tones. The aim of this model is to understand the physical structure of the circuit and the input signals that allow for different neural responses to the same tones under different conditions. To read more about this work, see the following paper: Parallel processing by cortical inhibition enables context-dependent behavior

In the second part, selective visual attention is the focus. In particular, the mechanisms that allow for certain visual features to be enhanced across the visual field are recapitulated in a large scale model of the visual system. While modeling of this type doesn’t allow for a direct comparison to data on the neural level, it has the benefit of providing a behavioral output. Thus, this model is used to understand how voluntary shifts in selective visual attention lead to changes in performance on complex visual tasks. To learn more about this work, see this video: Understanding Biological Visual Attention Using Convolutional Neural Networks – CCN 2017. The second chapter in this part includes an extension to these models that is meant to make them more biologically-realistic and thus more comparable to data.

Finally, the third part more directly addresses the ability of context to alter the mapping from sensory inputs to behavior. Here, again, task types are changed in blocks. These different task conditions alter the way in which visual stimuli are encoded in prefrontal cortical neurons, which then allows for a more flexible mapping to behavioral outputs. To understand how these encoding changes come to be, a simplified model that includes Hebbian learning is introduced and analyzed in comparison to analysis of the data. To learn more about this work, see the following paper: Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex

References

Gary Hatfield. Attention in early scientific psychology. Visual attention, 1:3–25, 1998.
Gottfried Wilhelm Leibniz. The principles of philosophy known as monadology. Jonathan Bennett Edition, 2004.
Robert W Rieber and Kurt Salzinger. Psychology: Theoretical–historical perspectives. Academic Press, 2013.
Phillip D Tomporowski and Norman R Ellis. Effects of exercise on cognitive processes: A review. Psychological bulletin, 99(3):338, 1986.
Robert T Wilkinson and M Stretton. Performance after awakening at different times of night. Psychonomic Science, 23(4):283–285, 1971.
Gregory Belenky, Nancy J Wesensten, David R Thorne, Maria L Thomas, Helen C Sing, Daniel P Redmond, Michael B Russo, and Thomas J Balkin. Patterns of performance degradation and restoration during sleep restriction and subsequent recovery: A sleep dose-response study. Journal of sleep research, 12(1):1–12, 2003.
Bettina M Pause, Alejandra Miranda, Robert Göder, Josef B Aldenhoff, and Roman Ferstl. Reduced olfactory performance in patients with major depression. Journal of psychiatric research, 35(5):271–277, 2001.
Charles Spence, Michael ER Nicholls, and Jon Driver. The cost of expecting events in the wrong sensory modality. Perception & Psychophysics, 63(2):330–336, 2001.
Marisa Carrasco. Visual attention: The past 25 years. Vision research, 51(13):1484–1525, 2011.
Daniel J Simons and Christopher F Chabris. Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28(9):1059–1074, 1999.
Till R Schneider, Andreas K Engel, and Stefan Debener. Multisensory identification of natural objects in a two-way crossmodal priming paradigm. Experimental psychology, 55(2):121–132, 2008.
Raymond Y Cho, Leigh E Nystrom, Eric T Brown, Andrew D Jones, Todd S Braver, Philip J Holmes, and Jonathan D Cohen. Mechanisms underlying dependencies of performance on stimulus history in a two-alternative forced-choice task. Cognitive, Affective, & Behavioral Neuroscience, 2(4):283–299, 2002.
Matthew Lovett-Barron, Aaron S Andalman, William E Allen, Sam Vesuna, Isaac Kauvar, Vanessa M Burns, and Karl Deisseroth. Ancestral circuits for the coordinated modulation of brain state. Cell, 2017.
D Dipon Ghosh, Tom Sanders, Soonwook Hong, Li Yan McCurdy, Daniel L Chase, Netta Cohen, Michael R Koelle, and Michael N Nitabach. Neural architecture of hunger-dependent multisensory decision making in C. elegans. Neuron, 92(5):1049–1062, 2016.
Susan J Sara and Sebastien Bouret. Orienting and reorienting: The locus coeruleus mediates cognition through arousal. Neuron, 76(1):130–141, 2012.
Sarah Shomstein and Steven Yantis. Control of attention shifts between vision and audition in human cortex. Journal of neuroscience, 24(47):10702–10706, 2004.
Julio C Martinez-Trujillo and Stefan Treue. Feature-based attention increases the selectivity of population responses in primate visual cortex. Current Biology, 14(9):744–751, 2004.
Ying Zhang, Ethan M Meyers, Narcisse P Bichot, Thomas Serre, Tomaso A Poggio, and Robert Desimone. Object decoding with attention in inferior temporal cortex. Proceedings of the National Academy of Sciences, 108(21):8850–8855, 2011.
Nadine Bazin, Pierre Perruchet, Marie Christine Hardy-Bayle, and André Feline. Context-dependent information processing in patients with schizophrenia. Schizophrenia research, 45(1):93–101, 2000.
Terry M Parssinen. Popular science and society: The phrenology movement in early Victorian Britain. Journal of Social History, 8(1):1–20, 1974.
Rodolfo R Llinás. The contribution of Santiago Ramón y Cajal to functional neuroscience. Nature Reviews Neuroscience, 4(1):77–80, 2003.
Rodney J Douglas and Kevan AC Martin. Neuronal circuits of the neocortex. Annu. Rev. Neurosci., 27:419–451, 2004.
David H Hubel and Torsten N Wiesel. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of physiology, 160(1):106–154, 1962.

 

August 1, 2017 / neurograce

Is Math Invented or Discovered?: An Argument for Invention

Steven Strogatz, the popular applied mathematician and educator, recently tweeted a link to a paper on the question of whether mathematics is something that was invented or discovered:

 

The author of that paper, Barry Mazur, highlights the importance of the subjective experience of doing math in addressing this question. As someone who works in computational neuroscience, I wouldn’t fancy myself a mathematician, and so I can’t speak to that subjective experience. However, I can say that working in a more applied area still leads one to the question. In fact, it’s something we discuss at length in Unsupervised Thinking Episode 13: The Unreasonable Effectiveness of Mathematics.

It’s been a while since we recorded that episode, and it’s something that has been on my mind again lately, so I’ve decided to take to the blog to write a quick summary of my thoughts. Mazur also gives in that article a list of do’s and don’ts for people trying to write about this topic. I don’t believe I run afoul of any of those in what follows (certainly not the one about citing fMRI results! yikes), but I suppose there is a chance that I am reducing the question to a non-argument. But here goes:

I think the idea of mathematics as a language is a reasonable place to start. Now when it comes to natural languages, like English or Chinese, I don’t believe there is any argument about whether these languages are invented or discovered. While it may have been a messy, distributed, collective invention through evolution, these languages are ‘invented’ nonetheless.  The disorder around the invention of natural languages means that they are not particularly well designed. They are full of ambiguities, redundancies, and exceptions to rules. But they still do a passable job of allowing us to communicate about ourselves and the world.

We cannot, however, do much work with natural languages. That is, we can’t generally take sentences purely as symbols and rearrange them according to abstract rules to make new sentences that are of much value. Therefore, we cannot discover things via natural language. We can use natural language to describe things that we have discovered in the world via other means, but the gap between the language and what it describes is such that it’s not of much use on its own.

With mathematics, however, that gap is essentially non-existent. Pure mathematicians work with mathematical objects. They use the language to discover things, essentially, about the language itself. This gets trippy, however–and leads to these kinds of philosophical questions–when we realize that those symbolic manipulations can be of use to, and lead to discoveries in, the real world. Essentially, math is a rather successful abstraction of the real world in which to work.

But is this ability of math due to the fact that it is a “discovered” entity, or just that it is a well-designed one? There are other languages that are well-designed and can do actual work: computer programming languages. Different programming languages are different ways of abstracting physical changes in hardware and they are successful spaces in which to do many logical tasks. But you’d be hard-pressed to find someone having an argument about whether programming languages are invented or not. We know that humans have come up with programming languages–and indeed many different types–to meet certain requirements of what they wanted to get done and how.

The design of programming languages, however, is in many ways far less constrained than the process that has led to our current mathematics. An individual programming language needs to be self-consistent and meet certain design requirements decided by the person or people who are making it. It does not have to, for example, be consistent with all other programming languages–languages that have been created for other purposes.

In mathematics, however, we do not allow inconsistencies across different branches, even if those different branches are designed to tackle different problems. Multiplication, for example, is not allowed to stop being distributive just because we have moved into geometry. I think a strong argument can also be made that the development of mathematics has been influenced heavily by a desire for elegance and simplicity, and by what is useful (and in this way is actually influenced by whether it successfully explains the world). If programming languages were held to a similar constraint, what could have developed is a single form of abstraction that is used to do many different things. We may then have asked if “programming language” (singular) was a discovered entity.

So essentially, what my argument comes down to is the idea that what we call mathematics is a system that has resulted from a large number of constraints imposed while addressing a variety of topics. Put this way, it sounds like a solution to an engineering problem, i.e. something we would say is invented. The caveat, however (and where I am potentially turning this into a non-problem), is that what we usually refer to as “discovering” can also be thought of as finding the one, solitary solution to a problem. For example, when scientists “discovered” the structure of DNA, what they really did was find the one solution that was consistent with all the data. If there had been more than one solution equally consistent with the data, the debate would still be ongoing. So, to say that the mathematical system we have now was discovered is to say that we believe it is the only possible system that could satisfy the constraints. Perhaps that is reasonable, but I find that that formulation is not what most people mean when they talk about math as a discovery. Therefore, I think I (for now) fall on the side of invention.

 

Meta-caveat: I am in no way wedded to this argument and would love to hear feedback! Especially from mathematicians that have the subjective experience of which Mazur speaks.

October 15, 2015 / neurograce

Unsupervised Thinking: A new podcast on neuroscience and Artificial Intelligence!

Hey All,

Long time no blog! And, yes, as with most grad school bloggers, that was initially out of too much work, distraction, and a touch of laziness. But more recently, it’s because I’ve started a new project: podcasting! It’s called Unsupervised Thinking (a play on “unsupervised learning” in machine learning) and it’s a podcast about neuroscience, artificial intelligence, and science more broadly. And since I and my two fellow podcasters are PhD students in computational neuroscience, it’ll have a computational/systems bent.

Our first episode is on the Blue Brain/Human Brain Project, which is the large EU-funded project to simulate the brain in a computer. Our next episode will be on brain-computer interfaces. Check it out by clicking below!

Unsupervised Thinking Podcast

Give us a listen!

May 26, 2013 / neurograce

Something Old, Something New, Something Borrowed, Something Untrue

This is a piece about the present state, and potential future, of fraud in scientific research, which I wrote for a Responsible Conduct in Research course taught at Columbia.

There seems to be a trend as of late of prominent scientific researchers being outed for fabrications or falsifications in their data. Diederik Stapel’s extravagant web of invented findings certainly stands out as one of the worst examples, and will probably do long-term damage to the field of psychology. But psychology is not alone; other realms of research are suffering from this plague too. For example, the UK government exercised for the first time its right to imprison scientific fraudsters when it sentenced Steven Eaton to three months for falsifying data regarding an anti-cancer drug. And accusations of fraud fly frequently from both sides of the debate over climate change. Studies would suggest these misdeeds aren’t limited to just the names that make the news. In an attempt to quantify just how bad scientists are being, journalists sent out a misconduct questionnaire to medical science researchers in Belgium. Four out of the 315 anonymous respondents (1.3%) admitted to flat-out fabrication of data, and 24% acknowledged seeing such fabrication done by others. Furthermore, analysis of publishing practices has shown a steep increase in the rate of retractions of journal articles since 2005, and investigations suggest that up to 43% of such retractions are due to fraud, with an additional 9.8% coming from plagiarism. It seems clear from both anecdotes and analysis that dishonesty abounds in the research world.

But as with any criminal activity, it is hard to really know how accurate statistics on fraud in scientific publishing can be. Is this wave of retractions and public floggings really a result of an increase in inappropriate behavior, or just an increase in the reporting of it? In other words, are we producing more scientists who are willing to lie, cheat, and steal to get ahead, or more who are willing to sound the alarm on those who do?

Certainly the current financial climate creates an incentive, a need even, for a researcher to stand out from the crowd of their peers like never before. To secure funding from grants, publications highlighting hot-topic research findings are a must. The less money going into science, the more competition there is for grants. So, those research findings must become hotter and more frequent. Furthermore, much the same “high impact publication”-based criteria are used for determining who gets postdoc positions, assistant professorships, and even tenure. This kind of pressure could, and apparently does, lead some scientists to fake it when they can’t make it.

But while today’s economy may make it easier to justify cheating, today’s technology can make it harder to execute. We have the ability to automatically search large datasets for the numerical anomalies or repetitions that are hallmarks of fabrication. The contents of an article can be compared to large databases of text to catch a plagiarized paragraph before any human eyes have read it. And the anonymity of the internet provides a way for anyone to report suspicious behavior of even the most senior of scientists without fearing retribution. Thus, it should come as little surprise that case after case of fraud is being exposed.

No matter the specific reasons for this recent uptick, misconduct in research is something that always has been and always will be with us. In any competitive situation, with glory and profit on the line, some people will turn to deceit to get ahead. So what can we do to reduce the number of wrongdoers to the lowest level possible? Well, certainly the technological tools mentioned above can help. And some may argue that we should go further and implement as much surveillance of scientists during their data collection as possible. Oversight can prevent the usually solitary scientist from engaging in any “data massaging” that they may have considered when no one was looking. Pre-registration of studies is another tool to ensure experimenters aren’t trying to fiddle with or cover up unsavory data. By stating, before the experiment even begins, what is meant to be tested and how, researchers will be less able to squeeze out whatever p<.05 trends they can find in the data and pretend that’s what they were looking for all along.
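To see why that last safeguard matters, it helps to remember how easy it is to find “significant” trends in pure noise. The sketch below (simulated data with no real effects; the group sizes and number of measures are arbitrary choices for the illustration) runs 100 null comparisons and, as expected, roughly five of them cross the conventional p < .05 threshold by chance alone—exactly the kind of result a motivated analyst could report if the hypotheses weren’t stated in advance.

```python
import numpy as np

rng = np.random.default_rng(1)

n_measures, n_subjects = 100, 30   # 100 unrelated measures, two groups of 30 subjects
false_positives = 0
for _ in range(n_measures):
    group_a = rng.normal(0.0, 1.0, n_subjects)   # no true difference between groups
    group_b = rng.normal(0.0, 1.0, n_subjects)
    # two-sample statistic; |stat| > 1.96 corresponds roughly to p < .05
    se = np.sqrt(group_a.var(ddof=1) / n_subjects + group_b.var(ddof=1) / n_subjects)
    stat = (group_a.mean() - group_b.mean()) / se
    false_positives += abs(stat) > 1.96

print(f"{false_positives} of {n_measures} null comparisons come out 'significant'")
```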

While such tools can be effective in preventing the deed of fraud, I think, as a field, we would be better served by preventing the motivation for fraud. This means moving away from a funding system that puts unreasonable weight on flashy results and towards one that favors critical thinking, solid methods, and open data/code sharing. We will need to learn to evaluate our peers by these same criteria as well. Furthermore, our publishing process has to make room for the printing of negative results and replicated studies. The scientist who accidentally stumbles upon an intriguing finding shouldn’t necessarily be praised more highly than those who attempt to replicate a result they find suspicious or who have spent years tediously testing hypotheses which turn out to be incorrect. Certainly positive novel findings will continue to be the driving force of any field, and this explains why they took precedence when publishing resources were limited. But with today’s online publishing and quick searches, there is little justification for ignoring other kinds of findings. Additionally, it is now possible for journals to host large datasets and code repositories online along with their journal articles, allowing researchers to get credit for these contributions as well. Technological advancements can be used not only to catch fraud, but to implement the changes that will prevent the motivation for it as well.

Of course, incorporating these achievements will require a more complex means of evaluating scientists for grants and promotions, and this will take time. But it is crucial that we start. We need to create a culture that recognizes the importance of a good scientific process and the extreme harm done by introducing dishonesty into it. The hierarchical nature of science, with new studies being built on the backs of old ones, means that one small act of fraud can have far-reaching and potentially irreversible effects on the field. Furthermore, it damages the reputation of scientific research in the public eye, which can lessen confidence and support. People may have been upset to learn of Jonah Lehrer’s fraudulent reporting of neuroscience, but such concerns pale in comparison to learning of the fraudulent conducting of neuroscience. While fraud and data manipulation are hardly new problems, there can always be new solutions for combating them. We are lucky to live in an age that allows us the tools to detect such practices when they occur, and also to change the system that encourages them. While it is unlikely that we will ever fully eradicate scientific misconduct, we can hope to create a culture amongst scientists that makes dishonesty less common and that views fabrication as an unthinkable option.

Van Noorden, R. (2011). Science publishing: The trouble with retractions. Nature, 478(7367), 26–28. DOI: 10.1038/478026a

May 10, 2013 / neurograce

Methodological Mixology: The harmful side of diversity in Neuroscience

The range of tools used to study the brain is vast. Neuroscientists toss together ideas from genetics, biochemistry, immunology, physics, computer science, medicine and countless other fields when choosing their techniques. We work on animals ranging from barely-visible worms and the common fruit fly to complicated creatures like mice, monkeys, and men. We record from any brain region we can reach, during all kinds of tasks, while the subject is awake or anesthetized, freely moving or fixed, a full animal or merely a slice of brain…and the list goes on. The result is a massive, complex cocktail of neuroscientific information.

Now, I’ve waxed romantic about the benefits of this diversity before. And I still do believe in the power of working in an interdisciplinary field; neuroscientists are creating an impressively vast collection of data points about the brain, and it is exciting to see that collection continuously grow in every direction. But in the interest of honesty, good journalism, and stirring up controversy, I think it’s time we look at the potential problems stemming from Neuroscience’s poly-methodological tendencies. And the heart of the issue, as I see it, is in how we are connecting all those points.

Figure 1. Combining data from the two populations and calculating the mean (dashed grey line) would show no difference between Variable A and Variable B. In actuality, the two variables are anti-correlated in each population.

When we collect data from different animals, in different forms, and under different conditions, what we have is a lot of different datasets. Yet what we seem to be looking for, implicitly or explicitly, are some general theories of how neurons, networks, and brains as a whole work. So, for example, we get some results about the molecular properties needed for neurogenesis in the rat olfactory bulb, and we use these findings to support experiments done in the mouse and vice versa. What we’re assuming is that neurons in these different animals are doing the same task, and using the same means to accomplish it. But the fact is, there are a lot of different ways to accomplish a task, and many different combinations of input that will give you the same output. Combining these data sets as though they’re one could be muddling the message each is trying to send about how its system is working. It’s like trying to learn about a population with bimodally distributed variables by studying their means (see Fig 1). In order to get accurate outcomes, we need self-consistent data. If you use the gravity on the Moon to calculate how much force you need to take off from the Earth, you’re not going to get off the ground.
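The point of Figure 1 is easy to reproduce in a toy simulation (made-up numbers, not the data behind the figure): two populations in which A and B are strongly anti-correlated can, once pooled, show identical means for the two variables—and even a positive overall correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Population 1: A and B anti-correlated, centered around (2, 2)
pop1_a = rng.normal(2.0, 0.5, n)
pop1_b = 4.0 - pop1_a + rng.normal(0.0, 0.1, n)
# Population 2: A and B anti-correlated, centered around (4, 4)
pop2_a = rng.normal(4.0, 0.5, n)
pop2_b = 8.0 - pop2_a + rng.normal(0.0, 0.1, n)

a = np.concatenate([pop1_a, pop2_a])
b = np.concatenate([pop1_b, pop2_b])

print(f"pooled means: A = {a.mean():.2f}, B = {b.mean():.2f}")    # ~3.00 vs ~3.00: no difference
print(f"pooled correlation: {np.corrcoef(a, b)[0, 1]:.2f}")       # positive!
print(f"within-population correlations: {np.corrcoef(pop1_a, pop1_b)[0, 1]:.2f}, "
      f"{np.corrcoef(pop2_a, pop2_b)[0, 1]:.2f}")                 # strongly negative
```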

Not to malign my own kind, but theorists, with their abstract “neural network” models, can actually be some of the worst offenders when it comes to data-muddling. By using average values for cellular and network properties pulled from many corners of the literature, and building networks that aren’t meant to have any specific correlate in the real world, modelers can end up with a simulated Frankenstein: technically impressive, yes, but not truly recreating the whole of any of its parts. This quest for the Platonic neural network—the desire to explain neural function in the abstract—seems, to me, misguided. Even as theorists, we should not be attempting to explain how neurons do what they do in general, but rather how V1 cells in anesthetized adult cats show contrast-invariant tuning, or how GABAergic interneurons contribute to gamma oscillations in mouse hippocampal slices, and so on. Being precise about what our models are trying to be will better guide how we design and constrain them, and lead to more directly testable hypotheses. The search for what is common to all networks should be saved until we know more of what is specific to each.

Eve Marder at Brandeis University has been something of a crusader for the notion that models should be individualized. She’s taken to running simulations to show how the same behavior can be produced by a vast array of different parameter values. For example, in this PNAS paper, Marder shows that the same bursting firing patterns can be created by different sets of synaptic and membrane conductances (Fig 2). This shows how simply observing a similar phenomenon across different preparations is not enough to assume that the mechanisms producing it are the same. This assumption can lead to problems if, in the pursuit of understanding bursting mechanisms, we measured sodium conductances from the system on the left, and calcium conductances from that on the right. Any resulting model we could create incorporating both these values would be an inaccurate explanation of either system. It’s as though we’re combining the pieces from two different puzzles, and trying to reassemble them as one.

Figure 2. The voltage traces show that the two systems have similar spiking behavior, but it is accomplished with different conductance values.
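Marder’s point doesn’t require a full conductance-based model to appreciate. The toy sketch below uses a plain leaky integrate-and-fire neuron (with made-up parameters, far simpler than the models in the PNAS paper) to show the same kind of degeneracy: two quite different combinations of membrane time constant and input drive produce essentially the same firing rate, so measuring one parameter from one preparation and the other parameter from a second preparation would describe neither system.

```python
def lif_firing_rate(tau_m, drive, v_thresh=15.0, dt=0.01, T=1000.0):
    """Leaky integrate-and-fire neuron with constant drive (times in ms, voltages in mV).

    tau_m * dV/dt = -V + drive; when V crosses v_thresh, count a spike and reset to 0.
    Returns the firing rate in Hz.
    """
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += (dt / tau_m) * (-v + drive)
        if v >= v_thresh:
            v = 0.0
            spikes += 1
    return spikes / (T / 1000.0)

# Two very different parameter combinations...
print(lif_firing_rate(tau_m=10.0, drive=20.0))   # fast membrane, weak drive
print(lif_firing_rate(tau_m=20.0, drive=30.0))   # slow membrane, strong drive
# ...produce essentially the same output firing rate (~72 Hz in both cases)
```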

Now of course most researchers are aware of the potential differences across different preparations, and the fact that one cannot assume that what’s true for the anesthetized rat is true for the behaving one. But these sorts of concerns are usually relegated to a line or two in the introduction or discussion sections. On the whole, there is still the notion that ideas can be borrowed from nearby lines of research and bent to fit into the narrative of the hypothesis at hand. This view is not absurd, of course, and it comes partly from reason, but also from necessity: there are just some types of data that we can only get from certain preparations. Furthermore, time and resource constraints mean that it is frequently not feasible to run the exact experiment you may want. And on top of the practical reasons for combining data, there is also the fact that evolution supports the notion that molecules and mechanisms would be conserved across brain areas and species. This is, after all, why we feel justified in using animal models to investigate human function and disorder in the first place.

But, like with many things in Neuroscience, we simply can’t know until we know. It is not in our best interest, in the course of trying to understand how neural networks work, to assume that different networks are working in the same way. Certainly frameworks found to be true in specific areas can and should be investigated in others. But we have to be aware of when we are carefully importing ideas and using evidence to support the mixing of data, and when we’re simply throwing together whatever is on hand. Luckily, there are tools for this. Large scale projects like those at the Allen Brain Institute are doing a fantastic job of creating consistent, complete, detailed, and organized datasets of specific animal models. And even for smaller projects, neuroinformatics can help us keep track of what data comes from where, and how similar it is across preparations. Overall, it needn’t be a huge struggle to keep our lines of research straight, but it is important. Because a poorly mixed cocktail of data will just make the whole field dizzy.

Marder, E. (2011). Colloquium Paper: Variability, compensation, and modulation in neurons and circuits. Proceedings of the National Academy of Sciences, 108(Supplement 3), 15542–15548. DOI: 10.1073/pnas.1010674108

April 22, 2013 / neurograce

Knowledge is Pleasure!: Reliable reward information as a reward itself

Pursuing rewards is a crucial part of survival for any species. The circuitry that tells us to seek out pleasure is what ensures that we find food, drink, and mates. In order to engage in this behavior, we must learn associations between rewards and the stimuli that predict them. That way we can know that our caffeine craving, for example, can be quenched by seeking the siren in a green circle (it’s possible that I do my blog writing at a Starbucks–cuz I’m original like that). Studying this kind of reinforcement learning is big business, and there is still a lot left to find out. But what has been known for some 15 years now is that dopaminergic cells in the midbrain which encode reward value also encode reward expectation. That is, in the ventral tegmental area (VTA), cells increase their firing in response to the delivery of an unexpected reward, such as a sudden drop of juice on a monkey’s tongue. But cells here also fire in response to a reward cue, say a symbol on a screen that the monkey has learned predicts the juice reward. What’s more, after this cue, the arrival of the actual reward causes no change in the firing of these cells unless it is higher or lower than expected. So, these cells are learning to value the promise of a pleasurable stimulus, and signal whether that promise is fulfilled, denied, or exceeded. Suddenly, the sight of the siren is a reward on its own, and getting your coffee is merely neutral.

But the world is rarely just a series of cues and rewards. It’s complex and dynamic: a symbol may predict something positive in one context and punishment in another; reward contingencies can be uncertain or change over time; and with a constant stream of incoming stimuli, how do you even figure out what acts as a reward cue in the first place? Luckily, Ethan Bromberg-Martin and Okihide Hikosaka are interested in explaining just these kinds of challenges, and they’ve made a discovery that offers a nice framework on which to build a deeper understanding. In this Neuron paper, Bromberg-Martin and Hikosaka developed a task to test monkeys’ views on information. To start, the monkey was shown one of two symbols, A or B, to which he had to saccade. After that, one of a set of four different symbols appeared: if A was initially shown, then the second symbol would be A1 or A2, and likewise for B. The appearance of A1 always predicted a big water reward, and A2 always predicted a small water reward (which, to greedy monkeys who know a larger reward is possible, is essentially a punishment). But for B1 and B2, the water amount was randomized; these symbols were useless in providing reward information. So, the appearance of A meant that an informative symbol was on its way, whereas B meant something meaningless was coming. Importantly, the amount of reward was equal on average for A and B; it was only the advance knowledge of the reward that differed.

Recording from those familiar midbrain dopaminergic cells, the authors saw an increase in activity following the appearance of the information-predicting cue A, and a decrease in response to B. These cells then went on to do their normal duty: showing a large spike in response to A1 (the large reward cue), a decrease to A2, and no change in response when these predicted rewards were actually delivered; or, alternatively, little change in response to B1 and B2, and a spike/dip when an unpredictable large/small reward was delivered. What the initial response to A and B shows is that the VTA is responding to the promise of information about reward in the same way it responds to the promise of a reward or a reward itself. This is further supported by the fact that when monkeys were presented with both A and B and allowed to choose which to saccade to, they overwhelmingly preferred A—leading them down the path of reward information.

This may seem like a silly preference. Choosing to be informed about the reward size beforehand doesn’t provide a greater reward size or allow the monkey any more control, so why bother valuing the advance information? The authors put forth the notion that uncertainty is in some way uncomfortable, so the earlier it is resolved the better. But I’m more inclined to believe their second assertion: the informative path (A) is preferred because it provides stable cue-reward associations that can be learned. The process of learning what cue predicts what reward assumes that there are cues that actually do predict reward. So if we want to achieve that goal, we have to make sure we’re working in a regime where that base assumption is true—this isn’t the case for uninformative path B. Living in a world of meaningless symbols means all your fancy mental equipment for associating cues and rewards is for naught, and it leaves you with little more than luck when it comes to finding what you need. So there is a clear evolutionary advantage in finding reward in (and thus seeking out) stable cue-reward associations.

But like most good discoveries, this one leaves us with a lot of questions, mainly about how the brain comes to find these stable associations rewarding. We know that for a cue to be associated with a reward, it needs to reliably precede that reward. Then through….well, some process that we’re working out the details of….VTA neurons start firing in response to the cue itself. So presumably, in order for the brain to associate a certain cue with reward information, the cue has to reliably precede that information. Here’s where we hit a problem. It is easy enough to understand how the brain is aware that the cue was presented (that’s just a visual stimulus, no problem there), and we can equally well conceive of how it acknowledges the existence of a reward (again, just a physical stimulus which ends up making VTA cells spike), but how can the brain know that information is present? The information that a cue contains about an upcoming reward isn’t a physical stimulus out there in the world; it’s something contained in the brain itself. If we are to learn to associate an external cue with an internal entity like information, the brain needs to be able to monitor what’s happening inside itself the same way it monitors the outside world.

Luckily, there are possible mechanisms for this, and they fit well with the existing role of VTA cells. Here is the equation the brain seems to be using to make basic reward associations:

visual stimulus + VTA cell firing due to some delayed reward = VTA cell firing to visual stimulus.

But VTA cell firing is VTA cell firing, so we can substitute the second term with the righthand side of the equation and get:

visual stimulus #2 + VTA cell firing due to visual stimulus = VTA cell firing to visual stimulus #2

If pseudomath isn’t your thing: basically, the fact that the brain can learn to treat reward cues as reward means that it can learn to treat cues for reward cues as reward. And cues for cues for reward cues? Maybe, but I wouldn’t bet on it. While they did fire in response to the promise of information signified by cue A, the VTA cells still had their biggest spike increase in response to A1, the cue that signaled a big reward. It seems there’s a limit on how far removed a cue can be from an actual reward. Interestingly, this kind of metacognitive ability appears restricted to more cognitively complex animals such as primates, and probably contributes to their adaptability as a species. While this kind of study hasn’t been done in rats or mice, my guess is you’d be hard-pressed to find such a preference for information in those lower animals.
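The substitution above is essentially second-order conditioning, and it falls directly out of standard temporal-difference learning. Here is a minimal tabular sketch (a generic textbook TD(0) model with made-up learning rate and state names, not the analysis from the Bromberg-Martin and Hikosaka paper) in which the prediction-error signal—the stand-in for the VTA response—transfers from the reward to the cue that predicts it, and then to a cue that predicts that cue.

```python
alpha, reward = 0.1, 1.0
V = {"baseline": 0.0, "cue2": 0.0, "cue1": 0.0}   # cue2 precedes cue1, which precedes reward

def td_update(state, next_value, r=0.0):
    """One TD(0) step: delta = r + V(next) - V(state); V(state) moves by alpha * delta."""
    delta = r + next_value - V[state]
    V[state] += alpha * delta
    return delta

for trial in range(300):
    # cue2 appears out of an unpredictable baseline (whose value stays at 0), so this
    # difference plays the role of the "VTA response at cue onset"
    delta_onset = V["cue2"] - V["baseline"]
    delta_cue2 = td_update("cue2", V["cue1"])       # cue2 is followed by cue1
    delta_cue1 = td_update("cue1", 0.0, r=reward)   # cue1 is followed by the reward

print(V)            # both cue values approach 1.0
print(delta_onset)  # after learning, the big response happens at the earliest cue...
print(delta_cue1)   # ...and the response to the now fully-predicted reward is near 0
```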

Of course these findings leave us with something of a chicken-and-egg problem. Our desire for information is supposed to motivate us to pursue situations with stable cue-reward associations. But we can’t develop that desire until the cue-reward association is already mentally established, so what good is it then? There is also the question of how these results fit into the well-established desire that people (and animals) have for gambling. You’re not going to find a roulette wheel that will tell you where its ball is going to land, or a poker player willing to show you their cards. So what allows us to selectively love risk and uncertainty? Some theories suggest that the possibility for huge payoffs can lead to a miscalculation in expected reward and overpower our better, more rational instincts. But it’s still an area of research in economics as well as neuroscience. Basically, the evidence that reliable information is valued and sought after provides many insights into the process of reinforcement learning, but in order to fully understand its role and consequences, we are going to need more–you guessed it–information.

 

 
Bromberg-Martin, E., & Hikosaka, O. (2009). Midbrain Dopamine Neurons Signal Preference for Advance Information about Upcoming Rewards. Neuron, 63(1), 119–126. DOI: 10.1016/j.neuron.2009.06.009

March 12, 2013 / neurograce

Frontiers in Nature: A potential opportunity to reverse the peer-review process

Talking with fellow scientists, it would seem that most have a love/hate relationship with the current state of scientific publishing. They dislike the fact that getting a Science or Nature paper seems to be the de facto goal of research these days, but don’t hesitate to pop open the bubbly if they achieve it. This somewhat contradictory attitude is not altogether unreasonable given the current setup. The fate of many a researching career is dependent on getting a paper (or papers) into one of these ‘high impact’ journals. And as illogical as it seems for the ultimate measure of the importance of months of research to be in the hands of a couple of editors and one to three peer reviewers, these are the rules of the game. And if you want to get ahead, you gotta play.

The tides, however, may possibly be changing. Many smaller journals have cropped up recently, focusing on specific areas of research and implementing a more open and accepting review process. PLoS ONE and Frontiers are at the forefront of this. Since the mid-2000s, these journals have been publishing papers based purely on technical merit, rather than some pre-judged notion of importance. This leads to a roughly 70-90% acceptance rate (compared to Nature’s 8% and Science’s <7%), and a much quicker submission-to-print time. It also necessitates a post-publishing assessment of the importance and interest level of each piece of research. PLoS achieves this through article-level metrics related to views, citations, blog coverage, etc. Frontiers offers a similar quantification of interest, and the ability of readers to leave commentary. Basically, these publications recognize the flaws inherent in the pre-publication review system and try to redress them. PLoS says it best themselves:

“Too often a journal’s decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership — both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. PLOS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership (who are the most qualified to determine what is of interest to them).”

So we have the framework for a new type of review and publication process. But such a tool is only helpful to the extent that we utilize it. Namely, we need to start recognizing and rewarding the researchers who publish good work in these journals. This also implies putting less emphasis on the old giants, Nature and Science. But how will these behemoth establishments respond to the revolution? Well, we may soon find out. NPG, the publishing company behind Nature, has recently announced a majority investment in Frontiers. The press release stresses that Frontiers will continue to operate under its own policies, but describes how Frontiers and Nature will interact to expand the number of open access articles available on both sides. Interestingly, the release also states that “Frontiers and NPG will also be working together on innovations in open science tools, networking, and publication processes.” A quote from Dr. Phillip Campbell, Editor-in-Chief of Nature, is even more revealing:

“Referees and handling editors are named on published papers, which is very unusual in the life sciences community. Nature has experimented with open peer review in the past, and we continue to be interested in researchers’ attitudes. Frontiers also encourages non-peer reviewed open access commentary, empowering the academic community to openly discuss some of the grand challenges of science with a wide audience.”

Putting (perhaps somewhat naively) conspiracies of an evil corporate takeover aside, could this move mean that the revolution will be a peaceful one? That Nature sees the writing on the wall and is choosing to adapt rather than perish?

 

If so, what would a post-pre-publication-review world look like? Clearly if some sort of crowdsourcing is going to be used to determine a researcher’s merit, it will have to be reliable and standardized. For example, looking at the number of citations per article views/downloads is helpful in determining if an article is merely well-promoted, but not necessarily helpful to the community—or vice-versa. And more detailed information can be gathered about how the article is cited: are its results being refuted or supported? is it one in a list of many background citations or the very basis of a new project? Furthermore, whatever pre-publishing review the article has been submitted to (for technical merit, clarity, etc) should be posted alongside the article along with reviewer names. The post-publishing commentary will also need to be formalized (with rankings on different aspects of the research: originality, clarity, impact, etc) and fully open (no anon trolls or self-upvoting allowed). Making participation in the online community a mandatory part of getting a paper published can ensure that papers don’t go un-reviewed (the very new journal PeerJ uses something like this). If sites like Wikipedia and Quora have taught us anything (aside from how everything your mother warned you about is wrong), it’s that people don’t mind sharing their knowledge, especially if they get to do it in their own way/time. And since the whole process will be open, any “you scratch my back, I’ll scratch yours” behavior will be noticeable to the masses. The practices of Frontiers and PLoS are steps in the right direction, but their article metrics will need to be richer and more reliable if they are to take the place of big-name journal publications for a measure of research success.

Some people feel that appropriate post-publishing review makes any role of a journal unnecessary. Everyone could simply post their papers to an e-print service like arXiv, as many in the physical sciences do, and never submit anywhere officially. Personally, I still see a role for journals—not in placing a stamp of approval on work, but in hosting papers and aggregating and promoting research of a specific kind. I enjoy having the list of Frontiers in Neuroscience articles delivered to my inbox for my perusal. And I like having some notion of where to go if I need information from outside my field. And as datasets become larger and figures more interactive, self-publishing and hosting will become more burdensome. Furthermore, there’s the simple fact that having an outside set of eyes proofread your paper and provide feedback before widely distributing it is rarely a bad thing.

But what then is left of the big guns, Science and Nature? They’ve never claimed a role in aggregating papers for specific sub-fields or providing lots of detailed information—quite the opposite, in fact. Science’s mission is:

“to publish those papers that are most influential in their fields or across fields and that will significantly advance scientific understanding. Selected papers should present novel and broadly important data, syntheses, or concepts. They should merit the recognition by the scientific community and general public provided by publication in Science, beyond that provided by specialty journals.” [emphasis added].

Nature expresses a similar desire for work “of interest to an interdisciplinary readership.” And given that Science and Nature papers have some of the strictest length limits, they’re clearly not interested in extensive data analysis. They want results that are short, sweet and pack a big punch for any reader. The problem is that this isn’t usually how research works. Any strong, concise story was built on years of messy smaller studies, false starts, and negative results—most of which are kept from the rest of the research community while the researcher strives to make everything fall in place for a proper article submission. But in a world of minimal review, any progress (or confirmation of previous results, for that matter) can be shared. Then, rather than let a handful of reviewers (n=3? come on, that only works for monkey research) try to predict the importance of the work, time and the community itself will let it be known. Nature and Science can still achieve their goals of distributing important work across disciplines, but they can do it once the importance has been established, by asking researchers of good work to write a review encapsulating their recent breakthroughs. That way, getting a piece in Nature and Science remains prestigious, and also merited. Yes this means a delay between the work and publication in these journals, but if importance really is their criteria, this delay is necessary (has any scientist gotten the Nobel Prize six months after their discovery?). Furthermore, it’ll stop the cycle of self-fulfilling research importance whereby Nature or Science deems a topic important, prints about it, and thus makes it more important. This cycle is especially potent given the fact that the mainstream media frequently looks to Science and Nature for current trends and findings, and thus their power goes beyond just the research community. In a system where their publications were proven to be important, this promotion to the press would be warranted.

The goals of the big name journals are admirable: trying to acknowledge important work and spread it in a readable fashion to a broader audience. But their ways of going about it are antiquated. There was a time when getting a couple of the current thinkers in a field together to decide the merit of a piece of work was the only reasonable technique, but those days are gone. The data-collecting power of web-based tools is immense and their application is incredibly apropos here. We have the technology; we can rebuild the review system. NPG’s involvement with Frontiers offers hope (to the optimistic) that these ideas will come to fruition. We already know that Frontiers supports them. We just need the community at large to follow suit in order to give validity to these measures.

 
