2016
Amsalem, O, Douglas RJ, Hill SL, Lein ES, Martin KAC, Rockland KS, Segev I, Shepherd GM, Tamás G.  2016.  Comments and General Discussion on “The Anatomical Problem Posed by Brain Complexity and Size: A Potential Solution”. Frontiers in Neuroanatomy. 10.
This article gathers together different opinions on the current status and future directions of the study of the brain, taking as a working document the article “The anatomical problem posed by brain complexity and size: a potential solution” http://journal.frontiersin.org/article/10.3389/fnana.2015.00104/full. These commentaries are followed by a section dedicated to a general discussion of the issues raised, in which all contributors participate. The authors who have contributed to this article are listed in alphabetical order. As the reader will see, there are different points of view and of course there are many other aspects that would need further discussion that have been raised by other scientists who did not participate directly. For example, Peter Somogyi made the following comment (personal communication): [“Anatomy” is a discipline and not a biological entity that exists in nature. Hence the brain or its cells do not have anatomy; we study them with anatomical methods (usually using microscopes) while we carry out “anatomical analysis.” The brain, its nuclei, cells, and their parts are the biological entities which several disciplines study, preferably together, providing a unified description and explanation of them. We must be clear about this, and avoid terms like “anatomical properties,” “physiological properties,” or “biochemical properties” as if these somehow existed in isolation. The separate disciplines, which developed historically due to the limitation of individual human brain capacity and short life span leading to methodological and conceptual specialization, are based on sets of methods, but study the same indivisible biological entity. E.g., the synaptic current recorded by electrophysiological methods flows through the membrane that we see in the electron microscope or with the help of antibodies to synaptic ion channels in the light microscope. Accordingly, the “anatomical problem” exists because of inadequate scientific rigor in addition to methodological limitations that are often not understood, not because of “brain complexity”.] This is just an example of the many possible different points of view when dealing with the subject of the anatomy of the brain. Thus, this article is not intended to be comprehensive, and the unavoidable limitations in the selection of comments, data, and their interpretation reflect, in many cases, the personal views and interests of the authors.
Amsalem, O, Van Geit W, Muller E, Markram H, Segev I.  2016.  From Neuron Biophysics to Orientation Selectivity in Electrically Coupled Networks of Neocortical L2/3 Large Basket Cells. Cerebral Cortex Advance Access.
In the neocortex, inhibitory interneurons of the same subtype are electrically coupled with each other via dendritic gap junctions (GJs). The impact of multiple GJs on the biophysical properties of interneurons and thus on their input processing is unclear. The present experimentally based theoretical study examined GJs in L2/3 large basket cells (L2/3 LBCs) with 3 goals in mind: (1) To evaluate the errors due to GJs in estimating the cable properties of individual L2/3 LBCs and suggest ways to correct these errors when modeling these cells and the networks they form; (2) to bracket the GJ conductance value (0.05–0.25 nS) and membrane resistivity (10 000–40 000 Ω cm2) of L2/3 LBCs; these estimates are tightly constrained by in vitro input resistance (131 ± 18.5 MΩ) and the coupling coefficient (1–3.5%) of these cells; and (3) to explore the functional implications of GJs, and show that GJs: (i) dynamically modulate the effective time window for synaptic integration; (ii) improve the axon’s capability to encode rapid changes in synaptic inputs; and (iii) reduce the orientation selectivity, linearity index, and phase difference of L2/3 LBCs. Our study provides new insights into the role of GJs and calls for caution when using in vitro measurements for modeling electrically coupled neuronal networks.
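As a toy illustration of how the measured input resistance and coupling coefficient jointly bracket the GJ conductance, the sketch below treats a coupled pair as two isopotential compartments connected by a single gap junction. This is a deliberate simplification, not the morphologically detailed model used in the study, and all parameter values are assumptions.

```python
# Minimal two-compartment sketch (not the paper's detailed model) of how the
# coupling coefficient (CC) and input resistance (Rin) jointly constrain the
# gap-junction conductance g_j. All values here are illustrative assumptions.

def coupled_pair(g_m_nS, g_j_nS):
    """Two identical isopotential cells, leak g_m each, one GJ of g_j between them.
    Steady state for current I injected into cell 1:
        g_m*V1 + g_j*(V1 - V2) = I
        g_m*V2 + g_j*(V2 - V1) = 0
    """
    cc = g_j_nS / (g_m_nS + g_j_nS)                                 # V2 / V1
    rin_GOhm = (g_m_nS + g_j_nS) / (g_m_nS**2 + 2 * g_m_nS * g_j_nS)
    return rin_GOhm * 1e3, cc * 100                                 # MOhm, percent

g_m = 1.0 / 0.150   # ~6.7 nS leak, i.e. 150 MOhm for an isolated cell (assumption)
for g_j in (0.05, 0.15, 0.25):                    # nS, range quoted in the abstract
    rin, cc = coupled_pair(g_m, g_j)
    print(f"g_j = {g_j:.2f} nS -> Rin = {rin:.0f} MOhm, CC = {cc:.1f} %")
```

With these assumed numbers, GJ conductances in the quoted 0.05–0.25 nS range give coupling coefficients in the low single-digit percent range, comparable to the 1–3.5% constraint quoted above.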
Amsalem, O, Pozzorini C, Chindemi G, Davison AP, Eroe C, King J, Newton TH, Nolte M, Ramaswamy S, Reimann MW et al.  2016.  Automated point-neuron simplification of data-driven microcircuit models.
A method is presented for the reduction of morphologically detailed microcircuit models to a point-neuron representation without human intervention. The simplification occurs in a modular workflow, in the neighborhood of a user specified network activity state for the reference model, the “operating point”. First, synapses are moved to the soma, correcting for dendritic filtering by low-pass filtering the delivered synaptic current. Filter parameters are computed numerically and independently for inhibitory and excitatory input on the basal and apical dendrites, respectively, in a distance dependent and post-synaptic m-type specific manner. Next, point-neuron models for each neuron in the microcircuit are fit to their respective morphologically detailed counterparts. Here, generalized integrate-and-fire point neuron models are used, leveraging a recently published fitting toolbox. The fits are constrained by currents and voltages computed in the morphologically detailed partner neurons with soma corrected synapses at three depolarizations about the user specified operating point. The result is a simplified circuit which is well constrained by the reference circuit, and can be continuously updated as the latter iteratively integrates new data. The modularity of the approach makes it applicable also for other point-neuron and synapse models. The approach is demonstrated on a recently reported reconstruction of a neocortical microcircuit around an in vivo-like working point. The resulting simplified network model is benchmarked to the reference morphologically detailed microcircuit model for a range of simulated network protocols. The simplified network is found to be slightly more sub-critical than the reference, with otherwise good agreement for both quantitative and qualitative validations.
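A minimal sketch of the first step described above (a "soma-corrected" synaptic current obtained by low-pass filtering), assuming a single-exponential filter. The actual workflow fits filter parameters per dendritic region, distance and post-synaptic m-type, which this toy version does not attempt.

```python
import numpy as np

# Toy illustration: a distal synaptic current is low-pass filtered with a single
# exponential kernel to mimic dendritic filtering before injection at the soma.
# Time constants and the current shape are illustrative assumptions.

dt = 0.1                       # ms
t = np.arange(0, 50, dt)       # ms
tau_syn = 2.0                  # ms, synaptic current decay (assumption)
i_syn = (t > 5) * np.exp(-(t - 5) / tau_syn)   # normalized current at the synapse

tau_filt = 8.0                 # ms, effective dendritic filter time constant (assumption)
i_soma = np.zeros_like(i_syn)
for k in range(1, len(t)):     # first-order low-pass: tau_filt * di/dt = i_syn - i
    i_soma[k] = i_soma[k - 1] + dt / tau_filt * (i_syn[k - 1] - i_soma[k - 1])

print(f"peak at synapse: {i_syn.max():.2f}, peak delivered to soma: {i_soma.max():.2f}")
print(f"peak delayed by ~{(np.argmax(i_soma) - np.argmax(i_syn)) * dt:.1f} ms")
```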
Amsalem, O, Gevaert M, Chindemi G, Rössert C, Courcol J-D, Muller E, Schürmann F, Segev I, Markram H.  2016.  BluePyOpt: Leveraging open source software and cloud infrastructure to optimise model parameters in neuroscience.
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parametrising such models to conform to the multitude of available experimental constraints is a global nonlinear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardises several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience-specific use cases.
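For readers unfamiliar with this class of tools, the toy loop below illustrates the general workflow the abstract describes (free parameters, an evaluator that scores a candidate against target features, and a stochastic search). It is deliberately written in plain Python and is not BluePyOpt code; consult the BluePyOpt documentation for the actual API.

```python
import random

# Self-contained toy evolutionary loop showing the moving parts of data-driven
# parameter optimisation: bounded parameters, an evaluator, selection, mutation.
# The model, features and bounds below are invented for the illustration.

BOUNDS = {"g_na": (0.0, 1.0), "g_k": (0.0, 1.0)}        # free parameters (toy)
TARGET = {"rate": 12.0, "width": 1.5}                    # target "features" (toy)

def evaluate(params):
    # Stand-in for simulating a neuron model and extracting e-features;
    # here the mapping is an arbitrary smooth function of the parameters.
    rate = 30.0 * params["g_na"] - 10.0 * params["g_k"]
    width = 0.8 + 2.0 * params["g_k"]
    return abs(rate - TARGET["rate"]) + abs(width - TARGET["width"])  # scalar error

def random_individual():
    return {k: random.uniform(*b) for k, b in BOUNDS.items()}

def mutate(ind, sigma=0.05):
    return {k: min(max(v + random.gauss(0, sigma), BOUNDS[k][0]), BOUNDS[k][1])
            for k, v in ind.items()}

random.seed(0)
population = [random_individual() for _ in range(20)]
for generation in range(50):
    population.sort(key=evaluate)
    parents = population[:5]                              # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = min(population, key=evaluate)
print("best parameters:", {k: round(v, 3) for k, v in best.items()},
      "error:", round(evaluate(best), 3))
```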
Amsalem, O, Verhoog MB, Testa-Silva G, Deitcher Y, Lodder JC, Benavides-Piccione R, Morales J, DeFelipe J, de Kock CPJ, Mansvelder HD et al.  2016.  Unique membrane properties and enhanced signal processing in human neocortical neurons. eLife 2016;5:e16553. DOI: 10.7554/eLife.16553.
The advanced cognitive capabilities of the human brain are often attributed to our recently evolved neocortex. However, it is not known whether the basic building blocks of the human neocortex, the pyramidal neurons, possess unique biophysical properties that might impact on cortical computations. Here we show that layer 2/3 pyramidal neurons from human temporal cortex (HL2/3 PCs) have a specific membrane capacitance (Cm) of ~0.5 µF/cm2, half of the commonly accepted ’universal’ value (~1 µF/cm2) for biological membranes. This finding was predicted by fitting in vitro voltage transients to theoretical transients and then validated by direct measurement of Cm in nucleated patch experiments. Models of 3D reconstructed HL2/3 PCs demonstrated that such a low Cm value significantly enhances both synaptic charge-transfer from dendrites to soma and spike propagation along the axon. This is the first demonstration that human cortical neurons have distinctive membrane properties, suggesting important implications for signal processing in human neocortex.
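A back-of-the-envelope illustration of one consequence of a halved Cm: with the membrane resistance held fixed (the value below is an assumption, not taken from the paper), the membrane time constant Rm·Cm also halves.

```python
# Why a ~0.5 uF/cm^2 specific capacitance matters: the membrane time constant
# tau_m = R_m * C_m halves, so voltage transients charge and decay faster.
# R_m below is an assumed value, not a measurement from the paper.

R_m = 15_000.0          # Ohm*cm^2, assumed specific membrane resistance
for C_m_uF in (1.0, 0.5):                       # uF/cm^2: "universal" vs. human L2/3 value
    C_m = C_m_uF * 1e-6                         # F/cm^2
    tau_ms = R_m * C_m * 1e3                    # ms
    print(f"C_m = {C_m_uF} uF/cm^2 -> tau_m = {tau_ms:.1f} ms")
# With identical R_m, the lower C_m gives tau_m of 7.5 ms instead of 15 ms,
# i.e. briefer synaptic potentials and less temporal smearing en route to the soma.
```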
2015
Markram, H, et al.  2015.  Reconstruction and Simulation of Neocortical Microcircuitry. Cell. 163(2):456-92.
We present a first-draft digital reconstruction of the microcircuitry of somatosensory cortex of juvenile rat. The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm(3) containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses. Simulations reproduce an array of in vitro and in vivo experiments without parameter tuning. Additionally, we find a spectrum of network states with a sharp transition from synchronous to asynchronous activity, modulated by physiological mechanisms. The spectrum of network states, dynamically reconfigured around this transition, supports diverse information processing strategies.
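As a quick sanity check on the scale of the reconstruction, the rounded figures quoted in the abstract imply roughly the following per-neuron and per-connection numbers (simple arithmetic on the quoted values, nothing more).

```python
# Rough arithmetic on the rounded figures quoted in the abstract (illustrative only).
neurons     = 31_000
volume_mm3  = 0.29
connections = 8e6
synapses    = 37e6

print(f"neuron density          ~ {neurons / volume_mm3:,.0f} neurons/mm^3")
print(f"connections per neuron  ~ {connections / neurons:,.0f}")
print(f"synapses per connection ~ {synapses / connections:.1f}")
```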
Ramaswamy, S, et al.  2015.  The neocortical microcircuit collaboration portal: a resource for rat somatosensory cortex. Frontiers in Neural Circuits. 9:44.
Mohan, H, et al.  2015.  Dendritic and Axonal Architecture of Individual Pyramidal Neurons across Layers of Adult Human Neocortex. Cerebral Cortex.
The size and shape of dendrites and axons are strong determinants of neuronal information processing. Our knowledge on neuronal structure and function is primarily based on brains of laboratory animals. Whether it translates to human is not known since quantitative data on "full" human neuronal morphologies are lacking. Here, we obtained human brain tissue during resection surgery and reconstructed basal and apical dendrites and axons of individual neurons across all cortical layers in temporal cortex (Brodmann area 21). Importantly, morphologies did not correlate to etiology, disease severity, or disease duration. Next, we show that human L(ayer) 2 and L3 pyramidal neurons have 3-fold larger dendritic length and increased branch complexity with longer segments compared with temporal cortex neurons from macaque and mouse. Unsupervised cluster analysis classified 88% of human L2 and L3 neurons into human-specific clusters distinct from mouse and macaque neurons. Computational modeling of passive electrical properties to assess the functional impact of large dendrites indicates stronger signal attenuation of electrical inputs compared with mouse. We thus provide a quantitative analysis of "full" human neuron morphologies and present direct evidence that human neurons are not "scaled-up" versions of rodent or macaque neurons, but have unique structural and functional properties.
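To make the "stronger signal attenuation" point concrete, the sketch below evaluates steady-state attenuation in a toy semi-infinite passive cable. The cable parameters are assumptions, and the calculation is far cruder than the compartmental models used in the study.

```python
import math

# Toy semi-infinite passive cable (not the paper's reconstructed morphologies):
# steady-state voltage attenuation exp(-L/lambda), with
# lambda = sqrt(R_m * d / (4 * R_i)). All parameter values are assumptions.

R_m = 15_000.0      # Ohm*cm^2, specific membrane resistance (assumed)
R_i = 150.0         # Ohm*cm,  axial resistivity (assumed)
d   = 2e-4          # cm, dendritic diameter (2 um, assumed)

lam_um = math.sqrt(R_m * d / (4.0 * R_i)) * 1e4    # space constant in um
for L_um in (300.0, 900.0):                        # shorter vs ~3x longer dendritic path
    attenuation = math.exp(-L_um / lam_um)
    print(f"path {L_um:.0f} um: V_soma/V_synapse ~ {attenuation:.2f} (lambda ~ {lam_um:.0f} um)")
```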
2014
Hay, E, Segev I.  2014.  Dendritic Excitability and Gain Control in Recurrent Cortical Microcircuits. Cerebral Cortex. doi:10.1093/cercor/bhu200.
Layer 5 thick tufted pyramidal cells (TTCs) in the neocortex are particularly electrically complex, owing to their highly excitable dendrites. The interplay between dendritic nonlinearities and recurrent cortical microcircuit activity in shaping network response is largely unknown. We simulated detailed conductance-based models of TTCs forming recurrent microcircuits that were interconnected as found experimentally; the network was embedded in a realistic background synaptic activity. TTC microcircuits significantly amplified brief thalamocortical inputs; this cortical gain was mediated by back-propagation-activated N-methyl-d-aspartate depolarizations and dendritic back-propagation-activated Ca2+ spike firing, ignited by the coincidence of the thalamic-activated somatic spike and local dendritic synaptic inputs originating from the cortical microcircuit. Surprisingly, dendritic nonlinearities in TTC microcircuits linearly multiplied thalamic inputs, amplifying them while maintaining input selectivity. Our findings indicate that dendritic nonlinearities are pivotal in controlling the gain and the computational functions of TTC microcircuits, which serve as a dominant output source for the neocortex.
2013
Bar-Ilan, L, Gidon A, Segev I.  2013.  The role of dendritic inhibition in shaping the plasticity of excitatory synapses. Frontiers in Neural Circuits. doi: 10.3389/fncir.2012.00118.
Hay, E, Schürmann F, Markram H, Segev I.  2013.  Preserving axosomatic spiking features despite diverse dendritic morphology. J Neurophysiol. 109:2972-2981. doi: 10.1152/jn.00048.2013.
Throughout the nervous system, cells belonging to a certain electrical class (e-class), sharing high similarity in firing response properties, may nevertheless have widely variable dendritic morphologies. To quantify the effect of this morphological variability on the firing of layer 5 thick-tufted pyramidal cells (TTCs), a detailed conductance-based model was constructed for a three-dimensional reconstructed exemplar TTC. The model exhibited spike initiation in the axon and reproduced the characteristic features of individual spikes, as well as of the firing properties at the soma, as recorded in a population of TTCs in young Wistar rats. When using these model parameters over the population of 28 three-dimensional reconstructed TTCs, both axonal and somatic ion channel densities had to be scaled linearly with the conductance load imposed on each of these compartments. Otherwise, the firing of model cells deviated, sometimes very significantly, from the experimental variability of the TTC e-class. The study provides experimentally testable predictions regarding the coregulation of axosomatic membrane ion channel densities for cells with different dendritic conductance load, together with a simple and systematic method for generating reliable conductance-based models for the whole population of modeled neurons belonging to a particular e-class, with variable morphology as found experimentally.
Sarid, L, Feldmeyer D, Gidon A, Sakmann B, Segev I.  2013.  Contribution of Intracolumnar Layer 2/3-to-Layer 2/3 Excitatory Connections in Shaping the Response to Whisker Deflection in Rat Barrel Cortex. Cerebral Cortex 2013; doi: 10.1093/cercor/bht268.
Yaron-Jakoubovitch, A, Koch C, Segev I.  2013.  The unimodal distribution of subthreshold, ongoing activity in cortical networks. Frontiers in Neural Circuits. doi: 10.3389/fncir.2013.00116.
2012
Gidon, A, Segev I.  2012.  Principles governing the operation of synaptic inhibition in dendrites. Neuron. 75(2):330-41.
Synaptic inhibition plays a key role in shaping the dynamics of neuronal networks and selecting cell assemblies. Typically, an inhibitory axon contacts a particular dendritic subdomain of its target neuron, where it often makes 10-20 synapses, sometimes on very distal branches. The functional implications of such a connectivity pattern are not well understood. Our experimentally based theoretical study highlights several new and counterintuitive principles for dendritic inhibition. We show that distal "off-path" rather than proximal "on-path" inhibition effectively dampens proximal excitable dendritic "hotspots," thus powerfully controlling the neuron's output. Additionally, with multiple synaptic contacts, inhibition operates globally, spreading centripetally hundreds of micrometers from the inhibitory synapses. Consequently, inhibition in regions lacking inhibitory synapses may exceed that at the synaptic sites themselves. These results offer new insights into the synergetic effect of dendritic inhibition in controlling dendritic excitability and plasticity and in dynamically molding functional dendritic subdomains and their output.
Torben-Nielsen, B, Segev I, Yarom Y.  2012.  The generation of phase differences and frequency changes in a network model of inferior olive subthreshold oscillations. PLoS Computational Biology. 8(7):e1002580.
It is commonly accepted that the Inferior Olive (IO) provides a timing signal to the cerebellum. Stable subthreshold oscillations in the IO can facilitate accurate timing by phase-locking spikes to the peaks of the oscillation. Several theoretical models accounting for the synchronized subthreshold oscillations have been proposed; however, two experimental observations remain an enigma. The first is the observation of frequent alterations in the frequency of the oscillations. The second is the observation of constant phase differences between simultaneously recorded neurons. In order to account for these two observations, we constructed a canonical network model based on anatomical and physiological data from the IO. The constructed network is characterized by clustering of neurons with similar conductance densities, and by electrical coupling between neurons. Neurons inside a cluster are densely connected with weak strengths, while neurons belonging to different clusters are sparsely connected with stronger connections. We found that this type of network can robustly display stable subthreshold oscillations. The overall frequency of the network changes with the strength of the inter-cluster connections, and phase differences occur between neurons of different clusters. Moreover, the phase differences provide a mechanistic explanation for the experimentally observed propagating waves of activity in the IO. We conclude that the architecture of the network of electrically coupled neurons in combination with modulation of the inter-cluster coupling strengths can account for the experimentally observed frequency changes and the phase differences.
Nowik, I, Zamir S, Segev I.  2012.  Losing the battle but winning the war: game theoretic analysis of the competition between motoneurons innervating a skeletal muscle. Frontiers in Computational Neuroscience. 6:16.
The fibers in a skeletal muscle are divided into groups called "muscle units" whereby each muscle unit is innervated by a single neuron. It was found that neurons with low activation thresholds have smaller muscle units than neurons with higher activation thresholds. This results in a fixed recruitment order of muscle units, from smallest to largest, called the "size principle." It is thought that the size principle results from a competitive process, taking place after birth, between the neurons innervating the muscle. The underlying mechanism of the competition was not understood. Moreover, the results in the majority of experiments that manipulated the activity during the competition period seemed to contradict the size principle. Experiments on isolated muscle fibers showed that the competition is governed by a Hebbian-like rule, whereby neurons with low activation thresholds have a competitive advantage at any single muscle fiber. Thus neurons with low activation thresholds are expected to have larger muscle units, in contradiction to what is seen empirically. This state of affairs was termed "paradoxical." In the present study we developed a new game-theoretic framework to analyze such competitive biological processes. In this game, neurons are the players competing to innervate a maximal number of muscle fibers. We showed that in order to innervate more muscle fibers, it is advantageous to win the later competitions (as the neurons with higher activation thresholds do). This both explains the size principle and resolves the seemingly paradoxical experimental data. Our model establishes that the competition at each muscle fiber may indeed be Hebbian and that the size principle still emerges from these competitions as an overall property of the system. Thus, the less active neurons "lose the battle but win the war." Our model provides experimentally testable predictions. The new game-theoretic approach may be applied to competitions in other biological systems.
Druckmann, S, Hill S, Schürmann F, Markram H, Segev I.  2012.  A Hierarchical Structure of Cortical Interneuron Electrical Diversity Revealed by Automated Statistical Analysis. Cerebral Cortex. doi:10.1093/cercor/bhs290.
Although the diversity of cortical interneuron electrical properties is well recognized, the number of distinct electrical types (e-types) is still a matter of debate. Recently, descriptions of interneuron variability were standardized by multiple laboratories on the basis of a subjective classification scheme as set out by the Petilla convention (Petilla Interneuron Nomenclature Group, PING). Here, we present a quantitative, statistical analysis of a database of nearly five hundred neurons manually annotated according to the PING nomenclature. For each cell, 38 features were extracted from responses to suprathreshold current stimuli and statistically analyzed to examine whether cortical interneurons subdivide into e-types. We showed that the partitioning into different e-types is indeed the major component of data variability. The analysis suggests refining the PING e-type classification to be hierarchical, whereby most variability is first captured within a coarse subpartition, and then subsequently divided into finer subpartitions. The coarse partition matches the well-known partitioning of interneurons into fast spiking and adapting cells. Finer subpartitions match the burst, continuous, and delayed subtypes. Additionally, our analysis enabled the ranking of features according to their ability to differentiate among e-types. We showed that our quantitative e-type assignment is more than 90% accurate and manages to catch several human errors.
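The snippet below sketches the flavour of such an analysis: electrophysiological feature vectors are standardized and clustered hierarchically, first coarsely and then more finely. The data are synthetic two-feature stand-ins, not the 38-feature database analysed in the paper.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

# Sketch of hierarchical partitioning of e-features into coarse and fine clusters.
# The two synthetic populations below stand in for fast-spiking vs adapting cells;
# none of these numbers comes from the paper.

rng = np.random.default_rng(0)
fast_spiking = rng.normal([200.0, 0.3], [20.0, 0.05], size=(60, 2))   # [rate, adaptation]
adapting     = rng.normal([ 60.0, 0.8], [15.0, 0.10], size=(60, 2))
X = StandardScaler().fit_transform(np.vstack([fast_spiking, adapting]))

coarse = AgglomerativeClustering(n_clusters=2).fit_predict(X)   # fast-spiking vs adapting
fine   = AgglomerativeClustering(n_clusters=4).fit_predict(X)   # finer subpartitions
print("coarse cluster sizes:", np.bincount(coarse))
print("fine   cluster sizes:", np.bincount(fine))
```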
2011
Druckmann, S, Berger TK, Schürmann F, Hill S, Markram H, Segev I.  2011.  Effective Stimuli for Constructing Reliable Neuron Models. PLoS Comput Biol. 7:e1002133.
Neurons perform complicated non-linear transformations on their input before producing their output: a train of action potentials. This input-output transformation is shaped by the specific composition of ion channels, out of the many possible types, that are embedded in the neuron's membrane. Experimentally, characterizing this transformation relies on injecting different stimuli into the neuron while recording its output; but which of the many possible stimuli should one apply? This combined experimental and theoretical study provides a general theoretical framework for answering this question, examining how different stimuli constrain the space of faithful conductance-based models of the studied neuron. We show that combinations of intracellular step and ramp currents enable the construction of models that both replicate the cell's response and generalize very well to novel stimuli, e.g., to “noisy” stimuli mimicking synaptic activity. We experimentally verified our theoretical predictions on several cortical neuron types. This work presents a novel method for reliably linking the microscopic membrane ion channels to the macroscopic electrical behavior of neurons. It provides a much-needed rationale for selecting a particular stimulus set for studying the input-output properties of neurons and paves the way for standardization of experimental protocols along with construction of reliable neuron models.
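The following sketch simply constructs the two stimulus classes highlighted above, a step and a ramp of injected current, as sampled waveforms. Amplitudes, durations and the sampling step are illustrative choices, not the protocol used in the study.

```python
import numpy as np

# Illustrative construction of step and ramp current stimuli as sampled waveforms.
# All numbers (0.5 nA, 0.8 nA, 1 s window, 0.1 ms time step) are arbitrary choices
# for the sketch, not the study's protocol.

dt = 0.1                                    # ms
t = np.arange(0.0, 2000.0, dt)              # 2 s sweep

step = np.where((t >= 500.0) & (t < 1500.0), 0.5, 0.0)          # 0.5 nA step, 1 s long

ramp = np.zeros_like(t)
in_ramp = (t >= 500.0) & (t < 1500.0)
ramp[in_ramp] = 0.8 * (t[in_ramp] - 500.0) / 1000.0             # 0 -> 0.8 nA linear ramp

# Either array could be handed to a simulator as an injected-current waveform,
# one value per time step of a current-clamp point process.
print(f"step: peak {step.max():.1f} nA, ramp: peak {ramp.max():.2f} nA, {len(t)} samples")
```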
Hay, E, Hill S, Schürmann F, Markram H, Segev I.  2011.  Models of Neocortical Layer 5b Pyramidal Cells Capturing a Wide Range of Dendritic and Perisomatic Active Properties. PLoS Comput Biol. 7:e1002107.
The pyramidal cell of layer 5b in the mammalian neocortex extends its dendritic tree to all six layers of cortex, thus receiving inputs from the entire cortical column and supplying the major output of the column to other brain areas. L5b pyramidal cells have been the subject of extensive experimental and modeling studies, yet realistic models of these cells that faithfully reproduce both their perisomatic Na+ and dendritic Ca2+ firing behaviors are still lacking. Using an automated algorithm and a large body of experimental data, we generated a set of models that faithfully replicate a range of active dendritic and perisomatic properties of L5b pyramidal cells, as well as the experimental variability of the properties. Furthermore, we show a useful way to analyze model parameters with our sets of models, which enabled us to identify some of the mechanisms responsible for the dynamic properties of L5b pyramidal cells as well as mechanisms that are sensitive to morphological changes. This framework can be used to develop a database of faithful models for other neuron types. The models we present can serve as a powerful tool for theoretical investigations of the contribution of single-cell dynamics to network activity and its computational capabilities.
2010
Bar-Ilan, L, Gidon A, Segev I.  2010.  Interregional synaptic competition in neurons with multiple STDP-inducing signals. Journal of Neurophysiology. 105:989-998, 2011.
Neocortical layer 5 (L5) pyramidal cells have at least two spike initiation zones: Na(+)-spikes are generated near the soma and Ca(2+)-spikes at the apical dendritic tuft. These spikes interact with each other and serve as signals for synaptic plasticity. The present computational study explores the implications of having two spike-timing-dependent plasticity (STDP) signals in a neuron, each with its respective regional population of synaptic "pupils". In a detailed model of an L5 pyramidal neuron, competition emerges between synapses belonging to different regions, on top of the competition among synapses within each region, which characterizes the STDP mechanism. Inter-regional competition results in strengthening of one group of synapses, which ultimately dominates cell firing, at the expense of weakening synapses in other regions. This novel type of competition is inherent to dendrites with multiple regional signals for Hebbian plasticity. Surprisingly, such inter-regional competition exists even in a simplified model of two identical coupled compartments. We find that in a model of an L5 pyramidal cell the different synaptic sub-populations "live in peace" when the induction of Ca(2+)-spikes requires the back-propagating action potential (BPAP). Thus, we suggest a new key role for the BPAP, namely to maintain the balance between synaptic efficacies throughout the dendritic tree, thereby sustaining the functional integrity of the entire neuron.
2009
Gidon, A, Segev I.  2009.  Spike-timing-dependent synaptic plasticity and synaptic democracy in dendrites. Journal of Neurophysiology. 101:3226–3234.
We explored in a computational study the effect of dendrites on excitatory synapses undergoing spike-timing-dependent plasticity (STDP), using both cylindrical dendritic models and reconstructed dendritic trees. We show that even if the initial strength, g(peak), of distal synapses is augmented in a location independent manner, the efficacy of distal synapses diminishes following STDP and proximal synapses would eventually dominate. Indeed, proximal synapses always win over distal synapses following a linear STDP rule, independent of the initial synaptic strength distribution in the dendritic tree. This effect is more pronounced as the dendritic cable length increases, but it does not depend on the dendritic branching structure. Adding a small multiplicative component to the linear STDP rule, whereby already strong synapses tend to be less potentiated than depressed (and vice versa for weak synapses), did partially "save" distal synapses from "dying out." Another successful strategy for balancing the efficacy of distal and proximal synapses following STDP is to increase the upper bound for the synaptic conductance (g(max)) with distance from the soma. We conclude by discussing an experiment for assessing which of these possible strategies might actually operate in dendrites.
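A compact sketch of the two plasticity rules contrasted in the abstract: a purely additive (linear) pair-based STDP update, and the same update with a small weight-dependent (multiplicative) component so that strong synapses are potentiated less and depressed more. Time constants and amplitudes are illustrative assumptions, not the values used in the study.

```python
import math

# Sketch of the plasticity rules compared in the abstract (parameters illustrative).
# dt_spike = t_post - t_pre in ms; g is the peak synaptic conductance (normalized).

TAU = 20.0                          # ms, STDP time constant (assumption)
A_PLUS, A_MINUS = 0.01, 0.012       # potentiation / depression amplitudes (assumption)

def stdp_additive(g, dt_spike, g_max=1.0):
    """Purely additive ("linear") rule: the update ignores the current weight."""
    if dt_spike > 0:                                  # pre before post -> potentiate
        g += A_PLUS * math.exp(-dt_spike / TAU)
    else:                                             # post before pre -> depress
        g -= A_MINUS * math.exp(dt_spike / TAU)
    return min(max(g, 0.0), g_max)

def stdp_mixed(g, dt_spike, g_max=1.0, mu=0.2):
    """Adds a small multiplicative component: strong synapses potentiate less and
    weak synapses depress less (mu = 0 recovers the additive rule)."""
    if dt_spike > 0:
        g += A_PLUS * (1.0 - g / g_max) ** mu * math.exp(-dt_spike / TAU)
    else:
        g -= A_MINUS * (g / g_max) ** mu * math.exp(dt_spike / TAU)
    return min(max(g, 0.0), g_max)

# A strong synapse (g = 0.9) gains less from the same pairing under the mixed rule.
print(stdp_additive(0.9, +5.0), stdp_mixed(0.9, +5.0))
```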
2008
Druckmann, S, Berger TK, Hill S, Schürmann F, Markram H, Segev I.  2008.  Evaluating automated parameter constraining procedures of neuron models by experimental and surrogate data. Biological Cybernetics. 99:371–379.
Neuron models, in particular conductance-based compartmental models, often have numerous parameters that cannot be directly determined experimentally and must be constrained by an optimization procedure. A common practice in evaluating the utility of such procedures is using a previously developed model to generate surrogate data (e.g., traces of spikes following step current pulses) and then challenging the algorithm to recover the original parameters (e.g., the value of maximal ion channel conductances) that were used to generate the data. In this fashion, the success or failure of the model fitting procedure to find the original parameters can be easily determined. Here we show that some model fitting procedures that provide an excellent fit in the case of such model-to-model comparisons provide ill-balanced results when applied to experimental data. The main reason is that surrogate and experimental data test different aspects of the algorithm's function. When considering model-generated surrogate data, the algorithm is required to locate a perfect solution that is known to exist. In contrast, when considering experimental target data, there is no guarantee that a perfect solution is part of the search space. In this case, the optimization procedure must rank all imperfect approximations and ultimately select the best approximation. This aspect is not tested at all when considering surrogate data, since at least one perfect solution is known to exist (the original parameters), making all approximations unnecessary. Furthermore, we demonstrate that distance functions based on extracting a set of features from the target data (such as time-to-first-spike, spike width, spike frequency, etc.), rather than using the original data (e.g., the whole spike trace) as the target for fitting, are capable of finding imperfect solutions that are good approximations of the experimental data.
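As an illustration of the feature-based distance functions advocated above, the sketch below scores candidate models by the mean absolute z-score of a few extracted features relative to experimental target statistics. The feature names and numbers are invented for the example, not taken from the paper.

```python
import numpy as np

# Feature-based distance: compare models to the experimental target via a handful
# of extracted features, each normalized by the experimental standard deviation,
# instead of comparing raw voltage traces point by point. Values are illustrative.

TARGET_MEAN = {"spike_count": 8.0, "time_to_first_spike": 35.0, "spike_width": 1.2}
TARGET_STD  = {"spike_count": 1.0, "time_to_first_spike": 5.0,  "spike_width": 0.1}

def feature_distance(model_features):
    """Mean absolute z-score of the model's features relative to the target data."""
    scores = [abs(model_features[k] - TARGET_MEAN[k]) / TARGET_STD[k] for k in TARGET_MEAN]
    return float(np.mean(scores))

candidate_a = {"spike_count": 9.0, "time_to_first_spike": 33.0, "spike_width": 1.25}
candidate_b = {"spike_count": 5.0, "time_to_first_spike": 60.0, "spike_width": 1.60}
print("candidate A:", round(feature_distance(candidate_a), 2))   # close to the data
print("candidate B:", round(feature_distance(candidate_b), 2))   # ranked worse
```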