This project aims to develop an online user interface for building simple user-designed neural circuits, to serve as demonstration and educational material for high school and college students and educators. The user interacts with a graphical user interface (GUI) whose front end allows the user to assemble, probe, and alter simple neural circuits, while its back end implements the FitzHugh-Nagumo neuron model in Java as a system of configurable point neurons with adjustable connection matrices and a low-order differential-equation solver, allowing real-time computation and visualization of action potential generation and propagation. The interface can thus demonstrate the complex nonlinear interactions that drive neurons while building understanding of how the brain works at a cellular and computational level in a way accessible to the general public. The final aims are publication of the GUI online, where users can interact with it, and extension of the model and interface to demonstrate more complex behaviors and greater user definability.
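The back end's core computation can be illustrated with a short sketch (in Python rather than Java, with illustrative parameter values; the forward-Euler loop here is only a stand-in for whatever low-order solver the GUI actually uses):

```python
import numpy as np

def fitzhugh_nagumo(I_ext, T=200.0, dt=0.1, a=0.7, b=0.8, tau=12.5):
    """Integrate the FitzHugh-Nagumo equations with forward Euler:
    dv/dt = v - v^3/3 - w + I_ext
    dw/dt = (v + a - b*w) / tau
    """
    n = int(T / dt)
    v = np.empty(n)
    w = np.empty(n)
    v[0], w[0] = -1.0, 1.0
    for t in range(n - 1):
        v[t + 1] = v[t] + dt * (v[t] - v[t]**3 / 3 - w[t] + I_ext)
        w[t + 1] = w[t] + dt * (v[t] + a - b * w[t]) / tau
    return v, w

# With a constant input in the right range the model sits on a limit cycle
v, w = fitzhugh_nagumo(I_ext=0.5)
```

Under constant drive of this magnitude the fast variable `v` traces repetitive action-potential-like excursions, which is the behavior the GUI computes and visualizes in real time.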

The hippocampus is an important structure in the mammalian brain for forming new memories, storing them independently, and retrieving them. The ability to associate similar experiences with one another, and to distinguish between them, is an important component of learning. This project attempts to model the learning dynamics of the Dentate Gyrus (DG) and CA3 regions of the hippocampus, which revolve around two major mechanisms: pattern separation and pattern completion. Based on previous research, our computational model incorporates the Hebbian learning rule. We attempt to implement the key concept of pattern separation and highlight the constraints involved in our model.
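The Hebbian rule the model builds on can be sketched minimally (Python, with hypothetical toy patterns; the actual DG/CA3 model is more elaborate):

```python
import numpy as np

def hebbian_update(W, pre, post, eta=0.05):
    """Basic Hebbian rule: a weight grows when its pre- and postsynaptic
    units are active together (dW_ij = eta * post_i * pre_j)."""
    return W + eta * np.outer(post, pre)

# Toy example: repeatedly pair one input pattern with one output pattern
pre_a  = np.array([1.0, 1.0, 0.0, 0.0])   # presynaptic activity
post_a = np.array([1.0, 0.0])             # postsynaptic activity

W = np.zeros((2, 4))
for _ in range(20):
    W = hebbian_update(W, pre_a, post_a)
```

Only synapses between co-active units are strengthened; the rest stay at zero. This associative core is what pattern-separation and pattern-completion dynamics in DG/CA3 models are built on.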

We used a model of spike-timing-dependent plasticity (STDP) based on
calcium signaling to test the effects of correlated inputs on synaptic weight
distributions. Gilson and Fukai (2011) used amplitudes for the STDP curve
that depended on synaptic strength and demonstrated the emergence of
stable bimodal weight distributions. Those sets of synapses with correlated
inputs were more strongly potentiated than those without. However, this
model did not include any biophysical mechanism for STDP. Whereas most
versions of STDP model the time difference between pre- and postsynaptic
spikes explicitly, as in the above study, Shouval et al. (2002) used a model of
NMDA-R-dependent calcium signaling to effect long-term potentiation and
depression in a manner similar to traditional STDP. We implemented a
biophysical model inspired by this work, using weight dependence along with
modified calcium dynamics. We found systematic potentiation of strongly
correlated inputs and depression of weakly correlated inputs.
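The calcium dependence at the heart of a Shouval-style rule can be sketched as follows (Python; the piecewise Ω function and its thresholds are a simplified, qualitative stand-in for the published form, and the weight dependence of the full model is omitted):

```python
import numpy as np

def omega(ca, theta_d=0.35, theta_p=0.55):
    """Qualitative calcium control of plasticity: no change at low calcium,
    depression at intermediate levels, potentiation at high levels."""
    return np.where(ca < theta_d, 0.0,
                    np.where(ca < theta_p, -0.5 * (ca - theta_d),
                             ca - theta_p))

def update_weight(w, ca, dt=1.0, eta=0.1):
    """One Euler step of dw/dt = eta * omega(ca)."""
    return w + dt * eta * omega(ca)
```

Because pre-before-post pairings drive larger NMDA-R calcium transients than post-before-pre pairings, a rule of this shape yields STDP-like timing dependence without modeling spike-time differences explicitly.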

Granger causality was explored as a method for determining
effective/functional vascular connections between lung regions in a
two-dimensional time series of blood-flow MR images generated using
Arterial Spin Labeling (ASL). Two approaches to segmentation were
employed in an attempt to optimize the ratio of spatial to temporal data:
lobar segmentation and 'high-resolution' segmentation of the lung field. While
lobar segmentation yielded significant causal interactions in data
collected during normoxic breathing, an insufficient number of observations
hampered detection of causal interactions during either hypoxia or hyperoxia.
In contrast, higher-resolution segmentation of data acquired at
twice the temporal resolution (5 s as opposed to 10 s) revealed significant
differences in effective network connectivity and flow autonomy.
Network connectivity and autonomy were enhanced during hypoxic
breathing compared to normoxia or hyperoxia, suggesting a level of active
flow control present in the healthy lung that becomes engaged when oxygen
levels are reduced.
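The pairwise criterion underlying this analysis can be sketched in Python with synthetic data (the study's actual implementation, lag selection, and significance testing are not reproduced here; this shows only the variance-ratio idea):

```python
import numpy as np

def granger_strength(x, y, lag=2):
    """Log-ratio of residual variances between an AR model of x alone and one
    augmented with lagged y; values well above zero indicate that y's past
    improves prediction of x (the Granger criterion)."""
    n = len(x)
    target = x[lag:]
    lags_x = np.column_stack([x[lag - k:n - k] for k in range(1, lag + 1)])
    lags_y = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
    ones = np.ones((n - lag, 1))
    restricted = np.hstack([ones, lags_x])            # x's own past only
    full = np.hstack([ones, lags_x, lags_y])          # plus y's past
    res_r = target - restricted @ np.linalg.lstsq(restricted, target, rcond=None)[0]
    res_f = target - full @ np.linalg.lstsq(full, target, rcond=None)[0]
    return float(np.log(np.var(res_r) / np.var(res_f)))

# Synthetic check: y drives x with a one-step delay
rng = np.random.default_rng(1)
y = rng.normal(size=500)
x = np.zeros(500)
x[1:] = 0.8 * y[:-1] + 0.1 * rng.normal(size=499)
gc_xy = granger_strength(x, y)   # influence of y on x: large
gc_yx = granger_strength(y, x)   # influence of x on y: near zero
```

For this synthetic pair the measure is strongly asymmetric, which is the signature used to infer directed (effective) connections between lung regions.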

Chronic pain is estimated to afflict millions of people worldwide, affecting
their well-being and quality of life. Owing to the complexities of the human
brain, chronic pain can be very challenging to treat, with only 40-60% of patients
achieving partial relief. The source of chronic pain is neurons with abnormal,
hyper-excitable firing properties. Botulinum toxin, like other
neurotoxins, interacts with the neurotransmitter release vesicles and prevents
the release of acetylcholine, which can be exploited to reduce the hyperactivity of
pain neurons. Transcranial magnetic stimulation (TMS) is a noninvasive method
of brain stimulation that can alter the firing frequency of neurons. This
paper investigates the use of neurotoxin in conjunction with TMS in restoring
normal neuronal behavior.

Oscillating electric fields have shown great promise for indirect stimulation of damaged axons, classification of synchronizing neural networks, and inhibition of nociceptive signals. So far, models in this course have focused on the membrane voltage dynamics of neurons in terms of their ionic components. Electric fields, however, also induce changes in membrane voltage without altering the physical properties of neurons. With this in mind, we aim to analyze the effect of an external electric field on primary and secondary afferent neurons immediately following a nociceptive stimulus. Applying a field in this manner is expected to impede the downstream signal, thereby preventing the sensation of transient pain.

In recent years transcranial alternating current stimulation (tACS) has emerged as a popular tool for the study of
rhythmic brain activity. A great deal of focus has been directed toward applying tACS to modulate the spike timing
dependent plasticity (STDP) of occipital neural networks for potential therapeutic purposes. While little clinical
evidence has emerged supporting the efficacy of tACS as a treatment for cognitive and psychological disorders arising
from the occipital lobe, the ability of tACS to entrain and modulate the frequency, and consequent strength, of occipital
network firing has been well documented in several EEG studies. However, little work has been done to develop robust
models that can be used to study this effect in silico. In the present study, we develop a micro-network of modified
FitzHugh-Nagumo neurons to model the effect of tACS on naturally oscillating neural networks. We conclude that
while tACS modulates the resonant frequency of the network via the STDP of its constituent neurons, its effects are
transient and that more work is necessary to accurately model the physiological dynamics of large scale neural
network firing.

Idling signatures of the resting brain have become a recent topic of interest in the functional MRI literature: baseline activation signals are being spatially localized and interpreted as a connected, synchronized neural network, a so-called 'default mode' network. Further work is being done to couple MRI and EEG functional and structural connectivity analyses of this network. The goal of this paper is to examine eyes-open and eyes-closed baseline EEG data from individuals with autism and from non-autistic individuals, to identify changes in causal information flow among maximally independent components of the EEG signal in the default state. Further, a qualitative and perhaps quantitative comparison of diffusion-weighted or diffusion-tensor imaging data from the same subjects will provide a bimodal approach to extracting distinct features of the default mode network across populations, in order to identify functional and structural characterizations of neural deficits in autism.

Independent Component Analysis (ICA) and related algorithms provide a model for explaining how sensory information is encoded in our brains. However, it remains unclear how a neural network uses its biological plasticity to achieve this ICA-like processing: maximization of information transmission or minimization of redundancy. Here, we consider a neuron model proposed by Savin, Joshi, and Triesch (2010), which includes three forms of plasticity found in real neural networks: spike-timing dependent plasticity (STDP), intrinsic plasticity (IP), and synaptic scaling. We investigated both theoretical and experimental aspects of the model and found that the three types of plasticity play important but distinct roles in the efficiency and quality of learning. Although this neuron model cannot compete with classic ICA algorithms in solving the blind source separation problem, it provides a biological perspective that can potentially explain how our brains learn and why they have such high capacity and complexity.

This project investigates schemes for systematically reducing the number of differential equations required for a biophysically realistic neuron model. The original scheme was introduced by Thomas Kepler in 1992 and has been applied to many neuron models, such as the Hodgkin-Huxley model, the A-current model, and the stomatogastric neuron. The general idea of the scheme is to invert each gating variable to obtain a corresponding equivalent potential. Some of these potentials have waveforms similar to the action potential, and the reduction is based on that fact; singular perturbation theory and principal component analysis can then be used. In this project, we apply the scheme to reduce the HVc neuron model, which contains 11 gating variables. The goal is to reduce the number of gating variables as far as possible while retaining as high a degree of fidelity to the original system as possible.
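The inversion step can be made concrete for a gating variable with a Boltzmann steady-state curve (a short Python sketch with illustrative parameters; the HVc model's actual activation functions differ):

```python
import numpy as np

def x_inf(V, V_half=-50.0, k=5.0):
    """Steady-state activation of a gating variable (Boltzmann form)."""
    return 1.0 / (1.0 + np.exp(-(V - V_half) / k))

def equivalent_potential(x, V_half=-50.0, k=5.0):
    """Invert x_inf to map a gating variable's value back onto a voltage
    axis; this is the 'equivalent potential' of Kepler-style reduction."""
    return V_half - k * np.log(1.0 / x - 1.0)

# Round trip: a voltage maps to an activation and back to the same voltage
V_eq = equivalent_potential(x_inf(-40.0))
```

Equivalent potentials that track the membrane potential (or each other) closely can then be lumped together, which is what reduces the count of independent differential equations.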

One well-established phenomenon in the CNS is that not every action potential triggers
neurotransmitter release from a single presynaptic terminal. Observations in hippocampal
slices and from two-photon calcium imaging in vivo suggest that the probability of release at a
single excitatory synapse is significantly less than 0.5. We wanted to examine the impact of
the probability of release, and of variance in that probability, on synaptic input to
pyramidal neurons. To accomplish this, we modeled inputs from probabilistic
upstream neurons, varying the mean and variance of the probability of vesicle release for
the same upstream spiking pattern and distribution of synapses, and examined the impact
on spiking in a single downstream layer 2/3 pyramidal neuron. We used NEURON to
model this biophysically realistic downstream pyramidal cell.
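The stochastic core of the input model can be sketched independently of NEURON (Python; the spike train and probabilities here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def release_events(spike_times, p_release, rng):
    """Pass a presynaptic spike train through Bernoulli vesicle release:
    each spike releases transmitter with probability p_release."""
    return [t for t in spike_times if rng.random() < p_release]

# Same upstream spike pattern, two different mean release probabilities
spikes = np.arange(0.0, 1000.0, 10.0)      # regular 100 Hz train (ms)
low_p  = release_events(spikes, 0.2, rng)
high_p = release_events(spikes, 0.5, rng)
```

Each filtered event list would then drive a synapse on the biophysical layer 2/3 cell; drawing `p_release` per synapse from distributions with different means and variances gives the manipulation described above.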

The long-term prognosis of individuals with autism spectrum disorder (ASD) can improve dramatically with early intervention, making early detection critical to the treatment of ASD. Unfortunately, the current most reliable test for early autism diagnosis, the Autism Diagnostic Observation Schedule-Toddler Module, is based on behavioral observations, which limits the age of detection to when clinically defined behavioral symptoms begin to manifest. Rather than relying on behavioral data, diagnostic approaches based on genetics may further decrease the age at which ASD can be detected. Numerous studies have indicated that autism has a strong genetic basis, although the genetic mechanisms that lead to the highly variable ASD phenotypes are complex and poorly understood. Supervised machine learning, such as artificial neural networks (ANNs), has successfully predicted cancer classifications that typically elude routine histology. The goal of this project is to train an ANN to output correct ASD diagnoses given microarray data collected from toddlers with varying degrees of ASD, typically developing toddlers, and developmentally delayed toddlers who serve as a contrast group. The ANN will be built using MATLAB's Neural Network Toolbox, with the specific approach of applying the feedforward resilient backpropagation learning algorithm to a multilayer perceptron. A supporting goal of this project is to determine the number of layers, and of neurons in each layer, that results in the most accurate prediction of ASD diagnosis.

We propose an implementation of Izhikevich spiking neural networks to solve a 2D path
finding problem. Given a 2D grid of size nxn, we can solve the path finding problem with a
spiking neural network consisting of n^2 neural populations. We propose that the activation of
a population encodes the exploration of a certain state with convergence encoding the next best
state. The series of next best states terminating at the destination node is indeed the shortest
path to the destination. This paper discusses the theoretical foundation of this research along
with a simplified experiment on a fully connected 2x2 grid graph.
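A single Izhikevich unit of the kind such populations are built from can be sketched as follows (Python, Euler integration, regular-spiking parameters from Izhikevich (2003); the network construction and population coding are not reproduced here):

```python
def izhikevich(I, T=200.0, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich (2003) neuron: v' = 0.04*v^2 + 5*v + 140 - u + I,
    u' = a*(b*v - u); when v >= 30 mV, record a spike and reset
    v <- c, u <- u + d."""
    v, u = -65.0, b * -65.0
    spike_times = []
    steps = int(T / dt)
    for step in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times

spike_times = izhikevich(I=10.0)   # constant drive produces repetitive firing
```

In the proposed scheme, populations of such units would be coupled so that spiking activity spreads between neighboring grid states, with convergence of activity encoding the next best state along the path.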

Restricted Boltzmann Machines (RBMs) have been demonstrated to perform efficiently
on a variety of applications, such as dimensionality reduction and classification.
Implementing RBMs on neuromorphic hardware has certain advantages, particularly from a
concurrency and low-power perspective. This paper outlines some of the requirements
involved for neuromorphic adaptation of an RBM and attempts to address these issues with
suitably targeted modifications for sampling and weight updates. Results show the feasibility
of such alterations which will serve as a guide for future implementation of such algorithms
in VLSI arrays of spiking neurons.
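The update that has to be mapped onto spiking hardware is contrastive divergence; a conventional (non-spiking) CD-1 step for a binary RBM can be sketched as a baseline (Python; biases are omitted for brevity and the toy pattern is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, v0, rng, eta=0.1):
    """One contrastive-divergence (CD-1) step for a binary RBM without
    biases: sample hidden units, reconstruct visibles, resample hiddens,
    and move weights toward the data statistics."""
    ph0 = sigmoid(v0 @ W)                          # P(h=1 | data)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T)                        # reconstruction
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W)                          # P(h=1 | reconstruction)
    return W + eta * (np.outer(v0, ph0) - np.outer(v1, ph1))

# Train on a single repeated pattern; the model should come to prefer it
v_data = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 0.0])
W = 0.01 * rng.normal(size=(6, 4))
for _ in range(300):
    W = cd1_update(W, v_data, rng)
```

A neuromorphic adaptation replaces the explicit Gibbs sampling with spiking dynamics whose firing statistics approximate these conditional probabilities, which is where the suitably targeted modifications to sampling and weight updates come in.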

Spike-timing-dependent plasticity (STDP), an asymmetric form of Hebbian learning, describes how synaptic strength between neurons changes according to the time difference between pre- and postsynaptic spikes [1]. It is widely believed that synaptic plasticity underlies how the brain learns and stores information, so understanding STDP helps in studying the process of learning in the brain. Moreover, hardware implementation of STDP is of great importance in developing brain-machine interfaces. In this paper, we first simulate weight change with respect to a fixed time difference in Matlab. We then design circuits to investigate continuous-time STDP by showing weight changes between two neurons. The circuit, which includes an integrate-and-fire (I&F) neuron module, a synaptic trace module, and a weight tower module, is designed and simulated in the Cadence design environment. Finally, we compare the circuit simulation results with the Matlab simulation results.
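The weight-change curve simulated in the first step has the classic double-exponential form, which can be sketched as (Python; amplitudes and time constant are illustrative):

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Classic STDP window as a function of dt = t_post - t_pre (ms):
    pre-before-post (dt > 0) potentiates, post-before-pre (dt < 0)
    depresses, both decaying exponentially with |dt|."""
    return np.where(dt_ms > 0,
                    a_plus * np.exp(-dt_ms / tau),
                    -a_minus * np.exp(dt_ms / tau))

dts = np.linspace(-100, 100, 201)
curve = stdp_dw(dts)
```

In trace-based hardware implementations, the two exponentials are typically realized as decaying pre- and postsynaptic traces that are sampled at spike events, which is the role a synaptic trace module plays.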

A constraint-based modeling technique is presented for analyzing neural connectivity
networks. Drawing inspiration from metabolic modeling, the method considers
all feasible connectivity states in a neural network based on fundamental
connectivity constraints. The published connectivity network of *Caenorhabditis elegans* is used as an example of the method.

In this project, Graphical User Interfaces (GUIs) are designed in MATLAB to implement spike sorting and behavioral analysis for the interactive playback experiments and are tested on a behavioral
experiment. The basic aim of the software is to output both spike times and behavioral times, so that
dynamics of the neurons could be studied with respect to various behavioral events in the interactive
playback experiments. In the first part of the project, a GUI for spike sorting was designed to extract
spike times belonging to different neurons from neural recordings. For the second part of the project, a
GUI was implemented for extracting the timings of various behavioral events from files containing playback
recordings. Finally, plots concerning dynamics of the neurons were made using the spike times and behavioral
times, to show the practical use of the software in the interactive playback experiments.

Retinal ganglion cells (RGCs) respond to spatiotemporal patterns falling on
photoreceptors by firing spike trains with an exquisitely precise temporal
structure. Existing models of RGCs are reduced input-output models of
light intensity or other features (e.g., contrast), but contain no biophysical
parameters for a single RGC. These models, such as the stochastic
integrate-and-fire (IF) and linear-nonlinear (LN) Poisson models, are unable
to account for the spike trains observed in actual data. The
generalized linear model (GLM) is the most promising for reproducing the
temporal structure of spike trains, even though it is not biophysically
constrained. It would be an important result to generate a biophysical
model of RGC spiking and compare it to a fitted GLM, to determine whether
reduced models of that type can capture some spiking characteristics that
full biophysical models contain. This paper analyzes possible methods of
comparison between these two types of models, and finds that certain
relations between elements of the two models reproduce actual retinal data
more faithfully.
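For reference, the LN-Poisson cascade mentioned above can be sketched in a few lines (Python; the filter shape, gain, and bin size are illustrative, and the GLM's spike-history terms are omitted):

```python
import numpy as np

rng = np.random.default_rng(3)

def ln_poisson(stimulus, kernel, dt=0.001, gain=20.0):
    """Linear-nonlinear Poisson cascade: linearly filter the stimulus,
    apply a static exponential nonlinearity to get a firing rate
    (spikes/s), then draw a Poisson spike count in each time bin."""
    drive = np.convolve(stimulus, kernel, mode="same")
    rate = gain * np.exp(drive)
    return rng.poisson(rate * dt)

stim = rng.normal(size=2000)                    # 2 s of white noise, 1 ms bins
kernel = 0.3 * np.exp(-np.arange(30) / 10.0)    # illustrative temporal filter
counts = ln_poisson(stim, kernel)
```

The GLM extends this cascade with spike-history feedback, which is what lets it capture fine temporal structure in spike trains; a biophysical RGC model would instead replace the whole cascade with conductance dynamics.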