Our website has moved to a new location! Please follow the link here.



Previous journal clubs are here.

ANTG Fall 2018

We’re going to use a few chapters of the following text:

Time Series Analysis and Its Applications: With R Examples (Springer Texts in Statistics) 4th ed. 2017 Edition

by Robert H. Shumway (Author), David S. Stoffer (Author)

A PDF can be downloaded from the UW library

1. October 2nd

Chapter 1(a). Characteristics of time series data

2. October 9th

Chapter 1(b). Characteristics of time series data

3. October 16th

Chapter 2(a). More autoregression and some time series regression

4. October 23

Chapter 2(b). Time series regression and exploratory data analysis

5. October 30

Chapter 3. ARIMA models

6. Nov 6

Hidden Markov models, download the ‘cheat sheet’ from here

7. Nov 13

Chapter 4(a) Spectral analysis and filtering

8. Nov 20: NO CLASS


9. Nov 27

Chapter 4(b) Spectral analysis and filtering

10. Dec 4


ANTG Spring 2018

Class 1. March 27: Maximum likelihood vs. Bayes, who will win?

Lecturer: Ione Fine

Readings are the maximum likelihood tutorial (here) and the Bayesian tutorial (here) from Robbie Jacobs (my old advisor). (He has many other awesome tutorials here.)

Class 2. April 3: No Class

Class 3. April 10: Psychophysics and Signal detection theory

Guest lecturer: Ione Fine

For psychophysics, start with Heeger’s online tutorial. Then read David Heeger’s tutorial Heeger_sdt-advanced. (Note that he has two tutorials on his web site; make sure you download the advanced one.)

The lecture itself will be based on Appendix A of an ancient article of mine. No need to read the whole article; Appendix A stands alone. No need to read it beforehand either, but I figured I’d warn you. I chose it because it explains SDT in the Bayesian context, so it’s a nice linkage …

Class 4: April 17: Using Bayesian models to explain behavioral choice

Lecturer: Geoff Boynton
Student presenter: Kelly Chang

Paper is here (Fischer_NN2010)

The code for this paper is in the class Github repository. The link for that is at the top of the page.

Class 5: April 24: Back to Signal detection theory

This paper is best read with a top hit from 1977 playing in the background


Class 6: May 1: Mo Bayes and the beginning of classification

This paper (Clayards_Cognition2008) discusses categorization of speech sounds from a Bayesian perspective

Class 7: May 8: Supervised learning

Faculty lecturer: Michael Beyeler.
Bishop, Pattern Recognition and Machine Learning, Springer 2006. First, third, and fourth chapters (don’t be fooled, give yourself some time) of Bishop’s most awesome book! Hopefully everything you learned is going to come together with the sound of celestial trumpets …

Also read this chapter (here)

Class 8: May 15: Neural nets and deep learning

Faculty lecturer: Michael Beyeler
Fifth chapter of Bishop

Read this chapter (here)

Class 9: May 22:  Cancelled


Class 10: May 29: Vocoders

Faculty lecturer: Matthew Winn






Class 10: May 29: Mo Mo Machine learning

Faculty lecturer: Tara Madhyastha madhyt@uw.edu
Student lecturer: Jesse M. Resnick (send questions to resnick@uw.edu).

This class will focus more closely on the Song paper

Paper is here (Song et al.)

We are going to do models of the peripheral auditory system next quarter – we have a plan!


ANTG Winter 2017

Jan 9th: Electrophysiology (David Perkel)

Patch-clamp technique, Hamill

The patch-clamp technique enables the recording of bioelectrical signals in excitable and nonexcitable cells. At the molecular level, tight-seal, high resolution current recording from small membrane patches (∼10 μm2) allows real time monitoring of conformational transitions in single membrane channel proteins as they are gated open (or closed) by specific stimuli, including changes in membrane potential, membrane tension, and specific chemicals/neurotransmitters. Cell-free membrane patch recording allows characterisation of channels modulated by intracellular messengers (e.g., Ca2+, nucleotides, G-proteins and phospholipids). Tight-seal whole-cell recording can monitor action and synaptic potentials/currents generated by many channels. Furthermore, by using multiple patch pipettes to record at spatially and functionally distinct regions of a cell one can measure precisely the initiation and spread of potential within geometrically complex cells.

Code to play with while you wait for the next episode of Game of Thrones can be found here

Jan 16th:  Electrophysiology (David Perkel)

Neuronal activity in the brain gives rise to transmembrane currents that can be measured in the extracellular medium. Although the major contributor of the extracellular signal is the synaptic transmembrane current, other sources — including Na+ and Ca2+ spikes, ionic fluxes through voltage- and ligand-gated channels, and intrinsic membrane oscillations — can substantially shape the extracellular field. High-density recordings of field activity in animals and subdural grid recordings in humans, combined with recently developed data processing tools and computational modelling, can provide insight into the cooperative behaviour of neurons, their average synaptic input and their spiking output, and can increase our understanding of how these processes contribute to the extracellular signal.

The origin of extracellular fields and currents – EEG, ECoG, LFP and spikes

Jan 23: Optogenetics (David Gire)

Optogenetics in neural systems published as a Neuron Primer in 2011 from Karl Deisseroth’s lab (http://www.cell.com/neuron/abstract/S0896-6273(11)00504-6 ). It is a little older, but covers all of the important topics for optogenetics and the types of proteins used have not changed much since it was published.

Jan 30: MR physics  (Ione)

This article will provide an introduction to the physics of MR imaging. Some very basic initial concepts will be described, and these will be combined to form the foundation for the more complicated concepts of T1 and T2 relaxation and contrast. Finally, several basic pulse sequences will be discussed.

An attempt has been made to keep the descriptions quite simple; they assume no prior understanding of MR physics, and no complicated math is included. A difficulty remains in the fact that all of the simple concepts must be retained simultaneously in order to apply these concepts to more complicated learning situations.

Pooley_MR Physics Tutorial

The following online tutorial is also very good. The chapters I would prioritize are “Nuclear Magnetic Resonance”, “NMR signal and MR contrast”, “Spatial encoding in MRI”, “Diffusion-weighted and Diffusion tensor”, and “Functional MRI of the brain”.


Feb 13: NO CLASS

Feb 20: MR experimental design and analysis (Ione)

Here’s a review that focuses on the underlying principles by Amaro_2005

Also for those interested in auditory neuroimaging  Peelle_2014 has a recent review

Feb 27: Anatomy (David Perkel)


The birth of cortical layers 


Feb 6: Auditory Psychophysics, Matt Winn

Psychophysics, Signal detection theory and Bayes: Ione

David Heeger’s tutorial Heeger_sdt-advanced is concise and clear. Note that he has two tutorials on his web site; make sure you download the advanced one.

For psychophysics, start with Heeger’s online tutorial


Autumn Qtr 2017

Date: October 3.

Date: October 10. The Physics and Analysis of Sound & The Outer and Middle Ears

Instructor: KC Lee

Discussion Master: Brooke

Pickles, Chapter 1 & 2

Questions from Ione:

1. What are the advantages & disadvantages of using Fourier analysis as a fundamental way of thinking about sound?

2. A linear system obeys the rules of homogeneity, additivity, superposition,  and shift-invariance. Does the auditory system obey any of these? Give examples.

3. A vision science graduate student tells her S&H colleague that she bought some noise-muffling headphones for $100. “They reduce noise by 15dB!”, she says proudly. Her S&H colleague rolls her eyes and hands her this hat. Why?

[Image: woolly hat]

4. You know how you can make your phone beep so that you can find it. But sometimes the beep is hard to localize. What sound would you choose and why?

5. Someone gives you a recording of a restaurant and tells  you that in the background you will be able to hear the tiny whispers of malevolent spy-elves. At what volume should you listen to the recording to best hear the elves and uncover their evil schemes?

Date: October 17. The cochlea

Instructor: Jenny Stone

Discussion Master: Nick

Pickles, Chapters 3, 4, 5

How do inner and outer hair cells differ with respect to their anatomy, relationship to the tectorial membrane, and afferent and efferent innervation?

Discuss the anatomic features of the cochlea that contribute to the passive mechanics and broad tuning of the traveling wave.  Why does the traveling wave grow in amplitude as it moves apically, then slow down?  Why is there a peak in amplitude when the active process is absent?

How is the endocochlear potential generated? Why is it necessary for sharp tuning of the basilar membrane? How do outer hair cells contribute to the active process?  By which mechanisms do aminoglycoside antibiotics, anoxia, and loop diuretics alter sharp tuning?

Explain Figure 3.12.

[optional] Explain how a Wiener kernel can be used to estimate auditory nerve response characteristics.

Why do you think there might be a high baseline firing rate in AN neurons (particularly the low-threshold fibers)?

Questions for Chapter 5 (Optional)  “Mechanisms of transduction and excitation in the cochlea”

Explain why, with respect to mechanisms of hair cell mechanotransduction, mutations in cadherin-23 result in deafness.  Would you expect mutations in prestin to lead to a similar degree of hearing loss?

What is the difference between fast and slow adaptation of hair cells?

Which two theories of cochlear micromechanics are evoked to explain how sharp tuning is achieved?  In other words, how are movements of the traveling wave amplified by cells/organelles in the organ of Corti?

Re afferent fibers: “neural connections of inner hair cells showing a great deal of divergence [each hair cell connects to a great many fibers] and those of outer hair cells showing both convergence and divergence [each hair cell is connected to many fibers and each fiber connects to many hair cells] “. Speculate about the significance of this …


Date: October 24. Sub-cortical pathways [Bill Spain]

Pickles, Chapter 6.

Chapter 6 is quite long and complicated so I will limit the session on Tuesday to the stuff below the inferior colliculus (i.e.  only up to and including section 6.4).

There is a ton of info in the chapter and some good general statements about how to think about how the CNS encodes different aspects of sound.  However, be careful not to beat yourself up if you cannot make sense of some of the assertions made in the chapter.  I say this because some of the statements are only correct under a very restricted set of conditions (and those conditions are not always provided in the chapter), and other assertions are simply incorrect.

I suggest we concentrate on how the ventral stream (and the CNS in general) encodes the intensity of a sensory stimulus versus its temporal aspects.  What are common circuit specializations, synapse specializations, and neuronal (intrinsic excitability) specializations that favor intensity coding versus time coding?  Not all of this is made clear in the chapter, but there are some statements that get us in the ballpark. I will try to focus on this when we meet on Tuesday.

Discussion Master: Jesse

  • Could we go through Figure 6.15 in detail?
  • My question involves the mechanism by which the temporal fluctuations of auditory stimuli are encoded in the cochlear nuclei. What does the author mean when he says, “(due to high convergence of auditory nerve fibers onto D-stellate cells) only those portions of the waveform which evoke the greatest probability of activation would be able to trigger firing” (bottom of the second paragraph of the section, page 169 in the third edition)? Is the author saying that each D-stellate neuron has a characteristic frequency fluctuation that it responds to, based on the convergence of excitatory input from many auditory nerve fibers?
  • Sound localization is presented as a system astonishingly sensitive to intensity and delay.  Given this, has there been any research into how ear position relative to the head, for animals with mobile ears (such as cats), is integrated in this stream to modulate localization cells’ sensitivity or tuning?
  • How do cells like the octopus cell, which take >60 neuronal inputs, function in intensity coding? I’m trying to imagine the input-output function for such a neuron and how it would respond differently over a range of sound levels. Can it still code onsets with good temporal fidelity across a broad range of intensities, or do we need an array of octopus cells with different intrinsic thresholds to code for onsets at different intensities?
  • Do the primary auditory nerves synapsing on particular MSO neurons have similar thresholds and spontaneous rates? How is spontaneous activity handled by the system?
  • How does the binaural temporal difference processing stream handle noisy input? For instance, if I’m trying to localize a low frequency signal in a noisy environment how are the various ITDs detected attributed to different auditory objects? Could we see evidence for these separate objects in the activity of a population of MSO neurons?
  • How does the brain adapt to localize sounds when hearing is suddenly and significantly worsened or lost in one ear, like in the case of an accident which physically damages the cochlea?
  • Can we talk a bit about what the dorsal stream actually does? The book is pretty vague here.

Date: October 31. Cortex 

Instructor: Ione Fine

Pickles, Chapter 7.

Discussion Master: Kelly

  • Can we talk in greater detail about the similarities and differences of columnar organization in A1 vs. V1 and how the structure of each guides our perception of auditory and visual objects, respectively?
  • Physical objects tend to exhibit permanence due to the conservation of mass. Since light sources don’t tend to change all that rapidly, visual objects tend to exhibit at least some degree of permanence (or at least continuity). This is not true of auditory objects since they reflect active processes. How does this alter what information is being extracted from the source in these two different modalities?
  • Given these properties of auditory objects, how should we think about their neural representations; particularly how they differ from the representations of visual objects?
  • Pickles mentioned that the parabelt connects to the frontal eye field. Can you discuss the functional implications of this connection?
  • What is the “koniocortex”?
  • Can you discuss methods of “cooling” the cortex? How is this done and how does it help us to learn about the cortex?
  • Do we know why adjacent tonotopic regions reverse their direction of organization?  I.e., why we see H-L | L-H | H-L   instead of H-L | H-L | H-L ?
  • The extraction of spatial information relies on fine timing and spectral cues. In this sense, the spatial dimension is not really orthogonal to the temporal and frequency dimensions of signals. How does the brain recover the independence of these signal streams? For instance, how are temporal fluctuations at the source interpreted when they coincide with spatially induced ITDs? How is the spatial filtering of the pinna deconvolved out of the signal to allow unbiased comparison of intensity across frequencies?
  • How should we think about auditory objects? Are they simply correlations in signals across spatial, temporal, and frequency space that we use to try and make predictions about what the source/s are doing?
  • “Cortical responses to acoustic stimuli are particularly clearly enhanced when the stimulus is associated with another stimulus of strong significance for the animal, such as an electric shock…”
  • Is there a limit to the number or intensity of distinct non-auditory stimuli that an animal experiences before the enhancement of the cortical response goes away?

Date: November 7. Auditory psychophysics and performance, low level

Matthew Winn

Pickles, Chapter 9.

Discussion Master: Tristram Savage

  • Can you discuss the difference between frequency resolution and spectral resolution?
  • Could we talk more about higher levels of speech processing? The textbook only really talks about Broca’s and Wernicke’s areas, which seems too vague and outdated.
  • Characteristic frequencies of neurons are defined based on (unmodulated) sine wave stimuli.  But Fig. 9.6 shows considerable tuning to specific modulation frequencies of that carrier BF.

(a)  what is the underlying physiology supporting modulation rate tuning?

(b)  if the “best” stimulus for such cells is not a pure-tone sine wave, why do we persist in defining the characteristic frequency in that way?  (I’m imagining a 3D plot of C.F. vs. modulation rate vs. firing rate…  there could (in theory) be a diagonal ridge of best responses that co-varied in C.F. and mod. freq.)

  • Can you explain in greater detail how two-tone suppression affects the transformation of speech sounds from acoustic to neural representations? And perception of these sounds?
  • The author discusses the “third model” for sound source localization in azimuth, where binaural input is compared at higher brain levels and a decision is made (the two-channel model). Can we talk more about the neural mechanism for why the neural response is most robust for sound sources coming from straight ahead?
  • I’m not clear how co-modulation masking release works at the psychophysical or neuron level, and would appreciate some clarification.
  • Can we talk through the different types of masking experiments and how the details of each inform our understanding of auditory processing?
  • How are timing cues interpreted at different stimulus intensity levels? For instance, an intense pure tone will spread over a substantial region of the cochlea and auditory neurons connected to each of these regions will phase lock at their characteristic frequency. How does the central auditory system select which primary neurons to consider for timing cues?
  • Can we go over in more detail the concept of co-modulation masking release? I am finding this difficult to visualize.

Date: November 14. Sensorineural hearing loss

Instructor: Olivia Bermingham-McDonogh

Pickles, Chapter 10.

Discussion Master: Elle

 Auditory Nerve

  • Why are synapses onto low-spontaneous rate fibers more sensitive to excitotoxic damage?
  • How do central changes cause the phenomenon of loudness recruitment when the ANF response growth functions are normal?
  • I remember vaguely that Shae (no longer on the ANTG) spoke about tripolar stimulation in cochlear implants, is there any more development in how that could be used with cochlear implant patients?
  • I want to know more about the 4 kHz notch arising from broadband exposure. Is there a new understanding of why it happens that post-dates the Pickles book?
  • How does the loss of auditory nerve fibers (or their loss of function due to impaired hair cells) affect midbrain processing? How do cells in the CN and IC compensate for the loss of sensory input? What’s known about their homeostatic regulation of synaptic weights/temporal properties?

Hair Cells

  • Is there evidence that dietary antioxidant consumption can prevent or slow the progression of sensorineural hearing loss?
  • What physiological mechanisms allow hair cells to regenerate in birds, but not in mammals?
  • What is the current state of hair cell replacement and gene therapy research?
  • Why are outer hair cells particularly susceptible to damage from acoustic overexposure? They’re not even doing any active amplification at high stimulus intensities.
  • What is known about the supporting cells that differentiate into hair cells in birds? Are they a class of multipotent stem cells that differentiate into HCs when given the correct signal, or are they a specialized cell type that de-differentiate to become HCs after ototoxic damage?


  •  I’d be interested to learn more about sound conditioning/toughening, and whether a similar phenomenon has been observed in the vestibular system.

Date: Nov 21. No class


Date: November 28. Cochlear implants

Instructor: Julie Arenberg

Reading TBD

Discussion Master: Kelly J


Date: December 5.  Speech: Matt  Winn & Dan McCloy

Reading TBD

Discussion Master: Dan


This class is associated with the Github repository https://github.com/ionefine/ANTG

This GitHub site also has some useful SDT cheat sheets.

There are also some relevant Matlab tutorials:

Signal Detection Theory and the ‘yes/no’ experiment

ROC analysis

ROC analysis of neuronal responses
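The Matlab tutorials above cover these computations in detail. As a quick companion, here is a minimal sketch of how d′ and the criterion c are commonly estimated from yes/no response counts. (This is in Python rather than Matlab, it is not from the course repository, and the function name and the log-linear correction are my own choices.)

```python
from statistics import NormalDist

def dprime_yesno(hits, misses, false_alarms, correct_rejections):
    """Estimate d' and criterion c from yes/no response counts.

    Applies the log-linear correction (add 0.5 to each cell) so that
    perfect hit or false-alarm rates do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d = z(hit_rate) - z(fa_rate)            # sensitivity
    c = -0.5 * (z(hit_rate) + z(fa_rate))   # response bias (0 = unbiased)
    return d, c

# 80 hits / 20 misses and 20 false alarms / 80 correct rejections:
# an unbiased observer with d' of roughly 1.66
d, c = dprime_yesno(80, 20, 20, 80)
```

The same hit/false-alarm z-transform underlies the ROC analyses in the tutorials above: sweeping the criterion traces out the ROC curve, and d′ summarizes its distance from the chance diagonal.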

