[VQ chip picture]

Parallel VLSI implementation of an analog vector quantizer (VQ). Vector quantization is a coding scheme frequently used for speech coding, image compression, and pattern recognition. The CMOS chip operates directly on analog vector data and produces a digital output code in a single clock cycle. The inset shows the circuit cell used repeatedly to implement the parallel VQ computations. The cell measures 78 µm by 60 µm in 2 µm CMOS technology and dissipates less than 10 pJ of energy per computational cycle (Cauwenberghs and Pedroni, 1995).
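
To make the computation concrete, the short Python sketch below shows what the chip computes in parallel: the input vector is matched against a stored codebook of templates, and the index of the closest template is emitted as the digital output code. The codebook size, vector dimension, and Manhattan distance metric here are illustrative assumptions for the sketch, not the chip's actual parameters.

    import numpy as np

    def vq_encode(x, codebook):
        # Compare the input vector against every stored template (the chip does
        # this fully in parallel; here it is simply vectorized over rows) and
        # return the index of the best match as the output code.
        distances = np.sum(np.abs(codebook - x), axis=1)   # assumed Manhattan distance to each template
        return int(np.argmin(distances))                   # winner-take-all: the digital output code

    # Example: 16 stored templates of dimension 8, encoding one input vector.
    rng = np.random.default_rng(0)
    codebook = rng.uniform(-1.0, 1.0, size=(16, 8))
    x = rng.uniform(-1.0, 1.0, size=8)
    print("output code:", vq_encode(x, codebook))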


Some other examples of microsystems developed by our group include:

  • Kerneltron, a massively parallel Support Vector "Machine" in silicon for real-time reconfigurable, large-scale kernel-based pattern recognition and machine vision;
  • AdOpt, a parallel model-free analog microcontroller, with supporting focal-plane metric sensors, for adaptive optical wavefront control;
  • IFAT, an integrate-and-fire array transceiver for reconfigurable spike-based address-domain neural computation and spike-timing dependent synaptic plasticity;
  • A BiCMOS log-domain instantaneous companding bandpass filterbank for audio spectral decomposition;
  • Perturbative stochastic reinforcement learning architectures in analog VLSI;
  • Fuzzy Adaptive Resonance Theory (ART) processors for stable adaptive pattern classification;
  • An auditory feature-based acoustic coding front-end chip (Best Student Paper Award, IEEE ICNN-97);
  • Acoustic transient classifiers implementing time-frequency correlation;
  • Continuous wavelet transform processors for audio decomposition and reconstruction;
  • On-chip scene-based non-uniformity correction in IR focal-plane array optical sensors;
  • Focal-plane edge detection processors implementing the Feature Contour System (FCS) on a hexagonal grid;
  • A zero-crossing detector array for auditory feature analysis;
  • Experimental on-chip learning of continuous-time recurrent dynamics in an analog VLSI neural network;
  • Model-free stochastic error descent for supervised learning and optimization in dynamical systems (a software sketch of this idea appears after the list);
  • Long-term analog volatile storage using binary quantization and partial incremental refresh;
  • An 8-bit, 200 µW, 20 µs, 0.068 mm² A/D/A converter cell in 2 µm CMOS for large-scale parallel analog quantization; and
  • Demonstration of outer-product learning in analog neural hardware with two transistors per synapse.
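
Several of the learning systems above, in particular the perturbative stochastic reinforcement learning architectures and the model-free stochastic error descent work, rest on the same idea: perturb all parameters at random, measure the resulting change in error, and correlate the two to form a gradient estimate without any analytic model of the system. The minimal Python sketch below illustrates this under assumed parameter choices (Bernoulli perturbations, fixed step size); it is not a description of the VLSI implementations themselves.

    import numpy as np

    def stochastic_error_descent(error_fn, theta, lr=0.01, sigma=0.1, steps=2000, seed=0):
        # Model-free descent: all parameters are perturbed at random in parallel,
        # the measured change in error is correlated with the perturbation to form
        # an unbiased gradient estimate, and the parameters step against it.
        rng = np.random.default_rng(seed)
        theta = np.array(theta, dtype=float)
        for _ in range(steps):
            pert = sigma * rng.choice([-1.0, 1.0], size=theta.shape)   # random +/- sigma perturbation
            delta_e = error_fn(theta + pert) - error_fn(theta)         # measured change in error
            theta -= lr * (delta_e / sigma**2) * pert                  # step along the gradient estimate
        return theta

    # Example: recover a target weight vector from error measurements alone.
    target = np.array([1.0, -2.0, 0.5])
    error = lambda w: float(np.sum((w - target) ** 2))
    print(stochastic_error_descent(error, np.zeros(3)))   # approaches `target`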