Final Project: Design of an Integrating Photosensor Array

Electronic Design Laboratory 520.348

March 18, 1998

The objectives of the final project are:

  1. To learn about the state of the art in solid-state imagers, and to get hands-on experience in the field.
  2. To go through the process of a full VLSI design cycle: system-level design, circuit cell design, simulation, layout, and verification.
  3. To work together as a team, with different groups each doing part of the design, such that everything fits together.
  4. To get a feel for what electronic engineering and circuit design in the real world are all about!

Introduction

One of the hottest areas in VLSI design is image and video processing, for applications in multimedia, communications, motion detection, etc. Video cameras are getting more and more sophisticated, and have evolved significantly from the original CCD[1] imager chips. They now include several signal processing functions besides the imaging itself, in a compact, lightweight unit that dissipates less power and lasts longer on a single battery charge. The objectives are challenging: small size, low power, added functionality. Even more demanding is the design of chips that perform real-time compression of digital video, using standards such as JPEG (Joint Photographic Experts Group) or MPEG (Moving Picture Experts Group).[2] For HDTV (high-definition television, a digital standard for TV), the presently existing algorithms require close to supercomputer power to run in real time.

We will implement a simple integrating imager. Integrating means that the light intensity is integrated over a given time interval before it is read out. Those of you who are not afraid of a challenge are free to add more onto it and make the chip "smarter", e.g., adjust the timing of the integration for automatic gain control, or desynchronize it from the readout demultiplexing circuitry.

An Integrating Photosensor Array

The core of the chip will be arranged as an array of pixel cells on a rectangular grid.[3] Each pixel contains a photosensor, which converts incident light to a current proportional to light intensity. (Of course, we take the cover off the chip package!) CCDs use the photocurrent from a reverse-biased drain-substrate pn junction; we will use a photodiode in a similar way, and explore the different options available in CMOS technology. The photocurrent is then integrated onto a capacitor, converting the photosignal into an output voltage. The timing and initialization of the integration are controlled by the digital signals on the reset and int lines. The int signal steers the photocurrent through a differential pair, enabling or disabling the current charging the capacitor. The reset signal activates a switch that resets the voltage on the capacitor to V_reset, prior to integration. For readout, the voltage is buffered by a source follower.
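The integration step can be summarized in one line. Writing I_ph for the photocurrent, T_int for the time the int line enables the steering pair, and C_1 for the integration capacitor (our notation, not prescribed by the handout), the capacitor voltage moves away from its reset value by

\[ \Delta V \;=\; \frac{1}{C_1} \int_0^{T_{\mathrm{int}}} I_{\mathrm{ph}}(t)\, dt \;\approx\; \frac{I_{\mathrm{ph}}\, T_{\mathrm{int}}}{C_1}. \]

As a purely illustrative example, 100 fA integrated for 10 ms onto 100 fF gives a 10 mV swing, which shows why both C_1 and the integration interval must be matched to the expected light levels.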

A plain imager includes some circuitry to scan out the image in sequential fashion. (A CCD does that automatically, by shifting through the image like a bucket brigade.) We'll make an imager where each pixel can be addressed individually with an x and y address, like a memory cell in a semiconductor memory. The "address" and "data" lines for x and y run across the array of pixels in the horizontal and vertical directions.

Figure 1: Schematic of the "baseline" integrating photosensor pixel design. The photodiode D1 generates a photocurrent proportional to light intensity, which is integrated and converted to a voltage on the capacitor C1. Transistors M1 and M2 implement a differential pair to steer the current onto the capacitor, and M3 implements a reset switch to initialize the voltage. M4 and M5 implement a source follower to buffer the output voltage, which is multiplexed onto the y data line by M6 when the address on x is high.
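To make the figure concrete, here is a minimal TSpice-style netlist of the baseline pixel. The node names, device sizes, supply pins, and the stand-in photocurrent source are all assumptions for illustration; the NMOS and DPHOTO models would come from the MOSIS process files.

* Baseline pixel of Figure 1 (illustrative netlist; sizes and models assumed)
.SUBCKT PIXEL y x int intb reset vreset vb vdd gnd
D1  gnd  nph  DPHOTO                          ; photodiode, reverse biased
IPH nph  gnd  DC 100f                         ; stands in for the photocurrent
*                                               (the SPICE diode model is dark)
M1  vint int   nph    gnd NMOS W=1.8u L=1.2u  ; steers I_ph onto C1 when int high
M2  vdd  intb  nph    gnd NMOS W=1.8u L=1.2u  ; dumps I_ph to vdd when int low
M3  vint reset vreset gnd NMOS W=1.8u L=1.2u  ; reset switch: vint -> vreset
C1  vint gnd   100f                           ; integration capacitor
M4  vdd  vint  nsf    gnd NMOS W=3.0u L=1.2u  ; source-follower input device
M5  nsf  vb    gnd    gnd NMOS W=1.8u L=3.6u  ; follower current-source load
M6  y    x     nsf    gnd NMOS W=1.8u L=1.2u  ; x select onto the y data line
.ENDS PIXEL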

Figure 1 shows the "baseline" pixel, which certainly is not ideal, but at least can be laid out more or less compactly for a dense imager. Feel free to add transistors and change the design in ways that improve the functionality, e.g., to perform automatic gain control or anti-blooming. But remember: the more transistors per pixel, the bigger the pixel, the fewer pixels on your chip, and hence the lower the resolution. It will be more challenging to remove transistors, or to reduce the pixel size in other ways, than it is to add transistors for increased functionality. Finding the right balance is an art, especially on a "Tiny" chip for which the active area including pads is only 2.2 mm by 2.2 mm![4]
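A quick back-of-the-envelope count shows what is at stake. With the roughly 3,000 λ by 3,000 λ core area from the footnote, and assuming (purely for illustration) a pixel pitch of p λ, the array holds about

\[ N \times N \;\approx\; \left( \frac{3000\,\lambda}{p\,\lambda} \right)^{\!2} \]

pixels, before subtracting decoder and counter area. At p = 60 (a 36 µm pitch for λ = 0.6 µm) that is about a 50 by 50 array; trimming the pitch to 50 λ already buys 60 by 60.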

Figure 2: Floor plan of the integrating image sensor, with the array of pixels at the core. The x address selects a single column of pixels, which are output along the horizontal y data lines. The y address selects one of the data lines, and outputs the signal on that line, which comes from the pixel at the (x, y) coordinates on the plane. The x and y addresses are presented in digital form, and decoded using two address decoders. The digital addresses can be supplied externally, or can be automatically incremented from a clock signal using two counters to scan out the image.

Figure 2 shows the floor plan of the chip, which contains the array of pixels at the core. Ideally, the peripheral circuitry surrounding the pixel plane should not be too large, and most of the area should be taken up by as many pixels as you can fit. The main peripheral components are two address decoders for random-access digital addressing of the pixels, and two counters for scanning out the image sequentially.[5] If you want to be really challenged, you can think about providing a digital output, using either a fast A/D converter at the output, or a column of slow A/D converters before the demultiplexing of the data lines to the output.
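To give an idea of how an address decoder decomposes into standard cells, here is a sketch of a 2-to-4 decoder as a hierarchical SPICE deck. The cell names, pin orders, and transistor sizes are our own placeholders; a real x or y decoder extends the same pattern to more address bits.

* 2-to-4 address decoder built from two standard cells (illustrative)
.SUBCKT INV in out vdd gnd
MP out in vdd vdd PMOS W=3.6u L=1.2u
MN out in gnd gnd NMOS W=1.8u L=1.2u
.ENDS INV

.SUBCKT NAND2 a b out vdd gnd
MP1 out a vdd vdd PMOS W=3.6u L=1.2u   ; pull-ups in parallel
MP2 out b vdd vdd PMOS W=3.6u L=1.2u
MN1 out a n1  gnd NMOS W=3.6u L=1.2u   ; pull-downs in series
MN2 n1  b gnd gnd NMOS W=3.6u L=1.2u
.ENDS NAND2

* Exactly one select line s0..s3 goes high for each 2-bit address (a1 a0).
.SUBCKT DEC2TO4 a0 a1 s0 s1 s2 s3 vdd gnd
XI0 a0 a0b vdd gnd INV                 ; complement the address bits
XI1 a1 a1b vdd gnd INV
XN0 a0b a1b n0 vdd gnd NAND2           ; address 00
XN1 a0  a1b n1 vdd gnd NAND2           ; address 01
XN2 a0b a1  n2 vdd gnd NAND2           ; address 10
XN3 a0  a1  n3 vdd gnd NAND2           ; address 11
XB0 n0 s0 vdd gnd INV                  ; invert NAND outputs, buffer the lines
XB1 n1 s1 vdd gnd INV
XB2 n2 s2 vdd gnd INV
XB3 n3 s3 vdd gnd INV
.ENDS DEC2TO4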

Project Plan

Pixel Design

First, each group will give its best shot at the pixel design, simulation, and layout. There are two criteria: functionality and size. The smallest designs, or designs with added functionality that are still size-efficient, will get the highest marks. Very compact implementations of the "baseline" design are fine. Each group will provide the schematic design on paper, the transient simulation results from TSpice, and the layout in L-Edit.
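For the transient simulation, a testbench along the following lines exercises one reset/integrate/read cycle of the pixel sketched earlier. The timing, supply, bias values, and file names are assumptions; substitute your own subcircuit and the appropriate MOSIS model file.

* Pixel transient testbench (illustrative)
.INCLUDE pixel.sp                        ; the PIXEL subcircuit sketched above
.INCLUDE mosis12.md                      ; process models (file name assumed)
VDD   vdd    gnd DC 5
VRST  vreset gnd DC 3.5                  ; reset level on the capacitor
VB    vb     gnd DC 1                    ; source-follower bias
VRES  reset  gnd PULSE(0 5 0 10n 10n 1u 100u)    ; brief reset pulse at t=0
VINT  int    gnd PULSE(0 5 2u 10n 10n 50u 100u)  ; 50 us integration window
VINTB intb   gnd PULSE(5 0 2u 10n 10n 50u 100u)  ; complement of int
VX    x      gnd DC 5                    ; keep this pixel selected
CL    y      gnd 1p                      ; data-line load
X1    y x int intb reset vreset vb vdd gnd PIXEL
.TRAN 0.1u 100u
.PRINT TRAN V(y)                         ; expect a ramp during the int window
.END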

We will select a few of the best designs, and continue from there. The goal is to have those fabricated through MOSIS.

Design of the Peripherals

Second, each group will design one part of the periphery, for all or one of the selected pixel designs. The parts are:

  1. the x address decoder, which selects a column of pixels;
  2. the y address decoder and output multiplexer for the data lines;
  3. the two counters for scanning out the image sequentially;
  4. optionally, an A/D stage for digital output.

Several groups may work on the same part, but there should be at least one group for each part (except the A/D stage).

As before, submit the schematic, simulations (where needed), and layout. There will be strict guidelines for the layout, to make sure everything fits together and no area is wasted on the chip! You will need to supply a (hierarchical) SPICE deck of your design, and prove that it is consistent with your extracted layout. (A Layout vs. Schematic (LVS) software tool will be available that allows you to compare two SPICE files for consistency.)
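One way to organize the hierarchical SPICE deck for LVS is to mirror the layout hierarchy cell for cell, with one subcircuit per L-Edit cell. The sketch below wires a 2-by-2 pixel array using the placeholder PIXEL subcircuit from earlier; the file names and pin orders are assumptions.

* Deck hierarchy mirrors the layout hierarchy: one .SUBCKT per L-Edit cell
.INCLUDE pixel.sp                        ; PIXEL
.INCLUDE decoder.sp                      ; DEC2TO4 and friends
.SUBCKT ARRAY2X2 x0 x1 y0 y1 int intb reset vreset vb vdd gnd
XP00 y0 x0 int intb reset vreset vb vdd gnd PIXEL   ; column x0, row y0
XP01 y1 x0 int intb reset vreset vb vdd gnd PIXEL   ; column x0, row y1
XP10 y0 x1 int intb reset vreset vb vdd gnd PIXEL   ; column x1, row y0
XP11 y1 x1 int intb reset vreset vb vdd gnd PIXEL   ; column x1, row y1
.ENDS ARRAY2X2
* The LVS tool then compares this deck against the netlist extracted from L-Edit.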

Putting it All Together

The final part will be the ultimate test of team spirit: we'll put everything together, and it had better fit! We'll select the best designs, and hopefully we will get several chips that we can submit for fabrication. After fabrication, you will have the opportunity to test your chip(s) in the lab (which you can do as an independent study after this course).

Final Presentations and Report

Each group will (briefly) present what they did, and submit a (brief!) report summarizing the results. Include plots of the layout and schematic diagrams.

Project Schedule

Grading Scheme

The project counts for 40% of the final grade, and is itself divided up as follows:


Footnotes

[1] CCD stands for Charge Coupled Device. CCDs really are just a chain of MOS transistors linked together at their drains and sources.

[2] JPEG exploits spatial redundancy within still images to achieve compression, and is mostly suited for pictures. MPEG exploits temporal redundancy as well, correlating consecutive frames in a moving image.

[3] Those who want to be challenged can choose a hexagonal grid, which is better for most image processing applications.

[4] Without pads, you will have about 3,000 λ by 3,000 λ to play with, where the feature size λ corresponds to the grid spacing in your L-Edit layout: 0.6 µm in MOSIS' 1.2 µm technology.

[5] With proper generation of vsync and hsync synchronization signals, you can dump the output to a multi-sync monitor.
 

