Archived posting to the Leica Users Group, 1999/10/12


Subject: [Leica] Part 1: For those who think film will be dead in the near future... (extra long)
From: Jim Brick <jimbrick@photoaccess.com>
Date: Tue, 12 Oct 1999 08:37:33 -0700

Many of you have read most of this before. It was called "Things that go
bump in the night." It is very long so if you are not interested, push
delete now.

For those who are interested: this is pretty much generalized, so please
don't pick at me about details. I've kept it general, mainly because
anything more detailed would detract from the point. I do not claim to be
an authority on all of this. It comes from both the work I am doing at Photo
Access (www.photoaccess.com) and my reference library. And again, it's
generalized for understandability.

These are averages, not specifics.

Sizes:

1 micron = size of a cell nucleus
.1 micron = strands of DNA
.01 micron = structure of DNA
.001 micron = DNA molecule
.0001 micron = 1 Angstrom
1 Angstrom = Carbon's outer electron shell
wavelength of visible light = 4000 to 8000 Angstroms

Current semiconductor chip geometry = .18 micron
It could possibly go to .1 micron, the size of DNA strands.
The average wavelength of visible light is about .6 micron (6000 Angstroms).

How many of you remember when the Electron Microscope was invented? The
reason it was invented was that visible light and optics had reached
their limit. Visible light got in its own way. So going to the atomic
level, with the Scanning Electron Microscope, eliminated the visible
light/optics limit.

So at less than .1 micron, semiconductors will be stumbling over their own
molecular structure. And the wavelength of light requires certain
dimensions in order to pass the ray rather than cut it off like a filter. A
polarizer, I believe, works at around 1.5 microns. These phenomena place
size constraints on semiconductor junctions (such as photo transistors, AKA
a pixel), on the light gathering "bucket" (a capacitive junction), and of
course on reading out the minuscule signal representing a pixel.

The following discussion of the latent image is a high level overview.
It is by no means definitive. Libraries are filled with volumes on the
physics and chemistry of the latent image and development. The point being
made here is that the silver image is produced at the atomic level: atoms,
electrons, and valence levels.

To put things in perspective, the volume of an "average" silver halide
grain is .0000000000001 cubic cm. Within that 10**-13 cubic centimeter grain
of silver halide, there are about 10 billion silver halide molecules. Exposure is
effected by a photon hitting a silver halide molecule. This causes
electrons within the molecule to change from a stable to an unstable
energy level, leaving an electron deficiency in the lower level.
Development is accomplished by allowing an electrolyte with redox potential
(developer) to contact the silver halide crystal. An electron from the
developer moves into the vacancy left by the electron that moved because of
the photon hit. When the new electron moves in, the overall charge of the
crystal is negative (because of the added electrons). To
compensate for the increased negative charge at the latent image site,
positively charged interstitial silver ions move into the site, neutralizing
the charge. If enough photons hit the silver halide grain, enough electrons
will move from the developer into the grain to allow the formation of
silver atom aggregates. This is your image forming.
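
A quick back-of-the-envelope check (my own sketch, not part of the original
argument) ties these numbers to the size table above: a 10**-13 cubic cm grain
is roughly half a micron on a side, and packing ten billion molecules into it
implies a couple of Angstroms per molecule, i.e. we really are at the atomic scale.

# Back-of-the-envelope check of the numbers above.
grain_volume_cm3 = 1e-13          # "average" silver halide grain volume
molecules_per_grain = 1e10        # ~10 billion molecules, per the text

grain_edge_cm = grain_volume_cm3 ** (1 / 3)
molecule_volume_cm3 = grain_volume_cm3 / molecules_per_grain
molecule_edge_cm = molecule_volume_cm3 ** (1 / 3)

CM_PER_MICRON = 1e-4
CM_PER_ANGSTROM = 1e-8

print(f"grain edge    ~ {grain_edge_cm / CM_PER_MICRON:.2f} micron")        # ~0.46 micron
print(f"molecule edge ~ {molecule_edge_cm / CM_PER_ANGSTROM:.1f} Angstrom") # ~2.2 Angstrom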

In digital photography, a semiconductor capacitor stores the electrons
(supplied by a battery) that a photo transistor allows in. The number of
electrons stored will depend upon how much light hit the photo transistor,
and for how long. Think of the photo transistor as an electron gate. The
stored charge will be a "voltage level." This voltage (at each individual
pixel site) is then applied to an analog to digital converter (A to D). The
output of the A to D is a number between 0 and 255, representing the amount
of light hitting the pixel. And don't forget, these pixels are read out
"one at a time". All one, two, four, or six million of them.

So now, in digital, we have 256 possible density levels, at a site that is
at least 5 microns by 5 microns square. While in film, we have a grain site
that has 10 billion molecules. If it takes 1,000 silver atoms to produce a
developed "speck" on the film, we have 10 million possible density/size levels
producible at a silver grain site. If it takes 10,000 silver atoms to
produce a developed "speck" on the film, we still have 1 million possible
density/size levels producible at a silver grain site. All done at the atomic
level. Without batteries, capacitors, transistors, A/D's, wires, megabytes of
memory, gigabytes of storage, etc... By the way, it takes only a
cluster of three silver atoms to produce a developable and detectable (not
with your naked eye) speck.
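
The arithmetic behind that comparison, written out (my own sketch of the
calculation in the paragraph above):

molecules_per_grain = 10_000_000_000   # ~10 billion silver halide molecules per grain
adc_levels = 256                        # 8-bit digital readout: values 0..255

for atoms_per_speck in (1_000, 10_000):
    levels = molecules_per_grain // atoms_per_speck
    print(f"{atoms_per_speck:>6} atoms per speck -> {levels:,} possible levels per grain "
          f"(vs. {adc_levels} digital levels per pixel)")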

There's more. Each pixel in a digital sensor sees light a little
differently than its neighboring pixels. If you took a photograph using a
raw, uncorrected sensor, it would look awful. In a good digital camera, "each" sensor
has to be calibrated. We have to test and "record" how each pixel differs
from a normal pixel. This is called PRNU (Photo Response Non-Uniformity)
correction. Cheap digital cameras (under $2000) use only "white balance"
and approximately adjust each pixel's output with regard to white. Good
digital systems use PRNU correction. The PRNU correction table for a 2
megapixel sensor, without PRNU compression, is six megabytes. So your
professional digital camera has to have six to twenty megabytes of memory
available just for pixel correction. This correction has to be done, on the
fly, as pixels are streaming out of the sensor into memory.
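
As an illustration of what such a correction might look like (a sketch of
the general idea, not any particular camera's algorithm; the per-pixel
gain/offset table here is hypothetical), each incoming pixel value is
corrected against its calibration entry as it streams in:

from typing import Iterable, Iterator

def prnu_correct(raw_pixels: Iterable[int],
                 gains: list[float],
                 offsets: list[float]) -> Iterator[int]:
    """Apply a per-pixel calibration table on the fly.
    gains/offsets come from measuring how each pixel differs from a
    'normal' pixel; one entry per pixel, so a 2 megapixel sensor needs
    a couple of million entries held in memory."""
    for i, value in enumerate(raw_pixels):
        corrected = gains[i] * value + offsets[i]
        yield max(0, min(255, int(round(corrected))))

# Hypothetical 4-pixel sensor: pixel 2 reads 10% hot, pixel 3 has a small offset.
gains   = [1.00, 1.00, 0.90, 1.00]
offsets = [0.0,  0.0,  0.0,  -5.0]
print(list(prnu_correct([100, 100, 111, 105], gains, offsets)))  # -> [100, 100, 100, 100]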

Many digital cameras use sensors that have bad pixels. It is very difficult
to make a large (35mm size) sensor without faults. That's why most consumer
cameras use very tiny sensors and 10mm to 15mm lenses as the normal lens.
The process of fabricating a large sensor is extremely complex and full of
problems, as it is with any large semiconductor "chip". Good large sensors
are very expensive to make and expensive to purchase. So bad pixels must be
handled in the camera. Algorithms that give weighted averages of "previous"
pixels go into forming a density value for the bad pixel. We can only use
"previous" pixels because this process works as the pixels stream in, and
the only known values are from "previous" pixels.
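
A minimal sketch of that kind of streaming repair (my own illustration; the
bad-pixel map and the two-tap weighting are assumptions, not a specific
camera's algorithm):

def patch_bad_pixels(stream, bad_pixels, weights=(0.75, 0.25)):
    """Replace known-bad pixels with a weighted average of previously seen
    pixels. Only earlier pixels are available because values arrive as a
    stream straight off the sensor."""
    history = []          # pixel values already output
    out = []
    for index, value in enumerate(stream):
        if index in bad_pixels and len(history) >= len(weights):
            value = sum(w * v for w, v in zip(weights, reversed(history)))
            value = int(round(value))
        history.append(value)
        out.append(value)
    return out

# Hypothetical readout where pixel 3 is known to be dead (always reads 0).
print(patch_bad_pixels([100, 104, 108, 0, 110], bad_pixels={3}))
# -> [100, 104, 108, 107, 110]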

Since a digital image is simply a set of samples of the subject, taken at
precise points, fine patterns in the subject will be recorded incorrectly.
The digitizing of anything has a Nyquist sampling boundary where the
frequency of the source (subject) interferes with the digitized output.
Think of using a digital sensor for astronomical photography. Two distant
stars, side by side, happen to focus on adjacent digital pixels. The
digital system will see them as a single elongated spot, not two distinct
spots, or stars, as they would appear on film. Going from analog to digital
in any discipline causes problems that didn't exist before. The real world
is an analog world. Anytime you digitize any analog representation,
something will be lost. That's the physics of A to D. As the sampling rate
increases, the representation of the analog source becomes more true.
Unfortunately, the sampling rate in a digital sensor is simply how closely
packed and how small the pixel sensors can be made. Well... the physics of
semiconductor manufacturing pretty much establishes the rules. And we are
up against the wall.
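
To illustrate the star example (a toy one-dimensional sketch with made-up
numbers, not real sensor geometry): two point sources closer together than
about a pixel pitch land on adjacent pixels and read out as one elongated
blob, with no way to tell them apart from a single stretched source.

def bin_into_pixels(scene, pixel_width):
    """Integrate a finely sampled 1-D 'scene' into coarse pixel buckets,
    the way a sensor sums all the light falling on each pixel."""
    pixels = []
    for start in range(0, len(scene), pixel_width):
        pixels.append(sum(scene[start:start + pixel_width]))
    return pixels

# Two point 'stars' three sub-samples apart in an otherwise dark scene.
scene = [0] * 20
scene[8] = 100    # star A
scene[11] = 100   # star B
print(bin_into_pixels(scene, pixel_width=5))
# -> [0, 100, 100, 0]: two adjacent lit pixels, indistinguishable from a
#    single elongated source.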

END PART ONE

Jim