Archived posting to the Leica Users Group, 1999/02/05


Subject: [Leica] Things that go bump in the night (very long). Updated.
From: Jim Brick <jimbrick@photoaccess.com>
Date: Fri, 05 Feb 1999 10:32:52 -0800

For those who are interested. This is pretty much generalized, so please
don't pick at me about details. I've kept it general, mainly because
anything more detailed would detract from the point. I do not claim to
be an authority on all of this. It comes from both the work I am doing
at Photo Access and from my reference library. And again, it's
generalized for understandability.

These are averages, not specifics.

Sizes:

1 micron = size of a cell nucleus
.1 micron = strands of DNA
.01 micron = structure of DNA
.001 micron = DNA molecule
.0001 micron = 1 Angstrom
1 Angstrom = Carbon's outer electron shell
wavelength of visible light = 4000 to 8000 Angstroms

Current semiconductor chip geometry = .18 micron
It could possibly go to .1 micron, the size of DNA strands.
The average wavelength of visible light is about .6 micron
(6000 Angstroms).

How many of you remember when the electron microscope was invented? The
reason it was invented was that visible light and optics had reached
their limit. Visible light got in its own way. So the Scanning Electron
Microscope was invented, substituting electrons (with a far shorter
effective wavelength) for visible light, which eliminated the visible
light/optics limit.
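
For a rough feel of where that optical limit sits, here is a
back-of-the-envelope sketch in Python. The specific numbers (green
light, a very good NA 1.4 oil-immersion objective) are my assumptions:

# Abbe diffraction limit of a light microscope: d = lambda / (2 * NA)
wavelength_um = 0.55                  # 5500 Angstroms, in microns
numerical_aperture = 1.4              # assumed, a very good objective
d = wavelength_um / (2 * numerical_aperture)
print("smallest resolvable detail: %.2f micron" % d)
# prints about 0.20 micron -- below that, visible light gets in its
# own way, hence the electron microscope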

So at less than .1 micron, semiconductors will be stumbling over their
own molecular structure. And the wavelength of light requires certain
dimensions in order to pass the ray rather than cut it off, the way a
filter does. A polarizer, I believe, works at around 1.5 microns. These
phenomena place size constraints on semiconductor junctions (such as a
photo transistor, AKA a pixel), on the light gathering "bucket" (a
capacitive junction), and of course on reading out the minuscule signal
representing a pixel.
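
To put a rough number on the light side of that constraint, here is the
same sort of sketch for a camera sensor. The f/2.8 lens is my
assumption, purely for illustration:

# Diffraction spot on the sensor: Airy disk diameter ~= 2.44 * lambda * N
wavelength_um = 0.55                  # mid visible spectrum
f_number = 2.8                        # assumed aperture
airy_um = 2.44 * wavelength_um * f_number
print("Airy disk diameter: %.1f microns" % airy_um)
# about 3.8 microns -- a 5x5 micron pixel is already flirting with the
# floor that the wavelength of light itself sets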

The following discussion of the latent image is a high level overview.
It is by no means definitive. Libraries are filled with volumes on the
physics and chemistry of the latent image and development. The point
being made here is that the silver image is produced at the atomic
level. Atoms and electrons.

To put things in perspective, the volume of an "average" silver halide
grain is .0000000000001 cubic cm. Within that 10**-13 cubic centimeter
grain of silver halide, there are roughly 10 billion silver halide
molecules. Exposure occurs when a photon hits a silver halide molecule.
This causes electrons within the molecule to change from a stable to an
unstable energy level, leaving an electron deficiency in the lower
level. Development is accomplished by allowing an electrolyte with
redox potential (developer) to contact the silver halide crystal. An
electron from the developer moves into the vacancy left by the electron
that moved because of the photon hit. When the new electron moves in,
the overall charge of the crystal becomes negative (because of the
added electrons). To compensate for the increased negative charge at
the latent image site, positively charged interstitial silver ions move
into the site, neutralizing the charge. If enough photons hit the
silver halide grain, enough electrons will move from the developer into
the grain to allow the formation of silver atom aggregates. This is
your image forming.
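
If you want to check that 10 billion figure, a quick computation gets
you into the same ballpark. The density and molar mass are textbook
values for silver bromide; the rest follows:

# How many AgBr molecules fit in a 10**-13 cubic cm grain?
grain_volume_cm3 = 1e-13
density = 6.5                         # g per cubic cm, AgBr
molar_mass = 188.0                    # g per mol, AgBr
avogadro = 6.022e23                   # molecules per mol
molecules = grain_volume_cm3 * density / molar_mass * avogadro
print("molecules per grain: %.1e" % molecules)
# about 2 x 10**9 -- the same ballpark as the 10 billion above, which
# is as close as these generalized averages pretend to be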

In digital photography, a semiconductor capacitor stores the electrons
(supplied by a battery) that a photo transistor allows in. The number of
electrons stored will depend upon how much light hit the photo transistor,
and for how long. Think of the photo transistor as an electron gate. The
stored charge will be a "voltage level." This voltage (at each individual
pixel site) is then applied to an analog to digital converter (A to D). The
output of the A to D is a number between 0 and 255, representing the amount
of light hitting the pixel. And don't forget, these pixels are read out
"one at a time". All one, two, four, or six million of them.

So now, in digital, we have 256 possible density levels, at a site that
is at least 5 microns by 5 microns. While in film, we have a grain site
that has 10 billion molecules. If it takes 1,000 silver atoms to
produce a developed "speck" on the film, we have 10 million possible
density/size levels producible at a silver grain site. If it takes
10,000 silver atoms to produce a developed "speck," we still have a
million possible density/size levels producible at a silver grain site.
All done at the atomic level. Without batteries, capacitors,
transistors, A/D's, wires, megabytes of memory, gigabytes of storage,
etc...
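
For anyone who wants to see that arithmetic spelled out (idealized,
exactly as in the text above):

# 256 fixed levels per digital pixel vs. speck counts per film grain
molecules_per_grain = 10e9                       # 10 billion
for atoms_per_speck in (1000, 10000):
    levels = molecules_per_grain / atoms_per_speck
    print("%6d atoms per speck -> %.0e levels" % (atoms_per_speck, levels))
print("  one digital pixel    -> 256 levels")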

There's more. Each pixel in a digital sensor sees light a little
differently than its neighboring pixel. If you took a photograph using
a raw, uncorrected sensor, it would look awful. In a good digital
camera, "each" sensor has to be calibrated. We have to test and
"record" how each pixel differs from a nominal pixel. This is called
PRNU (Photo Response Non-Uniformity) correction. Cheap digital cameras
(under $2000) use only "white balance" and approximately adjust each
pixel's output with regard to white. Good digital systems use true PRNU
correction. The PRNU correction table for a 2 megapixel sensor, without
compression, is six megabytes. So your professional digital camera has
to have six to twenty megabytes of memory available just for pixel
correction. And this correction has to be done on the fly, as pixels
are streaming out of the sensor into memory.
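
A minimal sketch of the idea, with made-up numbers (this is my toy
illustration, not any camera's actual firmware):

# Per-pixel PRNU correction: each pixel carries its own gain and
# offset, measured once on a flat, evenly lit test target
gains   = [1.00, 0.97, 1.03, 1.01]    # per-pixel gain (measured)
offsets = [2.0, -1.5, 0.5, 0.0]       # per-pixel dark offset (measured)

def correct(i, raw):
    # applied on the fly, as pixels stream out of the sensor
    v = (raw - offsets[i]) * gains[i]
    return min(max(int(round(v)), 0), 255)

raw_stream = [100, 103, 97, 100]      # same light, uneven response
print([correct(i, v) for i, v in enumerate(raw_stream)])
# prints values clustered near 100. Storing a couple of calibration
# values per pixel is exactly why the table for a 2 megapixel sensor
# runs to megabytes.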

Many digital cameras use sensors that have bad pixels. It is very
difficult to make a large (35mm size) sensor without faults. That's why
most consumer cameras use very tiny sensors, with 10mm to 15mm lenses
as the normal lens. The process of fabricating a large sensor is
extremely complex and full of problems, as it is with any large
semiconductor "chip". Good large sensors are very expensive to make and
expensive to purchase. So bad pixels must be handled in the camera.
Algorithms that give weighted averages to "previous" pixels go into
forming a density value for the bad pixel. We can only use "previous"
pixels because this process works as the pixels stream in, and the only
known values are from "previous" pixels.
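
In sketch form (the defect list and the 70/30 weights are my
assumptions, just to show the streaming constraint):

# Streaming bad-pixel replacement using only "previous" pixels
BAD = {2}                  # known-bad pixel indices (factory defect map)

def fix_stream(pixels):
    out = []
    for i, v in enumerate(pixels):
        if i in BAD and len(out) >= 2:
            # weighted average of the two most recent good values; we
            # can't look ahead -- those pixels haven't streamed out yet
            v = int(round(0.7 * out[-1] + 0.3 * out[-2]))
        out.append(v)
    return out

print(fix_stream([50, 52, 255, 54, 53]))   # the stuck 255 becomes 51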

Since a digital image is simply a set of samplings of the subject at
precise points, fine patterns in the subject will be recorded
incorrectly. The digitizing of anything has a "Nyquist" sampling
boundary, where the frequency of the source (subject) interferes with
the digitized output. Think of using a digital sensor for astronomical
photography, with two distant stars, side by side, that happen to focus
on adjacent pixels. The digital system will see them as a single
elongated spot. Not two distinct spots, or stars, as they would appear
on film. Going from analog to digital in any discipline causes problems
that didn't exist before. The real world is an analog world. Anytime
you digitize an analog representation, something will be lost. That's
the physics of A to D. As the sampling rate increases, the
representation of the analog source becomes more true. Unfortunately,
the sampling rate in a digital sensor is simply how closely packed and
how small the pixel sensors can be made. Well... the physics of
semiconductor manufacturing pretty much establishes the rules. And we
are up against the wall.
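
You can watch the Nyquist boundary bite with a toy one-dimensional
"sensor" (my illustration; any sampled system behaves this way):

import math

# One sample per pixel of a cosine test pattern of the given period
def sample(period_px, n=8):
    return [round(math.cos(2 * math.pi * i / period_px), 2)
            for i in range(n)]

print(sample(8.0))   # 8 pixels per cycle: recorded faithfully
print(sample(1.1))   # ~1 pixel per cycle: aliases into a slow wave
                     # (period ~11 pixels) that was never in the
                     # subject -- two stars smeared into one blob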

So... my feeling is that the "technological breakthrough" that will
propel digital photography into the forefront will be nothing that we
(lay people) are familiar with today. It will have to be the ability to
electronically control molecular bonds or electron orbits, and perhaps
have a build-up of, or depletion of, tagged electrons that can be read
out, perhaps radiometrically, in parallel. But your guess is as good as
mine.

Going to the level of actually competing with film will require a very,
very major breakthrough in electronic pixel recording. This is going to
take a long time by anyone's standards. The saying "every year, half
the cost and twice the performance" holds only over a finite time. Just
like optical microscopes. Just like propeller engines. Just like
electronics. You can only go so small and so fast for so long. And then
you hit the wall. And have to "invent" some new grandiose scheme. We
are pretty close to that wall in digital photography.

Digital photography is "different" from film photography. The whole
process, from front to back, is near the wall. It takes massive files
to store digital images at full resolution. Lossless compression has
been worked on, by wizards, for decades. Anything beyond a little
compression will degrade the image. Moving, storing, and carrying
digital images around is also a major headache, requiring lots of CPU
horsepower. The whole discipline requires lots of money.
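
A quick demonstration with plain zlib (nothing exotic; the data sets
are my own stand-ins for image content):

import zlib, random
random.seed(1)
smooth = bytes([i % 256 for i in range(100000)])                # gradient
noisy  = bytes([random.randrange(256) for i in range(100000)])  # noise
for name, data in (("smooth", smooth), ("noisy", noisy)):
    ratio = float(len(data)) / len(zlib.compress(data, 9))
    print("%s compresses %.1f to 1" % (name, ratio))
# the smooth gradient shrinks hundreds of times over; the noise, not
# at all. Real photographs sit in between, which is why only "a
# little" lossless compression is ever available.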

Digital photography, as we know it now, produces outstanding results
for the disciplines to which it is matched: consumer P&S, catalog
photography & production, and news photography. But for sheer recording
superiority, information content, ease of use, easy storage,
portability, easy viewing, comparative economy (unless you are a Leica
user!), outright cheapness (pinhole photography), coming in a little
can with no wires attached, and the ability to be digitized after the
fact with superior results, the answer is "film". Can you choose
between digital sensor "types"? High speed low res, low speed hi res,
and all in between? Color? B&W? No. You have to use film, or buy a
different digital camera, one matched to each type of photography you
want to do.

So until the breakthrough (don't hold your breath)... I'm a Leica kinda guy!

I Lika Leica

Jim