HomeRoast Digest


Topic: Computerized image analysis (100 lines)
1) From: Kevin DuPre
Eric,
Having spent one eighth of my 40-year life developing
just such a program for locating fiducial landmarks
in sequences of high-speed X-ray video of human and
animal joints under a variety of movements, I will
tell you that this is NOT a trivial process by any
means. Five years after we started, we ended up with
image processing instrumentation that could locate
2-D targets in an X-ray image with nearly 100%
rejection of non-target data.
I spent 3 months of 50-hour weeks alone just
developing the non-uniformity correction for the
camera and the calibration of image gray levels to a
standard reference.
The remainder of the time was spent developing
convolution sequences and convolution kernels (small
NxN images which represent complex mathematical
operations such as filters, edge detection, erosion
and dilation, etc.) and connecting them in the right
sequence to produce the expected result.
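Just to give a flavor of what a convolution kernel
looks like in code, here is a rough sketch in
Python/NumPy (my illustration for this list, not the
actual instrumentation code; assume gray_plane is a
grayscale image already loaded as a NumPy array):

import numpy as np

def convolve2d(image, kernel):
    # Slide a small NxN kernel over a grayscale image (edge padding).
    # Strictly this is cross-correlation; for symmetric kernels like
    # the Laplacian below it is identical to true convolution.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image.astype(float), ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return out

# A classic 3x3 Laplacian kernel, one example of the small NxN
# "images" described above, here acting as an edge detector.
laplacian = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

# edges = convolve2d(gray_plane, laplacian)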
Even if I had the computers to do this in a reasonable
time per frame, I don't have the time it would take to
develop the program, even given my experience and
knowledge.
An RGB digital photo is really 3 separate planes of
gray-level image, each representing either the values
of the individual color sensors in the camera or HSB
(hue-saturation-brightness) levels, depending on the
format of the image. GIF or JPG is not sufficient:
with GIF the image pixels are mapped to a very
limited color map (each pixel is a pointer to an RGB
or HSB value in the color map, with the color map
containing the actual values), and in some cases
run-length encoding is performed to compress the size
of the image file. JPEG is, or can be, highly
compressed and of no practical instrumentation value,
so you really need a high-resolution color camera
which outputs its data in a bitmap (uncompressed)
format, 3 values per pixel, one each for R, G, and B
(or H, S, and B).
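To make the "3 separate planes" concrete, a few lines
of Python with Pillow and NumPy will pull them apart
(the file name is made up, purely for illustration):

import numpy as np
from PIL import Image

# "beans.bmp" is an invented name; the point is that an uncompressed
# (or losslessly compressed) format such as BMP or TIFF preserves the
# 3 values per pixel, unlike GIF's limited color map or lossy JPEG.
img = Image.open("beans.bmp").convert("RGB")
pixels = np.asarray(img, dtype=float)

# The three separate gray-level planes described above.
red_plane   = pixels[:, :, 0]
green_plane = pixels[:, :, 1]
blue_plane  = pixels[:, :, 2]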
Once you have that, you need to take a blank image of
a uniformly lighted and pigmented reference "field".
This is because camera sensors are not linear over
their surface and actually contain significant
differences which must be subtracted out of the final
image before density correction. Both the subject and
the reference field must be lit using the exact same
source and intensity to perform a pixel-for-pixel
subtraction of the non-uniformity and recalibration
to the range of gray-level values. This removes
non-linearities of the image sensor and makes the
image suitable for instrumentation use. The
description sounds simple, but the program code to
implement it is far more complex.
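A bare-bones sketch of that subtraction and
recalibration step, assuming 8-bit grayscale frames
held in NumPy arrays (again an illustration only, not
the code we actually wrote):

import numpy as np

def non_uniformity_correct(subject, reference):
    # subject   -- grayscale frame of the beans
    # reference -- blank frame of the uniformly lit, uniformly pigmented
    #              field, shot with the identical source and intensity
    subject = subject.astype(float)
    reference = reference.astype(float)
    # Pixel-for-pixel subtraction of the sensor/lighting non-uniformity
    # (each pixel's deviation from the reference field's mean level).
    corrected = subject - (reference - reference.mean())
    # Recalibrate back into the 0-255 range of gray-level values.
    corrected -= corrected.min()
    if corrected.max() > 0:
        corrected *= 255.0 / corrected.max()
    return corrected.astype(np.uint8)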
A number of instrumentation techniques can then be
used to "flatten" the image field (remember that the
beans are 3-dimensional things, and a photograph of
them indicates the 3rd dimension by highlight and
shadow which vary across the organic and highly
irregular surface of the beans facing the camera) so
that a comparison can be made. That is the difficult
part of the program, the rest being relatively easier.
Who knows what algorithm or technique will do this -
that is the basis for additional research and
development.
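Purely as an illustration of one possibility (not a
technique I have tried on beans), subtracting a
heavily blurred copy of the image removes the slowly
varying highlight and shadow while leaving the local
pigment differences, along these lines:

import numpy as np

def flatten_shading(gray, kernel_size=31):
    # Estimate the slowly varying highlight/shadow pattern with a large
    # box blur, then subtract it so mostly local pigment variation
    # remains.  The direct loop keeps the sketch obvious; it is far too
    # slow for real per-frame use.
    gray = gray.astype(float)
    pad = kernel_size // 2
    padded = np.pad(gray, pad, mode="edge")
    background = np.zeros_like(gray)
    for y in range(gray.shape[0]):
        for x in range(gray.shape[1]):
            background[y, x] = padded[y:y + kernel_size,
                                      x:x + kernel_size].mean()
    flattened = gray - background + gray.mean()
    return np.clip(flattened, 0, 255)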
It's a great idea, but I'm not so sure that anyone has
the actual time it would take to develop a reliable
instrumentation technique to measure it.
The color chips and visual comparison would probably
be quicker, cheaper, and from a practical standpoint
more reliable as long as everyone was using the same
chips.
http://shop.store.yahoo.com/cinemasupplies/lasgreycar4x.html
=====
--
Kevin DuPre
obxwindsurf
http://profiles.yahoo.com/obxwindsurf
"The real voyage of discovery consists not in seeking
new landscapes but in having new eyes." -- Marcel Proust

