
Questions to Anders Uschold about digital technology


macrobild

Former member
Hi Anders,
I have a question for you.
A RAW file from a Canon 5D exposed at ISO 200 shows less noise in the shadows than an ISO 100 RAW file that is underexposed by 1 stop, i.e. the same light, the same number of photons hitting the CMOS.
What happens internally in the camera with the signal after the readout from the sensor?

Mikael
 

macrobild

Former member
From member Magnus Johansson:

Is it technically possible, or desirable, to have a common raw format, without the proprietary restrictions we see now? And why / why not?
 
macrobild wrote:
Hi Anders,
I have a question for you.
A RAW file from a Canon 5D exposed at ISO 200 shows less noise in the shadows than an ISO 100 RAW file that is underexposed by 1 stop, i.e. the same light, the same number of photons hitting the CMOS.
What happens internally in the camera with the signal after the readout from the sensor?

Mikael
Hello Mikael,

The problem is that the camera's internal signal processing is extremely complicated, and RAW is not raw data. From the technical point of view, your two exposures put the same number of photons onto the sensor, so the number of electrons and the primary voltage on the sensor should be identical. CMOS sensors are much more complex than CCDs: amplification modules and comparators are often located beneath each pixel and process that pixel's individual signal right there. Only then does the amplified signal leave the sensor. Usually the sensor signal is then linearised to optimise it for further processing.
During several further processing steps, the individual pixel information is composed into a matrix signal. Then demosaicing takes place and the RGB signal is converted to luminance and chroma. These two channels pass through a first individual noise and chromatic moiré compensation and are re-arranged into an RGB matrix signal. Next, frequency-based and partially scene-referred rendering functions eliminate dead pixels and dust on the sensor. Additionally, colour fringes are reduced and a first sharpening takes place.

As a next step, partial contrast and tonal reproduction is applied. Here, I assume, your problem is located. Even if the number of photons in your two shots is identical, the additional camera settings may introduce changes. You selected an underexposure of -1 EV. I suspect that the scene- or output-device-referred rendering of the camera induces a different 12-bit to 8-bit conversion, or even sets a different flag in the RAW file that carries over into your later RAW-to-TIFF conversion on the computer.

Unfortunately, RAW tuning is a highly confidential, classified part of the manufacturers' work. So much quality-related tuning takes place here that you will never get to know about it. I can only assume what happened.

The truth is that your camera will in no way provide unbiased and unprocessed data when you select RAW. Therefore this contradictory behaviour of your camera is no surprise to me.
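To make the difference tangible, here is a minimal numerical sketch in Python. It assumes a deliberately simplified sensor model - Poisson photon shot noise, a fixed read noise added after the analogue amplifier, and a 12-bit ADC - and all constants are illustrative, not Canon's actual values:

```python
import numpy as np

rng = np.random.default_rng(0)

FULL_WELL = 8000      # electrons at ADC full scale (illustrative)
READ_NOISE = 8.0      # electrons RMS, added after the amplifier (illustrative)
LEVELS = 4096         # 12-bit A/D conversion

def capture(mean_electrons, analogue_gain, n=100_000):
    """Shadow pixels: shot noise -> analogue gain -> read noise -> 12-bit ADC."""
    e = rng.poisson(mean_electrons, n).astype(float)        # photon shot noise
    v = e * analogue_gain + rng.normal(0.0, READ_NOISE, n)  # amplified signal + read noise
    return np.clip(np.round(v / FULL_WELL * LEVELS), 0, LEVELS - 1)

shadow = 40.0  # mean electrons a deep shadow collects at a full ISO 100 exposure

# ISO 200: one stop less light, compensated by 2x analogue gain before the ADC.
iso200 = capture(shadow / 2, analogue_gain=2.0)

# ISO 100 underexposed by 1 EV: the same light, but pushed 2x digitally afterwards.
iso100_pushed = 2.0 * capture(shadow / 2, analogue_gain=1.0)

print("shadow noise, ISO 200:       ", iso200.std())         # lower
print("shadow noise, ISO 100 + 1 EV:", iso100_pushed.std())  # higher
```

In this toy model the digitally pushed ISO 100 file shows clearly higher shadow noise, because the read noise and the quantisation steps are doubled along with the signal.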

My best regards,

Anders
 

macrobild

Former member
Hello Anders and welcome to Fotosidan here in Sweden.
I hope you have time to answer questions from other members as well.
Mikael

PS: I like the question from Magnus above.
 

Knight Palm

Active member
Questions on sensor technologies

1.
How do the Foveon X3® Direct Image Sensor and the Fujifilm Super CCD Type HR (High Resolution) and Super CCD Type SR (Super Dynamic Range) compare with respect to noise characteristics? I only have data for the Kodak full-frame CCD sensors and would like to know the performance relative to those. The Fujifilm is a CCD, though I doubt we have to call it full-frame or interline, but its octagonal pixel geometry allows for larger pixels. Isn't Sony also starting to use faceted pixels?

2.
The market trend with sensors, as it looks to me, is a transition from CCD to MOS techniques. Would MOS sensors also benefit from fancy geometries like the faceted pixels? Or would you instead use analogue circuitry close to the photodiodes to achieve the noise performance and limit the saturation signal? Could nonlinear analogue circuit behaviour close to the photodiodes later be compensated for in the succeeding digital domain? Do NMOS sensors have an advantage here over complementary MOS technology?

3.
Most sensor technologies seem to be able to accomplish Live View, with maybe the exception of the full-frame CCDs. Power consumption is also lower in MOS than in CCD sensors. If Johnson (thermal) noise were a problem, we would see heat pipes, fans and even Peltier cooling in the cameras. Therefore the dynamic range seems to be most affected by the absence and presence of the photons. The saturation signal would be easy to account for, since in a fast MOS sensor you can sample the sensor faster than the shutter speed and estimate when pixel saturation is reached. That leaves us to look at the sensor noise parameter as the limiting factor to focus on. Utilising the silicon area seems to be the best parameter to optimise, which Panasonic illustrates with their successful Maicovicon sensor. I would like to know how the Foveon X3® Direct Image Sensor compares to the Panasonic Maicovicon Live MOS Sensor in photosensitive area utilisation?

4.
Will each camera manufacturer still design and produce the image processor themselves in the future, or will that become a commodity product, as Texas Instruments showed in a reference design using their multimedia signal processor together with a Micron CMOS sensor at the time of PMA 2007? As long as the vendors can still keep their proprietary algorithms, implemented either in hardware or in software code within the development platform, and it allows for faster and easier verification, I think it could be an advantage. I think Pentax is already using third parties for their image processing pipeline.

Ref:
 


macrobild

Former member
Hello Sundvisson,
An earlier reply from Anders to me:

Mikael


> I will be on holiday from tomorrow until next Wednesday. So if
> there is something interesting, I'll be back from next Thursday on.
>
> Have a nice weekend,
>
> Anders
 
Re: Questions on sensor technologies

Wow, you're a tough guy :).

These are very interesting questions and I will be happy to answer them. My answers may be different from what you expect, because I do not base my evaluations on data sheets. It is good to know the photon capacity of a sensor, but I have seen too many different results from the same sensor under different post-processing.

There is just a little problem right now. Tomorrow my family and I leave for a five-day holiday, and I still have to pack my luggage. I did not expect Mikael's invitation today :-O.

Please allow me to answer these questions late on Wednesday or on Thursday, when I am back. I hope that is OK for you.

Best regards,

Anders
 

Makten

Active member
Hi Anders!

I have some questions related to the discussion below. No hurry, I've understood that you're away for a couple of days.

Anders_Uschold wrote:
Amplification modules and comparators are often located beneath each pixel and process that pixel's individual signal right there. Only then does the amplified signal leave the sensor.
This is the most interesting part! Irrespective of CMOS or CCD, the signal is amplified before being digitised, right? And the amplification gain depends on the ISO speed rating?

Usually the sensor signal is then linearised to optimise it for further processing.
During several further processing steps, the individual pixel information is composed into a matrix signal. Then demosaicing takes place and the RGB signal is converted to luminance and chroma. These two channels pass through a first individual noise and chromatic moiré compensation and are re-arranged into an RGB matrix signal. Next, frequency-based and partially scene-referred rendering functions eliminate dead pixels and dust on the sensor. Additionally, colour fringes are reduced and a first sharpening takes place.
I presume these are all digital processing steps?

The reason for these questions is that I suppose noise increases if the analogue signal is low and the exposure is then corrected digitally, compared to the analogue signal from the sensor being amplified further (a higher ISO speed?), so that no digital exposure compensation would be needed.

Best regards / Martin
 
Makten wrote:
Hi Anders!

I have some questions related to the discussion below. No hurry, I've understood that you're away for a couple of days.


This is the most interesting part! Irrespective of CMOS or CCD, the signal is amplified before being digitised, right? And the amplification gain depends on the ISO speed rating?


I presume these are all digital processing steps?

The reason for these questions is that I suppose noise increases if the analogue signal is low and the exposure is then corrected digitally, compared to the analogue signal from the sensor being amplified further (a higher ISO speed?), so that no digital exposure compensation would be needed.

Best regards / Martin
Hello Martin,

Usually there is analogue amplification before A/D conversion. Digital amplification is also possible, but if you amplify a stepwise signal, i.e. the digital signal, no matter whether at 8 bits or at 10-11 bits, you widen the step width.

Regarding tonal reproduction, this may lead to a loss of tonal levels, i.e. posterisation.

Regarding noise, it leads to a second-level classification, or spreading, of the noise. Imagine the distribution of noise caused by the distribution of photons. The smallest unit is one photon, so analogue noise, or photon noise, is finely sampled. For A/D conversion you must divide the number of photons by the smallest photon count your A/D system needs for one conversion step.

For example: a CCD well with a maximum capacity of 8,000 photons and a photon count of 8 photons per signal level gives 8000/8 = 1000 levels; that is 10 bits at most. Even if two sensor cells differ by just one photon, this may lead to two different brightness levels. If you amplify the analogue signal 2x, a one-photon difference behaves like a two-photon difference, which after A/D conversion still leads to a single brightness-level difference. But if you amplify just these two neighbouring sensor cells in the digital workflow, you spread the noise by a factor of 2, i.e. a difference of two brightness levels.
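As a toy version of this example in Python, with an ideal quantiser of exactly 8 photons per level:

```python
# Toy version of the 8000-photon example: an ideal ADC with
# 8 photons per signal level gives 8000/8 = 1000 levels (~10 bits).
PHOTONS_PER_LEVEL = 8

def adc(photons: int) -> int:
    """Ideal quantiser: one output level per 8 photons."""
    return photons // PHOTONS_PER_LEVEL

a, b = 1007, 1008          # two neighbouring cells, one photon apart

# 2x analogue gain BEFORE quantisation: the one-photon gap becomes a
# two-photon gap, still below one 8-photon step -> one level apart.
print(adc(2 * a), adc(2 * b))    # 251 252

# 2x digital gain AFTER quantisation: the existing one-level gap is
# itself doubled -> two levels apart, i.e. the noise is spread.
print(2 * adc(a), 2 * adc(b))    # 250 252
```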

Best regards,

Anders
 

macrobild

Former member
Hello Anders,
I hope you have a good holiday with the family.
What about Sundvisson's four questions above about the different sensors?
Later on I have a question about Fuji's dual pixels and dynamic range vs. Canon's CMOS.


Mikael
 
To TI Sundvisson

Sorry, I cannot quote your posting directly, because you have added charts to it. Therefore I will copy, paste and mark your questions:

> Questions on sensor technologies
>
> 1. How do the Foveon X3® Direct Image Sensor and the Fujifilm Super CCD Type HR (High Resolution) and Super CCD Type SR (Super Dynamic Range) compare with respect to noise characteristics? I only have data for the Kodak full-frame CCD sensors and would like to know the performance relative to those. The Fujifilm is a CCD, though I doubt we have to call it full-frame or interline, but its octagonal pixel geometry allows for larger pixels. Isn't Sony also starting to use faceted pixels?

From my tests, the Foveon suffers from low photon capacity, so its dynamic range is limited. We have also seen strange complementary chromatic noise of very low frequency with the SD14. Unfortunately they do not pass their signals through an anti-aliasing filter, so you get very strong chrominance moiré.

The Fuji SuperCCD is the highest-performing sensor in high-ISO dynamic range. Combining a small and a high-capacity sensor cell is one of the smartest approaches, in my opinion. The other side of the coin is soft highlight reproduction. The F30 is the only camera that deserves to be called high-ISO-stabilised :). All other brands fail, or do cheap tricks like pixel binning with severe loss of resolution.

> 2. The market trend with sensors, as it looks to me, is a transition from CCD to MOS techniques. Would MOS sensors also benefit from fancy geometries like the faceted pixels? Or would you instead use analogue circuitry close to the photodiodes to achieve the noise performance and limit the saturation signal? Could nonlinear analogue circuit behaviour close to the photodiodes later be compensated for in the succeeding digital domain? Do NMOS sensors have an advantage here over complementary MOS technology?

Sorry, this exceeds my knowledge. One of the major reasons for the success of MOS technology is cheaper and simpler production, together with lower power consumption - an essential aspect for low noise and fast signal readout at high frame rates.

> 3. Most sensor technologies seem to be able to accomplish Live View, with maybe the exception of the full-frame CCDs. Power consumption is also lower in MOS than in CCD sensors. If Johnson (thermal) noise were a problem, we would see heat pipes, fans and even Peltier cooling in the cameras.

We already have some of these strategies: usually the sensor is placed close to the metal housing to drain heat, and some earlier cameras had Peltier cooling.

> Therefore the dynamic range seems to be most affected by the absence and presence of the photons. The saturation signal would be easy to account for, since in a fast MOS sensor you can sample the sensor faster than the shutter speed and estimate when pixel saturation is reached. That leaves us to look at the sensor noise parameter as the limiting factor to focus on. Utilising the silicon area seems to be the best parameter to optimise, which Panasonic illustrates with their successful Maicovicon sensor. I would like to know how the Foveon X3® Direct Image Sensor compares to the Panasonic Maicovicon Live MOS Sensor in photosensitive area utilisation?

Sorry, this exceeds my knowledge.

> 4. Will each camera manufacturer still design and produce the image processor themselves in the future, or will that become a commodity product, as Texas Instruments showed in a reference design using their multimedia signal processor together with a Micron CMOS sensor at the time of PMA 2007?

From my experience, there are far more third-party technologies in your cameras than you would ever expect. Don't believe that even the fiercest competitors do not partially share their plant capacities :).

> As long as the vendors can still keep their proprietary algorithms, implemented either in hardware or in software code within the development platform, and it allows for faster and easier verification, I think it could be an advantage. I think Pentax is already using third parties for their image processing pipeline.

The biggest limitation is not the skill of the engineers or the plants; it is patents. As long as a manufacturer is not allowed to use a key technology, it faces problems.

Another key factor has not been mentioned yet: pricing and priorities. Canon, for example, partially uses low-end mechanical quality in their lenses but spends most of its energy on processing performance; they process their images close to the maximum. Olympus puts the greatest effort into their lenses and needs less processing performance - unfortunately they do not yet provide an appropriate professional body. From the optical point of view, the 7-14, 35-100 or 150 have no competitor, by far.

Please ask some questions regarding lenses or the practical results of cameras; I cannot fulfil your expectations regarding sensor architectures.

Anders
 

Knight Palm

Active member
Many thanks for all the interesting answers

Hello Anders Uschold,

Sorry that, with my propeller hat on, most of my speculative questions were spinning around sensor architectures. Still, I found your answers informative, and I appreciate both your experienced answers and that you have taken the time to answer those questions here.

I understand that capturing the photons is an important step at the beginning of the image flow, which gives e.g. the mentioned F30 a good start.

So patents, like the lens mount, are a way for several vendors to survive in their own sandbox within the larger playing field. As long as some key benefits exist for each vendor, that is enough to keep them surviving. You can patent implementations but not the idea, as we are seeing with at least three different CCD-shift in-body image stabilisation techniques.

You mention image processing performance. Will we still rely on JPEG, or are other picture standards on the horizon within, say, the next decade? There is sometimes talk about the dynamic range limitations of JPEGs, especially now that we see cameras improve on this parameter. Will raw files still be kept company-proprietary? Or will raw files become less important over the next years as out-of-camera image quality improves? (I use raw files more for adjusting exposure and contrast and less for WB adjustments, so a larger latitude in my JPEGs could reduce the need for raw files.)

I read with interest a test just published in GP (Göteborgs-Posten) that your company produced for them, in which the long-zoom cameras were tested. Are we still going to see the pixel race continue? I just read about a 1/1.8" 12-megapixel Foxconn DS-C350 digital camera; aren't we seeing any effects of diffraction yet? Or is diffraction hidden in the available resolution of the lenses and also disguised in the processing? Among the dSLRs, I expect the Four Thirds mount to be the first affected by diffraction. How limiting is diffraction in current dSLRs on the market?

I posted on another forum a question regarding lens/body calibration, which seems to me a relatively common problem. What is the reason behind this? Are there any fundamental differences in the way autofocus works on different cameras? Or is it the support for legacy lenses that complicates the situation? So why do both the camera body and the lens sometimes need to be taken to the service centre for calibration of AF performance with respect to front/back focus?


Thanks again for taking the time to discuss with us here on this forum. Thanks also to Mikael Risedal, who invited you here.
 
Re: Many thanks for all the interesting answers

Hello IT Sundvisson,

Sundvisson wrote:
Hello Anders Uschold,

Sorry that, with my propeller hat on, most of my speculative questions were spinning around sensor architectures. Still, I found your answers informative, and I appreciate both your experienced answers and that you have taken the time to answer those questions here.
Thank you very much. It is my pleasure. In some German communities people feel they have the absolute right to get answers to all of their questions, no matter how much time the "interrogated" spends or whether he touches on confidential aspects.

Therefore I appreciate your words!


Sundvisson wrote:


I understand that capturing the photons is an important step at the beginning of the image flow, which gives e.g. the mentioned F30 a good start.

So patents, like the lens mount, are a way for several vendors to survive in their own sandbox within the larger playing field. As long as some key benefits exist for each vendor, that is enough to keep them surviving. You can patent implementations but not the idea, as we are seeing with at least three different CCD-shift in-body image stabilisation techniques.
This is an excellent example: the idea of stabilising by shifting the sensor is free, but the drive technology isn't. This detail is extremely interesting, as it shows many tradeoffs between technologies. The driving technology of a sensor is closely related to the driving technology of an AF motor:

1. Fastest and most precise are piezo-ceramic drives, known as "ultrasonic" or AF-S or HSM. In reality this is a pure marketing word; ultrasonic waves don't occur anywhere in those lenses :). The disadvantages of this technology are lower sturdiness and mechanical wear: the ceramic surfaces show abrasion during long-term use and lose accuracy and reliability. KonicaMinolta and Sony feature piezo-ceramic drives, as do the lens manufacturers mentioned above.

2. Second, and most conventional, are magnetic spin drives, which Olympus uses for sensor stabilisation and lens drives. This technology is slower but shows less mechanical wear.

3. Magnetic positioning is used by Pentax/Samsung for their sensor drive. This technology is contactless and therefore shows no mechanical wear or deterioration at all. But we have no experience of the power consumption, speed and interference of these extremely strong permanent magnets around the sensor with their environment.

Personally, I would not wear a mechanical watch on my wrist while holding one of these cameras *lol*.


Sundvisson wrote:
You mention image processing performance. Will we still rely on JPEG, or are other picture standards on the horizon within, say, the next decade? There is sometimes talk about the dynamic range limitations of JPEGs, especially now that we see cameras improve on this parameter. Will raw files still be kept company-proprietary? Or will raw files become less important over the next years as out-of-camera image quality improves? (I use raw files more for adjusting exposure and contrast and less for WB adjustments, so a larger latitude in my JPEGs could reduce the need for raw files.)
The truth about JPEG and its amazing persistence is pricing: JPEG 2000 proved to be much better, but the inventors asked for licence fees. Therefore the whole industry ignored the quality aspect and preferred the economic one. To be fair, I do not assume there weren't some who thought about switching to the better standard. But it is enough if just one or two global players keep the free old JPEG: they would gain an advantage in price competition, and the result for the "sincere brands using JPEG 2000" would have been to lose market share. The horrible truth is that the common user doesn't care about quality; he cares about money.

Regarding dynamic range: people and companies often rely on the sensor's or the A/D converter's dynamic range. This doesn't match reality. A lens has a dynamic range too. Imagine a global flare - a homogeneous mist all over the image caused by internal reflections and diffusion - of 0.5 %. This is 1/200, or roughly 1/2^8, which represents a limitation of about 8 stops or 48 dB. Now, lens flare is additive to the scene signal, so the shadows of the projected image will not be cut off, but softened. Still, we often see that a poor lens limits a camera's input dynamic range significantly. By the way, that's not new: with film, too, only excellent lenses provided brilliant images with black shadows and bright highlights :).

A comparison of the Fuji S3Pro showed:

- 8.7 stops using the Nikon 18-70 3.5-4.5
- 9.0 stops using the Nikon 60 2.8 Micro

Both are good lenses, but a prime lens often has fewer elements and therefore better flare performance.
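As a quick sketch, the flare-limited range above can be computed exactly like this (0.5 % is rounded to 1/2^8 above; one stop is ~6.02 dB):

```python
import math

flare = 0.005                          # 0.5 % veiling glare (global flare)
stops = math.log2(1.0 / flare)         # ~7.6 stops, i.e. roughly 1/2^8
decibels = stops * 20 * math.log10(2)  # ~6.02 dB per stop -> ~46 dB

print(f"flare-limited range: {stops:.1f} stops ~ {decibels:.0f} dB")
```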

Sundvisson wrote:
I read with interest a test just published in GP (Göteborgs-Posten) that your company produced for them, in which the long-zoom cameras were tested. Are we still going to see the pixel race continue? I just read about a 1/1.8" 12-megapixel Foxconn DS-C350 digital camera; aren't we seeing any effects of diffraction yet? Or is diffraction hidden in the available resolution of the lenses and also disguised in the processing?
From the technical point of view, the pixel race is already over for the small compacts. They are limited by diffraction and only gain noise. And ALL engineers at the big brands know that. My advice is: buy a 7-megapixel compact - don't touch a 10 MP one, and don't even glance at 12 MP or higher.

The problem is the marketing trap all companies are caught in. They promoted pixel numbers for years and years. How do you explain the senselessness, and the limitation set by the universe itself (optical diffraction), to the "stupid" customer? Well, customers have been made stupid by conditioning them to simple numbers like pixels. What do you think is the maximum resolution of all those 2400 to 9600 dpi printers on paper? We found 400 dpi an excellent mark; something like 600 dpi is our current high score. And almost all flatbed scanners don't provide more than 800-1200 dpi, and only in the sweet spot - no matter how much they claim.


Sundvisson wrote:
Among the dSLRs, I expect the Four Thirds mount to be the first affected by diffraction. How limiting is diffraction in current dSLRs on the market?
You are right in theory - 4/3 suffers from diffraction earlier than APS. But on the other hand, the smaller sensor and the complete break with the restrictions of the 35-mm mount enabled them to harmonise the lens pupils, the sensor specs and the microlens shift with the entrance pupil. The smaller sensor diameter also allows faster maximum apertures to be built.

These conditions allow lenses with "open-aperture performance" to be built at an affordable price. The usable or preferable aperture range is limited by open-aperture restrictions at one end and by diffraction at the more closed apertures at the other. If a lens already faces diffraction at f/9.5 but performs highly from f/2 on, it beats a lens that performs highly only from f/4 to f/11 - no matter whether that lens has a soft maximum aperture of f/2.8 or f/2.

Regarding film, the problems of diffraction and limited aperture ranges were equivalent; those lenses did not perform any better. The problem of digital photography is the so-called "new visibility" of limitations. Who among you owns a 20x magnifying loupe and spent their time examining slides on a light table? Today that examination is just a <ctrl+0> click away :).

When we tested medium format lenses with more than 20 megapixels, what we found was optical limitations. So don't worry about the pixel count - 10 to 12 is enough from my point of view.
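As a back-of-the-envelope illustration of the diffraction argument: the diameter of the Airy disc for green light is about 2.44 · λ · N, and once it clearly exceeds the pixel pitch, extra pixels stop buying resolution. The 2-micron pitch below is my own rough figure for a 12 MP 1/1.8" compact, for illustration only:

```python
def airy_diameter_um(f_number, wavelength_nm=550.0):
    """Airy disc diameter (to the first dark ring) in microns: 2.44 * lambda * N."""
    return 2.44 * wavelength_nm * 1e-3 * f_number

PIXEL_PITCH_UM = 2.0  # illustrative pitch of a 12 MP 1/1.8" compact sensor

for n in (2.8, 4.0, 5.6, 8.0):
    d = airy_diameter_um(n)
    print(f"f/{n}: Airy disc ~ {d:.1f} um ({d / PIXEL_PITCH_UM:.1f} pixels)")
```

Already at f/2.8 the disc covers almost two such pixels, and by f/5.6 it covers nearly four.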


to be continued ..
 
... continued

Sundvisson wrote:


I posted on another forum a question regarding lens/body calibration, which seems to me a relatively common problem. What is the reason behind this? Are there any fundamental differences in the way autofocus works on different cameras? Or is it the support for legacy lenses that complicates the situation? So why do both the camera body and the lens sometimes need to be taken to the service centre for calibration of AF performance with respect to front/back focus?
From my experience, this is the tradeoff for lens compatibility. The whole photographic world was based on a resolution circle (circle of confusion) of 30 microns on film, or 15 microns for some enthusiasts. So all components had to meet that spec. Now we have sensors with a 4-micron pixel pitch on one hand, and on the other the wish of thousands of photographers to keep working with their 30-micron-specified gear. Well, what do you expect? The 35-mm system was never designed for this extended resolution; how could it meet the tolerances? It was a pleasant marketing lie of all those companies who claimed "full compatibility" of all lenses with all cameras. OK - the aperture works, metering works, lenses can be mounted - from the legal point of view this may be called compatibility. But not from the performance point of view.
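To put rough numbers on this tolerance argument, one can use the classical approximation that the total depth of focus at the image plane is about 2 · N · c, where c is the resolution circle:

```python
def depth_of_focus_um(f_number, coc_um):
    """Approximate total depth of focus at the image plane: ~2 * N * c."""
    return 2.0 * f_number * coc_um

# Film-era spec (30 um resolution circle) vs a digital one (~8 um,
# about two 4-um pixels), both at f/2.8:
for coc in (30, 8):
    print(f"c = {coc:>2} um -> focal-plane tolerance ~ {depth_of_focus_um(2.8, coc):.0f} um")
```

The mechanical budget shrinks by roughly a factor of four, which is why gear built to the 30-micron spec starts to show front/back focus on a 4-micron sensor.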

And the worldwide lie succeeded perfectly. Those who mentioned these limitations lost the game; those who hid the truth - sorry, this is my personal opinion - won the game. Olympus, who without doubt proved optical excellence in the analogue past, could not convince all those pros. In the beginning, familiarity with the well-known lenses, bodies, handling functions etc. was of higher importance, as were the lower costs of keeping existing lens sets.

To be fair, Olympus could not offer a competitive body; they invested too much of their budget into providing a digitally optimised lens line-up first, and the sensor technology at the time the E-1 was designed was still under rapid development.

To be fair again, many pros never noticed the reduced performance - so maybe we, the quality enthusiasts, are not representative of the photographic world.

The truth lies somewhere in between. But the focus problems, or the extreme full-format corner shading, or the elimination of chrominance information to pretend low chromatic artefacts at fine details, or the hidden built-in software dust-removal algorithms, or the centre-to-corner contrast enhancement, or ..... have never been surprising. They are tradeoffs to compensate for the limitations of an architecture that was not designed for digital use.


Just for your information:

Personally, I shoot B/W film on manual-focus cameras from 1920 to 1980, develop it myself and print on fibre-base paper. My private life is a "non-digital area" *lol*.

Sundvisson wrote:


Thanks again for taking the time to discuss with us here on this forum. Thanks also to Mikael Risedal, who invited you here.
It's my pleasure,

Anders
 
Re: Many thanks for all the interesting answers

Sundvisson wrote:
I posted on another forum a question regarding lens/body calibration, which seems to me a relatively common problem. What is the reason behind this? Are there any fundamental differences in the way autofocus works on different cameras? Or is it the support for legacy lenses that complicates the situation? So why do both the camera body and the lens sometimes need to be taken to the service centre for calibration of AF performance with respect to front/back focus?
There are some more aspects regarding AF and "digitally designed" lenses. A rather unknown optical defect is the variability of the back focal distance. Just to explain:

You have a lens like a 28-70/2.8 set to the 40 mm zoom position. Due to the automatic diaphragm, the aperture remains open until the film/sensor is exposed. Almost synchronously with the mirror moving up, the aperture is closed to the selected setting, e.g. f/5.6. The interesting aspect is that a lens may vary its back focal distance at different apertures. The distance from the rear element to the focal plane may be 20 mm at f/2.8, 20.2 mm at f/4 and maybe 19.8 mm at f/8. But the AF sensors always see the image at maximum aperture, due to the automatic diaphragm mentioned above.

In reality that means that if you focus perfectly at maximum aperture and stop down to f/4, your image is out of focus due to the change of back focal distance.

Now you might assume that the increase in depth of focus will compensate for the shift - in many cases it doesn't. And depth of focus depends on your resolution circle.

In analogue times we had the situation that many lenses showed an unwanted focus shift, because they were always focused at maximum aperture. But why didn't we face those problems? Because film had the more tolerant 30-micron resolution circle, and the depth of focus based on those 30 microns was correspondingly larger.

Now we are down to a 4-micron pixel pitch, which represents a resolution circle of around 7-10 microns - this is heavily debated among specialists :). You can see defects and back/front focus that went unseen in analogue times, and DOF can compensate for these effects to a lesser degree. The solution is called AF offset: the camera sends the selected aperture to the lens. The lens' CPU contains an AF-offset table and returns a correction value according to the zoom position, aperture and focal distance. The camera then focuses at open aperture to the "open-aperture focal distance". Now comes the trick: while the mirror moves up and the aperture closes, the camera shifts the focus by the AF-offset value delivered by the lens. We found a Canon 24-105 that changed focus during the exposure from the 1 m mark to close to the 0.7 m mark on the distance scale. That is a huge offset!
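As a sketch of how such an AF-offset table might work - the structure and all values below are invented purely for illustration, and a real lens would also index by focal distance:

```python
# Hypothetical AF-offset table as described above: the lens CPU maps
# (zoom position, working aperture) to a focus correction that the body
# applies while the mirror moves up. All values are invented.
AF_OFFSET_UM = {
    (24, 2.8): 0,   (24, 4.0): +15,  (24, 8.0): -10,
    (105, 2.8): 0,  (105, 4.0): +40, (105, 8.0): -25,
}

def focus_correction_um(zoom_mm: int, aperture: float) -> int:
    """Correction applied after open-aperture AF, in microns of focus travel."""
    return AF_OFFSET_UM.get((zoom_mm, aperture), 0)

# The body focuses wide open, then, while stopping down to f/4 at 105 mm,
# shifts the focus group by the offset the lens reported.
print(focus_correction_um(105, 4.0))   # +40
```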

You can imagine that this correction table requires a sophisticated calibration of the optical system. But I suspect that the lenses are not individually calibrated - far too expensive. So whenever a lens is close to or outside its tolerances, its real change of back focal distance may deviate from the theoretical one, and its correction table is no longer correct.

Other aspects are closed-loop versus open-loop focus control, or mechanical play in the AF components. But now I have to prepare my monthly tax declaration; I'll be back later.

Image quality or camera performance is like a spider's web. There is never just one aspect you can focus on; each one is linked to all the others, and if you touch one, the whole web moves.


Best regards,

Anders
 

Knight Palm

Active member
Hello Anders Uschold,

Thanks for your detailed and anticipatory answers to many of my questions. I will have to read through them carefully and digest the content in pieces.

I find your comments on diffraction and pixel requirements encouraging, as well as your elaboration on JPEG - and also the point about dynamic range: the entire picture chain must have optimal links, from the motif through the air, onto the sensor and out of the image processing pipeline.

I also now understand better from your writing that the requirements on the autofocus system are quite complex, and how they increase with higher-resolution sensors.

I also have one comment regarding the CCD-shift driving technology. I think it is Olympus that is going to use what they call blur correction provided by an image stabiliser unit with breakthrough SWD (Supersonic Wave Drive).

Best regards, Tore

P.S.:
Good to hear you're enjoying the craftsmanship of classic photography. I took my first B/W photos with a 120-film 6x9 camera - no worries about noise or enough pixels then. I plan to scan those in at some point.
 

Anders74

Active member
Re: Re: Many thanks for all the interesting answers

Hi Anders,

Interesting reading, but there is one thing I can’t really understand:

Anders_Uschold wrote:

The interesting aspect is that a lens may vary its back focal distance at different apertures. The distance from the rear element to the focal plane may be 20 mm at f/2.8, 20.2 mm at f/4 and maybe 19.8 mm at f/8. But the AF sensors always see the image at maximum aperture, due to the automatic diaphragm mentioned above.

In reality that means that if you focus perfectly at maximum aperture and stop down to f/4, your image is out of focus due to the change of back focal distance.

Do you have any physical background or references that explain that phenomenon?
 

photodo

Active member
Re: Re: Re: Many thanks for all the interesting answers

Anders74 wrote:
Hi Anders,

Interesting reading, but there is one thing I can’t really understand:

Do you have any physical background or references that explain that phenomenon?
Hello Anders U and all others. It is really good to read your comments on very interesting questions, Anders. Thank you, Mikael Risedal, for bringing Anders into this forum.
Hasselblad is the perfect example of a camera company that uses digital corrections to address image-quality limitations caused by lens performance. When you take a picture with a Hasselblad H camera, focus is adjusted according to the f-number used - and that is only one of many corrections involved. Focus shift due to stopping down the aperture is a well-known fact among large-format photographers of the old days. Read the interesting story written by Ansel Adams about his preparations when taking his famous picture Moonrise, Hernandez, New Mexico.

Lars Kjellberg
 

Anders74

Active member
Thank you for providing some clues, Lars; they gave me some ideas, but I haven't found a full description of the preparations by Ansel Adams, including focus shift, nor of the focus compensation made by Hasselblad H cameras.

Two common areas where “focus shift” appears:
1) Focus shift at wavelengths outside the visible spectra (IR and UV)
http://www.cartage.org.lb/en/themes/arts/photography/fieldskinds/scientificph/medscient/reflectultraviol/opticalcon/optical.htm

2) Focus shift while changing focal length by zooming.
http://www.imx.nl/photosite/technical/lensdesigns/t003.html

There are, however, some (rare) texts that imply the problem exists:
http://www.robertwhite.co.uk/zeiss.htm
“Carl Zeiss T* ZM - Leica M mount fit lenses are specifically designed to minimize focus shift with aperture changes - an important innovation with big benefits for rangefinder photography.”
It could, however, be a statement invented by the marketing department rather than by the optical designers ;)

The only physical explanation I have found is the slight difference in optimal focus distance between best contrast and best sharpness, found in some spherical lenses. The best compromise (contrast vs. sharpness) might change (a little) when those lenses are stopped down. In today's lenses, often with aspherical elements, this no longer seems to be an issue, and I haven't found any evidence of this being compensated for in Canon cameras.
http://www.imx.nl/photosite/leica/mseries/SummiluxASPH/s14-50.html

Anders Uschold wrote:
“We found a Canon 24-105 to change the focus during the exposition from the 1m mark close to the 0.7m mark on the distance scale. That is a huge offset!”

The slight compensation (discussed above) needed to achieve best focus (for best sharpness) when a lens is stopped down couldn't explain such a huge offset, even if the lens were of an old design with a lot of spherical aberration.

Any comments that could explain this phenomenon, or hard proof of Canon using focus compensation, are most welcome! Is there some diffraction involved in the explanation?
 