To what end(s)?
I have nothing against counter-intuitive theories; after all, progress depends on such ideas being formulated, tested, and refined. However, counter-productive theories that fail to predict phenomena already known, or that muddle their understanding, are less than useful. It is a simple fact that the 'equivalence theory' fails us in understanding why there should be a difference between film and a digital sensor regarding dynamic range when only a part of the "sensor" (a strip of film, say, or a DX crop of an FX frame) is used to capture the illumination. We *know* from film that a piece of a film sheet responds identically to the entire sheet. It is not only intuitive; it has been used in practice for ages. So why should a digital sensor be magically different? The proponents of the 'equivalence theory' repeatedly claim there is an area dependency involved. See, for example, the calculations for DX to FX in an earlier post.
The area dependency only applies for measurements performed on the standardized output size. Thus, you would have to blow up the cut pieces of film to different degrees, changing the secondary magnification. The graininess of the output would depend on the size of the piece of film that you used.
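The enlargement argument above can be sketched numerically. This is a minimal illustration, assuming (as the standard model does) that visible grain or noise in a fixed-size output scales with the square root of the linear magnification squared, i.e. with the area ratio; the function name and the specific areas are my own choices for illustration:

```python
# Illustrative sketch: noise visible in a standardized output depends on how
# much the captured area must be enlarged to reach that output size.
# Assumption: output-referred noise scales as sqrt(reference_area / capture_area).

def output_noise(capture_noise_std, capture_area_mm2, reference_area_mm2=864.0):
    """Noise in the standardized output, relative to a full 36x24 mm capture.

    A smaller piece of film (or sensor crop) must be enlarged more, which
    magnifies its graininess by sqrt(reference_area / capture_area).
    """
    magnification = (reference_area_mm2 / capture_area_mm2) ** 0.5
    return capture_noise_std * magnification

full_frame = output_noise(1.0, 864.0)  # whole 36x24 mm sheet
dx_piece   = output_noise(1.0, 372.0)  # ~24x15.5 mm piece of the same sheet
# the smaller piece looks roughly 1.5x grainier at the same print size
```

The point of the sketch is that the piece of film itself responds identically everywhere; only the secondary magnification needed for the standardized output differs.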
I do not claim any other area dependency, but I claim that the described area dependency is relevant to discussions of the image quality that can be achieved with a given system. I'm fine using both terminologies interchangeably because they are completely synonymous.
It all boils down to a small piece of film or a photosite having a log D-E curve of basically the same functional shape. Film has a "toe" (the analogue of shadow noise in digital) and much more headroom at the upper end of the curve, simply because the linear log D-E relation breaks down gradually; a digital sensor clips more brutally. No need to dismiss film because it "lacks full-well capacity": it certainly has one, but it manifests differently. Then again, film and digital are different recording media. In any case, film is not area-dependent in dynamic range, meaning the range of light values that can be recorded; digital is claimed to be. This assertion has been posted over and over again here;
I maintain that I did not claim what you understood.
The area dependence applies to a specific definition of DR, relating to the signal measured on a standardized output size.
I did not dismiss film in any way; I was merely pointing out that the upper limit of DR is not a precise point as it is in a digital sensor, where it is an exact number of electrons.
The per-pixel DR is not always useful. Two sensors of the same size might have vastly different photosite sizes, leading to very different per-pixel DR, yet still produce final images with similar noise properties. If all sensors had the same size but differed in pixel count, we could define a DR per unit area and use it to compare them. But since sensors come in different sizes as well as pixel counts, Bill Claff and others propose a notion of DR measured on a standardized output.
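The same-size/different-photosite point can be made concrete. The sketch below assumes a simple model where averaging k photosites per output pixel lowers the noise floor by sqrt(k), adding 0.5*log2(k) stops; the full-well and read-noise numbers are invented for illustration, not measurements of any real sensor:

```python
import math

def normalized_dr_stops(full_well_e, read_noise_e, n_pixels, ref_pixels=8_000_000):
    """Per-pixel DR plus the gain from resampling to a reference pixel count.

    Per-pixel DR (stops) = log2(full well / read noise).
    Resampling to ref_pixels averages k = n_pixels / ref_pixels photosites
    per output pixel, reducing the noise floor by sqrt(k).
    """
    per_pixel = math.log2(full_well_e / read_noise_e)
    return per_pixel + 0.5 * math.log2(n_pixels / ref_pixels)

# Two hypothetical same-size sensors with different photosite counts:
coarse = normalized_dr_stops(90_000, 5.0, 16_000_000)  # big photosites
fine   = normalized_dr_stops(22_500, 2.5, 64_000_000)  # small photosites
# their per-pixel DR differs by a full stop, yet the output-referred DR matches
```

In this toy model the fine-pitched sensor loses a stop per pixel but wins it back at the standardized output, which is exactly why the output-referred definition is preferred for comparing systems.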
sometimes stating that one "throws away" dynamic range by downsampling the initial file. This simply ignores the fact that the entire chain up to the final output (a print of fixed size, another of those constraints that remove degrees of freedom) tends to "waste" information, since much more is recorded initially than can be presented at the final stage. We expressly don't need 36 MPix cameras to make a 30x40 cm print or to post a picture of 2000x1300 pix on a web page. Most of the data collected at the first step is discarded, unless one wants mural prints; and in that particular case, few of today's consumer models deliver enough pixels to avoid significant upscaling, which degrades the output in its own manner.
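The 30x40 cm claim is easy to check with arithmetic. A short sketch, assuming a conventional 300 ppi print resolution (the exact threshold is debatable, and the function is mine):

```python
def pixels_needed(width_cm, height_cm, ppi=300):
    """Pixel dimensions and count needed for a print at a given resolution."""
    cm_per_inch = 2.54
    w = round(width_cm / cm_per_inch * ppi)
    h = round(height_cm / cm_per_inch * ppi)
    return w, h, w * h

w, h, total = pixels_needed(30, 40)
# about 3543 x 4724 pixels, i.e. roughly 16.7 MP -- well under 36 MP
```

So at 300 ppi a 30x40 cm print uses fewer than half the pixels of a 36 MPix capture, which is the "discarded data" in the passage above.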
Here again, I don't follow the reasoning, as already stated before. What does the fact that we can throw away information at any stage of processing bring to the table?
You can always degrade the output to erase all differences. But this is irrelevant. Deleting the image would be taking this to the extreme.
The question about 'required resolution' is quite complicated as it involves viewing distance and the dot gain of the paper, among other things. If the image is printed at a size such that the pixel size is smaller than the dot gain, the additional resolution is in some sense 'wasted', even though the result should not be any worse sharpness-wise than from a lower-res sensor.
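The viewing-distance part of this can be sketched with the usual visual-acuity model. This assumes roughly 1 arcminute of acuity and deliberately ignores dot gain, which the passage notes also matters; the function is illustrative only:

```python
import math

def required_ppi(viewing_distance_in, acuity_arcmin=1.0):
    """Print resolution beyond which a viewer can no longer resolve pixels.

    Simple acuity model: a feature subtending less than ~1 arcminute at the
    eye is unresolvable, so required ppi = 1 / (distance * tan(acuity)).
    Dot gain of the paper is ignored in this sketch.
    """
    theta = math.radians(acuity_arcmin / 60.0)
    return 1.0 / (viewing_distance_in * math.tan(theta))

# ~286 ppi at a 12-inch viewing distance; the requirement halves at 24 inches
```

This is why "required resolution" has no single answer: the threshold moves with how closely the print is examined.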
Meanwhile, the sampling argument we had was referring to the PDR discussion and John's calculation, where a factor sqrt(N) appeared which you questioned.
I don't think we can derive much from that regarding spatial resolution. The PDR is about questions of noisiness when reproducing very bright and very dark objects in the same photograph.