Author Topic: Discussion of 'Equivalence'  (Read 49895 times)

simsurace

  • NG Member
  • *
  • Posts: 835
Re: Discussion of 'Equivalence'
« Reply #90 on: May 16, 2017, 19:07:08 »
To what end(s)?

I have nothing against counter-intuitive theories; after all, any progress depends on such thoughts being formulated, tested, and refined. However, counter-productive theories that fail to predict phenomena already known, or muddle their understanding, are less than useful. It is a simple fact that the 'equivalence theory' fails us in understanding why there should be a difference between film and a digital sensor regarding dynamic range when only a part of the "sensor" (a piece of a film strip, say, or a DX crop from an FX frame) is used to capture illumination. We *know* from film that a piece of a film sheet responds identically to the entire sheet. It is not only intuitive, it has been used in practice for ages. So why should a digital sensor be magically different? The proponents of the 'equivalence theory' repeatedly claim there is an area dependency involved. See, for example, the calculations for DX to FX in an earlier post.

The area dependency only applies for measurements performed on the standardized output size. Thus, you would have to blow up the cut pieces of film to different degrees, changing the secondary magnification. The graininess of the output would depend on the size of the piece of film that you used.
I do not claim any other area dependency, but I claim that the described area dependency is relevant to discussions of the image quality that can be achieved with a certain system. I'm fine using both terminologies interchangeably because they are completely synonymous.

It all boils down to a small piece of film or a photosite having a log D-E curve of basically the same functional shape. The film has the "toe" (= shadow noise in digital) and has much higher headroom at the upper end of the curve, simply because the linear relation between log D and E breaks down gradually; the digital sensor clips more brutally. No need to dismiss film because it "lacks full-well capacity"; it certainly has one, but it is manifested differently. But then film and digital are different recording media. In any case, film is not area-dependent in dynamic range - meaning the range of light values that can be recorded; digital is claimed to be. This assertion has been posted over and over again here;

I maintain that I did not claim what you understood.
The area dependence applies to a specific definition of DR, relating to the signal measured on a standardized output size.
I did not dismiss film in any way, I was merely pointing out that the upper limit for DR is not a precise point as in a digital sensor, where it is a precise number of electrons.

The per-pixel DR is not always useful. Two sensors (of the same size) might have vastly different photosite sizes, leading to very different per-pixel DR, but still produce final images with similar noise properties. If all sensors had the same size but differed in pixel count, we could define a DR per unit area and use it to compare sensors. But since sensors come in different sizes as well as pixel counts, Bill Claff and others propose to use a notion of DR that is measured on a standardized output.
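The point can be sketched with toy numbers. Everything here is hypothetical, and I am assuming read noise in electrons scales with pixel pitch, which is an idealization rather than how real sensors behave:

```python
import math

def edr_stops(full_well_e, read_noise_e):
    # Engineering (per-pixel) DR in stops: ratio of the largest to
    # the smallest recordable per-pixel signal.
    return math.log2(full_well_e / read_noise_e)

def normalized_dr_stops(full_well_e, read_noise_e, n_pixels, n_output):
    # Output-referred DR: averaging n_pixels down to n_output output
    # pixels lowers the noise floor by sqrt(n_pixels / n_output),
    # i.e. adds 0.5 * log2(n_pixels / n_output) stops.
    return edr_stops(full_well_e, read_noise_e) + 0.5 * math.log2(n_pixels / n_output)

# Two same-size sensors, one with twice the pixel count (half the
# full well per pixel, read noise scaled with pitch):
coarse = normalized_dr_stops(60000, 3.0, 24e6, 8e6)
fine   = normalized_dr_stops(30000, 3.0 / math.sqrt(2), 48e6, 8e6)
# Per-pixel DRs differ (~14.3 vs ~13.8 stops), yet the output-referred
# DRs agree (~15.1 stops each).
```

Under these (idealized) assumptions, the per-pixel figures disagree while the standardized-output figures coincide, which is exactly why the latter is the more useful basis for comparing sensors.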

sometimes stating that one "throws away" dynamic range by sampling from the initial file, which simply ignores the fact that the entire chain up to the final output (a print of fixed size - another of those constraints that remove degrees of freedom) tends to "waste" information, as much more is recorded initially than can be presented in the final stage. We expressly don't need 36 MPix cameras to make a 30x40 cm print or post a picture of 2000x1300 pix on a web page. Most of the collected data at the first step is discarded, unless one wants mural prints, and in that particular case few of today's consumer models deliver enough pixels to avoid significant extrapolation, which negatively impacts the output quality in its own manner.

Here again, I don't follow the reasoning, as already stated before. What does the fact that we can throw away information at any stage of processing bring to the table?
You can always degrade the output to erase all differences. But this is irrelevant. Deleting the image would be taking this to the extreme.

The question about 'required resolution' is quite complicated as it involves viewing distance and the dot gain of the paper, among other things. If the image is printed at a size such that the pixel size is smaller than the dot gain, the additional resolution is in some sense 'wasted', even though the result should not be any worse sharpness-wise than from a lower-res sensor.
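As a rough back-of-envelope illustration (the ~118 dots/cm, i.e. 300 dpi, figure is a common print-resolution rule of thumb, not a measured dot gain):

```python
def pixels_needed(print_w_cm, print_h_cm, dots_per_cm):
    # Pixel count at which each image pixel maps to one printer dot;
    # resolution beyond this is 'wasted' in the sense discussed above,
    # although it does no harm sharpness-wise.
    return (print_w_cm * dots_per_cm) * (print_h_cm * dots_per_cm)

# A 30x40 cm print at ~118 dots/cm (about 300 dpi):
mp = pixels_needed(40, 30, 118) / 1e6   # about 16.7 MP
```

By this crude estimate, a 30x40 cm print saturates at well under 36 MP, consistent with the point made earlier in the thread; a closer viewing distance or a finer printing process would raise the number.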

Meanwhile, the sampling argument we had was referring to the PDR discussion and John's calculation, where a factor sqrt(N) appeared which you questioned.
I don't think we can derive much from that regarding spatial resolution. The PDR is about questions of noisiness when reproducing very bright and very dark objects in the same photograph.
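The sqrt(N) factor is just noise averaging, which is easy to simulate with toy Gaussian noise (nothing sensor-specific is assumed here):

```python
import random
import statistics

# Averaging N independent noisy samples shrinks the standard
# deviation of the result by a factor of sqrt(N).
random.seed(0)
N = 100
means = [sum(random.gauss(0.0, 1.0) for _ in range(N)) / N
         for _ in range(5000)]
sd = statistics.stdev(means)   # expect about 1/sqrt(N) = 0.1
```

The measured spread of the averages comes out near 0.1, i.e. 1/sqrt(100), which is the same mechanism that makes a standardized (downsampled) output less noisy than the per-pixel data.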
Simone Carlo Surace
suracephoto.com

Jack Dahlgren

  • NG Member
  • *
  • Posts: 1528
  • You ARE NikonGear
Re: Discussion of 'Equivalence'
« Reply #91 on: May 16, 2017, 19:26:53 »
Dynamic range is always tied to a certain signal (or device, but in this case we use different DR definitions for the same device, so they must be referring to different signals). What is called a 'signal' differs from case to case. Sometimes the same quantity that is noise in one case can be a signal in another.

- If the signal is the brightness of a patch of the normalized output image (either the entire image or a predetermined fractional part of it; it does not matter which), then the definition that was used by John and me, or alternatively Bill Claff's PDR, applies.

- For the per-pixel DR, the signal is the charge measured in the photosite.

The name is still valid because it is a ratio of the biggest and the smallest recordable signals.
What name would you propose?

Considering that pixel-level DR is a constant for the same sensor, then definitely do not call the variable the same name. It would be most effective if it refers to what it is. I believe in the calculation it is the square root of the number of pixels x the dynamic range of a pixel. Since the number of pixels is related to size, the name should be related to size (which is the actual variable here). I'll let someone more interested than I am figure that out, but it probably should not use terms already in use.

bclaff

  • NG Member
  • *
  • Posts: 47
    • Photons to Photos
Re: Discussion of 'Equivalence'
« Reply #92 on: May 16, 2017, 19:51:54 »
Well, the fact remains that a relation has been asserted that has to be supported by hard evidence. There is a frequently held view, quite prominently displayed in threads here on NG and seen elsewhere, that once one "crops" to DX format in camera, or equivalently, on the sampled file, the DX will have a reduced DR. Can such a view be supported - yes or no? If it cannot be supported, please stop talking about "equivalence" as anything else than a fancy label for overall magnification.

Here is an example of claimed DR difference using DX to FX from the same camera. Fact or fiction?
The chart you reference is fact. The interpretation is up to the knowledgeable reader.

There is at least one caveat that anyone attempting to apply "equivalence" should note.
DX Crop Mode Photographic Dynamic Range
In short, one must be careful to understand that almost everyone fails to properly "do the math" in discussions of cropping.
(Possibly because most people don't have access to sufficient data to do it properly.)

Bjørn Rørslett

  • Fierce Bear of the North
  • Administrator
  • ***
  • Posts: 8252
  • Oslo, Norway
Re: Discussion of 'Equivalence'
« Reply #93 on: May 16, 2017, 19:55:16 »
I read that article before I posted in this thread.  I see no need to change my views.

bclaff

  • NG Member
  • *
  • Posts: 47
    • Photons to Photos
Re: Discussion of 'Equivalence'
« Reply #94 on: May 16, 2017, 19:56:16 »
...
DR_FX = 1.5 × DR_DX

...
However, this is an approximation and an example of erroneous math covered in DX Crop Mode Photographic Dynamic Range

BTW, to make it perfectly clear, the (DX) PDR figures are measured; they are not based on the (FX) figures.
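For what it's worth, the shortcut behind the quoted factor treats the FX advantage as a pure area effect. A sketch of that naive argument (illustrative only, and exactly the kind of approximation the linked article warns about):

```python
import math

# Naive area argument: FX has crop_factor**2 times the DX area, and
# averaging over that area improves the noise floor by the square
# root of the area ratio, i.e. log2(crop_factor) stops.
crop_factor = 1.5
area_ratio = crop_factor ** 2                        # 2.25
predicted_gain_stops = 0.5 * math.log2(area_ratio)   # = log2(1.5), ~0.585
```

Measured DX figures need not follow this rule, which is the point being made above: the prediction is an idealization, not a substitute for data.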

bclaff

  • NG Member
  • *
  • Posts: 47
    • Photons to Photos
Re: Discussion of 'Equivalence'
« Reply #95 on: May 16, 2017, 20:04:35 »
Considering that pixel level DR is a constant for the same sensor, then definitely do not call the variable the same name. It would be most effective if it refers to what it is.
On other boards we distinguish "dynamic range" by calling pixel dynamic range Engineering Dynamic Range (EDR) rather than Photographic Dynamic Range (PDR).
I believe in the calculation it is the square root of the number of pixels x the dynamic range of a pixel. ...
If you are referring to PDR you are very wrong; PDR is not based on EDR.
On the other hand, DxOMark Landscape (print) score is based on EDR.
(In My Opinion this is a fundamental flaw in the DxOMark approach.)

Les Olson

  • NG Member
  • *
  • Posts: 502
  • You ARE NikonGear
Re: Discussion of 'Equivalence'
« Reply #96 on: May 16, 2017, 20:59:15 »
But Equivalence is not used to evaluate photographs. It is used to compare/contrast certain variables of the photo gear.

Yes, but surely it is only worth comparing variables if they have observable effects on the photographic merits of actual photographs. 

"Equivalence" started when Canon had 36 x 24 sensors and Nikon didn't.  Nikon said "Ah, but how much real-world difference does that actually make?", and "equivalence" was Canon's answer. 

Because of that marketing context, only "certain variables" were compared.  Other possible comparisons were simply ignored.  Everyone now knows, e.g., that for the same framing and output size you need a one-stop larger aperture to get the same DoF on DX as on FX.  But no one has seen a calculation of how much larger your print has to be on DX to have the same DoF, given the same framing and aperture (and it will not do to say that output size does not affect DoF because bigger prints are viewed from greater distances; in practice, they are not).

The other fallacy of the equivalence-based comparison is the assumption that variables are continuous rather than categorical.  A simple example is a photograph of a page of text: either I can read it or I can't.  If I can read it, more or less "sharpness" is irrelevant, and if I can't, the same.  There are only two categories: can read it, and can't read it.  Differences within the categories are of no importance.  The same is true of dynamic range.  There are only two categories: more than a printer (or monitor) can deliver, and less.  If the camera can deliver more, it does not matter how much more (OK, that is a slight exaggeration, but the point stands).


Birna Rørslett

  • Global Moderator
  • **
  • Posts: 5286
  • A lesser fierce bear of the North
Re: Discussion of 'Equivalence'
« Reply #97 on: May 16, 2017, 21:04:46 »
In addition, once the imaging chain exceeds the maximum magnification of detail it can deliver, it runs into empty magnification, and then more pixels or more dynamic range (whether EDR or PDR or what have you) will not help. Empty magnification has been with us since the birth of photography and no format can escape it; all that differs is how far one can go.

Anthony

  • NG Supporter
  • **
  • Posts: 1608
Re: Discussion of 'Equivalence'
« Reply #98 on: May 16, 2017, 21:25:53 »
  A simple example is a photograph of a page of text: either I can read it or I can't.  If I can read it, more or less "sharpness" is irrelevant, and if I can't, the same.  There are only two categories: can read it, and can't read it.  Differences within the categories are of no importance. 

I have to disagree with this, as my eyes age.  I can read many things, but some are much easier and more pleasant to read without optical assistance.  Differences in the "can read" category are of great importance.
Anthony Macaulay

simsurace

  • NG Member
  • *
  • Posts: 835
Re: Discussion of 'Equivalence'
« Reply #99 on: May 16, 2017, 21:29:37 »
The other fallacy of the equivalence-based comparison is that variables are continuous, rather than categorical.  A simple example is a photograph of a page of text: either I can read it or I can't.  If I can read it, more or less "sharpness" is irrelevant, and if I can't, the same.  There are only two categories: can read it, and can't read it.  Differences within the categories are of no importance.  The same is true of dynamic range.  There are only two categories: more than a printer (or monitor) can deliver, and less.  If the camera can deliver more, it does not matter how much more (OK, that is a slight exaggeration, but the point stands).

This is an interesting point, but I'm not sure it is that simple. You say that either you can read a given letter or not, but that is like flipping a coin: it is either heads or tails.
On the other hand, if you display a set of letters corrupted by noise, some will be more easily readable than others. If you graph the probability of identifying the letter correctly as a function of noise level for experiments like this, most of the time you will get a psychometric curve that is smooth (but depending on the experiment, the steepness can vary widely). So the legibility of letters is actually not black and white, but can be related to a continuous measurable variable.

If entire words are displayed, letters can be inferred despite the fact that they would be barely legible on an individual basis.
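A toy model of such a psychometric curve (logistic shape; the threshold, slope, and 26-letter chance level are made-up parameters, not fitted to any experiment):

```python
import math

def p_correct(noise_level, threshold=1.0, slope=4.0, n_alternatives=26):
    # Probability of identifying a letter correctly: falls smoothly
    # from near 1 to chance level (1/26 for single letters) as the
    # noise level increases past the threshold.
    chance = 1.0 / n_alternatives
    return chance + (1.0 - chance) / (1.0 + math.exp(slope * (noise_level - threshold)))

# p_correct(0.0) is near 1, p_correct(threshold) is about halfway to
# chance, and at high noise the curve flattens out at 1/26.
```

The transition is smooth rather than a step, which is the point: legibility is tied to a continuous underlying variable, even though any single reading attempt has a binary outcome.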

About printing dynamic range: you can surely compress the input DR to match the output DR.
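In the crudest form this is just uniform compression in log space (real tone curves are of course non-uniform; the 14- and 7-stop figures are placeholders):

```python
def compress_stops(value_stops, input_dr=14.0, output_dr=7.0):
    # Map a value measured in stops above the noise floor into the
    # printable range by squeezing all tonal separations equally.
    return value_stops * (output_dr / input_dr)

# The full 14-stop input range lands exactly on the 7-stop output.
```

Nothing is clipped here; every captured tone survives, just with reduced separation from its neighbours.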
Simone Carlo Surace
suracephoto.com

Bjørn Rørslett

  • Fierce Bear of the North
  • Administrator
  • ***
  • Posts: 8252
  • Oslo, Norway
Re: Discussion of 'Equivalence'
« Reply #100 on: May 16, 2017, 21:35:32 »
....
About printing dynamic range: you can surely compress the input DR to match the output DR.

Oops, now you are advocating a loss of information, something you questioned strongly earlier ??? The path to travel surely is narrow.

Les Olson

  • NG Member
  • *
  • Posts: 502
  • You ARE NikonGear
Re: Discussion of 'Equivalence'
« Reply #101 on: May 16, 2017, 21:38:54 »
I was not trying to -- as you seem to have interpreted -- derive mathematical conditions for the pleasingness of an image. That is a different subject, one for neuroscience to figure out, and not an entirely futile one: e.g. certain geometrical relationships in faces can be correlated with pleasingness across multiple viewers.

Instead, I was describing a possible definition of what it means that an image can look 'the same' despite differences along the imaging chain. There is no notion of subjective quality or pleasingness to this, merely the notion of sameness of measurable quantities, like noise, DOF, framing etc.

Are you saying that humans might perceive two images as different, despite the fact that all possible measurements on the image say they are the same? If yes, what would you attribute the difference in perception to? The brain can only pick up stuff that is there, or would you like to invoke some supernatural senses?

[...]

You are saying that such concepts are 'singularly useless' for photographers, as if this matters, or is an objective fact. Instead, we are merely discussing conceptual tools that any photographer, camera designer, scientist or interested individual can freely choose to make use of or not. Obviously, you have decided that they are useless for you, but the fact that there is a lot of literature on the subject indicates that there is considerable interest in them.

I was not under the impression that you were trying to develop a mathematical description of pleasingness.  On the contrary, the problem is that you are ignoring pleasingness.

The issue is not that humans say images are different when measurements say they are the same, but that humans say they are the same when measurements say they are different.  You must define a threshold of significance for each of the parameters you measure, and that threshold must be based on human evaluation - which, as of now, it is not.  That said, however, you surely cannot claim that it is impossible for, e.g., two portraits of a person to have identical dynamic range and SNR and whatever else you can measure, but completely different emotional impact.

I may be old-fashioned, but the idea that what is or is not useless for photographers matters to photographers seems to me self-evident.  Of course, anyone can use any concept they like for any purpose they deem worthwhile.  I am just asking you to explain the photographic value of equivalence.  I am doing that because my claim is that equivalence is nothing but a marketing tool.

Les Olson

  • NG Member
  • *
  • Posts: 502
  • You ARE NikonGear
Re: Discussion of 'Equivalence'
« Reply #102 on: May 16, 2017, 21:51:47 »
I have to disagree with this, as my eyes age.  I can read many things, but some are much easier and more pleasant to read without optical assistance.  Differences in the "can read" category are of great importance.

Yes, but not from the point of view of information.  If you can, eventually, read the sign that says that the 12:45 train to Hogwarts leaves from Platform 9.75, you don't know any more because reading it was easier or more pleasant, or less because it was slower or more difficult. 


David H. Hartman

  • NG Member
  • *
  • Posts: 2783
  • I Doctor Photographs... :)
Re: Discussion of 'Equivalence'
« Reply #103 on: May 16, 2017, 21:56:46 »
Oops, now you are advocating a loss of information, something you questioned strongly earlier ??? The path to travel surely is narrow.

The Dynamic Range captured on a linear slope is all but never presented as linear on a print, a computer display, or any other medium. For as long as people made prints from negatives, the toe of the film compressed the shadows and the toe of the printing paper compressed the highlights. This compression of the dynamic range that came through the lens, exposed the film, passed through the developed negative, and exposed the printing paper is all I knew from about 1971 to 2003. One could never print all the dynamic range available in a Kodacolor 100 negative, not with any method I knew. [Even Tri-X gave more dynamic range than could be used. One can think of the enlarger as a giant shoe horn.]

There are two things one can do with the dynamic range if it's greater than can fit the display medium. You can cut it off (truncate it), as one does with brightness and contrast, or compress it, as one does with levels and curves. [The camera and software tone-map it, so I'm not sure how to get a *totally* flat image out of a camera. It's already been tone-mapped by the time I can start using it. I don't believe the Flat Picture Control offered by current Nikon cameras and Capture NX-D is totally flat.]
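Those two options can be caricatured as curves on a normalized [0, 1] tonal axis (toy functions only, not any real film or paper characteristic):

```python
import math

def s_curve(x):
    # 'Compress': a toe and a shoulder squeeze shadows and highlights
    # while mid-tone contrast is preserved (here the mid-tone slope
    # is about 2, at the expense of the curve's extremes).
    return 0.5 + 0.5 * math.tanh(4.0 * (x - 0.5)) / math.tanh(2.0)

def clip(x):
    # 'Truncate': linear in the middle, but input tones below 0.1 or
    # above 0.9 are simply cut off and lost.
    return min(max((x - 0.1) / 0.8, 0.0), 1.0)
```

The S-curve keeps every tone but flattens the extremes, exactly the "shoe horn" behaviour of negative-plus-paper described above, while clipping discards the ends outright.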

The compression of shadow and highlight was also present in the process of developing transparency films. All of the transparency films I used also truncated the dynamic range coming through the lens.

A print or other image with a linear slope looks flat: chalk and soot. I don't even know if a print would look right to me if it could be linear, given what I grew up with. My brain is hopelessly programmed to want good mid-tone contrast, and that must come at the expense of shadow and highlight contrast.

Give me a camera with lots of DR and I'll figure out how to use it, but I'll compress where needed and may truncate it to satisfy my brain. I will always do this.

Dave Hartman

Now I'll have another cup of coffee to satisfy my brain in other ways. :)
Beatniks are out to make it rich
Oh no, must be the season of the witch!

simsurace

  • NG Member
  • *
  • Posts: 835
Re: Discussion of 'Equivalence'
« Reply #104 on: May 16, 2017, 22:01:06 »
I was not under the impression that you were trying to develop a mathematical description of pleasingness.  On the contrary, the problem is that you are ignoring pleasingness.

The issue is not that humans say images are different when measurements say they are the same, but that humans say they are the same when measurements say they are different.  You must define a threshold of significance for each of the parameters you measure, and that threshold must be based on human evaluation - which, as of now, it is not.  That said, however, you surely cannot claim that it is impossible for, e.g., two portraits of a person to have identical dynamic range and SNR and whatever else you can measure, but completely different emotional impact.

I may be old-fashioned, but the idea that what is or is not useless for photographers matters to photographers seems to me self-evident.  Of course, anyone can use any concept they like for any purpose they deem worthwhile.  I am just asking you to explain the photographic value of equivalence.  I am doing that because my claim is that equivalence is nothing but a marketing tool.

Yes, I'm ignoring pleasingness because it is, for me at least, a completely different topic. Everyone can define their own criteria for pleasingness, and include or not include measurable parameters like SNR etc. I still think there are patterns to what humans generally find visually pleasing, which boil down to properties of the brain, but they are not easy to model.

Regarding the threshold of significance: sorry for misunderstanding you. This can be added on top of everything that has been said. The threshold only blurs things, treating as the same what could be measurably different. I don't see how this interferes with the concept of equivalence. Isn't it nice that the technical concept is more discriminative than human judgement? Would you prefer it the other way round: a concept which claims images are the same when most humans could tell them apart?

I do not want to define the photographic value of the concept since that is entirely subjective.
Likewise for 'what matters', I think there is no generally accepted canon of relevance.
Does it matter that water is made up of hydrogen and oxygen (i.e. the fact)? For sure. Does it matter that we know? Probably not; humans lived a very long time without having any idea.
Simone Carlo Surace
suracephoto.com