NikonGear'23

Gear Talk => What the Nerds Do => Topic started by: simsurace on May 15, 2017, 12:09:59

Title: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 12:09:59
Over time and in various topics, discussions over the meaning and validity of 'Equivalence' have appeared and continue to appear.
Let's discuss these issues here without the risk of derailing other topics.

I will open with the following question:

Which of the arguments in http://www.josephjamesphotography.com/equivalence/ (http://www.josephjamesphotography.com/equivalence/) do you think is circular, and why?

I have yet to see these models take into account the basic fact that, when two formats are printed to the same final size, the magnification of detail along the image chain will be different, and thus so will the effective aperture.

'Equivalent' apertures are defined in order to take this into account.
http://www.josephjamesphotography.com/equivalence/#equivalence (http://www.josephjamesphotography.com/equivalence/#equivalence)
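For readers following along, the scaling that the linked article defines can be captured in a couple of lines. This is only an illustrative sketch (the function name and the 1.5x Nikon DX crop factor are my own choices, not taken from the article):

```python
def equivalent_aperture(f_number, crop_factor):
    """Scale the f-number by the crop factor so that depth of field,
    diffraction, and total light gathered match at equal display size."""
    return f_number * crop_factor

# A DX shot at f/5.6 behaves, at equal print size, like an FX shot at
# about f/8.4, i.e. roughly the standard f/8 stop:
dx_as_fx = equivalent_aperture(5.6, 1.5)
```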
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 12:57:54
For the models under discussion to have any meaning, the native performance of the sensor itself has to be compared. Later massaging of the data to fit, say, a fixed print size or what have you should not enter the discussion at all.

Thus, can a real difference in sensor performance of an FX camera set to DX mode be manifested and documented? If not, all conclusions following this first, critical step are just artefacts of model parameters and implied constraints.

So, I challenge anyone to document that cropping the RAW file from an FX camera to match the DX format is significantly different from setting the DX option directly in the same camera. We need apples-to-apples comparisons here. I have put forward a testable hypothesis. The null hypothesis is that there is no difference.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 13:29:21

So, I challenge anyone to document that cropping the RAW file from an FX camera to match the DX format is significantly different from setting the DX option directly in the same camera. We need apples-to-apples comparisons here. I have put forward a testable hypothesis. The null hypothesis is that there is no difference.

You mean just cropping the image and nothing else?
This will not produce an equivalent image because you will get a different framing. Thus the uncropped and the cropped image cannot be compared side-by-side as images of the same scene. One image will display only a portion of the other image, and if the final display size is proportional to the size of the used portion of the sensor, there will be no other differences. Since you could have obtained the same result by cutting the print of the FX frame after the fact, this is totally obvious and does not require any more experiments as far as I'm concerned.

The predictions of 'equivalence' would have to be tested in a setting where we try to obtain two images, one with the FX sensor and one with the FX sensor cropped to DX, that look the same in terms of framing, perspective, DOF, diffraction, motion blur, noise, and where we compare the two images at identical display sizes (thus at sizes not uniformly scaled from the capture sensor).
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 13:38:42
The point here is that the graphs shown (Hasselblad X1D thread) say that DX is inferior to FX in terms of dynamic range.

If this statement assumes a lot of manipulation going on after the RAW data has been captured, it cannot lend support to the claim above. We are left with consequences of model parameters and constraints, NOT a true sensor difference. All the "equivalence" thinking of the world won't change that fundamental fact. "Equivalence" has built-in circularity.

Please provide hard evidence refuting or corroborating the null hypothesis I put forward (reply #1). Quoting "authorities" will not lead anywhere - hard facts are required.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 14:37:19
The point here is that the graphs shown say DX is inferior to FX in terms of dynamic range.
(for anyone coming freshly to this topic, we are talking about the photographic dynamic range (PDR) charts on http://www.photonstophotos.net/Charts/PDR.htm (http://www.photonstophotos.net/Charts/PDR.htm))

Well, the graphs are making this statement with regards to a very specific measure, PDR, which is defined in a way that takes into account the different secondary magnifications that are required to produce a standardized output from different sensor sizes. 

It should be reiterated that anyone reading these graphs and using them to make decisions should read the definition of PDR and check whether it matches their idea of what is being displayed, and whether these criteria for comparing images suit them.

PDR is a derived quantity, but the data that underlies the graphs is still measured. The fact that they are measured shows up in the following way:

The graphs show slight deviations from equivalence theory when you compare different sensor models. So the idea that they are just predictions from a model is not consistent with the displayed values. The data being displayed in units of PDR, however, facilitates the comparison with the predictions of equivalence theory.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 14:43:03
Please provide hard evidence refuting or corroborating the null hypothesis I put forward (reply #1). Quoting "authorities" will not lead anywhere - hard facts are required.

I will not defend a hypothesis that I never claimed was different from null. I already explained why that null hypothesis is tautologically true.
Mere cropping is not what is being talked about in Bill Claff's graphs (see also my previous reply).
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 15:04:55
Well, the fact remains that a relation has been asserted that has to be supported by hard evidence. There is a frequently held view, quite prominently displayed in threads here on NG and elsewhere, that once one "crops" to DX format in camera, or equivalently on the sampled file, the DX image will have reduced DR. Can such a view be supported - yes or no? If it cannot be supported, please stop talking about "equivalence" as anything other than a fancy label for overall magnification.

Here is an example of claimed DR difference using DX to FX from the same camera. Fact or fiction?

Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 15:41:37
I did not introduce the term. However, the term is about more than just magnification and describes the parameters that are required to produce very similar-looking images (when viewed at a standardized size).
I think that comparing images at a standardized output size, and showing the same framing, perspective, etc., is really the only technically sound way to compare them, but that is just me.

Neither Joseph James, Bill Claff, nor I have put forward the hypothesis that you are asking to be tested.

The rest, e.g. whether something is a property of the sensor or not, is a matter of definitions/terminology. A digital image sensor can be viewed as a fixed unit, with a certain size, pixel pitch, fill factor, quantum efficiency etc., or as a medium of variable size (similar to film). I tend to favor the former, and think of the size of the sensor as the property with the biggest impact on the photographic process. Perhaps you find the latter view more natural. That difference might explain some of the disagreement.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 15:57:17
Here is an example of claimed DR difference using DX to FX from the same camera. Fact or fiction?

The graph is not fictitious insofar as applying the definition of PDR (for the respective formats) to the SNR measurements of the D5 produces these data points.
I cannot verify the SNR measurements myself, so if you contest their accuracy, that is a different matter entirely.

As for the definition of PDR, I think we already discussed this earlier.
Title: Re: Discussion of 'Equivalence'
Post by: Jack Dahlgren on May 15, 2017, 15:59:27
I did not introduce the term. However, the term is about more than just magnification and describes the parameters that are required to produce very similar-looking images (when viewed at a standardized size).

The rest, e.g. whether something is a property of the sensor or not, is a matter of definitions/terminology. A digital image sensor can be viewed as a fixed unit, with a certain size, pixel pitch, fill factor, quantum efficiency etc., or as a medium of variable size (similar to film). I tend to favor the former, and think of the size of the sensor as the property with the biggest impact on the photographic process. Perhaps you find the latter view more natural. That difference might explain some of the disagreement.

If we move this to a different domain, say automotive, then this "only care about the end result" approach starts to fall apart. Imagine the sensor is the engine and the "print" is the body of the car. If you change the body to be larger (and heavier), the acceleration will certainly change, but claiming that the same engine has a different "equivalent torque" in a different body makes no sense. Comparisons based on that equivalent torque would be meaningless.

Good practice is to separate elements so you do not attribute qualities of the system to the wrong element.

In this discussion, and in my analogy, magnification is the key element, not dynamic range of the sensor.

As you state, larger sensor size is important because it allows lower magnification, not because it has better/different dynamic range.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 16:02:32
The problem with the 'equivalence' thinking is that the putting forward of alternative approaches to reach a final result (be it print size or whatever) is masked by a muddle of confused terminology. Will, for example, increasing the sheer number of pixels in the first step mitigate some of the problems caused later by large magnification? This introduces yet another constraint, namely the need for making the print not only a fixed size but also with a fixed dpi setting (just to show how confused any line of thought ends up, so tongue in cheek is required). Will downscaling an FX sensor result in "increased DR" if the downscaling is done by binning photo sites? And so on. A maze of likely unusable roads can be followed by those with plenty of spare time on their hands. The wiser ones would pick up the nearest camera and go shooting instead.

Title: Re: Discussion of 'Equivalence'
Post by: JohnMM on May 15, 2017, 16:06:17
Well, the fact remains that a relation has been asserted that has to be supported by hard evidence. There is a frequently held view, quite prominently displayed in threads here on NG and elsewhere, that once one "crops" to DX format in camera, or equivalently on the sampled file, the DX image will have reduced DR. Can such a view be supported - yes or no?

Yes - it's a matter of definition.

Let's take a pixel with a dynamic range equal to DRPIX.
We use the full well capacity (FWC) for the upper limit and the read noise (RN) for the lower limit.
DRPIX=FWC/RN

If we combine a number, N, of these pixels to make a DX sensor we get, using an obvious notation, DRDX = sqrt(N)*DRPIX
This follows since the upper limits add arithmetically and the lower limits add in quadrature.

For an FX sensor which is linearly 1.5 times bigger we need 2.25*N pixels.
DRFX=1.5*sqrt(N)*DRPIX

DRFX=1.5*DRDX

Claff uses a different lower limit and talks about "photographic dynamic range" - but similar ideas apply.
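John's arithmetic above can be checked numerically. A minimal sketch (the FWC, read-noise, and pixel-count figures are hypothetical, chosen only to exercise the formulas, not measurements from any real camera):

```python
import math

def sensor_dr_stops(fwc_e, read_noise_e, n_pixels):
    """Sensor-level DR in stops: over N pixels the upper limit (signal
    capacity) adds arithmetically while the lower limit (read noise)
    adds in quadrature, so DR scales with sqrt(N)."""
    return math.log2((fwc_e * n_pixels) / (read_noise_e * math.sqrt(n_pixels)))

FWC, RN = 60_000, 4   # hypothetical per-pixel values, in electrons
N_DX = 20_000_000     # hypothetical DX pixel count

dr_dx = sensor_dr_stops(FWC, RN, N_DX)
dr_fx = sensor_dr_stops(FWC, RN, int(2.25 * N_DX))  # FX: 1.5x linear, 2.25x area

gain = dr_fx - dr_dx  # log2(1.5), about 0.58 stops, as DRFX = 1.5 * DRDX
```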




Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 16:28:37
The problem with the 'equivalence' thinking is that the putting forward of alternative approaches to reach a final result (be it print size or whatever) is masked by a muddle of confused terminology. Will, for example, increasing the sheer number of pixels in the first step mitigate some of the problems caused later by large magnification? This introduces yet another constraint, namely the need for making the print not only a fixed size but also with a fixed dpi setting (just to show how confused any line of thought ends up, so tongue in cheek is required). Will downscaling an FX sensor result in "increased DR" if the downscaling is done by binning photo sites? And so on. A maze of likely unusable roads can be followed by those with plenty of spare time on their hands. The wiser ones would pick up the nearest camera and go shooting instead.

Yes, I think these are valid concerns, but unsurprisingly they all concern second-order (smaller) effects (I hope that I mentioned that we are dealing with an approximation, i.e. as pointed out by someone 'a model that is wrong but useful').

For instance, increasing the number of pixels of the sensor has a rather small effect. If the lenses used resolve well enough, the higher pixel count can reveal some additional fine detail, but only up to the limits imposed by diffraction. Regarding noise, on a pixel-by-pixel level there will be more noise because each pixel records a smaller number of photons, but on the final print, unless we are dealing with very low resolution, the additional noise will be largely invisible (printing is also a noisy process).

As said, a limitation of the model is that the number of pixels and other details of sensor design are not included, because they are far too difficult to model in such a simple framework. However, the differences we see between formats with vastly different sensor areas are well captured by the model, e.g. between m4/3 and FX, or between a smartphone camera and FX.

EDIT: To clarify the remark about photons: smaller pixels require a lower FWC if the base ISO is maintained because they will be struck by fewer photons for a certain exposure. If the read noise does not change, the DR of each pixel is reduced (lower FWC, same read noise ---> lower DR).
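The edit's chain of reasoning (same base ISO, smaller pixel, lower FWC, hence lower per-pixel DR) is easy to make concrete. A sketch with made-up numbers, not real sensor data:

```python
import math

def pixel_dr_stops(fwc_e, read_noise_e):
    """Per-pixel dynamic range in stops: DR = FWC / read noise."""
    return math.log2(fwc_e / read_noise_e)

# Halving pixel area at the same base ISO halves the photons a pixel
# collects for a given exposure, so its FWC must halve as well; with
# unchanged read noise the per-pixel DR drops by exactly one stop.
big_pixel = pixel_dr_stops(60_000, 4)    # hypothetical values (electrons)
small_pixel = pixel_dr_stops(30_000, 4)  # same technology, half the area
loss = big_pixel - small_pixel           # one stop
```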
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 16:31:06
Yes - it's a matter of definition.

Let's take a pixel with a dynamic range equal to DRPIX.
We use the full well capacity (FWC) for the upper limit and the read noise (RN) for the lower limit.
DRPIX=FWC/RN

If we combine a number, N, of these pixels to make a DX sensor we get, using an obvious notation, DRDX = sqrt(N)*DRPIX
This follows since the upper limits add arithmetically and the lower limits add in quadrature.

For an FX sensor which is linearly 1.5 times bigger we need 2.25*N pixels.
DRFX=1.5*sqrt(N)*DRPIX

DRFX=1.5*DRDX

Claff uses a different lower limit and talks about "photographic dynamic range" - but similar ideas apply.

Yes?? and no. The claimed increase is a direct function of N, the number of pixels. Thus if the density of pixels is increased on the SAME sensor, the value will increase. The higher the number of pixels per sensor, the higher the dynamic range, all else being equal. This is not borne out in practice with existing cameras.

Check your definitions as applied to a subset of the FX file, cut to the proportions of the DX format. The sampling of the data and its conversion into the digital domain has already been performed. Thus *any* change has to be attributed to ensuing operations on the initial data, not to sensor performance at all. The same can be said of a scheme in which 2 out of every 3 pixels are sampled and combined into a new frame (FX>DX, incidentally). In the latter case one can hardly assume a lower well capacity, as the conversion to digital is complete at the time of the subsampling.

The old adage of 'garbage in, garbage out' is as sound today as the day it was coined.

Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 16:34:27
If we move this to a different domain, say automotive, then this "only care about the end result" approach starts to fall apart. Imagine the sensor is the engine and the "print" is the body of the car. If you change the body to be larger (and heavier), the acceleration will certainly change, but claiming that the same engine has a different "equivalent torque" in a different body makes no sense. Comparisons based on that equivalent torque would be meaningless.

Using your analogy, you already have the 'equivalent' measure to compare cars: acceleration. This is what matters, let's say, on a test track. A more massive body has to be compensated by a stronger engine.
In engineering and physics, the reformulation of relationships in terms of dimensionless quantities usually represents a step towards a deeper understanding, since stuff that is irrelevant for the qualitative behavior of the system gets moved out of the way.
Of course 'qualitative behavior' might not be so straightforward to define in photography. Still, I think that the methods of Joseph James are not too far off the mark.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 16:37:20
Yes?? and no. The claimed increase is a direct function of N, the number of pixels. Thus if the density of pixels is increased on SAME sensor, the value will increase.
If the density increases, then the full well capacity usually goes down; otherwise you would require more exposure to fill the well. If you keep the full well capacity the same while decreasing pixel size, then you have a lower base ISO. The higher DR would again make sense.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 16:38:48
I exemplified by using subsampling from a sensor. No change in well capacity as the data are already processed in-camera.

The 'theory' is hardly universal if it only applies to very specific situations, or not? That apart from the questionable practice of heavily massaging the data.

Again, the entire debate is going around in circles. At this stage I'm tempted to ask, say, the Fuji fans why they use that system when 'theory' says it is inferior. Perhaps practice indicates otherwise? I really hope and believe so.
Title: Re: Discussion of 'Equivalence'
Post by: Jakov Minić on May 15, 2017, 16:43:27
Bjørn posted a screen shot that baffles me and I believe it's fiction.
How can it be that if you use the D5 in DX mode you lose dynamic range?
I have been shooting DX lenses (10.5/2.8 fish-eye and 40/2.8 micro) in FX mode on D4, Df, and D750.
Was I increasing dynamic range just because I shot in FX mode??? Would I have lost dynamic range had I shot in DX mode???
I don't get it!
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 16:55:05
I exemplified by using subsampling from a sensor. No change in well capacity as the data are already processed in-camera.

The 'theory' is hardly universal if it only applies to very specific situations, or not? That apart from the questionable practice of heavily massaging the data.

If you are subsampling, you are throwing away the photons that were recorded in the discarded photosites. Statistically speaking, it has the same effect as using photosites that are less efficient, or increasing the ISO and exposing less.

The theory is certainly not universal: it does not account, among other things, for destructive operations on the data like your subsampling strategy. As an extreme example, painting all the pixels black would set the DR to zero irrespective of everything else  :D. One can hardly say that this would be a failure of 'equivalence'.
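The statistical point here - that discarding photosites is like collecting fewer photons - follows from shot-noise scaling: SNR grows with the square root of the photons counted. A sketch of that scaling only (the photon count is hypothetical, and this is my illustration rather than anyone's published model):

```python
import math

def shot_noise_snr(photons):
    """Poisson shot noise: noise = sqrt(mean), so SNR = sqrt(photons)."""
    return math.sqrt(photons)

# Keeping 1 of every 9 photosites (a 3x3 subsampling) keeps 1/9 of the
# photons over any patch of the image, cutting shot-noise SNR by 3x --
# the same penalty as collecting 1/9 of the light in the first place.
total_photons = 9_000_000         # hypothetical photon count over a patch
kept_photons = total_photons / 9
snr_penalty = shot_noise_snr(total_photons) / shot_noise_snr(kept_photons)
```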
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 17:04:06
Whatever floats your boat.

I think I have gotten the answers I need at this stage. One cannot alter the already sampled data by subsampling later. A 'theory' that puts 'response' before 'action' implies use of negative time. If one is content with such nefarious miracles, be my guest.

Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 17:06:39
Bjørn posted a screen shot that baffles me and I believe it's fiction.
How can it be that if you use the D5 in DX mode you lose dynamic range?
I have been shooting DX lenses (10.5/2.8 fish-eye and 40/2.8 micro) in FX mode on D4, Df, and D750.
Was I increasing dynamic range just because I shot in FX mode??? Would I have lost dynamic range had I shot in DX mode???
I don't get it!

Hi Jakov, thanks for voicing your concerns!

The PDR is a measure of the noisiness of images that result from a certain sensor. Cameras with the same PDR should produce images that are similarly noisy.
When you use only a DX-sized portion of the FX sensor, it is similar to using a native DX sensor of the same technology.
Of course, if the images are otherwise not similar at all, you will not be able to observe this.
But if you use your FX camera in crop mode and set everything so as to get a similar image, you will probably see this effect.

As an example:

Shoot the same scene from the same position (entrance pupil location) using:
1) a 150mm lens on FX, exposed at 1/100s f/8 at ISO3200
2) a 100mm lens in DX crop, exposed at 1/100s f/5.6 at ISO1600

The two resulting images will look very similar and have similar noise when viewed at the same output size. The fact that the DX crop has a lower PDR results in getting the same noise as the FX image at half the ISO setting.
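The two settings in this example follow directly from the crop factor. A sketch of the arithmetic (the function is my own illustration; 1.5 is Nikon's DX crop factor):

```python
def dx_equivalent(focal_mm, f_number, iso, crop=1.5):
    """Translate FX settings into the DX settings that give the same
    framing, DOF, and lightness at equal display size, keeping the
    shutter speed identical."""
    return (focal_mm / crop,   # same angle of view
            f_number / crop,   # same physical aperture diameter -> same DOF
            iso / crop**2)     # compensate the 2.25x higher exposure per area

focal, n, iso = dx_equivalent(150, 8, 3200)
# focal = 100.0, n ~ 5.33, iso ~ 1422; rounded to the nearest standard
# stops these are the 100mm, f/5.6, ISO 1600 of the example.
```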
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 17:10:16
Whatever floats your boat.

I think I have gotten the answers I need at this stage. One cannot alter the already sampled data by subsampling later. A 'theory' that puts 'response' before 'action' implies use of negative time. If one is content with such nefarious miracles, be my guest.

I don't quite understand the purpose of an example where you deliberately throw away data after capturing it. What is this example supposed to prove? You can always make the image worse in some way, no general theory will account for that.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 17:12:18
Jakov: it's about the lens you put on your camera and how far you are from your subject, i.e. magnification and nothing else. Just go out and continue shooting without too much worry about these alleged "differences". They really exist only in the eye of the beholder.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 17:13:46
I don't quite understand the purpose of an example where you deliberately throw away data after capturing it. What is this example supposed to prove? You can always make the image worse in some way, no general theory will account for that.

To illustrate the fallacies in the suggested model. DR here is only a function of N, number of included pixels. Which is of course pretty nonsensical.

Had the 'theory' had any applicability, going in either direction should have been taken care of.

So far, I haven't seen the slightest shred of evidence that supports the claims of DR differences in the published graphs. I read through the more technical articles linked to and did not find anything relevant there. Further quotations of authorities are superfluous, as these cannot prove or disprove anything; hard data can. I have challenged people to come forth with such data and observe just a wall of silence - why?

A final word about "throwing away data": in printing to a fixed size, at least some if not all cameras require one to "throw away" pixels simply because the number recorded is higher than necessary. As today's better cameras can deliver huge prints, much bigger than needed for magazines or newspapers, such "waste" of image data is the norm, not to speak of the low requirements for web images. On the other hand, if a larger number of pixels were required for a given print, interpolation would have had to take place, again introducing possible artefacts.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 17:24:49
To illustrate the fallacies in the suggested model. DR here is only a function of N, number of included pixels. Which is of course pretty nonsensical.

Had the 'theory' had any applicability, going in either direction should have been taken care of.

Why is it nonsensical?
John showed you a back-of-the-envelope estimation of DR of a sensor composed of pixels of a given per-pixel DR.
Of course it is not accurate if you deliberately process the data to reduce DR.



Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 17:29:29
I simply cannot accept the statement unless it is evidenced by hard data, that's why. Obviously a tough challenge to comply with?

Please, if this debate is going to be at all useful, new and better arguments need to be made, not by putting forth predictions from a model. A model cannot prove anything per se.

I fear the "science" here is unlikely to have survived peer reviewing in any decent journal.
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 15, 2017, 17:35:58
oh heaven have mercy! ??? ::) :P

One of you is making arguments based on the (Generally Accepted) Definition of "Equivalence". One of you is not. Neither of you is wrong.

For a proper debate, you both must agree on the definition of terms beforehand.

As per Joseph James in the link above, "Equivalent" photos are photos of a given scene that have the same framing, perspective, DOF, diffraction, exposure time, brightness, and display dimensions.
This is a very specialized sub-definition of the common word 'equivalence'.

As per this specialized Definition of "Equivalence", we can observe that Bjørn's example -- Foto1 shot as FX and Foto2 cropped from Foto1 to make a derived DX -- are NOT "Equivalent" because they do not have the Same Framing.

Foto1 and Foto2 do have the same perspective, DOF, diffraction, exposure time, and brightness. (As they are theoretical, we don't know if Foto1 and Foto2 have the same display dimensions, but who cares anyway.)  :)


Whether or not one likes this Definition of "Equivalence" is beside the point because that is how "Equivalence" is currently defined and used in the various photographic arguments across the web. However I seriously doubt whether most users of the term actually understand the implications and proper usage of it when making their arguments. And language barriers get in the way also.


Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 17:38:08
So far, I haven't seen the slightest shred of evidence that supports the claims of DR differences in the published graphs. I read through the more technical articles linked to and did not find anything relevant there. Further quotations of authorities are superfluous, as these cannot prove or disprove anything; hard data can. I have challenged people to come forth with such data and observe just a wall of silence - why?

Well, I have only seen you ask for the test of one specific hypothesis, which I did not put forward. I hope it is clear why I'm unlikely to engage in such an exercise.
Too busy testing hypotheses that I did in fact put forward.  8)

I'm not sure what portion of my explanations of the definition of PDR was unclear.
The graphs of PDR only make sense if you accept what PDR is.
Would you instead endorse a DR definition that uses the same SNR cutoff for all formats? Why?
If yes, would this make it easier to interpret the resulting graphs in terms of predicting photographic output of a fixed size?

Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 17:39:41
The properties of 'Equivalence' that Andrea quotes can only be attained by using the same camera and lens all the time. It actually defines repeated sampling of a scene, nothing else.

It was bad before. Now it is getting worse. No offence.

I officially give up. What has been learnt? Nothing that wasn't already known in the 19th century, way ahead of the digital era.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 17:49:31
I simply cannot accept the statement unless it is evidenced by hard data, that's why. Obviously a tough challenge to comply with?

Please, if this debate is going to be at all useful, new and better arguments need to be made, not by putting forth predictions from a model. A model cannot prove anything per se.

I fear the "science" here is unlikely to have survived peer reviewing in any decent journal.

If you are thinking about John's argument, I think an introductory text about Poisson processes would be a start.
Unless you are asking me to experimentally re-prove the Poisson statistics of photon arrivals, I don't see what kind of experiment is needed to support this kind of standard calculation.

Peer reviewing is basically what we are doing here. Despite the obvious absence of the authors.

EDIT: Sorry, but I got confused here. John's argument does not rely on Poisson statistics.
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 15, 2017, 17:56:19
Bjørn, noise and dynamic range are NOT part of the definition of equivalence.
Title: Re: Discussion of 'Equivalence'
Post by: Jakov Minić on May 15, 2017, 17:58:52
Hi Jakov, thanks for voicing your concerns!

The PDR is a measure of the noisiness of images that result from a certain sensor. Cameras with the same PDR should produce images that are similarly noisy.
When you use only a DX-sized portion of the FX sensor, it is similar to using a native DX sensor of the same technology.
Of course, if the images are otherwise not similar at all, you will not be able to observe this.
But if you use your FX camera in crop mode and set everything so as to get a similar image, you will probably see this effect.

As an example:

Shoot the same scene from the same position (entrance pupil location) using:
1) a 150mm lens on FX, exposed at 1/100s f/8 at ISO3200
2) a 100mm lens in DX crop, exposed at 1/100s f/5.6 at ISO1600

The two resulting images will look very similar and have similar noise when viewed at the same output size. The fact that the DX crop has a lower PDR results in getting the same noise as the FX image at half the ISO setting.

Simone, thank you for the explanation. As for the test, why on earth would I do that? I will probably make the images blurry anyway :)

I don't see the point in shooting an image in DX mode and then blowing it up to FX size just to see the drop in dynamic range :)
At least I now know why you are a scientist and I am an engineer :D

Bjørn, thank you too, I won't go into the DX/FX magnification business for sure!
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 18:05:42
As for the test, why on earth would I do that?

1) To verify that the statements written by Joseph James and others are not BS. I get it, you are not a scientist. But sometimes it's better to test things, if only for your own sanity.
2) Because you need to take an image using DX that you would have rather used FX, but don't have the possibility to do so, and you want to make sure to get exactly the same result (perhaps for a client that demands results that are consistent with images of a previous shoot).

Otherwise, it's just a thought experiment to clarify the concept. It is possible to get similar images from different formats, and if one does not need the similarity, one can leverage the advantages of one format vs. the other to get a better result.

Cheers!
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 15, 2017, 18:08:32

And equivalent photos do NOT have the same density of light. So if you make "Equivalent" FX and DX fotos with the same camera, the FX foto will have been created using less dense light. Same framing, same TOTAL amount of light over the used sensor area, but less dense light. I think this part is what gets missed by a lot of folks?
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 18:27:47
OK, let's walk that list sequentially;


If this be the definition of "equivalence", the term is superfluous as it only means one takes another shot immediately following the first one.
 
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 15, 2017, 18:41:06
How do you derive that you need the "same lens" from the Same Framing requirement? If using the same lens for the FX/DX photos, then for one of the photos you will have to change your distance from the subject. Thus changing the Same Perspective.

Focal Length must be changed to preserve "Equivalence".

Same Framing means same Angle-of-View.


Same Brightness => (scene brightness? unclear)
Well, then, go read the darned Definitions.

If different lenses were to be used, the aperture have to be different
Yes, the aperture setting is different. The physical aperture diameters on the two lenses at the appropriate settings are the same.

and either DOF, diffraction, or amount of light reaching the sensor would not be the same

No. Display size comes into this.
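The point about physical aperture diameters can be checked with two lines of arithmetic. A sketch, using the 150mm/100mm pair from the example earlier in the thread; the helper name is mine, not from any post.

```python
# Physical aperture diameter = focal length / f-number.
# Equivalent settings keep this diameter (nearly) equal across formats.

def aperture_diameter_mm(focal_mm, f_number):
    return focal_mm / f_number

fx = aperture_diameter_mm(150, 8)               # 18.75 mm on FX
dx_exact = aperture_diameter_mm(100, 8 / 1.5)   # 18.75 mm, exact DX equivalent
dx_marked = aperture_diameter_mm(100, 5.6)      # ~17.9 mm at the nearest marked stop
```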
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 18:53:22
Read the text. Alternatives are presented including the use of a different lens. I should perhaps have used angle of view instead of angle of coverage, and have thus gone back and rephrased that point.

However, as one soon enough finds out, there are pretty few degrees of freedom available. One simply cannot avoid violating one or more of these criteria if different optical systems are compared. Simple fact of life. One cannot have two different systems to be the "same", if they really are different ...

Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 15, 2017, 18:55:21
Physical aperture. Not f-stop.
Display size. (CoC, etc.)

Equivalence is a specialized definition. But no physics is violated. The CoC thing and display size could be a bit of a fuzzy area. (pun probably intended).
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 15, 2017, 19:02:34
One cannot have two different systems to be the "same", if they really are different.

Equivalence of perspective, framing, DOF, diffraction (physical aperture), total light, exposure time, brightness and display dimensions between two different systems DOES NOT IMPLY equality or sameness of other factors.

Image quality and noise, for example, are not part of the definition of Equivalence.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 19:03:51
I'm very well aware of the difference between the f-number (not "f-stop", which is undefined and should not be used) N and the physical aperture f/N. I learned my calculations the hard way from German textbooks, some of which date from the '50s - they were thorough. To say the least.

My point however is that it is well nigh impossible to avoid violating any of all these constraints so as to shoehorn the photographic data into the "Same display size" (and incidentally, probably throwing away a lot of the data already captured to make that final output, unless it is huge). Drop the fixed display/print criterion and we gain a lot of degrees of freedom, including the opportunity to violate a little here and there.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 19:05:26
One cannot have two different systems to be the "same", if they really are different.

Equivalence of perspective, framing, DOF, diffraction (physical aperture), total light, exposure time, brightness and display dimensions between two different systems DOES NOT IMPLY equality or sameness of other factors.

Image quality and noise, for example, are not part of the definition of Equivalence.

I welcome you to spend some time setting up reliable test beds for this. Let us discuss further when results are obtained.
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 15, 2017, 19:07:34
Drop the fixed display criterion and you no longer have Equivalence. Simple as that.

We may not like the definition of Equivalence, but it is what it is. It violates no principles of physics. Or of engineering either. (Or mathematics.  :D )

We all know that in the practice of photography, none of this really matters. Nobody is arguing differently on that score.

And yes, I've made the experiments. But who really cares to see them? We go make photographs with none of this in mind.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 19:11:11
OK, let's walk that list sequentially;

  • Same Perspective => means same distance camera-subject. This is the definition of the term 'perspective'
  • Same Framing => magnification of detail is identical (from 1). With (1) in mind, focal lengths must be identical, plus (2) adds the further constraint that the angle of view is the same; both combine to imply the same lens. If the lenses are different, magnification would have to be different to get the same framing; however, we might violate further points on the list below, in particular those of (3)
I agree on the first point.
On the second, I think that our usage of the word 'framing' is different.
For me and for Joseph James (I presume), same framing means that the same content is seen on the frame. I.e. if I frame a person to fill the frame on one format, I have to fill the frame on the other format as well. Since I am not allowed to change perspective (1), I cannot move my camera. Therefore I need to choose a focal length that gives me the correct angle of view.

What does 'framing' mean for you?

With framing (as I intend it) fixed, the magnification at the sensor will not be the same between formats, but it will be proportional to the linear size of the format! This is an important point, since a lot of things depend on the magnification.

Maybe the further points will be cleared up when we agree on what 'framing' means.
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 15, 2017, 19:12:50
Bjørn, I know what you are aware of and not (including thorough familiarity with Poisson processes). But I'm not sure what other readers might know. Thus it is important to stress Physical Aperture to make the point about Same Diffraction, for example, for those readers who might not immediately understand the need for different focal length lenses with "equivalent" f-stops to produce the same physical aperture, etc., even when performing this experiment on the same FX/DX camera.

Actually, I think you have got it now. I hope others have. If not, keep trying. It's only a few definitions and some physics/engineering/mathematics.

Framing - AoV. That's all.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 19:19:17
Image quality and noise, for example, are not part of the definition of Equivalence.
They are, to some extent, subsumed by 'Total Amount of Light on the Sensor', which is part of the definition.
That is a separate topic to do with photon noise and its statistical properties. The validity of theories about how photons behave is independent of 'Equivalence'. But most of what is known about it has been known for several decades, so there is little that cannot be found in textbooks.
Other sources of noise are specific to the way the sensor and A/D works, and much too complex to be included in such a coarse model. They have to be discussed on a case-by-case basis.
Regarding the discussion of PDR and Bill Claff's graphs, this is actually the bit we should focus on.
But let's first clarify the definitions.
Title: Re: Discussion of 'Equivalence'
Post by: JohnMM on May 15, 2017, 19:19:55
....the DX foto will have been created using less dense light.

Wrong way around ? Lower density of light (=lower exposure) on the larger (FX) sensor.
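The total-light vs. light-density distinction can be put in numbers. A sketch, assuming the earlier FX 1/100s f/8 vs. DX 1/100s f/5.6 example, nominal sensor dimensions, and ignoring the constant scene-luminance factor.

```python
# Same total light, different light density (per-area exposure),
# using the FX f/8 vs DX equivalent example with crop factor 1.5.

FX_AREA = 36 * 24   # mm^2, nominal FX sensor
DX_AREA = 24 * 16   # mm^2, nominal DX sensor (FX_AREA / 1.5^2)

def relative_exposure(f_number, shutter_s):
    """Per-area exposure (light density), up to a constant scene factor."""
    return shutter_s / f_number**2

fx_density = relative_exposure(8, 1 / 100)
dx_density = relative_exposure(8 / 1.5, 1 / 100)  # exact DX equivalent of f/8

fx_total = fx_density * FX_AREA
dx_total = dx_density * DX_AREA
# The densities differ by crop^2 (the DX sensor sees denser light),
# while the totals over the sensor area come out identical.
```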
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 19:41:03
"Maybe the further points will be cleared up when we agree on what 'framing' means."

I rephrased following Andrea's suggestion to be same angle of view (of the captured scene that is).

However, as you apparently are aware of, if perspective is to be held and formats differ then primary magnification has to differ because different focal lengths are used, and with that follows a cascade of other issues to violate one or more of the listed 'Equivalence' criteria. In my humble opinion, this makes the whole approach fall like a house of cards. Others might disagree which is their prerogative. I'm not on a mission to convince anyone.

Whatever the attitude, we end up in a round-robin manner at the conclusion that a larger format in most cases needs a smaller degree of secondary magnification. A fact that is centuries old by now. A side effect of this, not mentioned by any of the model articles (at least, I couldn't find it mentioned), is of course that the danger of empty magnification is also reduced. An example: for true grand landscape photography one needs a longer lens to get details more magnified and avoid empty magnification; and unless stitching is applied, the wider angle of view usually is provided by a [much] larger format than is optimal elsewhere.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 20:02:16
"Maybe the further points will be cleared up when we agree on what 'framing' means."

I rephrased following Andrea's suggestion to be same angle of view (of the captured scene that is).

However, as you apparently are aware of, if perspective is to be held and formats differ then primary magnification has to differ because different focal lengths are used, and with that follows a cascade of other issues to violate one or more of the listed 'Equivalence' criteria.

Great. Yes, primary magnification is different between formats under the stated conditions (1,2). One can think about everything in terms of primary magnification. Using the concept of equivalence does not diminish the value of primary magnification in any way. But can you measure primary magnification from a finished print?

So, which of the following criteria (3-6) is violated without possible remedy?
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 20:04:35
It's more the impossibility of getting all in line at the same time. Trust me, I wasted a lot of time trying to accomplish this.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 20:16:01
A side effect of this, not mentioned by any of the model articles (at least, I couldn't find it mentioned), is of course that the danger of empty magnification is also reduced.

If you are thinking of empty magnification because the primary projection of the scene onto the sensor has a finite resolution due to aberrations, indeed this is not part of the model. Within the theory, there is a very coarse simplification that treats all lenses as diffraction-limited, or as having the same relative resolution in terms of line widths per picture height (which is again a very coarse description, because this is just one value of the MTF curve, e.g. MTF50). There are far too many lenses to distinguish a clear trend, and far too many ways to measure 'sharpness'.
The compensation of diffraction-related loss in resolution in small formats by a suitable aperture is likewise not without limits; there are obvious theoretical and practical (i.e. availability) limits on the range of apertures that can be used.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 20:34:04
Empty magnification even rears its head for FX systems - just imagine an ultrawide lens used for a mountain landscape. That capture will very soon run into empty magnification even for pretty modest print sizes. The ubiquitous recommendations for landscape lenses fail to mention this simple fact.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 20:40:40
It's more the impossibility of getting all in line at the same time. Trust me, I wasted a lot of time trying to accomplish this.
What about the example I posted to Jakov? Is this not feasible?
Title: Re: Discussion of 'Equivalence'
Post by: Jack Dahlgren on May 15, 2017, 20:41:04
Using your analogy, you already have the 'equivalent' measure to compare cars: acceleration. This is what matters, let's say, on a test track. A more massive body has to be compensated by a stronger engine.
In engineering and physics, the reformulation of relationships in terms of dimensionless quantities usually represents a step towards a deeper understanding, since stuff that is irrelevant for the qualitative behavior of the system gets moved out of the way.
Of course 'qualitative behavior' might not be so straightforward to define in photography. Still, I think that the methods of Joseph James are not too far off the mark.

What is missing from this discussion ARE the units or dimensions.

Without units you end up with strange ideas like the number of pixels increases the dynamic range.

The desire for high dynamic range is present during capture and processing. I don't think there are any printers out there that have 14 stops of range. So defining it at output is the wrong target.

So much fuzzy math here that I'm going to slap the helicon on and take photos.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 20:43:33
Empty magnification even rears its head for FX systems - just imagine an ultrawide lens used for a mountain landscape. That capture will very soon run into empty magnification even for pretty modest print sizes. The ubiquitous recommendations for landscape lenses fail to mention this simple fact.

Yes, I know. I stitch a lot to get around this problem. But stitching is in a way like using a massive sensor (that quickly exceeds the size of any commercially available sensor), and the same equivalence principles can also be applied to this scenario in order to choose a suitable focal length, aperture etc.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 20:55:10
What is missing from this discussion ARE the units or dimensions.

Without units you end up with strange ideas like the number of pixels increases the dynamic range.

Who said that?

The desire for high dynamic range is present during capture and processing. I don't think there are any printers out there that have 14 stops of range. So defining it at output is the wrong target.

I'm not following. The output target is merely there in order to get a fair comparison. It can be anything: a 1000px web image, a 10 meter print, etc. and depends on the application. The important point is that it is the same across the formats that are being compared.

Why do we need high DR during capture and processing? I think it is to be able to massage the file in order to get a good end result afterwards. If we don't have the DR to start with, our options are limited to some extent. However, how we make use of the available DR is entirely subjective and an artistic choice.

The point is that the DR of the capture device defines an upper limit of what can be achieved. Especially if we pull up shadows we are going to run into limits. We do not need an output medium with 14 stops of contrast range to get to see the effects.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 15, 2017, 20:55:37
You think from end to front, I think from the starting point and know where the end might be sought. Thus not necessary to even think about aperture or lens. Possibly a side effect of working with a wide range of formats for many years, going back even to large format view camera (I used up to 8x10"). Ingrained habits fade slowly.
Title: Re: Discussion of 'Equivalence'
Post by: Jack Dahlgren on May 15, 2017, 21:25:29
Who said that?

I'm not following. The output target is merely there in order to get a fair comparison.

John Maud said DRFX=1.5*sqrt(N)*DRPIX where N equals number of pixels.

If output is only to get a comparison, then why compare at all?

Equivalence is a tarbaby.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 21:50:41
John Maud said DRFX=1.5*sqrt(N)*DRPIX where N equals number of pixels.
From that formula your statement follows only if you keep DRPIX fixed. What is the problem? I think John made his notation quite clear. Otherwise you can always ask for clarification.

If you disagree with the formula for fixed DRPIX, please explain why!

You did not say why it is a strange idea, and where the units are missing. Incidentally, DR is dimensionless.

If output is only to get a comparison, then why compare at all?

This is not what I said. Please read again. I was talking about why the equivalence theory assumes a consistent output target.


Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 15, 2017, 22:08:22
Will, for example, increasing the sheer number of pixels in the first step mitigate some of the problems caused later by large magnification?

I don't think it's just more pixels that needs to be considered. The larger format collects more light, more photons. So the question I have is: does this greater sampling of photons (more light) translate to improved dynamic range? At this point I'm just watching to see if something gets hammered out.

Another point I'd like to make is the graphs I posted without comment were FX v. DX simply because they were available and hopefully eliminated some variables between various cameras.

I invite consideration for DX v. FX v. mid medium v. traditional 6x4.5cm medium format, though I have no idea how one might find a DX, FX and Medium Format camera with similar technical qualities. So what, if any, are the advantages of mid medium and medium format cameras?

Dave who is reading and hoping to learn.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 15, 2017, 22:21:40
The larger format collects more light, more photons.

This holds only for the same exposure.
Without the condition, the statement is too easy to misconstrue.
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 15, 2017, 22:49:20
This holds only for the same exposure.
Without the condition, the statement is too easy to misconstrue.

Yes, please add that (same exposure). I think I stated that in another thread many pages back. :)

Dave
Title: Re: Discussion of 'Equivalence'
Post by: Jack Dahlgren on May 16, 2017, 00:14:37
From that formula your statement follows only if you keep DRPIX fixed. What is the problem? I think John made his notation quite clear. Otherwise you can always ask for clarification.

If you disagree with the formula for fixed DRPIX, please explain why!

You did not say why it is a strange idea, and where the units are missing. Incidentally, DR is dimensionless.

This is not what I said. Please read again. I was talking about why the equivalence theory assumes a consistent output target.

DRPIX is the dynamic range of a pixel. Why would you multiply it by number of pixels? If we do then we find that dynamic range of D5 is half that of D800.

Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 00:27:01
DRPIX is the dynamic range of a pixel. Why would you multiply it by number of pixels? If we do then we find that dynamic range of D5 is half that of D800.

Because the image is made up of more than one pixel, and it is multiplied by the square root of N (not N) because the signal scales linearly while the standard deviations are added in quadrature (as John already said -- sorry, I got myself confused for a moment).

The full well capacity and noise floor of a photosite are hardly the same between the D5 and D800, so DRPIX has to be set to a different value for each of the cameras when calculating DR. Check the tables on http://www.sensorgen.info (http://www.sensorgen.info) to get numerical values for some common cameras. The D5 has not been included in the list, but the D4 for example has a FWC of 118339 electrons and a read noise of 18.8 electrons, while the D800 has one of 48818 electrons and a read noise of 4.6 electrons. (please click on the camera model to get the read noises at base ISO, since the table oddly lists the minimum read noise across all ISO values).

Unsurprisingly, the fat pixels of the D4 are designed to swallow many more photons before saturating. That is good, because they will receive more photons during a normal exposure (because they have a bigger area).

When designing a sensor, the full-well capacity of a photosite has to be chosen such that it is filled with what is considered the correct amount of photons according to the ISO norm for base ISO (accounting for losses due to the filters in front of the sensor). Or in other words, sensors with differently-sized photosites, but having the same quantum efficiency and the same base ISO (as is the case of the D4 and D800) should not have the same FWC.
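This design rule can be sanity-checked against the sensorgen numbers quoted above. A rough sketch; the pixel pitches (about 7.3 µm for the D4 and 4.88 µm for the D800) are approximate published values that I am assuming, not figures from this thread.

```python
# Rough check that full-well capacity scales with photosite area.
# FWC values are the sensorgen numbers quoted above; pixel pitches
# are assumed approximate published values.

sensors = {
    # name: (FWC in electrons, pixel pitch in microns)
    "D4":   (118339, 7.3),
    "D800": (48818, 4.88),
}

for name, (fwc, pitch) in sensors.items():
    per_area = fwc / pitch**2  # electrons per square micron
    print(f"{name}: {per_area:.0f} e-/um^2")

# Both land in the same ballpark (~2000-2200 e-/um^2): the D4's fatter
# photosites hold proportionally more charge, consistent with the same
# base ISO and similar quantum efficiency.
```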
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 16, 2017, 00:34:09
Wrong way around ? Lower density of light (=lower exposure) on the larger (FX) sensor.

JohnMM - yes of course. Sorry about that! A typo in the "heat" of the discussion. Has been corrected.
Title: Re: Discussion of 'Equivalence'
Post by: Jack Dahlgren on May 16, 2017, 01:12:29
Because the image is made up of more than one pixel, and it is multiplied by the square root of N (not N) because the signal scales linearly while the standard deviations are added in quadrature (as John already said -- sorry, I got myself confused for a moment).

The full well capacity and noise floor of a photosite are hardly the same between the D5 and D800, so DRPIX has to be set to a different value for each of the cameras when calculating DR. Check the tables on http://www.sensorgen.info (http://www.sensorgen.info) to get numerical values for some common cameras. The D5 has not been included in the list, but the D4 for example has a FWC of 118339 electrons and a read noise of 18.8 electrons, while the D800 has one of 48818 electrons and a read noise of 4.6 electrons. (please click on the camera model to get the read noises at base ISO, since the table oddly lists the minimum read noise across all ISO values).

Unsurprisingly, the fat pixels of the D4 are designed to swallow many more photons before saturating. That is good, because they will receive more photons during a normal exposure (because they have a bigger area).

When designing a sensor, the full-well capacity of a photosite has to be chosen such that it is filled with what is considered the correct amount of photons according to the ISO norm for base ISO (accounting for losses due to the filters in front of the sensor). Or in other words, sensors with differently-sized photosites, but having the same quantum efficiency and the same base ISO (as is the case of the D4 and D800) should not have the same FWC.

Dynamic range exists at the pixel level. Pixels are independent and self-contained. If you have an image with 1000 pixels and one with 1000000 using the same photosite, the dynamic range is the same. Why would you multiply by the square root of the number of pixels?
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 02:15:53
Dynamic range exists at the pixel level. Pixels are independent and self-contained. If you have an image with 1000 pixels and one with 1000000 using the same photosite, the dynamic range is the same. Why would you multiply by the square root of the number of pixels?

Dynamic range begins at the pixel level, but the analysis does not stop there because you cannot build an image from one pixel.

At the pixel level, DR is the ratio of the maximum number of photons* that you can record in one photosite (full well capacity FWC), and the noise floor (read noise RN).

At the sensor level, DR is the ratio of the maximum number of photons that you can record (the sum of all FWC, or N*FWC), resulting in a completely white frame, and the noise floor of a completely dark frame. The noise standard deviation is sqrt(N)*RN if the read noise in different photosites is independent (which is an approximation that is valid to varying extent depending on the sensor readout technology). The 'fake signal' due to read noise will have a similar size on the order of sqrt(N)*RN.

So the sensor-wide DR is N*FWC / (sqrt(N)*RN) = sqrt(N) * FWC/RN = sqrt(N) * DRPIX

*I'm always writing photons, but the correct term would be photo-electrons.
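The derivation above is easy to evaluate numerically. A sketch: the FWC and read-noise figures are the sensorgen values quoted earlier in the thread, the pixel counts (16.2MP and 36.3MP) are nominal values I am assuming, and real sensors will deviate from the independence approximation.

```python
import math

# Sensor-wide DR = N*FWC / (sqrt(N)*RN) = sqrt(N) * DRPIX,
# expressed here in stops (log base 2).

def dr_stops(fwc, rn, n_pixels=1):
    """Dynamic range in stops; n_pixels=1 gives the per-pixel figure."""
    return math.log2(math.sqrt(n_pixels) * fwc / rn)

# Per-pixel (engineering) DR:
d4_pix = dr_stops(118339, 18.8)   # ~12.6 stops
d800_pix = dr_stops(48818, 4.6)   # ~13.4 stops

# Sensor-wide DR under the independence approximation:
d4_sensor = dr_stops(118339, 18.8, n_pixels=16.2e6)
d800_sensor = dr_stops(48818, 4.6, n_pixels=36.3e6)

print(f"D4: {d4_pix:.2f} -> {d4_sensor:.2f} stops; "
      f"D800: {d800_pix:.2f} -> {d800_sensor:.2f} stops")
```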
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 02:47:19
Earlier in the discussion I brought up Poisson statistics which, however, don't appear in the previous calculation. I will clear that up later if needed. Sorry for any confusion that arises because of this.
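For completeness, the Poisson property brought up earlier (SNR, mean over standard deviation, scales with the square root of the mean) is easy to verify numerically. A quick Monte-Carlo sketch, assuming numpy is available; not part of the original post.

```python
import numpy as np

# Monte-Carlo check: for a Poisson-distributed photon count,
# std = sqrt(mean), so SNR = mean/std ~= sqrt(mean).

rng = np.random.default_rng(0)

for mean in (100, 400, 1600):
    counts = rng.poisson(mean, size=200_000)
    snr = counts.mean() / counts.std()
    print(f"mean={mean:5d}  SNR={snr:6.1f}  sqrt(mean)={np.sqrt(mean):6.1f}")
```

Quadrupling the light roughly doubles the SNR, which is the shot-noise reason larger exposures (and larger total light) look cleaner.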
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 16, 2017, 04:21:56
if the read noise in different photosites is independent (which is an approximation that is valid to varying extent depending on the sensor readout technology)

A question here -- is sensor readout a hardware based process or combo of hardware and algorithm process? Is it done per pixel or per row?
And I'm not even sure I am asking this question correctly.  ;D So let's just stipulate that I've forgotten whatever specific details I used to know about readout and read noise and thus I'm getting a bit tangled up as to whether independence can be assumed.

Added later:  It was binning I was trying to remember about. For independence there wouldn't be any binning, right? One photosite through one a/d converter (then amplifier) would map to one "pixel" in the photo? (Before demosaicing.)
Title: Re: Discussion of 'Equivalence'
Post by: Jack Dahlgren on May 16, 2017, 06:34:46
Dynamic range begins at the pixel level, but the analysis does not stop there because you cannot build an image from one pixel.

At the pixel level, DR is the ratio of the maximum number of photons* that you can record in one photosite (full well capacity FWC), and the noise floor (read noise RN).

At the sensor level, DR is the ratio of the maximum number of photons that you can record (the sum of all FWC, or N*FWC), resulting in a completely white frame, and the noise floor of a completely dark frame. The noise standard deviation is sqrt(N)*RN if the read noise in different photosites is independent (which is an approximation that is valid to varying extent depending on the sensor readout technology). The 'fake signal' due to read noise will have a similar size on the order of sqrt(N)*RN.

So the sensor-wide DR is N*FWC / (sqrt(N)*RN) = sqrt(N) * FWC/RN = sqrt(N) * DRPIX

*I'm always writing photons, but the correct term would be photo-electrons.

This definition of "sensor-wide" dynamic range is the part that makes no sense to me. Imagine for a moment that we are talking about film (chemical rather than electrical - but same idea). Cut a piece of Tri-X in half and expose it to a step chart. Both pieces will record the same gradient from black to white. You won't magically get more dynamic range by increasing the size of the film. If that were the case, 8x10 would have a tremendous dynamic range advantage as compared to 35mm. Sadly, it doesn't. The same is true of a silicon based sensor.

It appears that this definition is just part of a circular exercise where a smaller sensor has less dynamic range because the definition of dynamic range for sensors is based on their size, not on the range of values that they can capture (which, in the case of selecting a DX-sized area of an FX sensor, is identical).
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 16, 2017, 08:11:09
Cut a piece of Tri-X in half and expose it to a step chart.

The step chart is woefully short of the dynamic range available in a scene in nature. Think of a diffused highlight on a white object to a darker object in deep shadow.

Once one captures whatever dynamic range one can, they will have to compress that dynamic range into that of their printing paper or display monitor. To get a pleasing image, the shadows and highlights will be compressed. In the days of film, the toe of the film compressed the shadows and the toe of the paper compressed the highlights.

I'm guessing here, but I think I could represent 13 stops on grade 2 paper with N-1 development of Tri-X of the '80s and '90s, along with dodging and burning. Anyway, Tri-X the way I used it had good dynamic range. Velvia on the other hand sucked!

Dave Hartman who will go back to reading.

Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 09:13:22
This definition of "sensor-wide" dynamic range is the part that makes no sense to me. Imagine for a moment that we are talking about film (chemical rather than electrical - but same idea). Cut a piece of Tri-X in half and expose it to a step chart. Both pieces will record the same gradient from black to white. You won't magically get more dynamic range by increasing the size of the film. If that were the case, 8x10 would have a tremendous dynamic range advantage as compared to 35mm. Sadly, it doesn't. The same is true of a silicon based sensor.

It appears that this definition is just part of a circular exercise where a smaller sensor has less dynamic range because the definition of dynamic range for sensors is based on their size, not on the range of values that they can capture (which, in the case of selecting a DX-sized area of an FX sensor, is identical).

I don't understand this idea of 'circular exercise'. The definition of dynamic range at the sensor level that John and I gave, like the PDR definition by Bill Claff, is one that lends itself to comparison of images at a standardized output size. I think that this is the only way to meaningfully compare images of the same scene, and I think the definition is useful for that. Others prefer to think of secondary magnification.

It is not about 'better' or 'worse' definitions. Each definition has a specific, precisely delineated purpose.

If you use the per-pixel or per-unit-area DR, you will find that it does not correspond to what you can see in a standardized output size in terms of noise.
That is why sensor-level DR makes sense as a concept, and why Bill Claff uses PDR as a y-axis, as opposed to engineering dynamic range.

The definition is what it is. The non-trivial part is that if you accept the definition, you can do the little calculation that John and I did in order to estimate sensor-level DR. You get certain predictions. Then you can test those by looking at actual images and running statistical analyses on them. This part is definitely non-circular.
Using a different definition, you might end up with the same predictions, but using a slightly different calculation.

I said before that Bill Claff's graphs don't make sense if you don't understand the definitions. Perhaps too many people draw conclusions from the graph without understanding the definitions, but this is their own fault, not Bill Claff's, because he gives all required information.
The definition ensures that an iPhone 7 ends up lower on the scale than a D800, even though the individual photosites of the iPhone are likely as efficient as the ones in the D800, or even more efficient. And we probably agree that a given output from an iPhone does not look the same or better than from a D800.
It's all about how to present the data.

If the axis were engineering DR, all sensors from the same generation would be very close. One would have to separately calculate the effect on a standardized output by invoking secondary magnification, instead of being able to read off the difference in stops directly from the graph. Both lead to the same conclusions.

About your film example: If you print your gradients from the smaller piece of film, the grain of the film will be more strongly magnified, giving you less certainty about the dark tones of the gradient and the blacks than if you print from a bigger piece of film. Again, we are looking at a standardized output size. This has been described by Bjørn as 'different secondary magnification', but both concepts are 'equivalent' in terms of predictive power, if you allow me the pun.

The analogy with film is not strict because a piece of film, unlike a digital sensor, does not have a full-well capacity, so its dynamic range is not a precise number as it is for a sensor. I do not know the technical definition of the dynamic range of film, but I guess that the upper limit involves some kind of cutoff where the density no longer changes meaningfully with additional exposure.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 09:44:28
if the read noise in different photosites is independent (which is an approximation that is valid to varying extent depending on the sensor readout technology)

A question here -- is sensor readout a hardware-based process or a combination of hardware and algorithmic processing? Is it done per pixel or per row?
And I'm not even sure I am asking this question correctly.  ;D So let's just stipulate that I've forgotten whatever specific details I used to know about readout and read noise, and thus I'm getting a bit tangled up as to whether independence can be assumed.

Added later: It was binning I was trying to remember about. For independence there wouldn't be any binning, right? One photosite through one amplifier and one A/D converter would map to one "pixel" in the photo? (Before demosaicing.)

I think it is mainly the transfer of charge from the pixel to the downstream circuitry.
There is also quantization noise in the A/D conversion, and thermal noise.

However, I have to pass on those questions. I would have to read some literature about what is known.

The correlations of all noise sources combined (except photon noise of course) can be measured by looking at correlations in the RAW image of a dark frame. I think I have seen some pictures of Discrete Fourier Transforms of dark frames somewhere, but I don't remember the place.
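I don't remember the place either, but the check itself is easy to sketch. Here is a toy version with synthetic, uncorrelated noise standing in for a RAW dark frame (variable names are mine; a real test would of course load actual RAW data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a RAW dark frame: purely independent ("white") read noise.
dark = rng.normal(loc=0.0, scale=2.0, size=(256, 256))

# Power spectrum of the mean-subtracted frame. For independent noise it
# is flat on average; correlated readout noise (e.g. row banding) shows
# up as ridges along the frequency axes.
spectrum = np.abs(np.fft.fft2(dark - dark.mean())) ** 2

# Row banding (a common offset per row) would pile up at zero horizontal
# frequency, i.e. in spectrum[:, 0]. For white noise this column is no
# more energetic than the rest of the plane.
band_energy = spectrum[1:, 0].mean()
flat_energy = spectrum.mean()
ratio = band_energy / flat_energy  # close to 1 for independent noise
```

If the ratio comes out well above 1 on a real dark frame, the independence approximation is breaking down for that readout design.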
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 16, 2017, 11:21:42
[...] comparison of images at a standardized output size. I think that this is the only way to meaningfully compare images of the same scene [...]

The requirement that the same scene is reproduced at a standardised output size creates the circularity.  As soon as that requirement is accepted everything else follows, but if it is rejected the equivalence argument falls to the ground. 

But why should that requirement be accepted?  Other than to make the argument possible, of course, which is as clear a definition of circularity as you could ask for.  Under what real-world circumstances does anyone use different cameras to take the same photograph, for a standardised output size? 
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 16, 2017, 11:22:03
A pity people today aren't more familiar with film technology. Then it would be obvious that even a small piece of the same emulsion had precisely the same ability to encompass a light range (or dynamic range) as a larger sheet of the same material. This would avoid the confusing step of making DR a direct function of area, and avoid the inherent circularity, or at least eschew spuriously correlated parameters in later models. Is the film analogy without solid backing? Not at all. In the ancient age of film, cutting pieces of photosensitive material for testing exposures was commonplace, in particular for the development of sheet film, as was the similar procedure of exposing strips of paper in the darkroom. If the final outcome had depended in any way on area we would have been royally screwed, but fortunately we weren't. Using a fraction kept the properties of the whole. A lesson apparently quickly forgotten in the onslaught of digital photography, and one of the reasons pseudo-science is generated.

The crucial aspect not addressed by this 'equivalence theory' is WHY larger formats are perceived as "better" in the sense that they allegedly have a superior dynamic range. Partly this arises because one fails to recognise the entire imaging chain as a self-contained system. Singling out a single component and drawing system-wide conclusions from it is fraught with problems. We need to include the optics, the image circles projected, the nature of light, magnification requirements, and the purpose for which the system was created in the first place in order to relate different systems to each other, and at the same time understand that for each end purpose there is an "optimal" system; no single system can be optimised for every user's requirements.

So, why is a tiny format like that of a smartphone inherently less 'dynamic' than a larger format? Is it because the pixel count is lower? No, today's smartphones have pixels aplenty. Now imagine a smartphone sensor with photosites of outstanding dynamic range and the same number of pixels as a larger camera format. By the presented calculations these formats should have the same inherent dynamic range. Oh well, by the requirement of the fixed output target the small format actually would need more (both pixels and DR; even though pixels as such are dimensionless, the constraints imposed by the "same framing" criterion, and their side effects on other parameters, induce a dimensionality dependence nonetheless). To make life easier for the smartphone, we might compare against a larger camera with a lower pixel count, though. Again, we now have this über-super smartphone at our disposal, fulfilling the wildest dreams of pixel count and dynamic range - so can we now end up with a print of the same quality as from the larger format? Not likely, as the tiny format very soon will run into empty magnification and no more information can be conveyed to the output medium. Thus the bandwidth of information has a ceiling that no amount of pixels or DR can remove. Any format will be constrained in this manner, but the problem will manifest itself much earlier with the smaller formats.

Is the example sketched above far-fetched? Absolutely not. It corresponds to the film-era practice of making ultra-high-resolution images on small formats with extremely fine-grained, specially developed film. It was easy to reach or surpass the output from, say, sheet film or medium format in that way, up to the inevitable ceiling set by empty magnification. This was of course facilitated by the fact that, on an area basis, optics for smaller formats (within limits) can be made to resolve better. This outcome was easy to understand when one thought in terms of the overall system, an enigma if the "sensor" (i.e. film) was thought to be the principal determinant. I once designed and built an underwater stereographic device following these principles, in which high-resolving film and 35 mm format lenses were used, and tested it against the existing 6x6 cm Hasselblad-based device for the same documentation purpose (frame coverage on the sea bed was identical). The smaller system easily won out in image quality when we scrutinised the film under 20X magnification in a binocular loupe. However, frames from the Hasselblad could be enlarged to mural size; the smaller frames couldn't unless a severe drop in perceived quality was accepted.

Whatever one's attitude towards 'equivalence theory', it is pretty obvious that the 'theory' has little to do with practical photography. Just use the current thread as an example.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 11:47:48
Partly this arises because one fails to recognise the entire imaging chain as a self-contained system.

A very compact mathematical description of the entire imaging chain could be achieved by considering the transformation T between the (normalized) image space light density map (light intensity as a function of normalized image space coordinates) and the (normalized) reflectance map (for a print) or screen light intensity map (for on-screen viewing). The criteria of equivalence would be formulated as the changes of parameters of the intermediate stages of this transformation, such that T does not change. Or more concisely: two imaging chains are called equivalent if they have the same T. This terminology fits with the mathematical notion of 'equivalence class'.

I would argue that it is precisely equivalence theory that attempts to consider the whole chain, and treat it more or less as a black box with a small set of parameters, for better or worse. As someone already said: a useful (to some extent) abstraction and simplification.
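As a concrete (and much more pedestrian) companion to the abstract T: the usual equivalence mapping between formats can be written down in a few lines. This is a sketch under the standard conventions (crop factors relative to FX; names are mine), not the transformation T itself:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    focal_mm: float
    f_number: float
    iso: float

def equivalent_settings(s: Settings, crop_from: float, crop_to: float) -> Settings:
    # Map settings from one format to another so that the usual equivalence
    # criteria (framing, DOF, total light on the sensor) are preserved.
    # Crop factors are relative to FX, e.g. DX = 1.5.
    k = crop_from / crop_to
    return Settings(s.focal_mm * k, s.f_number * k, s.iso * k * k)

# 50mm f/2.8 ISO 400 on DX maps to 75mm f/4.2 ISO 900 on FX.
fx = equivalent_settings(Settings(50, 2.8, 400), crop_from=1.5, crop_to=1.0)
```

Two chains related by this mapping would, in the terminology above, belong to the same equivalence class, i.e. have the same T up to the approximations involved.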
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 16, 2017, 11:53:02
One thing is for sure, the 'theory' creates a lot of confusion. Thus we now end up with 'theory of confusion' instead of blur circles :D

Can we close this debate soon? It is really not going anywhere. Even the debate is becoming circular.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 12:01:48
The requirement that the same scene is reproduced at a standardised output size creates the circularity.  As soon as that requirement is accepted everything else follows, but if it is rejected the equivalence argument falls to the ground. 

But why should that requirement be accepted?  Other than to make the argument possible, of course, which is as clear a definition of circularity as you could ask for.  Under what real-world circumstances does anyone use different cameras to take the same photograph, for a standardised output size?

You don't have to accept it. Nothing will change if you don't.

Under what real-world circumstances does anyone use different cameras to take the same photograph, for a standardised output size?
It happens all the time. People switch between systems with different formats but continue to view their images in the same or in a similar way, leading to perceptual differences in the output.
Or a new camera hits the market and you want to test its capabilities by comparing its output with a known benchmark.
Etc.

It's a conceptual tool. If you don't need a hammer, leave the hammer in the toolbox.  :D
Title: Re: Discussion of 'Equivalence'
Post by: Akira on May 16, 2017, 12:08:00
One thing is for sure, the 'theory' creates a lot of confusion. Thus we now end up with 'theory of confusion' instead of blur circles :D

No worries!  NikonGear is THE circle of confusion!   ;D
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 16, 2017, 12:10:11
No worries!  NikonGear is THE circle of confusion!   ;D

FINALLY a point we all can wholeheartedly agree on. Or so I surmise :D
Title: Re: Discussion of 'Equivalence'
Post by: Jack Dahlgren on May 16, 2017, 14:38:48
FINALLY a point we all can wholeheartedly agree on. Or so I surmise :D
Wait... Is it a point or a circle?
Title: Re: Discussion of 'Equivalence'
Post by: Jack Dahlgren on May 16, 2017, 14:43:16
A very compact mathematical description of the entire imaging chain could be achieved by considering the transformation T between the (normalized) image space light density map (light intensity as a function of normalized image space coordinates) and the (normalized) reflectance map (for a print) or screen light intensity map (for on-screen viewing). The criteria of equivalence would be formulated as the changes of parameters of the intermediate stages of this transformation, such that T does not change. Or more concisely: two imaging chains are called equivalent if they have the same T. This terminology fits with the mathematical notion of 'equivalence class'.

I would argue that it is precisely equivalence theory that attempts to consider the whole chain, and treat it more or less as a black box with a small set of parameters, for better or worse. As someone already said: a useful (to some extent) abstraction and simplification.

Yes, you are treating it as a black box, but you are hijacking pixel dynamic range (a constant for the same sensor) by multiplying it by size (a variable) and then calling it sensor dynamic range. It is this redefinition which is bothering me. It needs a different name.
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 16, 2017, 16:43:38
Bjørn, you fierce bear, you are correct to recognize that there is an "inevitable ceiling" reached for certain variables (magnification, for example) when comparing systems. You are also correct in recognizing that one specific characteristic of a sensor (dynamic range, for example) should not be used to judge the entire system.

But I want to stand up for the folks here on UVP, like Simone, myself and our other interested NG members, who also understand the various limitations of "Equivalence" and really have no argument against what you say (as long as the definitions, etc., are understood, etc., by all).

Using the "Equivalence" definitions (and the accompanying physics) to determine the differences between systems or the relative merits of each system is useful. It can help us configure any system to its most advantageous settings. It can help us understand better which system to choose for a particular photographic task. So I feel that we should not "throw out the baby with the bathwater", as the old saying goes.  ;D

******

I'm still thinking about the Dynamic Range definitions. And what Jack has just observed. I'll spare everyone from my mental DR muddle for now.  :D :D :D

Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 16, 2017, 16:47:38
A very compact mathematical description of the entire imaging chain could be achieved by considering the transformation T between the (normalized) image space light density map (light intensity as a function of normalized image space coordinates) and the (normalized) reflectance map (for a print) or screen light intensity map (for on-screen viewing). The criteria of equivalence would be formulated as the changes of parameters of the intermediate stages of this transformation, such that T does not change. Or more concisely: two imaging chains are called equivalent if they have the same T. This terminology fits with the mathematical notion of 'equivalence class'.

I would argue that it is precisely equivalence theory that attempts to consider the whole chain, and treat it more or less as a black box with a small set of parameters, for better or worse. As someone already said: a useful (to some extent) abstraction and simplification.

Mathematical descriptions of the imaging chain abound, and they are singularly useless for photographers.  The reason is that photographs are intended for viewing by humans, so that human evaluation is essential for any photographically credible notion of equivalence.  There is a vast scientific literature about how to do that, using "just noticeable difference" techniques.  There are plenty of studies using that procedure to evaluate reproduction of luminance maps - the term of art for the result is "image fidelity" - but that correlates quite poorly with evaluation of pleasingness.   
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 16, 2017, 16:53:27
But Equivalence is not used to evaluate photographs. It is used to compare/contrast certain variables of the photo gear.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 16, 2017, 17:08:36
Under what real-world circumstances does anyone use different cameras to take the same photograph, for a standardised output size?
It happens all the time. People switch between systems with different formats but continue to view their images in the same or in a similar way, leading to perceptual differences in the output.
Or a new camera hits the market and you want to test its capabilities by comparing its output with a known benchmark.
Etc.

People switching systems do not re-visit every location they have been for the past two years and re-photograph every image they made.  Why would you get a different camera if not to take different photographs? 

And as for comparing a new camera which has "hit" the market with other cameras, the reason that is a foolish undertaking is that the conditions under which a comparison is possible - same photograph, same output - are photographically irrelevant.
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 16, 2017, 17:36:29
Be that as it may, some people do want to compare cameras on a technical, gear-head basis. I myself enjoy it because I like techie stuff. I also manage to separate my artistic, photographic aspirations (such as they may be, it is questionable, <smiling>) from techie stuff.

We can enjoy and maybe even be good at more than one thing: Gear vs. Photographs, Tech vs. Art. The two opposites can be combined meaningfully. For example, for all his fussing in this thread, Bjørn is a master of Tech in addition to being a photographic Artist (my opinion). He just gets ahead of himself with these definitions in English (that crazy language!!).
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 17:50:14
Mathematical descriptions of the imaging chain abound, and they are singularly useless for photographers.  The reason is that photographs are intended for viewing by humans, so that human evaluation is essential for any photographically credible notion of equivalence.  There is a vast scientific literature about how to do that, using "just noticeable difference" techniques.  There are plenty of studies using that procedure to evaluate reproduction of luminance maps - the term of art for the result is "image fidelity" - but that correlates quite poorly with evaluation of pleasingness.

I was not trying to -- as you seem to have interpreted -- derive mathematical conditions for the pleasingness of an image. That is a different subject for neuroscience to figure out, though not an entirely futile one; e.g. certain geometrical relationships in faces can be correlated with pleasingness across multiple viewers.

Instead, I was describing a possible definition of what it means that an image can look 'the same' despite differences along the imaging chain. There is no notion of subjective quality or pleasingness to this, merely the notion of sameness of measurable quantities, like noise, DOF, framing etc.

Are you saying that humans might perceive two images as different, despite the fact that all possible measurements on the image say they are the same? If yes, what would you attribute the difference in perception to? The brain can only pick up stuff that is there, or would you like to invoke some supernatural senses?

For example, color management is an aspect of this that exists despite the fact that color perception is complicated. It is still possible to roughly model color perception, and derive methods to get consistent colors across media (or to get close, at least). Sometimes color accuracy is needed, sometimes not. But the fact that color management exists and is well developed indicates that there is a need for it that goes beyond pure academic interest. Even though it is grossly imperfect, having no color management at all would be much worse.

You are saying that such concepts are 'singularly useless' for photographers, as if this matters, or is an objective fact. Instead, we are merely discussing conceptual tools that any photographer, camera designer, scientist or interested individual can freely choose to make use of or not. Obviously, you have decided that they are useless for you, but the fact that there is a lot of literature on the subject indicates that there is considerable interest in them.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 17:58:46
People switching systems do not re-visit every location they have been for the past two years and re-photograph every image they made.  Why would you get a different camera if not to take different photographs? 

And as for comparing a new camera which has "hit" the market with other cameras, the reason that is a foolish undertaking is that the conditions under which a comparison is possible - same photograph, same output - are photographically irrelevant.

I don't know why people do what they do, but they do it. For example, they buy a lighter camera with a smaller sensor because of declining body strength and then compare their favourite print size to understand what they are trading against the weight saving.

Yes, sometimes they buy new gear to get different possibilities. But who am I to judge them or what they do?

I'm saying that if they want an apples-to-apples comparison, they are advised to evaluate images of the same size at the same viewing distance. Not by looking at a 10m print with a loupe for one camera, and at a 1000px web JPEG for the other camera. Is it so controversial to suggest that such a comparison would be totally meaningless?
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 16, 2017, 18:05:10
But Equivalence is not used to evaluate photographs. It is used to compare/contrast certain variables of the photo gear.

To what end(s)?

I have nothing against counter-intuitive theories; after all, any progress depends on such thoughts being formulated, tested, and refined. However, counter-productive theories that fail to predict phenomena already known, or muddle their understanding, are less than useful. It is a simple fact that the 'equivalence theory' fails us in understanding why there should be a difference between film and a digital sensor regarding dynamic range when a part of the "sensor" (a film strip, say, or a DX crop from an FX frame) is used to capture illumination. We *know* from film that a piece of the film sheet responds identically to the entire sheet. It is not only intuitive, it has been used in practice for ages. So why should a digital sensor be magically different? The proponents of the 'equivalence theory' repeatedly claim there is an area dependency involved. See, for example, the calculations for DX to FX in an earlier post.

It all boils down to a small piece of film or a photosite having a log D-E curve of basically the same functional shape. The film has the "toe" (= shadow noise in digital) and much higher headroom at the upper end of the curve, simply because the linear relation between log D and E breaks down gradually; the digital sensor clips more brutally. No need to dismiss film because it "lacks full-well capacity"; it certainly has one, but it is manifested differently. But then film and digital are different recording media. In any case, film is not area-dependent in dynamic range, meaning the range of light values that can be recorded; digital is claimed to be. This assertion has been posted over and over again here; sometimes stating that one "throws away" dynamic range by sampling from the initial file, which simply ignores the fact that the entire chain up to the final output (a print of fixed size, another of those constraints that remove degrees of freedom) tends to "waste" information, as much more is recorded initially than can be presented in the final stage. We expressly don't need 36 MPix cameras to make a 30x40 cm print or post a picture of 2000x1300 pix on a web page. Most of the collected data from the first step is discarded unless one wants mural prints, and in that particular case few of today's consumer models deliver enough pixels to avoid significant extrapolation, which negatively impacts the output quality in its own manner.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 18:22:46
Yes, you are treating it as a black box, but you are hijacking pixel dynamic range (a constant for the same sensor) by multiplying it by size (a variable) and then calling it sensor dynamic range. It is this redefinition which is bothering me. It needs a different name.

Dynamic range is always tied to a certain signal (or device, but in this case we use different DR definitions for the same device, so they must be referring to different signals). What is called a 'signal' differs from case to case. Sometimes the same quantity that is noise in one case, can be a signal in another.

- If the signal is the brightness of a patch of the normalized output image (either the entire image or a certain predetermined fractional part of it; it does not matter which), then the definition that was used by John and me, or alternatively Bill Claff's PDR, applies.

- For the per-pixel DR, the signal is the charge measured in the photosite.

The name is still valid because it is a ratio of the biggest and the smallest recordable signals.
What name would you propose?
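For the per-pixel case, the ratio can be written out directly (illustrative numbers only, not any specific sensor; the function name is mine):

```python
import math

def engineering_dr_stops(full_well_e: float, read_noise_e: float) -> float:
    # Per-pixel ('engineering') DR: ratio of the largest recordable
    # signal (full-well capacity, in electrons) to the smallest
    # (the read-noise floor), expressed in stops.
    return math.log2(full_well_e / read_noise_e)

edr = engineering_dr_stops(full_well_e=60000, read_noise_e=3.0)  # about 14.3 stops
```

The output-referred definition has exactly the same form; only the 'largest' and 'smallest' signals are measured on the standardized output instead of at the photosite.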
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 19:07:08
To what end(s)?

I have nothing against counter-intuitive theories; after all, any progress depends on such thoughts being formulated, tested, and refined. However, counter-productive theories that fail to predict phenomena already known, or muddle their understanding, are less than useful. It is a simple fact that the 'equivalence theory' fails us in understanding why there should be a difference between film and a digital sensor regarding dynamic range when a part of the "sensor" (a film strip, say, or a DX crop from an FX frame) is used to capture illumination. We *know* from film that a piece of the film sheet responds identically to the entire sheet. It is not only intuitive, it has been used in practice for ages. So why should a digital sensor be magically different? The proponents of the 'equivalence theory' repeatedly claim there is an area dependency involved. See, for example, the calculations for DX to FX in an earlier post.

The area dependency only applies to measurements performed on the standardized output size. Thus, you would have to blow up the cut pieces of film to different degrees, changing the secondary magnification. The graininess of the output would depend on the size of the piece of film that you used.
I do not claim any other area dependency, but I claim that the described area dependency is relevant to discussions of the image quality that can be achieved with a certain system. I'm fine using both terminologies interchangeably because they are completely synonymous.

It all boils down to a small piece of film or a photosite having a log D-E curve of basically the same functional shape. The film has the "toe" (= shadow noise in digital) and much higher headroom at the upper end of the curve, simply because the linear relation between log D and E breaks down gradually; the digital sensor clips more brutally. No need to dismiss film because it "lacks full-well capacity"; it certainly has one, but it is manifested differently. But then film and digital are different recording media. In any case, film is not area-dependent in dynamic range, meaning the range of light values that can be recorded; digital is claimed to be. This assertion has been posted over and over again here;

I maintain that I did not claim what you understood.
The area dependence applies to a specific definition of DR, relating to the signal measured on a standardized output size.
I did not dismiss film in any way; I was merely pointing out that the upper limit of its DR is not a precise point, as it is in a digital sensor, where it is a precise number of electrons.

The per-pixel DR is not always useful. Two sensors (of the same size) might have vastly different photosite sizes, leading to very different per-pixel DR, but still produce final images with similar noise properties. If all sensors had the same size but not the same number of pixels, we could define a DR per unit area and use this to compare sensors. But since sensors come in different sizes as well as pixel counts, Bill Claff and others propose to use a notion of DR that is measured on a standardized output.

sometimes stating that one "throws away" dynamic range by sampling from the initial file, which simply ignores the fact that the entire chain up to the final output (print of fixed size, another of those constraints that remove degrees of freedom) tends to "waste" information as much more is recorded initially than can be presented in the final stage. We expressly don't need 36 MPix cameras to make a 30x40 cm print or post a picture of 2000x1300 pix on a web page. Most of the collected data at the first step is discarded unless one want mural prints and in that particular case, few of today's consumer models deliver enough pixels to avoid significant extrapolation,  which negatively impacts the output quality in its own manner.

Here again, I don't follow the reasoning, as already stated before. What does the fact that we can throw away information at any stage of processing bring to the table?
You can always degrade the output to erase all differences. But this is irrelevant. Deleting the image would be taking this to the extreme.

The question about 'required resolution' is quite complicated as it involves viewing distance and the dot gain of the paper, among other things. If the image is printed at a size such that the pixel size is smaller than the dot gain, the additional resolution is in some sense 'wasted', even though the result should not be any worse sharpness-wise than from a lower-res sensor.

Meanwhile, the sampling argument we had was referring to the PDR discussion and John's calculation, where a factor sqrt(N) appeared which you questioned.
I don't think we can derive from that much regarding spatial resolution. The PDR is about questions of noisiness when reproducing very bright and very dark objects in the same photograph.
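To illustrate the point about per-pixel DR above with invented numbers: two same-size sensors, one with four times the pixels and half the per-pixel read noise, end up with identical total read noise referred to the whole output image (again assuming independent per-pixel noise, so it adds in quadrature):

```python
import math

# Two same-size sensors, invented numbers: B has 4x the pixels of A and
# half the per-pixel read noise (smaller photosites).
sensors = {
    "A": {"n_pixels": 24e6, "read_noise_e": 3.0},
    "B": {"n_pixels": 96e6, "read_noise_e": 1.5},
}

# Read noise referred to the whole standardized output image: with
# independent pixels it adds in quadrature, i.e. r * sqrt(N).
totals = {
    name: s["read_noise_e"] * math.sqrt(s["n_pixels"])
    for name, s in sensors.items()
}
# If full well scales with photosite area, A's per-pixel DR is a full
# stop higher than B's, yet the output-referred totals are identical.
```

Which is exactly why the per-pixel figure alone cannot predict what the standardized output looks like.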
Title: Re: Discussion of 'Equivalence'
Post by: Jack Dahlgren on May 16, 2017, 19:26:53
Dynamic range is always tied to a certain signal (or device, but in this case we use different DR definitions for the same device, so they must be referring to different signals). What is called a 'signal' differs from case to case. Sometimes the same quantity that is noise in one case, can be a signal in another.

- If the signal is the brightness of a patch of the normalized output image (either the entire image or a certain predetermined fractional part of it, does not matter), then the definition that was used by John and me, or alternatively Bill Claff's PDR, applies.

- For the per-pixel DR, the signal is the charge measured in the photosite.

The name is still valid because it is a ratio of the biggest and the smallest recordable signals.
What name would you propose?

Considering that pixel-level DR is a constant for the same sensor, definitely do not call the variable by the same name. It would be most effective if it referred to what it is. I believe in the calculation it is the square root of the number of pixels x the dynamic range of a pixel. Since the number of pixels is related to size, the name should be related to size (which is the actual variable here). I'll let someone more interested than I am figure that out, but it probably should not use terms already in use.
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 16, 2017, 19:51:54
Well, the fact remains that a relation has been asserted that has to be supported by hard evidence. There is a frequently held view, quite prominently displayed on threads here on NG and seen elsewhere, that once one "crops" to DX format in camera, or equivalently on the sampled file, the DX will have a reduced DR. Can such a view be supported - yes or no? If it cannot be supported, please stop talking about "equivalence" as anything else than a fancy label for overall magnification.

Here is an example of claimed DR difference using DX to FX from the same camera. Fact or fiction?
The chart you reference is fact. The interpretation is up to the knowledgeable reader.

There is at least one caveat that anyone attempting to apply "equivalence" should note.
DX Crop Mode Photographic Dynamic Range (http://www.photonstophotos.net/GeneralTopics/Sensors_&_Raw/Sensor_Analysis_Primer/DX_Crop_Mode_Photographic_Dynamic_Range.htm)
In short, one must be careful to understand that almost everyone fails to properly "do the math" in discussions of cropping.
(Possibly because most people don't have access to sufficient data to do it properly.)
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 16, 2017, 19:55:16
I read that article before I posted in this thread.  I see no need to change my views.
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 16, 2017, 19:56:16
...
DR_FX = 1.5 x DR_DX

...
However, this is an approximation and an example of erroneous math covered in DX Crop Mode Photographic Dynamic Range (http://www.photonstophotos.net/)

BTW, to make it perfectly clear, the (DX) PDR figures are measured; they are not based on the (FX) figures.
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 16, 2017, 20:04:35
Considering that pixel level DR is a constant for the same sensor, then definitely do not call the variable the same name. It would be most effective if it refers to what it is.
On other boards we distinguish "dynamic range" by calling pixel dynamic range Engineering Dynamic Range (EDR) rather than Photographic Dynamic Range (PDR).
I believe in the calculation it is the square root of the number of pixels x the dynamic range of a pixel. ...
If you are referring to PDR you are very wrong; PDR is not based on EDR.
On the other hand, DxOMark Landscape (print) score is based on EDR.
(In My Opinion this is a fundamental flaw in the DxOMark approach.)
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 16, 2017, 20:59:15
But Equivalence is not used to evaluate photographs. It is used to compare/contrast certain variables of the photo gear.

Yes, but surely it is only worth comparing variables if they have observable effects on the photographic merits of actual photographs. 

"Equivalence" started when Canon had 36 x 24 sensors and Nikon didn't.  Nikon said "Ah, but how much real-world difference does that actually make?", and "equivalence" was Canon's answer. 

Because of that marketing context, only "certain variables" were compared.  Other possible comparisons were simply ignored.  Everyone now knows, eg, that for the same framing and output size you need one stop larger aperture to get the same DoF on DX as on FX.  But no one has seen a calculation of how much larger your print has to be on DX to have the same DoF, given the same framing and aperture (and it is not true that output size does not affect DoF because bigger prints are viewed from greater distance, because bigger prints are not viewed from greater distances). 

The other fallacy of the equivalence-based comparison is that variables are continuous, rather than categorical.  A simple example is a photograph of a page of text: either I can read it or I can't.  If I can read it, more or less "sharpness" is irrelevant, and if I can't, the same.  There are only two categories: can read it, and can't read it.  Differences within the categories are of no importance.  The same is true of dynamic range.  There are only two categories: more than a printer (or monitor) can deliver, and less.  If the camera can deliver more, it does not matter how much more (OK, that is a slight exaggeration, but the point stands).

Title: Re: Discussion of 'Equivalence'
Post by: Birna Rørslett on May 16, 2017, 21:04:46
In addition, once the imaging chain exceeds the maximum magnification of detail it can deliver, it runs into empty magnification, and then more pixels or more dynamic range, whether EDR or PDR or what have you, will not help. Empty magnification has been with us from the birth of photography and no format can escape it; all that differs is how far one can go.
Title: Re: Discussion of 'Equivalence'
Post by: Anthony on May 16, 2017, 21:25:53
  A simple example is a photograph of a page of text: either I can read it or I can't.  If I can read it, more or less "sharpness" is irrelevant, and if I can't, the same.  There are only two categories: can read it, and can't read it.  Differences within the categories are of no importance. 

I have to disagree with this, as my eyes age.  I can read many things, but some are much easier and more pleasant to read without optical assistance.  Differences in the "can read" category are of great importance.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 21:29:37
The other fallacy of the equivalence-based comparison is that variables are continuous, rather than categorical.  A simple example is a photograph of a page of text: either I can read it or I can't.  If I can read it, more or less "sharpness" is irrelevant, and if I can't, the same.  There are only two categories: can read it, and can't read it.  Differences within the categories are of no importance.  The same is true of dynamic range.  There are only two categories: more than a printer (or monitor) can deliver, and less.  If the camera can deliver more, it does not matter how much more (OK, that is a slight exaggeration, but the point stands).

This is an interesting point, but I'm not sure it is that simple. You say that either you can read a given letter or not, but that is like flipping a coin: it is either heads or tails.
On the other hand, if you display a set of letters corrupted by noise, some will be more easily readable than others. If you graph the probability of identifying the letter correctly as a function of noise level for experiments like this, most of the time you will get a psychometric curve that is smooth (but depending on the experiment, the steepness can vary widely). So the legibility of letters is actually not black and white, but can be related to a continuous measurable variable.
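Such a smooth psychometric curve can be sketched as a logistic function of noise level. The threshold, slope, and chance-level parameters below are hypothetical, chosen only to illustrate the shape:

```python
import math

def p_correct(noise, threshold=1.0, slope=4.0, chance=1/26):
    # Logistic psychometric curve: near-certain identification at low noise,
    # falling smoothly to chance level (random guessing among 26 letters)
    # at high noise. All parameters are invented for illustration.
    return chance + (1 - chance) / (1 + math.exp(slope * (noise - threshold)))

for noise in (0.2, 1.0, 3.0):
    print(round(p_correct(noise), 2))
```

The steepness of the curve depends on the experiment, but it is continuous, not a step from "can read" to "can't read".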

If entire words are displayed, letters can be inferred despite the fact that they would be barely legible on an individual basis.

About printing dynamic range: you can surely compress the input DR to match the output DR.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 16, 2017, 21:35:32
....
About printing dynamic range: you can surely compress the input DR to match the output DR.

Oops, now you are advocating a loss of information, something you questioned strongly earlier ??? The path to travel surely is narrow.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 16, 2017, 21:38:54
I was not trying to -- as you seem to have interpreted -- derive mathematical conditions for the pleasingness of an image. That is a different subject for neuroscience to figure out, but not entirely futile, e.g. how certain geometrical relationships in faces can be correlated with pleasingness across multiple viewers.

Instead, I was describing a possible definition of what it means that an image can look 'the same' despite differences along the imaging chain. There is no notion of subjective quality or pleasingness to this, merely the notion of sameness of measurable quantities, like noise, DOF, framing etc.

Are you saying that humans might perceive two images as different, despite the fact that all possible measurements on the image say they are the same? If yes, what would you attribute the difference in perception to? The brain can only pick up stuff that is there, or would you like to invoke some supernatural senses?

[...]

You are saying that such concepts are 'singularly useless' for photographers, as if this matters, or is an objective fact. Instead, we are merely discussing conceptual tools that any photographer, camera designer, scientist or interested individual can freely choose to make use of or not. Obviously, you have decided that they are useless for you, but the fact that there is a lot of literature on the subject indicates that there is considerable interest in them.

I was not under the impression that you were trying to develop a mathematical description of pleasingness.  On the contrary, the problem is that you are ignoring pleasingness.

The issue is not that humans say images are different when measurements say they are the same, but that humans say they are the same when measurements say they are different.  You must define a threshold of significance for each of the parameters you measure, and that threshold must be based on human evaluation - which, as of now, it is not.  That said, however, surely you cannot claim that it is impossible for, eg, two portraits of a person to have identical dynamic range and SNR and whatever else you can measure but completely different emotional impact.

I may be old-fashioned, but the idea that what is or is not useless for photographers matters to photographers seems to me self-evident.  Of course, anyone can use any concept they like for any purpose they deem worthwhile.  I am just asking you to explain the photographic value of equivalence.  I am doing that because my claim is that equivalence is nothing but a marketing tool.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 16, 2017, 21:51:47
I have to disagree with this, as my eyes age.  I can read many things, but some are much easier and more pleasant to read without optical assistance.  Differences in the "can read" category are of great importance.

Yes, but not from the point of view of information.  If you can, eventually, read the sign that says that the 12:45 train to Hogwarts leaves from Platform 9.75, you don't know any more because reading it was easier or more pleasant, or less because it was slower or more difficult. 

Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 16, 2017, 21:56:46
Oops, now you are advocating a loss of information, something you questioned strongly earlier ??? The path to travel surely is narrow.

The Dynamic Range captured on a linear slope is all but never presented as linear on a print or on a computer display or any media. For as long as people made prints from negatives, the toe of the film compressed the shadows and the toe of the printing paper compressed the highlights. This compression of the dynamic range that came through the lens, exposed the film, came through the developed negative and exposed the printing paper is all I knew from about 1971 to 2003. One could never print all the dynamic range available in a Kodacolor 100 negative, not with any method I knew. [Even Tri-X gave more dynamic range than could be used. One can think of the enlarger as a giant shoe horn.]

There are two things one can do with the dynamic range if it's greater than can fit the display media. You can cut it off (truncate it), as one would do with brightness and contrast, or compress it, as one would do with levels and curves. [The camera and software tone map it, so I'm not sure how to get a *totally* flat image out of a camera. It's already been tone mapped by the time I can start using it. I don't believe the Flat Picture Control offered by current Nikon cameras and Capture NX-D is totally flat.]

The compression of shadow and highlight was also present in the process of developing transparency films. All of the transparency films I used also truncated the dynamic range coming through the lens.

A print or other image with a linear slope looks flat, chalk and soot. I don't even know if a print would look right to me if it could be linear, and that's what I grew up with. My brain is hopelessly programmed to want good mid-tone contrast, and that must come at the expense of the shadows and highlights.

Give me a camera with lots of DR and I'll figure out how to use it, but I'll compress where needed and may truncate it to satisfy my brain. I will always do this.

Dave Hartman

Now I'll have another cup of coffee to satisfy my brain in other ways. :)
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 22:01:06
I was not under the impression that you were trying to develop a mathematical description of pleasingness.  On the contrary, the problem is that you are ignoring pleasingness.

The issue is not that humans say images are different when measurements say they are the same, but that humans say they are the same when measurements say they are different.  You must define a threshold of significance for each of the parameters you measure, and that threshold must be based on human evaluation - which, as of now, it is not.  That said, however, surely you cannot claim that it is impossible for, eg, two portraits of a person to have identical dynamic range and SNR and whatever else you can measure but completely different emotional impact.

I may be old-fashioned, but the idea that what is or is not useless for photographers matters to photographers seems to me self-evident.  Of course, anyone can use any concept they like for any purpose they deem worthwhile.  I am just asking you to explain the photographic value of equivalence.  I am doing that because my claim is that equivalence is nothing but a marketing tool.

Yes, I'm ignoring pleasingness because it is, for me at least, a completely different topic. Everyone can define their own criteria for pleasingness, and include or not include measurable parameters like SNR etc. I still think there are patterns to what humans generally find visually pleasing, which boil down to properties of the brain, but they are not easy to model.

Regarding the threshold of significance: Sorry for misunderstanding you. This can be added on top of everything that has been said. The threshold only blurs things and makes things the same that could be measurably different. I don't see how this interferes with the concept of equivalence. Isn't it nice that the technical concept is more discriminative than human judgement? Would you prefer it the other way round, a concept which claims images are the same when most humans could tell them apart?

I do not want to define the photographic value of the concept since that is entirely subjective.
Likewise for 'what matters', I think there is no generally accepted canon of relevance.
Does it matter that water is made up of hydrogen and oxygen (i.e. the fact)? For sure. Does it matter that we know? Probably not; humans lived a very long time without having an idea.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 22:13:40
Oops, now you are advocating a loss of information, something you questioned strongly earlier ??? The path to travel surely is narrow.
I was only questioning it as a means to invalidate certain definitions.
If your output medium has a different dynamic range, you must do something.
But this does not mean that a worse sensor gets better just because you are evaluating it using an imperfect output medium. The output medium will only level out differences.
If you choose better and better output media, the differences will start to reappear. But they cannot exceed what was originally captured.

Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 16, 2017, 22:14:01
All of this theoretical stuff is of no use to me if it doesn't help me make a pleasing image.

Dave
Title: Re: Discussion of 'Equivalence'
Post by: JohnMM on May 16, 2017, 22:23:46
However, this is an approximation and an example of erroneous math covered in DX Crop Mode Photographic Dynamic Range (http://www.photonstophotos.net/)

OK, an approximation - but good enough for its purpose, and of the sort used by the "gang" on the DPR boards.
Title: Re: Discussion of 'Equivalence'
Post by: JohnMM on May 16, 2017, 22:34:32
On other boards we distinguish "dynamic range" by calling pixel dynamic range Engineering Dynamic Range (EDR) rather than Photographic Dynamic Range (PDR).If you are referring to PDR you are very wrong; PDR is not based on EDR.

But doesn't the use of PDR imply use of your limit for the smallest signal and your method of normalisation using the CoC? Here, rather arbitrarily, I've used the read noise for the lower signal limit and used all the pixels for normalisation. How might that be described? Joseph James sometimes uses the read noise and what he calls micropictures - one millionth of the total number of pixels.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 16, 2017, 22:44:33
But doesn't the use of PDR imply use of your limit for the smallest signal and your method of normalisation using the CoC? Here, rather arbitrarily, I've used the read noise for the lower signal limit and used all the pixels for normalisation. How might that be described? Joseph James sometimes uses the read noise and what he calls micropictures - one millionth of the total number of pixels.

My understanding was that while they are not based on each other, they are both based on the photon transfer curve. The cutoff point for EDR is an SNR of 1; for PDR it is a number that is computed based on the CoC, resulting in PDR being normalized to a photographic print size.
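To make the two cutoffs concrete, here is a toy photon-transfer model in Python (shot noise plus uncorrelated read noise). The sensor numbers are invented, and this illustrates only the effect of choosing a different SNR cutoff; it is not Bill Claff's actual PDR procedure, which is based on measurements and a CoC adjustment:

```python
import math

def dr_stops(full_well_e, read_noise_e, snr_cutoff):
    # Photon-transfer model: total noise at a signal of S electrons is
    # sqrt(S + r^2) (shot noise plus uncorrelated read noise r).
    # Solve S / sqrt(S + r^2) = c for the smallest usable signal, i.e.
    # S^2 - c^2*S - c^2*r^2 = 0, then DR = log2(full_well / S_min).
    c2 = snr_cutoff ** 2
    s_min = (c2 + math.sqrt(c2 ** 2 + 4 * c2 * read_noise_e ** 2)) / 2
    return math.log2(full_well_e / s_min)

# Invented sensor: 60,000 e- full well, 3 e- read noise
print(round(dr_stops(60_000, 3, 1), 2))   # EDR-style cutoff at SNR = 1
print(round(dr_stops(60_000, 3, 20), 2))  # PDR-style cutoff at SNR = 20
```

The point is simply that the same sensor yields very different "dynamic range" numbers depending on where the lower endpoint is placed.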

The value of your little calculation was to show another definition of dynamic range that exhibits a similar normalization.
My mind is not entirely clear on what the approximation in your calculation is, besides neglecting noise correlations. Perhaps Bill Claff can clarify his comment.
Title: Re: Discussion of 'Equivalence'
Post by: Jack Dahlgren on May 16, 2017, 22:53:52
My understanding was that while they are not based on each other, they are both based on the photon transfer curve. The cutoff point for EDR is a SNR of 1, for PDR it is a number that is computed based on the CoC, resulting in PDR being normalized to a photographic print size.

The value of your little calculation was to show another definition of dynamic range that exhibits a similar normalization.
My mind is not entirely clear on what the approximation in your calculation is, besides neglecting noise correlations. Perhaps Bill Claff can clarify his comment.

Quote
My definition of Photographic Dynamic Range is a low endpoint with an SNR of 20 when adjusted for the appropriate Circle Of Confusion (COC) for the sensor - Bill Claff

The normalization to COC means this is just a question of magnification. Since the print does not use the entire dynamic range anyway, this normalization makes little sense. Dynamic range is important during capture and editing, it is not as important at the print stage so normalization to a viewing format size is interesting, but not particularly useful. Calling it photographic dynamic range is a stretch of the concept.
Title: Re: Discussion of 'Equivalence'
Post by: Anthony on May 17, 2017, 00:07:44
Yes, but not from the point of view of information.  If you can, eventually, read the sign that says that the 12:45 train to Hogwarts leaves from Platform 9.75, you don't know any more because reading it was easier or more pleasant, or less because it was slower or more difficult.
Information is not such a simple concept.  At the margins, with imperfect eyesight, perceived information can change.  A 5 can look like an S.  One blinks, and it looks like a 5.  Or maybe one is looking at a digital representation of an image.  It may be "readable" but lots of interesting information may be missing.  Readable/not readable is not binary.  Unless one decides that the best readability technically available is what is meant by "readable".
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 17, 2017, 00:26:14
...Dynamic range is important during capture and editing, it is not as important at the print stage so normalization to a viewing format size is interesting, but not particularly useful. Calling it photographic dynamic range is a stretch of the concept.

The dynamic range of a print is limited by how much light that print can reflect and how little light that print can reflect. The base of the print can cheat a little by having optical brighteners added so it glows under some light like your T-shirt that's been washed in Tide. :)

Dave Hartman who prefers ALL "Free and Clear" to Tide. 

Photographically speaking all the dynamic range in the **cosmos is useless unless you know how to ***shoe horn it into your limited display medium.

**I thought "cosmos" sounded more important than world.

***"shoe horn" here means compress to represent.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 17, 2017, 00:36:58
The normalization to COC means this is just a question of magnification. Since the print does not use the entire dynamic range anyway, this normalization makes little sense. Dynamic range is important during capture and editing, it is not as important at the print stage so normalization to a viewing format size is interesting, but not particularly useful. Calling it photographic dynamic range is a stretch of the concept.

I agree with your first sentence. But the rest are blanket statements.

Why do think that the print having a lower dynamic range makes the concept useless? Let me make an example:

Let's say that your desired output medium has a dynamic range of 10 stops, and you compare two cameras with equal sensor size and number of pixels, but different per-pixel DR, e.g. 10 stops and 14 stops respectively. Now you shoot a scene that has some details that you would like to display that are 11 stops darker than the brightest part of the image (which determines your exposure). With the 10 stop DR sensor, the details will be buried in the sensor noise, while they are 3 stops above the noise floor for the higher-DR sensor. On your 10 stop output medium, you will be able to see the details if you prepare your high-DR capture suitably (i.e. by lifting the shadows*). Using the same processing on the low-DR capture, you will still not see anything but noise. So the DR definition is useful even though one sensor has a higher DR than your output medium.

*Of course, if you don't lift the shadows, you will not see anything on the output image. But why would you do that? Just to prove that the DR of the capture device is meaningless?
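The arithmetic of the example can be sketched as a trivial stops-above-the-noise-floor accounting (assuming, as in the example, that exposure is pinned by the highlights and the noise floor sits DR stops below clipping):

```python
def stops_above_noise(detail_stops_below_clip, sensor_dr_stops):
    # With exposure pinned by the highlights, the noise floor sits
    # sensor_dr_stops below clipping; the margin left for a detail
    # N stops below clipping is simply the difference.
    return sensor_dr_stops - detail_stops_below_clip

for dr in (10, 14):
    margin = stops_above_noise(11, dr)
    print(f"{dr}-stop sensor: {margin:+d} stops ->",
          "recoverable" if margin > 0 else "buried in noise")
```

A detail 11 stops down is 1 stop below the floor of the 10-stop sensor but 3 stops above the floor of the 14-stop sensor, which is the whole point of the example.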
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 17, 2017, 00:48:01
My understanding was that while they are not based on each other, they are both based on the photon transfer curve.
Most dynamic range measures are not based on the Photon Transfer Curve (PTC) but simply on read noise at the pixel level.
My Photographic Dynamic Range (PDR) is based on a portion of the PTC in the region of interest.
Clearly this is what I think is appropriate, and although I don't want to argue it here, I think there's a good case for this approach.
.... Perhaps Bill Claff can clarify his comment.
Which part would you like clarified?
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 17, 2017, 00:59:58
Which part would you like clarified?

I'm wondering what other approximation (besides neglecting the correlation of read noise between photosites) is used in John's estimate of DR at the sensor level.
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 17, 2017, 03:14:44
...
My mind is not entirely clear on what the approximation in your calculation is, besides neglecting noise correlations. Perhaps Bill Claff can clarify his comment.
...
Which part would you like clarified?
I'm wondering what other approximation (besides neglecting the correlation of read noise between photosites) is used in John's estimate of DR at the sensor level.
OK, I get it; I think John would be a better source for that explanation than I. :)
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 17, 2017, 09:30:37

*Of course, if you don't lift the shadows, you will not see anything on the output image. But why would you do that?

Well, the obvious reason is that pictures with the shadows lifted one stop look hideous.  Another reason not to lift the shadows is that there is nothing wrong with them, and there need not be because there is no reason to accept your arbitrary condition that the exposure is set by the highlights.

Of course, you are perfectly entitled to disagree about the hideousness of lifted shadows, or the sacredness of highlights, but you are still basing your claim about the usefulness of knowing capture DR on an aesthetic judgement about the final image, which you previously said you weren't. 
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 17, 2017, 09:58:20
Well, the obvious reason is that pictures with the shadows lifted one stop look hideous.  Another reason not to lift the shadows is that there is nothing wrong with them, and there need not be because there is no reason to accept your arbitrary condition that the exposure is set by the highlights.

Of course, you are perfectly entitled to disagree about the hideousness of lifted shadows, or the sacredness of highlights, but you are still basing your claim about the usefulness of knowing capture DR on an aesthetic judgement about the final image, which you previously said you weren't.

It is distracting to always mix up aesthetic discussions with technical ones.

I did not talk about pleasingness. I did not say that the shadows have to be lifted in order to get a pleasing image.
The shadows have to be lifted to display the information at 11 stops below the highlight point because that was the brief for the image. By not displaying them on the output, you are not fulfilling the brief. The image can have pure documentation purposes and does not have to have any artistic value at all. Saying that the output medium only has 10 stops of dynamic range is no excuse for hiding the information that was supposed to be displayed.

Likewise for the highlights, the brief was that they cannot be clipped, and therefore they set the maximum exposure that can be set.

These are the conditions of my example. It is no use arguing the conditions, they were not up for debate.

The question is, how do you solve this photographic situation? It is a common one, e.g. when photographing interiors with windows.*

Another way to fulfill the brief is to light the dark parts of the image, or to do multiple exposures, etc. But if for some reason you are forced to use one exposure and no additional lighting, then the example applies.

I know that lifting shadows too much can lead to unpleasing results, but that is subjective, and I'm not going to impose my subjective criteria on other people. But the point is that the information is there on the 14 stop DR capture, while it is totally buried in the noise in the 10 stop DR capture, and cannot be recovered with any kind of processing. This proves that capture DR that exceeds the output DR is useful.

* I have seen plenty of interior architectural photography with large dynamic range printed on paper, which has a much lower dynamic range. It can be done.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 17, 2017, 12:52:18
But the point is that the information is there on the 14 stop DR capture, while it is totally buried in the noise in the 10 stop DR capture, and cannot be recovered with any kind of processing. This proves that capture DR that exceeds the output DR is useful.

No, it proves you should always carry a flash.  Of course it is your fantasy and you can set the conditions however you like so using a flash is impossible?  So it proves that if you are photographing interiors with windows and you can't use a flash you should choose an overcast day.  Then you will modify the conditions again - the interior is in Dubai so overcast days are rare? - so as to preserve the conclusion.  The point is not that you can't modify the conditions endlessly, but that doing so is circular: the only reason to accept the conditions is that they are pre-conditions for the conclusion to be true.

Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 17, 2017, 13:03:54
In fact, *any* definition used at the first step has to be independent of the end output - otherwise a feedback (or circularity) is present. This is notwithstanding the obvious logical flaws of requiring a *standard size output* (most photographers would never think this way anyway), ignoring the problem of empty magnification, the underutilisation of any "dynamic range" (and pixels) in the first step for the output, the impact of light conditions including scene contrast at the time of capture, and a raft of other potential pitfalls. For example, if one tries to capture a perfectly blank wall, no amount of "dynamic range" or pixels or format, large or small, will suffice, simply because a photographic image cannot be made in the first place as there is nothing for the system to focus on. Contrasts have to be present (and no, this is not related to AF failure, it is basic photography theory). There will thus be a single value of whatever statistic one implements as a measure of "dynamic range"; as it is not determinable, it can be set at any value one wishes, as it cannot be changed later. It is a singularity.
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 17, 2017, 13:47:49
* I have seen plenty of interior architectural photography with large dynamic range printed on paper, which has a much lower dynamic range. It can be done.

Yes, it can be done and it does not prove one should or should not use flash.

For well over a century the toe of the film compressed the shadows and the toe of the printing paper compressed the highlights. What was done with film and paper is now done in the camera or in software. The means of capturing and processing the image has changed. The way it's presented on the paper has not.

B&W printing paper has what, a reflective density of 2.0 or about 6.6 EV? With classic Tri-X, N-1 developing and some dodging and burning I could stuff 13 stops on to a sheet of B&W printing paper and make it look good. The EV range of the scene will most frequently exceed the EV range of a reflective print. That's no problem. What's a problem is when the EV range of the scene exceeds the capacity of the capture process.

DXO says my D800 has a range of 14.4 EV. How do I represent that on a sheet of paper with a range of, say, 7 EV? Levels and curves? LCH (lightness, chroma, hue)? I prefer LCH because LCH separates the contrast from the saturation. Using the Master Lightness I'll start by drawing down the 1/4 tone and raising up the 3/4 tone to create a soft "S" curve. I don't have to clip the shadows or the highlights to have pleasing mid-tone contrast. If the full 14.4 EV were compressed evenly to fit the paper, the resulting image would be "chalk and soot."
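A soft "S" curve of the kind described can be sketched in Python. This is a hypothetical tanh-based curve for illustration, not Nikon's or anyone's actual tone mapping; it boosts mid-tone contrast while compressing, rather than clipping, the ends:

```python
import math

def s_curve(x, strength=1.5):
    # Soft S-curve on normalized log exposure x in [0, 1]: maps 0 -> 0,
    # 0.5 -> 0.5, 1 -> 1, with slope > 1 in the midtones (more contrast)
    # and slope < 1 near the ends (shadows and highlights compressed).
    return 0.5 + math.tanh(strength * 2 * (x - 0.5)) / (2 * math.tanh(strength))

# Map a 14.4 EV scene onto normalized output tones:
# deep shadow, midtone, and highlight just below clipping.
for ev_below_clip in (14.0, 7.2, 0.0):
    x = 1 - ev_below_clip / 14.4   # normalized log exposure
    print(round(s_curve(x), 3))
```

An even (linear) mapping would be `x` itself, which is exactly the flat "chalk and soot" rendering the curve avoids.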

Dave Hartman

In the film age did the dynamic range of Tri-X and Paper make a difference compared to that of Velvia? Knowing the capacity of one's capture process is just as important today as it was yesterday.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 17, 2017, 13:58:02
No, it proves you should always carry a flash.  Of course it is your fantasy and you can set the conditions however you like so using a flash is impossible?  So it proves that if you are photographing interiors with windows and you can't use a flash you should choose an overcast day.  Then you will modify the conditions again - the interior is in Dubai so overcast days are rare? - so as to preserve the conclusion.  The point is not that you can't modify the conditions endlessly, but that doing so is circular: the only reason to accept the conditions is that they are pre-conditions for the conclusion to be true.

That's the thing about examples; they tend to be crafted in order to illustrate an idea.
I merely wanted to give you an example of a situation where the output dynamic range is lower than what the camera records, but where the additional detail that was recorded can still be displayed on the output medium. You wish to make this into a universal statement that it was not intended to be. It was an example, and I already conceded the obvious fact that lighting the dark parts is another way to get around the problem, as is shooting at night, etc.

It is a much harder case, in my opinion, to argue in favour of the statement you seem to make; namely that the additional dynamic range is never useful. Still you seem to be insisting on this point, or you try to put forward a set of rules how things should be done, e.g. solving the above dynamic range problem with flash, and excluding other equally valid possibilities.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 17, 2017, 14:31:18
"That's the thing about examples; they tend to be crafted in order to illustrate an idea. "

Or see whether a theory can deal with it.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 17, 2017, 15:32:59
"That's the thing about examples; they tend to be crafted in order to illustrate an idea. "

Or see whether a theory can deal with it.

We started by dissecting the concept of PDR or similar normalized measures of DR.
Now we have gone so far as to also question the concept of DR in general. At least this is what Les is doing, if I understand him correctly. It is not clear to what end. Even if we all agreed on this board that it is meaningless to talk about the DR of cameras, this would nevertheless not stop anyone from being interested in it.
I do not understand what the end goal is. To conclude that all theoretical concepts are useless?

To summarize, the current discussion revolves around what are useful notions of DR (dynamic range).
- Some people (including you, if I understood your statements correctly) argue that per-pixel DR is useful, but not normalized measures since they lead to confusion. You prefer to think of it in terms of secondary magnification.
- Les Olson argues (again, if I understood him, if not, please correct me and provide clarification) that all DR measures are useless
- My point of view is that depending on the question, either per-pixel DR or PDR can be useful
- Jack has argued that while PDR is useful, it should not be called a DR
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 17, 2017, 15:46:27
For example, if one tries to capture a perfectly blank wall, no amount of "dynamic range" or pixels or format, large or small, will suffice, simply because a photographic image cannot be made in the first place as there is nothing for the system to focus on. Contrasts have to be present (and no, this is not related to AF failure, it is basic photography theory). There will thus be a single value of whatever statistic one implements as a measure of "dynamic range"; as it is not determinable, it can be set at any value one wishes, and it cannot be changed later. It is a singularity.

The example you are giving is interesting, because it suggests that you are thinking of a per-image definition of DR. I don't think that this would be useful. If we are talking about properties of the scene, we should IMO use the term 'scene contrast' to describe the range of tones that is present in the scene. DR applies to the capture device, either on a per-photosite basis or on a per-area or per-image basis, depending on the question.

We would not measure DR by trying to record a scene with no scene contrast. We present a scene with excessive scene contrast and then determine to what extent this can be reproduced by the capture device. Depending on the question, we then normalize the measurement to a fixed output size, or we don't. The resulting number gives an upper bound to the range of tones that can be reproduced with an ideal output medium. On a limited output medium, the bottleneck changes from the capture device to the output medium. This does not invalidate the concept of DR of the capture device, in a way that is analogous to other concepts like resolution.
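As a minimal sketch of the two notions (using the engineering definition DR = log2(clipping level / noise floor); the sensor figures are invented round numbers, not measurements of any real camera):

```python
import math

# Invented sensor figures for illustration only.
def per_pixel_dr(full_well_e, read_noise_e):
    """Engineering DR of a single photosite, in stops (EV)."""
    return math.log2(full_well_e / read_noise_e)

def normalized_dr(full_well_e, read_noise_e, n_pixels, ref_pixels=8e6):
    """DR after scaling to a common output size: averaging
    n_pixels/ref_pixels photosites into one output pixel reduces the
    noise floor by the square root of that ratio."""
    return per_pixel_dr(full_well_e, read_noise_e) \
        + 0.5 * math.log2(n_pixels / ref_pixels)

# Identical photosites, different sensor areas:
print(f"per-pixel DR:            {per_pixel_dr(60000, 5):.2f} EV")
print(f"normalized DR at 36 MP:  {normalized_dr(60000, 5, 36e6):.2f} EV")
print(f"normalized DR at  9 MP:  {normalized_dr(60000, 5, 9e6):.2f} EV")
```

Per photosite the two hypothetical sensors are identical; only after normalizing to a fixed output size does the larger pixel count show up as extra DR, which is exactly the distinction under discussion.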
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 17, 2017, 15:54:47
Normalisation removes degree(s) of freedom. Instead assumptions are inserted that might not apply for the person trying to use the model. All of this makes it quite hard to come up with testable predictions of any practical value. I leave the scene there. So many pitfalls and loopholes to attend to. The theorists should start removing any of the many sources for circular reasoning. That'll save us a lot of time in futile discussions, for sure.


Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 17, 2017, 16:06:10
Normalisation removes degree(s) of freedom. Instead assumptions are inserted that might not apply for the person trying to use the model. All of this makes it quite hard to come up with testable predictions of any practical value. I leave the scene there. So many pitfalls and loopholes to attend to. The theorists should start removing any of the many sources for circular reasoning. That'll save us a lot of time in futile discussions, for sure.

Simplification requires assumptions. There is no magic in this. But the various descriptions can happily coexist, I don't see a major danger involved. People that use the concept should first familiarise themselves with it and be aware of the assumptions, but this holds for any field or topic.
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 17, 2017, 17:16:49
Normalisation removes degree(s) of freedom. Instead assumptions are inserted that might not apply for the person trying to use the model. All of this makes it quite hard to come up with testable predictions of any practical value. I leave the scene there. So many pitfalls and loopholes to attend to. The theorists should start removing any of the many sources for circular reasoning. That'll save us a lot of time in futile discussions, for sure.
Normalization allows "apples to apples" comparisons and with further appropriate math can yield "oranges to oranges" as well.
In My Opinion (IMO), that's far better than "apples to oranges" without normalization.

Not quite on topic but I'd like to point out that I do have per pixel measures such as Read Noise (http://www.photonstophotos.net/Charts/RN_ADU.htm) and Input-referred Read Noise (http://www.photonstophotos.net/Charts/RN_e.htm) at my site.
(As well as investigations of Fixed Pattern Noise (FPN) visualized as heatmaps (http://www.photonstophotos.net/Charts/Sensor_Heatmaps.htm#mode=22,camera=Hasselblad%20X1D-50c,suffix=16).)

Naturally, whenever one consults technical data, it's important to choose the right data given the needs.
I think PDR is a very valuable measure; but certainly not appropriate for all discussions.
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 17, 2017, 18:14:12
... This is notwithstanding the obvious logical flaws of requiring a *standard size output* (most photographers would never think this way anyway)...

This is for test purposes only. Why is it a logical flaw?

Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 17, 2017, 18:24:32
Again, having the same output photo dimensions is only for the purpose of determining comparative gear characteristics. It has nothing to do with the photographic end purpose.

Two.Separate.Goals. -----> Compare Gear Characteristics versus Make Photographs.

Why Bjørn persists in conflating the two goals, I don't know.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 17, 2017, 18:32:38
[quote author=simsurace link=topic=5905.msg95526#msg95526 date=1495027979]
- Les Olson argues (again, if I understood him, if not, please correct me and provide clarification) that all DR measures are useless
[/quote]

Photography is, as John Szarkowski put it, a process of visual editing: of keeping some things and leaving others out.  Everything outside the frame and everything before and after the image is made is left out.  The fact that in some cases, a bit of shadow or highlight detail is also left out is just part of the same process.  The idea that more sensor dynamic range means better photographs because less is left out is as inane as the idea that colour is better than B&W or video is better than still photography because less is left out.

Of course, there is such a thing as photography that is instrumental rather than aesthetic, where, it is easy to suppose, more dynamic range would always be better.  But for many such uses technical demands are low: telemedicine, eg, where phone images are perfectly acceptable for diagnosis of skin lesions.  There are also areas, such as anatomy, where photographs cannot compete with drawings despite their vastly superior resolution and dynamic range.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 17, 2017, 18:37:48
This is for test purposes only. Why is it a logical flaw?

Because the test is supposed to be a test of the gear for photography, and requiring a standardised output means it isn't.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 17, 2017, 19:08:18
[quote author=simsurace link=topic=5905.msg95526#msg95526 date=1495027979]
- Les Olson argues (again, if I understood him, if not, please correct me and provide clarification) that all DR measures are useless
[/quote]


Photography is, as John Szarkowski put it, a process of visual editing: of keeping some things and leaving others out.  Everything outside the frame and everything before and after the image is made is left out.  The fact that in some cases, a bit of shadow or highlight detail is also left out is just part of the same process.  The idea that more sensor dynamic range means better photographs because less is left out is as inane as the idea that colour is better than B&W or video is better than still photography because less is left out.

Of course, there is such a thing as photography that is instrumental rather than aesthetic, where, it is easy to suppose, more dynamic range would always be better.  But for many such uses technical demands are low: telemedicine, eg, where phone images are perfectly acceptable for diagnosis of skin lesions.  There are also areas, such as anatomy, where photographs cannot compete with drawings despite their vastly superior resolution and dynamic range.

I understand your argument, but I was never making any claims regarding artistic importance. More dynamic range does not equal better photographs. More dynamic range equals an objectively and technically better tool to record light signal, in terms of quantifiable, measurable criteria. It has nothing to do with art.

It's a similar question as 'how many keys does a piano have'. More keys provide a bigger frequency range to make music with. It does not equal 'better' music, nor do you have to use all the keys in every piece you play on that instrument.

Is there, in your opinion, any instance where more dynamic range in the capture device would be a disadvantage?

Many photographers are finding that digital cameras have vastly improved since their inception. You will not find many that want to go back to the first cameras with their rather small dynamic range. And they will continue to get better. This is just an opportunity for artistic expression, not a guarantee that artistic expression will become better in any way (this is a different topic altogether, and I wonder why it is so difficult to distinguish between the two).

Soon, we will get new output devices (screens) that are able to display more of the dynamic range that we are able to record. This will allow us to see photographs in a way that was not possible before. Some people will be head over heels about it, others will find it irrelevant for their artistic goals. Tastes differ.
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 17, 2017, 19:09:11
... This is notwithstanding the obvious logical flaws of requiring a *standard size output* (most photographers would never think this way anyway),...
FWIW, my Photographic Dynamic Range (PDR) does not specify a standard size output, although the Signal to Noise Ratio (SNR) criterion is phrased using a specific example.
The measurement does assume a "typical" ratio of viewing distance to output size. It covers any size by applying the principle of similar triangles.
Most users of the chart would find the PDR value (on the y-axis) that meets their personal requirements by following the curve for a camera with which they are familiar up to the highest ISO setting for which they tend to get good results.
This process accounts for typical viewing distance and size for that photographer.
Comparing to other cameras at the same PDR level is sensible (to me and many photographers have reinforced this in their comments).

Although only one ratio of viewing distance to output size is measured, other ratios would not move the relative positions of the sensors.

In many ways the central principle of PDR relates to the Circle Of Confusion (COC).
I haven't seen any intense debates about the use of COC in the computation of Depth Of Field (DOF).
To me taking PDR to task over the choice of using COC is roughly equivalent.

BTW, I'm only defending PDR here. I think DxOMark has made a serious error in their normalization strategy.
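A schematic of that COC connection (this is only an illustration of the idea, not the actual PDR computation; the sensor numbers are invented):

```python
import math

# Schematic of the Circle Of Confusion (COC) idea, not the actual PDR
# algorithm. A common convention takes COC = sensor diagonal / 1500 for
# a "typical" viewing-distance-to-size ratio; by similar triangles the
# same angular criterion applies at any output size.

def pixels_per_coc(diagonal_mm, pixel_pitch_um):
    coc_um = diagonal_mm / 1500 * 1000          # COC in microns
    return (coc_um / pixel_pitch_um) ** 2       # photosites inside one COC

def dr_at_coc(full_well_e, read_noise_e, diagonal_mm, pitch_um):
    """Per-pixel DR plus the stops gained by averaging within one COC."""
    n = pixels_per_coc(diagonal_mm, pitch_um)
    return math.log2(full_well_e / read_noise_e) + 0.5 * math.log2(n)

# Same (invented) photosites behind FX (43.3 mm) and DX (28.8 mm) diagonals:
fx = dr_at_coc(60000, 5, 43.3, 4.9)
dx = dr_at_coc(60000, 5, 28.8, 4.9)
print(f"FX: {fx:.2f} EV, DX: {dx:.2f} EV")
```

With identical photosites, the larger format gains log2(43.3/28.8), about 0.59 stops, purely because its COC covers more of them; that is the same similar-triangles reasoning used for DOF.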
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 17, 2017, 19:22:46
- Les Olson argues (again, if I understood him, if not, please correct me and provide clarification) that all DR measures are useless
...  The fact that in some cases, a bit of shadow or highlight detail is also left out is just part of the same process.  The idea that more sensor dynamic range means better photographs because less is left out is as inane as the idea that colour is better than B&W.
...
If you want to frame a shot in a particular way then you might need a lens with an appropriate focal length.
If you want a shallow Depth Of Field (DOF) you wouldn't choose an f/4 lens.
If you want a certain level of shadow detail then you must have sufficient dynamic range to capture it.

If you don't frame your shot perfectly, often you can crop to taste; but not if what you require is out of frame.
If you get too much DOF, often you can soften it in post-processing; but if you got too little you cannot recover what is lost.
If you get too much detail in the shadows, that's easy to hide; but if you didn't get enough you cannot recover it.

Colour isn't "better" than B&W; but if you want color photographs you don't purchase a monochrome camera.
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 17, 2017, 21:30:06
Because the test is supposed to be a test of the gear for photography, and requiring a standardised output means it isn't.

The test is supposed to be what its creator states it is, nothing more, nothing less.

Can one safely generalize from it? Not with complete safety. Is it good enough to use to help select a camera? I think so.

Dave Hartman

Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 17, 2017, 21:45:03
Normalisation removes degree(s) of freedom.

Not for the photographer who buys a camera *in part* based on these tests. Not for the photographer who buys a camera for practical use.


Instead assumptions are inserted that might not apply for the person trying to use the model.

They should read and understand the test and its limitations. Whether they will or not isn't the fault of the test or its creator.


All of this makes it quite hard to come up with testable predictions of any practical value.

I think the test has a good degree of value. I don't think it's perfect.

The theorists should start removing any of the many sources for circular reasoning.

Yes and maybe version II will do this. Maybe this discussion will help.

Cheers!

Dave

Looking for the flaws certainly has value. I don't find them sufficient to toss the graphs in the dust bin.

---

Bjørn,

With your very significant training in science and its methods this discussion must be trying. My degree of training in science amounts to a learner's permit.

Cheers!

Dave
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 17, 2017, 21:56:13
The idea that more sensor dynamic range means better photographs because less is left out is as inane as the idea that colour is better than B&W or video is better than still photography because less is left out.

You lost me here. Not that I don't understand what you wrote. I was agreeing with you up to this point.

Dave
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 17, 2017, 22:04:37
I understand your argument, but I was never making any claims regarding artistic importance. More dynamic range does not equal better photographs.

Then why are we talking about it?  Why would a photographer be interested in the dynamic range of different cameras if dynamic range does not have some connection with better photographs? 

Of course, it is possible that greater dynamic range could find a photographic use, but the counter-example of colour is relevant to that hope: "Most color photography, in short, has been either formless or pretty. In the first case the meanings of color have been ignored; in the second they have been at the expense of allusive meanings. While editing directly from life, photographers found it difficult to see simultaneously both the blue and the sky." (John Szarkowski).
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 17, 2017, 22:10:06
You lost me here. Not that I don't understand what you wrote. I was agreeing with you up to this point.

You think that if my camera takes 10 frames per second I have done twice as much photography as if it did five? 
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 17, 2017, 22:24:25
You think that if my camera takes 10 frames per second I have done twice as much photography as if it did five?

You've lost me again. I found the greater DR of my D300s a great relief compared to my D2H. So much so that I finally sold my D2H for $185.00 (USD). I had more respect for the D2H than to use it as a door stop but not enough to use it as a camera.

Dave
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 17, 2017, 22:30:55
You think that if my camera takes 10 frames per second I have done twice as much photography as if it did five?
I think that there are situations (sport, some wildlife, etc.) where you're more likely to miss your shot if you have insufficient Frames Per Second (FPS).
Do we take these photographers to task for considering FPS in choosing their gear?
Some people have a need for high dynamic range. Why is there a problem with that?
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 17, 2017, 23:29:54
I don't think there is a problem.

Dave Hartman

Except that I don't have both, damn it!

---

There are other problems in my life...

As I need a Nikon D820H and Nikon D820X: I need a '69 Boss 429 Mustang for burnouts and a '69 Boss 302 Mustang for the road course. This is important!

Dave Hartman
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 00:02:47
Then why are we talking about it?  Why would a photographer be interested in the dynamic range of different cameras if dynamic range does not have some connection with better photographs?

The reason we are talking about it here is because I opened this thread.
I opened this thread after seeing some disagreements, questions, and uncertainties expressed in another thread.
The reason I opened the thread was that there is controversy about what I considered more or less well-established concepts that originate from ideas that are fairly standard in science and engineering.
Making models to try and simplify a complicated situation for other people to facilitate understanding is a normal process, as well as checking the model against experimental data.
That there is so much resistance here and elsewhere makes this an interesting topic for me. There must be reasons for the controversy. One of them is that many people always conflate technical discussions with art, and demand that any discussion has a net result in terms of artistic growth. You don't get that from this discussion. You might walk away with a better sense of how to evaluate your tools.

Among people who are not well versed in the science behind photography, you also see a lot of misconceptions about what different systems or formats can do for you. There is a lot of wishful thinking and sometimes even 'magic' being invoked. Some people like it that way, preferring to keep a sense of mystery to their photographic process, but for others, getting a better understanding of the way images are recorded has some interest. This is a reason people might take an interest in 'equivalence' or other stuff that we discuss here, because it is an attempt (an imperfect one, but I challenge anyone to come up with a superior one that is as comprehensive) to give a simple description of what in an imaging chain does not matter*, from a measurement point of view, and by extension, for humans.

*(effects that cancel each other out, resulting in an equivalent image)

Of course there is some connection with photography, but that connection is up to the individual photographer to establish. Some find such discussions crucial and very important for their photography, others seek subtler forms of expression where image quality is secondary.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 18, 2017, 00:05:28
"Bjørn, With your very significant training in science and its methods this discussion must be trying."

Not at all. Entertaining; no. There is genuine disagreement present, for example about theory applicability or the assumptions invoked, which is not unheard of in scientific circles, but perhaps less easy to deal with on a 'net forum.

I have many decades of practical (and scientific) photography experience without ever once thinking in the direction of the suggested theory. I have used all kinds of photographic tools and formats, from the tiniest imaginable to 8x10" and never needing such concepts. One simply does not take the "equivalent" picture with the different gear, it's that simple. A knowledgeable photographer knows the limitations and constraints of her or his tools, and works accordingly; sometimes the perceived limits are transgressed but then mainly for artistic reasons, which also is an integrated part of photography although less capable of being put into a theoretical framework.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 00:54:34
"Bjørn, With your very significant training in science and its methods this discussion must be trying."

Not at all. Entertaining; no. There is genuine disagreement present, for example about theory applicability or the assumptions invoked, which is not unheard of in scientific circles, but perhaps less easy to deal with on a 'net forum.

I have many decades of practical (and scientific) photography experience without ever once thinking in the direction of the suggested theory. I have used all kinds of photographic tools and formats, from the tiniest imaginable to 8x10" and never needing such concepts. One simply does not take the "equivalent" picture with the different gear, it's that simple. A knowledgeable photographer knows the limitations and constraints of her or his tools, and works accordingly; sometimes the perceived limits are transgressed but then mainly for artistic reasons, which also is an integrated part of photography although less capable of being put into a theoretical framework.

You are basically saying that because you have so much experience, the concept is redundant.
But what if one does not have that background? Does a photographer today have to start with 8x10" and film to get proficient?
Is there a way to compress all that information that you gathered in decades about what to expect from which format, into something that can be learned in a few hours?
Is it so complicated that you need a lifetime of learning, or is it fairly straightforward such that you can quickly learn it, and then focus on other (more important) stuff?
Are there things that were relevant in the analog era, but are no longer relevant today, and therefore don't have to be dragged along?

It is precisely because most photographers from the digital age do not have the background you have, that new concepts are needed.
Forgetting all previous baggage can sometimes enable a new look at the situation and newer, simpler descriptions.

Also, the fact that today most images are viewed on digital screens has led to new problems.
One of them is that people look at images at 100% and wonder about noise.
Another is that some people think that all 10MP images (to give an example) are the same, irrespective of the size of the sensor (you sometimes see this among laypeople).
A digital image has lost its physical dimension; it's just a bunch of pixels, which can lead to the situation that secondary magnification is forgotten about.
Why would you then not expect the same image quality from a 10MP smartphone sensor vs. a 10MP DSLR on a big print?
This is very different from holding a piece of negative and putting it on the enlarger, thinking about which enlarger lens to use, in which case you are acutely aware of the process that leads to the final result.
New processes require new theoretical concepts, because the old way of doing things will soon be completely forgotten.
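To put rough numbers on the secondary magnification that pixel counts hide (the sensor and print dimensions below are illustrative, not any specific camera):

```python
import math

# Illustrative dimensions: two hypothetical 10 MP sensors, one behind a
# phone lens (~6.2 mm wide) and one full-frame (36 mm), printed 300 mm wide.

def secondary_magnification(print_width_mm, sensor_width_mm):
    """Enlargement from sensor to print -- invisible in the pixel count."""
    return print_width_mm / sensor_width_mm

phone = secondary_magnification(300, 6.2)
dslr = secondary_magnification(300, 36.0)
print(f"phone: {phone:.1f}x enlargement, DSLR: {dslr:.1f}x enlargement")

# At equal f-number and shutter speed the larger sensor also collects
# more total light, roughly the area ratio expressed in stops:
stops = 2 * math.log2(36.0 / 6.2)
print(f"total-light advantage: about {stops:.1f} stops")
```

Same pixel count, but one image is enlarged nearly six times more than the other on the same print, which is exactly the step that "just a bunch of pixels" makes invisible.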

For me, the concepts from the analog era are not very natural because I did not grow up using them, even though I can understand them.
I understand that people had to spend hours in the darkroom for what can be achieved today by a few clicks.
At the same time, I do not believe that I have to always think in terms of the old process. I can invent new concepts that more compactly describe what I need to know in order to know what I have to do to reach a certain goal.
I tend to think in terms of the digital signal processing and try to simplify the imaging chain to a point where things that do not matter, do not show up in the analysis. This is what is natural to me and what I'm spending my whole working day with anyway. So an assumption that introduces invariance (or reduces degrees of freedom, as you put it) is actually something I embrace, because it makes my life simpler and allows me to focus on the other variables.
The fact that the entire development of digital systems was built upon ideas from information theory suggests that those same ideas should also facilitate a simpler understanding of digital systems, even though the whole thing could probably also be analysed using the concepts from the analog era.
If I know how to get roughly the same image from two formats (an equivalent one), I know by extension how to take a different photograph, or I can predict how the photographs will differ before taking them.
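That scaling rule can be written down in a few lines (roughly following James's essay; the example settings are arbitrary, and the crop factors are the usual ones relative to 35 mm):

```python
# Sketch of the standard 'equivalence' scaling between formats, roughly
# as in Joseph James's essay: same angle of view, same aperture diameter
# (hence same DOF and same total light), same shutter speed.

def equivalent_settings(focal_mm, f_number, iso, crop_from, crop_to):
    """Scale settings from one format to another by the crop-factor ratio.
    crop_from/crop_to are crop factors relative to 35 mm (FX = 1.0)."""
    r = crop_from / crop_to          # e.g. DX (1.5) -> FX (1.0): r = 1.5
    return focal_mm * r, f_number * r, iso * r * r

# DX 35 mm f/2.8 ISO 400 -> roughly equivalent FX settings:
f, n, i = equivalent_settings(35, 2.8, 400, crop_from=1.5, crop_to=1.0)
print(f"FX equivalent: {f:.1f} mm, f/{n:.1f}, ISO {i:.0f}")
```

Focal length and f-number scale by the crop-factor ratio, ISO by its square, and the shutter speed stays the same; knowing this, one can also deliberately depart from it to predict how the two photographs will differ.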
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 18, 2017, 01:04:38
Please refrain from personalised arguments and questioning somebody's qualifications or insight.  This direction is not helpful at all. If continued it can only lead to locking down the thread because of violations of the NG Guidelines.

Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 01:12:31
I did not question your qualifications at all.
If anything, I said that your qualifications are too great to be assumed 'standard', and not representative of a generation of photographers that only knows the analog process from second-hand accounts.
Feel free to lock down anything you want, it does not matter to me.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 18, 2017, 01:16:13
I have no immediate need. Exchange of information has the potential of being useful.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 01:32:21
I hope so. But this should not be a dialogue. I will stop for a moment and see what others have to say.
Title: Re: Discussion of 'Equivalence'
Post by: JohnMM on May 18, 2017, 03:27:38
I will stop for a moment and see what others have to say.

I cannot speak for others but my own experience does seem to have something in common with yours.

I started taking photos about 10 years ago when I bought a Nikon D40X (+bits) following early retirement. My instruction in the arts of photography was via internet forums such as this. Unfortunately the forums were dominated by (35 mm) film users, and experiences relating to film were somehow expected to translate to digital. So I was told to "expose for the highlights and develop for the shadows" - or maybe it was the other way round. "Exposure" was, after all, "universal", and if you understood it for film then skills were easily transferable to digital. Only later (see below) did I discover that digital is essentially linear, film is non-linear, there are differences between the two, and setting the exposure for digital is in fact much easier than for film. The "exposure triangle" caused problems. I could never work out how ISO affected "exposure". Digital noise was described as digital grain - again something I couldn't understand.

And then I discovered a number of well written essays which described, in simple scientific terms, the basic principles underlying photography. These include the "Equivalence" paper by Joseph James, a paper on noise by Martinec, "Toothwalker" talking about aberrations, and quite a few others. And a veil had been lifted. The idea that light itself was noisy was a revelation. I understood "Exposure". There was no "good exposure", no "bad exposure", no "underexposure" .... I was the photographer and I was in control of it. And there was a lot of other stuff....
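That noisiness can be demonstrated in a few lines (a toy Poisson simulation, not data from any camera): photon arrivals fluctuate, and the signal-to-noise ratio grows only as the square root of the light collected.

```python
import numpy as np

# Toy simulation: photon counts are Poisson-distributed, so even a
# uniformly lit patch fluctuates. SNR = mean/std grows like sqrt(mean).
rng = np.random.default_rng(0)

for mean_photons in (10, 1000, 100000):
    patch = rng.poisson(mean_photons, size=100000)
    snr = patch.mean() / patch.std()
    print(f"mean {mean_photons:>6} photons: SNR ~ {snr:7.1f}"
          f" (sqrt of mean = {mean_photons ** 0.5:.1f})")
```

This is why shadows are noisier than highlights even on a perfect sensor, and why "more exposure" is the only cure that attacks the noise in the light itself.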

Finally understanding the basics of photography made the subject more enjoyable and it made me into a better photographer. I speak for no-one else. However if someone asks, I would encourage them to read the papers by James and others and to consider adopting their methods. Some will say that they don't have the background. I am fortunate in that I have had many years studying and teaching physical sciences. So the language and methods of James and others are quite natural and accessible. However I think that with a bit of work they are accessible to many others. The theories are more or less sound and accepted by many in the scientific and engineering community - I am talking about physicists and electronic engineers - people who design the sensors that you use in your cameras. Sure - they argue about details. But the basic physics is sound - most light sources are noisy - the use of the term "digital grain" is not helpful. And the theories make practical photography easier - for some of us at least.

Those photographers who reject the ideas of James might like to ask why his ideas are endorsed by so many of those who design the cameras that they themselves use.

I would not seek to force anyone to use ideas that make them uncomfortable. I have taught at all levels and think it quite proper to use simple explanations where possible. However sometimes the simplification goes so far that it is simply "wrong". Some of the ideas which are the legacy of the film era fall into that category. I choose to use alternative, and what I consider to be better, methods. What I fail to understand is why there is so much hostility to my choice.

Finally, many thanks to Simone. Even if you don't agree with him (and I do for the most part) the clarity of his discourse is impressive. In the past I have actively participated in discussions such as this. Personal difficulties have prevented me from offering more support on this occasion.

 

Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 18, 2017, 04:30:20
... The "exposure triangle" caused problems. I could never work out how ISO affected "exposure". ...
A "fool's errand", since the ISO setting does not directly affect exposure, only indirectly (if at all) through its effect on aperture and/or shutter.
(This is an entire "argument" on its own. Let's not go there. :) )
...
And then I discovered a number of well written essays which described, in simple scientific terms, the basic principles underlying photography. These include the "Equivalence" paper by Joseph James, a paper on noise by Martinec, ...
Emil Martinec's paper is a classic; since he has let the original copy fall into disrepair, with his permission there is a copy on my site where I have repaired all the broken links.
Noise, Dynamic Range and Bit Depth in Digital SLRs by Emil Martinec (http://www.photonstophotos.net/Emil%20Martinec/noise.html)
Title: Re: Discussion of 'Equivalence'
Post by: tommiejeep on May 18, 2017, 05:14:27

If you want to frame a shot in a particular way then you might need lens with an appropriate focal length.
If you want a shallow Depth Of Field (DOF) you wouldn't choose an f/4 lens.
If you want a certain level of shadow detail then you must have sufficient dynamic range to capture it.

If you don't frame your shot perfectly, often you can crop to taste; but not if what you require is out of frame.
If you get too much DOF, often you can soften it in post-processing; but if you got too little you cannot recover what is lost.
If you get too much detail in the shadows, that's easy to hide; but if you didn't get enough you cannot recover it.

Colour isn't "better" than B&W; but if you want color photographs you don't purchase a monochrome camera.

Well Bill, why let common sense enter into a discussion of Equivalence  ;)

Since I am a shooter, and not a photographer, I have to learn for myself what a camera is capable of in meeting my needs/desires.  When making a new purchase I do pay attention to what you and a few others say, and start sending emails and PMs to people that use cameras for what I shoot (normally sport and birds), normally looking for hints on usable ISO, Auto ISO, Auto WB, tint, focus point selection, and buffer.  I rarely get to set up for an image which has great BG and light.  When I go to the effort of setting up the 500 VR in what I think is the perfect location, the intended targets and light seldom cooperate (birds, athletes or Gods).  Which is why I normally have a favourite camera with the 300 2.8 standing beside me  :)

It is all about the amount of light and, more importantly, the quality and direction of the light.  I do therefore spend more time in recovery mode than I would like, but some cameras are just faster, and better, at reacting to fast-changing conditions than others.  Yes, some cameras are better at recovery than others.

I think Bjorn's advice to Jakov, early in the thread, works for me.

I rarely, this could be a first, enter into these "Equivalence Discussions",
Just my two Rupees
Good to see you here  :)
Edit: ....and I remember when I thought the D300/D3 were the best things I could ever dream of  ;)
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 18, 2017, 05:23:02
Well Bill, why let common sense enter into a discussion of Equivalence  ;)
...
I rarely, this could be a first, enter into these "Equivalence Discussions",
Nor I. I'm here on the sub-topic of normalized dynamic range, in particular my Photographic Dynamic Range (PDR).
"Equivalence Discussions" don't interest me.
Good to see you here  :)
Thanks :)
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 18, 2017, 05:54:29
That there is so much resistance here and elsewhere makes this an interesting topic for me. There must be reasons for the controversy. One of them is that many people always conflate technical discussions with art, and demand that any discussion has a net result in terms of artistic growth. You don't get that from this discussion. You might walk away with a better sense of how to evaluate your tools.

That is a good summation, Simone.
I also thank you for opening the discussion. You write very clearly. And you are also able to understand how others veer off the topic because of personal or emotional reactions and respond rationally to that.

Thanks also to Bill for joining in.


I'm still sorting through various notions of 'dynamic range'. The range of zones in a scene. The range of light capturable by the photosite. The range of tones in the actual photograph. No one has mentioned the dynamic range (or is that engineering dynamic range?) of our own little eyeballs.  :D I need to go look that one up because I'm wondering if the camera sees more zones than I do or is it the opposite?
Title: Re: Discussion of 'Equivalence'
Post by: Jack Dahlgren on May 18, 2017, 06:55:50
The dominant term in PDR for a given generation of sensor technology is sensor size.

Yet, this dominant factor is cloaked in mysterious equations and models.

To those who have used film in various sizes it is the case that "we hold these truths to be self-evident". Size makes a difference.

Whether it is a good difference or a bad one is dependent on use. I love the small sensor size in my phone as I get large depth of field. I love the larger sensor (and pixel) size in my Df as I can see in near darkness.

Made-up dimensions such as photographic dynamic range (which differentiates mostly on size rather than true dynamic range) only baffle newcomers and offer a dimension of comparison which is not tied to the true practice of photography, which is selecting a system that will deliver on the vision of the photographer.

Selection and use of a format is best done by understanding the capabilities that each format brings, rather than some normalized measure in which the dominant factor is the size of the format.

It is this narrowing of discussion, this focus on normalization, the act of reduction, which experienced photographers react to. And in my case, attaching a label to this measure which bears little resemblance or relation to the accepted definition puts the nail in the coffin.

As a thought exercise, I can see the value this might have to some photographers, but for those of us who have used more than a couple of formats, the results of size are already self-evident.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 10:09:20
I'm still sorting through various notions of 'dynamic range'. The range of zones in a scene. The range of light capturable by the photosite. The range of tones in the actual photograph. No one has mentioned the dynamic range (or is that engineering dynamic range?) of our own little eyeballs.  :D I need to go look that one up because I'm wondering if the camera sees more zones than I do or is it the opposite?

I don't know how far you got in your research, but this might be of some help.

The total dynamic range of the human visual system is huge. It goes from brightness that is damaging to the eye down to individual photon detection (even though that detection is subconscious, a brain signal in response to a single photon can be recorded).

However, this huge dynamic range requires adaptation, some of which requires time. There is iris adaptation, photoreceptor adaptation as well as adaptation in the visual pathway. There are rods and cones and they don't have the same sensitivity. In a short time interval, the range of light that can be seen is still estimated to be on the order of 20 stops, and thus higher than what we can currently record.

Cameras function very differently from our eyes. They linearly record the light level, while our eye has processing built in at the earliest stages to extract relevant features (eyes evolved because they enable better control of muscles in order to survive in a complex environment).

We could have designed cameras to be more similar to eyes, but we haven't, because their processing would duplicate the processing that already goes on in our visual system when we look at the final image. This is probably why we design cameras and output media to yield the most accurate recording that is possible, even though we have a long way to go. It gives us more freedom to manipulate the process to achieve what we want (and this might be different from what the naked eye sees).

Cameras that are designed for robotics, on the other hand, can be more closely modeled after human eyes, because conventional cameras record too much information that is not required for artificial vision, requiring too much digital processing that either consumes a lot of energy or is too time-consuming to enable fast reaction times. Designing camera sensors that function similarly to the retina is an active research topic; some important work has also been done at my institute (I do not work in this field, though). Have a look at http://siliconretina.ini.uzh.ch/wiki/index.php (http://siliconretina.ini.uzh.ch/wiki/index.php) if you are interested in this topic.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 10:44:31
The dominant term in PDR for a given generation of sensor technology is sensor size.

Yet, this dominant factor is cloaked in mysterious equations and models.

To those who have used film in various sizes it is the case that "we hold these truths to be self-evident". Size makes a difference.

Whether it is a good difference or a bad one is dependent on use. I love the small sensor size in my phone as I get large depth of field. I love the larger sensor (and pixel) size in my Df as I can see in near darkness.

Made-up dimensions such as photographic dynamic range (which differentiates mostly on size rather than true dynamic range) only baffle newcomers and offer a dimension of comparison which is not tied to the true practice of photography, which is selecting a system that will deliver on the vision of the photographer.

Selection and use of a format is best done by understanding the capabilities that each format brings, rather than some normalized measure in which the dominant factor is the size of the format.

It is this narrowing of discussion, this focus on normalization, the act of reduction, which experienced photographers react to. And in my case, attaching a label to this measure which bears little resemblance or relation to the accepted definition puts the nail in the coffin.

As a thought exercise, I can see the value this might have to some photographers, but for those of us who have used more than a couple of formats, the results of size are already self-evident.

Ok, I get your point. I think for your uses (and way of thinking), a per-area measure of DR would be more suitable, as that does not include the size of the sensor, but it normalizes for the different pixel densities.
It would be quite easy to make a display of this data similar to what Bill Claff did with PDR. Sensors that are made of the same circuitry, but have different sizes, would end up giving the same values. But also sensors of the same size that do not use the same circuitry, but have different pixel densities while having similar noise properties per unit area, would be rated at the same value. The effect of format size would be left for the reader to consider in addition.

Maybe one could have display toggles for 'normalize for secondary magnification' and 'normalize for pixel density', to allow the reader to choose the method he/she prefers. I don't know whether Bill Claff is open to such a possibility.
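To make the idea concrete, here is a toy sketch of what such a normalization toggle might compute (my own illustration, not Bill Claff's actual PDR procedure; the helper name and the 8 MP reference are assumptions). Averaging n uncorrelated pixels down to a reference output resolution reduces noise by sqrt(n), which adds 0.5*log2(n) stops of DR:

```python
import math

# Toy model (assumption, not Bill Claff's actual PDR math): start from a
# per-pixel dynamic range in stops and credit 0.5*log2(n) stops when n pixels
# are averaged down to a reference output resolution (here 8 MP, arbitrary).
def normalized_dr(per_pixel_dr_stops, n_pixels, n_ref=8e6):
    """Per-pixel DR (stops) normalized to a reference output resolution."""
    return per_pixel_dr_stops + 0.5 * math.log2(n_pixels / n_ref)

# Same circuitry, two sizes: a 36 MP FX sensor vs its ~16 MP DX crop.
fx = normalized_dr(11.0, 36e6)
dx = normalized_dr(11.0, 16e6)
print(round(fx - dx, 2))  # → 0.58 stops in favour of the larger format
```

A 'normalize for pixel density' toggle would instead compare per unit area, so the two sensors above would score identically.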
Title: Re: Discussion of 'Equivalence'
Post by: Bent Hjarbo on May 18, 2017, 11:40:32
An important thing to know when we decide on which equipment to use is how it impacts the picture: DOF, AOV etc.
I think that all agree that there are differences in noise performance between different sensor sizes, and that this changes with time due to developments in IC technology.
I will not go into the discussion on the PDR, but I would like to throw a couple of pictures into this thread ;)
I took my D800, which has a DX mode, and found a 24mm and a 35mm, as they give nearly the same AOV in the two modes.
I placed three objects to see the DOF in the two modes.
The result was that the background blur was less in DX mode, but it also seems that the exposure was different, even though the D800 was in manual mode?
Maybe there is an explanation in some of this discussion.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 18, 2017, 11:43:50
That there is so much resistance here and elsewhere makes this an interesting topic for me. There must be reasons for the controversy. One of them is that many people always conflate technical discussions with art, and demand that any discussion has a net result in terms of artistic growth. You don't get that from this discussion. You might walk away with a better sense of how to evaluate your tools.

No one has mentioned the dynamic range (or is that engineering dynamic range?) of our own little eyeballs.  :D I need to go look that one up because I'm wondering if the camera sees more zones than I do or is it the opposite?

Of course the discussion has to come back to actual photographs.  Otherwise we are like doctors evaluating a drug without asking whether the patients get better, or cooks talking about emulsion stability while ignoring the taste of the hollandaise. 

The issue is not whether evaluating cameras would be useful but whether the metrics we are being presented with are, in fact, useful for evaluation.  People like to feel that they understand and are in control, and that creates a powerful urge to evaluate and rank, and a metric is the tool you need to achieve that.  The urge to evaluate and rank is so strong that people will use metrics knowing they are misleading because that is all they have, like the drunk looking for his keys under the street-lamp because that is where the light is, or a photographer comparing Imatest results across systems.  As a result, the world is full of metrics, and people are so relieved by the feeling of understanding and control that they ignore the fact that very few metrics are adequately tested, and of those that are very few perform adequately.  Many metrics widely used for evaluation and ranking do not, in fact, predict with any accuracy the outcome we are really interested in: BMI as a metric of obesity-related illness and exam results as a metric of school performance, eg. 

To assess the performance of a metric we need a gold standard: "the outcome we are really interested in".  In some cases "the outcome we are really interested in" is clear - morbidity and mortality in the case of BMI, eg.  So what is "the outcome we are really interested in" when we use a metric to evaluate cameras?  If it is not the appearance of the photographs, what is it, and what is the basis for that choice?  What we have at present is circularity: "Dynamic range is the relevant outcome because that is what we measure to evaluate cameras", or "Equivalence at identical framing and output size is the relevant outcome because that is how we calculate equivalence".   

There is also the issue that metrics are promoted and criticised for commercial reasons.  Of course someone who wants to sell you equipment to measure MTF50 or perceptual megapixels will tell you that is the metric you need to evaluate your equipment and that looking at your prints is inadequate. 

Asking for the dynamic range of the eye is the wrong question: we never see the image formed in the eye, only a heavily post-processed version.  The dynamic range of the human visual system is far higher than any photographic system, but we cheat.  If we look at a high contrast scene - a person sitting in deep shade and a background in bright sun, eg - we have the impression we see both at once because we switch between the two areas without being aware of it. It is the same as depth of field, which we are not aware of because we switch from near to far without noticing, so everything appears to be in focus all the time. 
Title: Re: Discussion of 'Equivalence'
Post by: JohnMM on May 18, 2017, 12:24:04
Maybe there is an explanation in some of this discussion.

Is the aperture f/5.6 in each case?
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 12:57:51
Of course the discussion has to come back to actual photographs.  Otherwise we are like doctors evaluating a drug without asking whether the patients get better, or cooks talking about emulsion stability while ignoring the taste of the hollandaise. 

The issue is not whether evaluating cameras would be useful but whether the metrics we are being presented with are, in fact, useful for evaluation.  People like to feel that they understand and are in control, and that creates a powerful urge to evaluate and rank, and a metric is the tool you need to achieve that.  The urge to evaluate and rank is so strong that people will use metrics knowing they are misleading because that is all they have, like the drunk looking for his keys under the street-lamp because that is where the light is, or a photographer comparing Imatest results across systems.  As a result, the world is full of metrics, and people are so relieved by the feeling of understanding and control that they ignore the fact that very few metrics are adequately tested, and of those that are very few perform adequately.  Many metrics widely used for evaluation and ranking do not, in fact, predict with any accuracy the outcome we are really interested in: BMI as a metric of obesity-related illness and exam results as a metric of school performance, eg. 

To assess the performance of a metric we need a gold standard: "the outcome we are really interested in".  In some cases "the outcome we are really interested in" is clear - morbidity and mortality in the case of BMI, eg.  So what is "the outcome we are really interested in" when we use a metric to evaluate cameras?  If it is not the appearance of the photographs, what is it, and what is the basis for that choice?  What we have at present is circularity: "Dynamic range is the relevant outcome because that is what we measure to evaluate cameras", or "Equivalence at identical framing and output size is the relevant outcome because that is how we calculate equivalence".   

There is also the issue that metrics are promoted and criticised for commercial reasons.  Of course someone who wants to sell you equipment to measure MTF50 or perceptual megapixels will tell you that is the metric you need to evaluate your equipment and that looking at your prints is inadequate. 

Asking for the dynamic range of the eye is the wrong question: we never see the image formed in the eye, only a heavily post-processed version.  The dynamic range of the human visual system is far higher than any photographic system, but we cheat.  If we look at a high contrast scene - a person sitting in deep shade and a background in bright sun, eg - we have the impression we see both at once because we switch between the two areas without being aware of it. It is the same as depth of field, which we are not aware of because we switch from near to far without noticing, so everything appears to be in focus all the time.

I agree that we have to connect back to photographs.
However, I claim that this is different than connecting to judgements about artistic merits or usefulness. That, I believe, is for each individual photographer to figure out.

The metrics we use to evaluate cameras do have a clear connection with the appearance of photographs. The differences in DR, for example, will show up given a suitable output medium. The differences will not show up under inadequate viewing conditions, but that merely brings in a perceptual threshold, smoothing out differences that could otherwise be seen.

One reason why we tend to think that it does not matter is that cameras are already very good. This is, however, a quantitative difference, not a qualitative one. If cameras were much worse, we would feel an urgent need to improve them, and that would involve said metrics. The fact that cameras are so good means that we have to push harder to get to the limits, but many photographers will go to these limits and will make use of the full DR or whatever technical capability is available. If they establish that the current gear does not allow them to do what they want, they look at these metrics to decide where to find improvement.

The point about commercial interests is taken. However, I think that we get more resistant to marketing BS because of discussions like this. Even if some concepts were originated (allegedly) by camera manufacturers, they have been taken up by others (e.g. is Joseph James an employee of a camera maker? I think not) and can now be used against the same companies, so the net result is that it does not matter where the concepts originally came from, but what we can use them for today.
Title: Re: Discussion of 'Equivalence'
Post by: Bent Hjarbo on May 18, 2017, 15:02:04
Is the aperture f/5.6 in each case?
Yes, f/5.6 in each case, but different focal lengths.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 18, 2017, 18:21:16
Basic knowledge of photographic theory (*not* the equivalence kind) would immediately point out why you got the observed result (hint: magnification of detail differs although perspective and angle of captured view are similar; the angle of true coverage of each lens is different, though, but is clipped to a common framing by the different formats). One also immediately realises there is no change in dynamic range, again as expected from photographic theory, because the same sensor is involved and the same aperture/exposure time is used.
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 18, 2017, 18:47:03
... One also immediately realises there is no change in dynamic range, again as expected from photographic theory because the same sensor is involved and the same aperture/exposure time is used.
You may not see a difference in this scene and at this resolution but it is there and would show in the right circumstances.
Analysis of NEF files would be ideal but even a quick and dirty check on these small jpegs shows a difference in favor of the FX.
I selected a 120x80 pixel area in the "uniform" gray in the back.
For the DX I got a Signal to Noise Ratio (SNR) of 6.7. For the FX it is 7.5
Additional enlargement to the same viewing size comes with an unavoidable cost.
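For anyone who wants to repeat this sort of check, here is a minimal sketch, assuming SNR is simply taken as the mean over the standard deviation of a nominally uniform patch (Bill's actual tool and patch location are unknown; the synthetic noise below is only a stand-in for a real crop):

```python
import numpy as np

def patch_snr(gray_patch):
    """SNR of a nominally uniform patch: mean signal over noise std."""
    patch = np.asarray(gray_patch, dtype=float)
    return patch.mean() / patch.std()

# Synthetic stand-in for a 120x80 'uniform' gray crop: level 120 with
# Gaussian noise of sigma 16, so the expected SNR is about 120/16 = 7.5.
rng = np.random.default_rng(0)
patch = 120 + rng.normal(0, 16, size=(80, 120))
print(round(patch_snr(patch), 1))  # close to 7.5
```

On a real file you would pass a cropped channel (or luminance) array from the same scene area of each image.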
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 18, 2017, 19:10:22
I've done similar experiments myself and arrive at the exact same result as Bent. Different magnification impacts noise as well. I'll stop there.

Title: Re: Discussion of 'Equivalence'
Post by: Bent Hjarbo on May 18, 2017, 19:15:18
My surprise was mostly based on the fact that there are a lot of websites that explain that we need to use other f-stops on the smaller formats ;)
But the DOF is consistent with other observations and facts.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 18, 2017, 19:23:12
Changing the aperture would directly alter exposure as exposure is about intensity of light. Not the photographed area. Another of those cornerstones of basic photographic theory apparently forgotten in the digital age.

One should at this stage take a break and try to understand that two different things can *never* be the same over all possible dimensions, unless they are identical in the first place. No manner of normalisation and/or clever massaging of metrics can change this fundamental fact. Again, obviously forgotten in the digital era.
Title: Re: Discussion of 'Equivalence'
Post by: Bent Hjarbo on May 18, 2017, 19:33:45
If the aperture is changed to get the same DOF, the shutter speed needs to be changed to get the same exposure.
But I was surprised that the DOF did not look exactly the same.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 18, 2017, 19:42:58
Remember magnification. There is no escape from the differences introduced through this primary variable.
Title: Re: Discussion of 'Equivalence'
Post by: Bent Hjarbo on May 18, 2017, 19:52:17
Ok, I see, and get educated  :)
Title: Re: Discussion of 'Equivalence'
Post by: JohnMM on May 18, 2017, 19:55:59
If the aperture is changed to get the same DOF, the shutter speed needs to be changed to get the same exposure.
But I was surprised that the DOF did not look exactly the same.

Why do you need the same exposure?

Which apertures did you use when you tried to get the same DOF?
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 18, 2017, 19:57:32
Changing the aperture would directly alter exposure as exposure is about intensity of light. Not the photographed area. Another of those cornerstones of basic photographic theory apparently forgotten in the digital age.

One should at this stage take a break and try to understand that two different things can *never* be the same over all possible dimensions, unless they are identical in the first place. No manner of normalisation and/or clever massaging of metrics can change this fundamental fact. Again obviously forgotten in the digital era.
Agreed. This is why equivalence doesn't interest me.
Obviously I feel normalization can be appropriate. It's the attempt to "massage" normalized results to form "equivalence" that I take issue with.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 18, 2017, 20:00:01
At least some aspects of which we can agree.
Title: Re: Discussion of 'Equivalence'
Post by: Bent Hjarbo on May 18, 2017, 20:00:21
Why do you need the same exposure?

Which apertures did you use when you tried to get the same DOF?
I used the same aperture to get the same DOF, and the small difference is explained by Bjørn above.
I need the same exposure as the sensor is the same, and has a constant ISO regardless of how much of it I use.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 18, 2017, 20:02:34
Why do you need the same exposure?
--

Ref. answer #34 and what follows in the thread therefrom.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 20:11:58
I used the same aperture to get the same DOF, and the small difference is explained by Bjørn above.
I need the same exposure as the sensor is the same, and has a constant ISO regardless of how much of it I use.

You did not get the same size of blur circles, so you should have used different apertures if the goal was to get the same degree of background blur.
The same with DOF as it is also expressed in terms of blur circles.

I will post a detailed answer later.
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 18, 2017, 20:16:07
I used the same aperture to get the same DOF,...
Not that I care (because it's an equivalence argument), but you did not get the same DOF.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 18, 2017, 20:19:23
You did not get the same size of blur circles, so you should have used different apertures if the goal was to get the same degree of background blur.
The same with DOF as it is also expressed in terms of blur circles.

I will post a detailed answer later.

This is not the approach a photographer would follow in practice. Entirely different considerations determine the aperture and exposure settings. By heavily massaging factors one might shoehorn something into a semblance of what genuine theory had provided easily without the extra steps. Feel free to follow down that lane if you deem it fruitful and productive.

Not that I care (because it's an equivalence argument), but you did not get the same DOF.

Of course not - anything else would be a surprise given the setup. It is, however, the exact result expected by basic photographic theory. Nice to see yet another validation of a principle known for a very long time.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 20:23:00
This is not the approach a photographer would follow in practice. Entirely different considerations determine the aperture and exposure settings. By heavily massaging factors one might shoehorn something into a semblance of what genuine theory had provided easily without the extra steps. Feel free to follow down that lane if you deem it fruitful and productive.

Of course not - anything else would be a surprise given the setup. It is, however, the exact result expected by basic photographic theory. Nice to see yet another validation of a principle known for a very long time.

Yes, we established a few pages ago that there is no contradiction between what you call standard theory and equivalence theory. It is just a matter of different definitions. I'm willing to demonstrate that using the example provided.

There is no 'approach of the photographer' here, since the photographic goal is not stated (even though Bent himself stated that he wanted the same DOF, and he did not obtain that result).

It's just two test shots on different formats that will help illustrate the different concepts that have been discussed so far.
Title: Re: Discussion of 'Equivalence'
Post by: Bjørn Rørslett on May 18, 2017, 20:25:48
Whatever floats your boat. It is very difficult to keep an interest in the Emperor's New Clothes over time. Sorry.

I'm more worried about these futile discussions flooding our NG site and making members not equally eager on these theories shun the site. Participants in this thread should perhaps follow my advice about taking a break, or time-out, and contribute on other topics here on NG? I think I'll do that myself, starting as of now.

This is not a signal about any immediate clamping down on the thread, merely that I have posted [more than] enough here. Real and significant disagreement exists and is still unchanged. No amount of further posting can alter that.

We need fresh material (images etc.) elsewhere on NG. Let that be the new challenge.
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 18, 2017, 20:56:22
Bjørn:  One should at this stage take a break and try to understand that two different things can *never* be the same over all possible dimensions, unless they are identical in the first place.

Equivalence does not yield the "Same" or "Identical" and does not claim to do so.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 21:04:03
I don't understand the emotional reaction to a perfectly rational, civilised discussion. My impression was that there is a very good spirit in this.
Did site traffic go down because of this? Maybe people are just out enjoying the sun?  :D
The reactions to this thread are mixed, some very positive. So I think it is reasonable to go on and see where we end up.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 21:29:44
An important thing to know when we decide on which equipment to use is how it impacts the picture: DOF, AOV etc.
I think that all agree that there are differences in noise performance between different sensor sizes, and that this changes with time due to developments in IC technology.
I will not go into the discussion on the PDR, but I would like to throw a couple of pictures into this thread ;)
I took my D800, which has a DX mode, and found a 24mm and a 35mm, as they give nearly the same AOV in the two modes.
I placed three objects to see the DOF in the two modes.
The result was that the background blur was less in DX mode, but it also seems that the exposure was different, even though the D800 was in manual mode?
Maybe there is an explanation in some of this discussion.

First of all, thanks for running the test shots and providing an example.

Let me analyse what we see. I encourage anyone to point out any flaws in my reasoning or other kinds of mistakes!

- First of all, you resized both shots to the same final resolution of ~1200x800px. I think this is good, as it gives us an apples-to-apples comparison.
- You framed the shots the same or very close, by having almost the same perspective and choosing focal lengths in a ratio of roughly f2/f1 = 1.5, which is the ratio of the linear dimensions of FX/DX.
- You chose the same relative aperture f/5.6
- You shot both at base ISO 100 and 1/200s
- There is, as you point out, a very minor brightness difference. Let's ignore that here, because it is very small and probably within the tolerance of the stop-down lever of the camera

So, according to Joseph James' definitions, the two shots are equivalent in terms of perspective, framing, exposure time, brightness, and display dimensions.
They are not equivalent in terms of DOF, diffraction, and total light on the sensor.

You stated that you want the same DOF for the two shots. You did not achieve that in my opinion. Also, the distant background is not blurred to the same extent. Do you agree with my observations?

----------

Regardless of what you wanted, let us do a little calculation, using standard photographic theory (i.e. geometrical optics), to see what size of blur circles we would expect from the distant background. Elementary geometry shows that the diameter, in the image plane, of the blur circle from a light source at infinity is

d=M*f/N,

where M is the magnification, f is the focal length and N is the aperture number. The size on the final image is given by

dfinal= m*M*f/N, where m is the secondary magnification.

Now, since the two images are displayed at the same size, the product of primary and secondary magnification is the same:

m1*M1=m2*M2.

Therefore, we obtain that

dfinal2/dfinal1 = f2/f1 * N1/N2.

Now, since you chose the same aperture number, but different focal lengths, we obtain

dfinal2/dfinal1 = 35/24 * 5.6/5.6 = 1.46.

Thus, from optics, we expect the blur circles of the distant background to be roughly 50% bigger in diameter on the FX image compared to the DX image.
Can you verify this?
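The calculation above can be packaged in a few lines of Python as a sketch (the function name is mine; the formula is the one derived above, assuming identical framing and display size):

```python
def blur_ratio(f2, f1, n1, n2):
    """Final-image blur-circle diameter ratio for a distant background:
    d_final2 / d_final1 = (f2 / f1) * (N1 / N2)."""
    return (f2 / f1) * (n1 / n2)

# FX 35mm vs DX 24mm, both at f/5.6:
print(round(blur_ratio(35, 24, 5.6, 5.6), 2))  # 1.46
```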

----------

If you had wanted the same background blur (double emphasis on the if, since I'm not saying that this is in any way desirable in general; it is a purely hypothetical situation), you would have needed either f/8 on the FX shot and f/5.6 on the DX, or f/5.6 on the FX shot and f/4 on the DX. Indeed,

dfinal2/dfinal1 = 35/24 * 5.6/8 = 1.02, and
dfinal2/dfinal1 = 35/24 * 4/5.6 = 1.04,

which are both close enough to 1.

The same would be predicted by equivalence, where it is stated that the aperture that is equivalent to f/5.6 on FX is f/4 on DX, and the aperture that is equivalent to f/8 on FX is f/5.6 on DX.
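The crop-factor form of this statement can be sketched as follows (a sketch assuming a DX crop factor of 1.5; nominal f-stops are rounded, so the exact values 3.7 and 5.3 are marketed as f/4 and f/5.6):

```python
CROP = 1.5  # approximate DX crop factor (assumption)

def dx_equivalent_aperture(n_fx, crop=CROP):
    """DX f-number giving the same final-image blur as n_fx on FX."""
    return n_fx / crop

print(round(dx_equivalent_aperture(5.6), 1))  # 3.7 -> nominally f/4
print(round(dx_equivalent_aperture(8.0), 1))  # 5.3 -> nominally f/5.6
```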

Note that doing this changes the exposure, so we have to change something else to compensate and restore equal brightness.
There is no normative aspect to any of this discussion. I merely explained how to get the same background blur on both formats, and explained that equivalence and geometrical optics are fully consistent in that regard.

Next up will be exposure and noise. But let's first see whether this is understandable.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 22:03:38
Changing the aperture would directly alter exposure as exposure is about intensity of light. Not the photographed area. Another of those cornerstones of basic photographic theory apparently forgotten in the digital age.

This is an aspect (perhaps the only one) where I think our disagreement is genuine and not merely a question of definitions.
I think this is an instance where the theory from the analog era falls short and leads to methods that are no longer optimal.
I would really welcome an exchange on this aspect.

With digital sensors, the only sweet spot for exposure is exposing to the right. There is no other sweet spot in my view. For any given scene, we determine what the brightest part is that needs to be recorded. We ensure that this part of the scene does not clip. Any other exposure setting would be suboptimal.

Often, our exposure is suboptimal. But there is no other 'more correct' exposure than ETTR. I think this is a fundamental difference to film, where the sweet spot is somewhere in the middle.

The desired brightness of the final image can be achieved by post processing. The ETTR ensures that if we have to brighten the shadows, we have the best possible signal-to-noise ratio given the circumstances. If we have to darken it, there is no penalty to have exposed more than necessary since ETTR ensured that we did not clip anything that was pictorially relevant. The only reason to not ETTR is because of motion blur.
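The signal-to-noise argument behind ETTR can be made concrete with Poisson photon statistics: for N collected photons the shot noise is sqrt(N), so the shot-noise SNR is sqrt(N). A sketch (the photon counts are illustrative, not measured):

```python
import math

def shot_noise_snr(photons):
    """Poisson shot noise: signal N, noise sqrt(N), so SNR = sqrt(N)."""
    return math.sqrt(photons)

# One extra stop of exposure doubles the photon count and
# improves shot-noise SNR by a factor of sqrt(2):
print(round(shot_noise_snr(20000) / shot_noise_snr(10000), 3))  # 1.414
```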

Our cameras are still designed according to the old way of doing things, but my opinion is that it would be better to change them to make it easy to expose optimally for the way digital sensors work.
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 18, 2017, 22:20:21
Changing the aperture would directly alter exposure...
This is an aspect (perhaps the only one) where I think our disagreement is genuine and not merely a question of definitions.
I think this is an instance where the theory from the analog era falls short and leads to methods that are no longer optimal.
I would really welcome an exchange on this aspect.
...
My reply might not be what you have in mind :-)

Remembering that I'm not a proponent of "equivalence"...

As I understand it "equivalence" would be from the reference point of the final image.
So DX at f/5.6 1/200s and FX at f/8 1/100s
These are the same Exposure Value and the same Depth Of Field (DOF).
(However, noise will be more apparent in the DX image: a lower Signal to Noise Ratio (SNR) due to higher noise.)

Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 18, 2017, 22:26:49
...Often, our exposure is suboptimal. But there is no other 'more correct' exposure than ETTR. ...
Implicit in what you said is that ETTR at a higher ISO setting is also suboptimal compared to ETTR at a lower ISO setting.

Another way to phrase the philosophy of ETTR is to simply say "gather as much light as possible without clipping relevant highlights (specular is OK, for example)".
Generally this means operating at the lowest (non-extended) ISO setting since this is where Full Well Capacity (FWC) is available.
Title: Re: Discussion of 'Equivalence'
Post by: JohnMM on May 18, 2017, 22:27:11
This is an aspect (perhaps the only one) where I think our disagreement is genuine and not merely a question of definitions.
I think this is an instance where the theory from the analog era falls short and leads to methods that are no longer optimal.
I would really welcome an exchange on this aspect.
...

My reply might not be what you have in mind :-)

Remembering that I'm not a proponent of "equivalence"...

As I understand it "equivalence" would be from the reference point of the final image.
So DX at f/5.6 1/200s and FX at f/8 1/100s
These are the same Exposure Value and the same Depth Of Field (DOF).
(However, noise will be more apparent in the DX image: a lower Signal to Noise Ratio (SNR) due to higher noise.)

One is allowed to take photos with different exposure values. This can result in photos with the same amount of Total Light and thus the same amount of Photon Shot Noise.
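To illustrate: total light is proportional to exposure (∝ t/N²) times sensor area. With the same shutter speed, DX f/5.6 and FX f/8 (one stop less exposure) collect nearly the same total light; a sketch in made-up units, assuming a 1.5 crop (area ratio 2.25). The ~10% mismatch comes from nominal f-stop rounding, since the exact equivalent of f/5.6 is f/8.4:

```python
def total_light(area, t, n):
    """Total light ~ sensor area * shutter time / N^2."""
    return area * t / n**2

dx = total_light(area=1.0,  t=1/200, n=5.6)  # DX at f/5.6, 1/200 s
fx = total_light(area=2.25, t=1/200, n=8.0)  # FX at f/8,   1/200 s
print(round(fx / dx, 2))  # 1.1 -- equal within nominal f-stop rounding
```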
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 22:32:43
This is an aspect (perhaps the only one) where I think our disagreement is genuine and not merely a question of definitions.
I think this is an instance where the theory from the analog era falls short and leads to methods that are no longer optimal.
I would really welcome an exchange on this aspect.
...

My reply might not be what you have in mind :-)

Remembering that I'm not a proponent of "equivalence"...

As I understand it "equivalence" would be from the reference point of the final image.
So DX at f/5.6 1/200s and FX at f/8 1/100s
These are the same Exposure Value and the same Depth Of Field (DOF).
(However, noise will be more apparent in the DX image: a lower Signal to Noise Ratio (SNR) due to higher noise.)

Different shutter speeds have implications for motion blur; that has to be kept in mind. Practically speaking, you trade noise against motion blur. However, to stay with Joseph James' definition, the equivalent shot would be obtained with the same shutter speed and a higher ISO or gain in post (to preserve brightness).

You don't have to be a proponent; neither am I, in my understanding. My question for this thread was merely whether there is something wrong with equivalence in terms of the science behind it. I'm fairly convinced that there isn't, but I'm open to being proven wrong.
Title: Re: Discussion of 'Equivalence'
Post by: JohnMM on May 18, 2017, 22:34:10
The only reason to not ETTR is because of motion blur.

Or because the choice of a large DOF, or the need to use a lens at its "sweet spot", requires us to use a smaller aperture. Or is that implicit in our story so far?
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 22:36:02
Or because the choice of a large DOF, or the need to use a lens at its "sweet spot", requires us to use a smaller aperture. Or is that implicit in our story so far?

If you have to stop down, you can also expose longer to compensate, unless you have moving subjects. So I understood that scenario to be accounted for.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 22:43:40
Implicit in what you said is that ETTR at a higher setting ISO is also suboptimal compared to ETTR at a lower ISO setting.

Another way to phrase the philosophy of ETTR is to simple say "gather as much light as is possible without clipping relevant highlights (specular is OK for example)"
Generally this means operating at the lowest (non-extended) ISO setting since this is where Full Well Capacity (FWC) is available.

I've generally found ETTR to yield more usable exposures than any other method.
But what I've been puzzled with is that some people propose 'exposing to the left' for digital.
Regardless of whether you want the final shot to have a dark mood, what good is it to push the histogram to the left? Can it even be defined as anything other than recording no light at all?

Also a mystery to me is the fact that people keep referring to how different sensors have different degrees of 'recoverable highlights'. In my mind, this is an illusion created by the fact that we don't see whether we clip RAW data, short of running the file through RawDigger. But I'm not so certain; maybe there still is something to it. My understanding is that sensor clipping occurs at a certain precise light intensity, up to uncertainty due to photon noise. But you might know more about this.
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 18, 2017, 22:59:52
One simply does not take the "equivalent" picture with the different gear, it's that simple.

Sort of equivalent? Close but no cigar?

I have a problem with Equivalence also: I'll sign on. Equivalence's six parameters don't work for me.

Dave

Different Gear, Different Format = Different Photograph

---

I've started reading Equivalence by Joseph James. I suspect I'll have a problem with the six parameters, as important ones are left out. I'll probably skim it some more, but it's unlikely that I'll read the whole article.

---

Equivalence does not yield the "Same" or "Identical" and does not claim to do so.

Perhaps a less confusing name would help: perhaps "Sort of Equivalence." I believe the name *implies* that it can give an identical or nearly identical photograph, but that is not possible.

Dave Hartman

---

Oh My **God! He is going to post another site with graphs...

Background blur versus background distance with four real world cameras and real world lenses. (http://asklens.com/howmuchblur/#compare-1.5x-70mm-f2.8-and-1x-105mm-f2.5-and-0.64x-160mm-f4.8-and-0.3x-300mm-f9-on-a-0.9m-wide-subject)

This is a quick and hopefully not dirty illustration of what can be done with cameras I now own and in one case a lens I could buy but haven't. There are sites on the Net that clearly show that DoF and Background blurring are not one and the same. If one is interested in a more scholarly site or paper they are available by searching.

When the background is well outside the DoF zone longer lenses with larger physical apertures will blur the background more. One may also wish to consider pupillary magnification. The additional background blurring is not proportional to the increase in focal length so if one wishes to keep the perspective identical the larger format with the longer lenses will produce the greatest background blurring.

If greater background blurring is desired use a larger format. If less background blurring is desired use a smaller format.

Dave Hartman who now should probably don fire protective clothing and hug a fire extinguisher.

**Please substitute your favorite deity or Charles Darwin as you prefer.

Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 18, 2017, 23:15:21
...
But what I've been puzzled with is that some people propose 'exposing to the left' for digital.
...
These people are generally misapplying the concept of "ISO Invariance" (not close to the topic of this thread)
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 18, 2017, 23:16:37
But what I've been puzzled with is that some people propose 'exposing to the left' for digital.

It may be forced by the need for DoF, or to freeze subject movement, or both. One may use a lower ISO out of fear that the well will run dry.

I'm afraid my sorry attempts at comic relief may not be well received, so I'll take a break from this thread. :)
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 23:26:38
These people are generally misapplying the concept of "ISO Invariance" (not close to the topic of this thread)

Some of them. Others are doing it at base ISO in what seem studio or landscape conditions. I don't get it.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 23:30:04
Perhaps a less confusing name would help: perhaps "Sort of Equivalence." I believe the name *implies* that it can give an identical or nearly identical photography but that is not possible.
Things that are not the same, but can be regarded to be the same applying some precisely defined criteria are called 'equivalent'. This is both standard usage in ordinary language as well as mathematics and science.

I'm interested why you think that it is not possible (given the right equipment) to make two nearly identical photographs using different formats. What fundamental limit are you thinking of?
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 23:32:32
It may be forced by the need for DoF, or to freeze subject movement, or both. One may use a lower ISO out of fear that the well will run dry.

I'm afraid my sorry attempts at comic relief may not be well received, so I'll take a break from this thread. :)

No, on the contrary, some humour is always welcome.
Yes, but that is not the same as recommending 'exposure to the left' as some kind of sweet spot. It is a spot where you would rather not be. Therefore my question.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 18, 2017, 23:39:27
There are sites on the Net that clearly show that DoF and Background blurring are not one and the same.
Thanks for pointing this out. They are by definition not the same: one is measured as a size in image space, the other as a distance along the optical axis in object space (the space in front of the lens).
DOF is more complicated to calculate than background blur. This is why I focused on background blur in my little calculation.
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 19, 2017, 00:38:44
Some of them. Others are doing it at base ISO in what seem studio or landscape conditions. I don't get it.
Those could be the same people. People who think they don't need to raise the ISO setting and can get the same lightening effect in post.
Depending on the camera they may get away with it but it's a poor habit and not something to blindly recommend.
Title: Re: Discussion of 'Equivalence'
Post by: Roland Vink on May 19, 2017, 03:38:48
Quote
There are sites on the Net that clearly show that DoF and Background blurring are not one and the same.
They are two sides of the coin - related, but mutually exclusive.
- DoF is the part of the image which is regarded as in focus.
- Background blurring or Bokeh relates to the part of the image which is out of focus.
Where you draw the line between the two (sharp vs OOF) depends on the image size, viewing distance, how critical the viewer is, etc.; there is no hard and fast rule here.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 19, 2017, 09:21:42

With digital sensors, the only sweet spot for exposure is exposing to the right. There is no other sweet spot in my view.


Once again, you are not acknowledging the essential but dubious assumption underlying this claim: that post-processing is fun.  So why wouldn't you commit to post-processing every image?  One reason, apart from not thinking it was fun, is that you want or need to send images directly from the camera.  Another - more important in my view, since I don't use Twitface and I must have been out when New York Vogue called to book me for the spring collections - is that there is a penalty to reducing image brightness (misleadingly called "exposure" - in Lightroom, eg) in post-processing to compensate for over-exposure, which is that changing brightness changes colour relationships.  So, once again, what purports to be a technical issue is a concealed, arbitrary preference - in this case, for shadow detail over colour fidelity. 

ETTR is also a slipperier concept than you are letting on.  The sensor cannot detect clipping, it can only predict it from the fact that a number of pixels reach FWC.  How, precisely, does the camera make that prediction?  And if you have (say) 24MP on the sensor, and (say) 1.3MP on the viewfinder/LCD, (roughly) 20 sensor pixels map to one viewfinder/LCD pixel.  How?  Does a viewfinder pixel blink if all of "its" sensor pixels saturate, or if more than half of them saturate, or if any of them saturate?  Is its decision to blink influenced by its neighbours?  How many adjacent viewfinder pixels have to blink before you can see them?  So, what you really mean is that you have to not clip the highlights more than some amount the camera designers decided was important but didn't tell you about, for reasons they didn't tell you about either.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 19, 2017, 10:09:16
Once again, you are not acknowledging the essential but dubious assumption underlying this claim: that post-processing is fun.  So why wouldn't you commit to post-processing every image?  One reason, apart from not thinking it was fun, is that you want or need to send images directly from the camera.  Another - more important in my view, since I don't use Twitface and I must have been out when New York Vogue called to book me for the spring collections - is that there is a penalty to reducing image brightness (misleadingly called "exposure" - in Lightroom, eg) in post-processing to compensate for over-exposure, which is that changing brightness changes colour relationships.  So, once again, what purports to be a technical issue is a concealed, arbitrary preference - in this case, for shadow detail over colour fidelity.

I will not disagree on the fun part, but let me keep that emotional reaction to the process separate from a definition of what an optimal exposure is for digital cameras.
Personally, I certainly do not expose every shot to the right, not by far. It requires time both for setting up the shot and for post-processing. Sometimes multiple shots are required to get it right, because the camera does not allow me to see exactly where the clipping point is, despite tricks such as UniWB etc.
But the exposures that are not to the right are merely convenient; they do not represent a sweet spot of the medium. Anytime I end up with a shot that has several stops of unused headroom to the right of the brightest part of the histogram, I end up with more noise than I would have needed to put up with, and if my exposure was not constrained by movement, I could have gotten better image quality by exposing more. But for some shots the quality is already sufficient, and the high DR of today's sensors means I can get away with it most of the time. Still, I would claim, these exposures are not in a sweet spot of the medium in any way.

I hope it is clear that the discussion is about base ISO only. Anytime you are raising ISO from base, you are anyway not using the full capacity of the sensor, reducing dynamic range. It is still good then to have the histogram to the right, but for different reasons.

I'm not sure I understand what you are talking about regarding color fidelity. Do you talk about individual color channels being clipped? Do you have an example?

ETTR is also a slipperier concept than you are letting on.  The sensor cannot detect clipping, it can only predict it from the fact that a number of pixels reach FWC.  How, precisely, does the camera make that prediction?  And if you have (say) 24MP on the sensor, and (say) 1.3MP on the viewfinder/LCD, (roughly) 20 sensor pixels map to one viewfinder/LCD pixel.  How?  Does a viewfinder pixel blink if all of "its" sensor pixels saturate, or if more than half of them saturate, or if any of them saturate?  Is its decision to blink influenced by its neighbours?  How many adjacent viewfinder pixels have to blink before you can see them?  So, what you really mean is that you have to not clip the highlights more than some amount the camera designers decided was important but didn't tell you about, for reasons they didn't tell you about either.

I disagree that the sensor cannot detect clipping. It has access to the RAW data, so why not? The camera manufacturers simply choose not to make that information available to the photographer. This forces us to approximate the clipping point by using UniWB, which removes the scaling of the color channels, and a flat picture profile, which prevents clipping from being introduced merely by raising contrast on top of the RAW capture.

I do not understand why they will not give us the option of displaying RAW histograms. This has been requested from the very beginning. My suspicion is that this comes from being too rooted in the analog-era way of thinking. The user interface and the available metering modes reflect that. The middle-gray criterion, for example, is not very adequate in high-contrast situations.

Again, this point is only a practical hurdle that can be solved, in principle. It does not change anything about what the digital sensor's sweet spot is.

EDIT: Just a word of warning for anyone reading along: Do not mistake this discussion for a practical guide to exposure. The topic is not presented in a way that would be suitable for didactic purposes. If you want to push the limits, you are on your own. I do not want to have you botch your exposures on an important shoot. :)
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 19, 2017, 14:09:13

I'm not sure I understand what you are talking about regarding color fidelity. Do you talk about individual color channels being clipped? Do you have an example?

I disagree that the sensor cannot detect clipping. It has access to the RAW data, so why not?

Full is full.  The sensor can only infer clipping, presumably from the number of pixels at FWC.  How many? 

Here is a quick and dirty example of the problem with using brightness to correct over-exposure. This is the cover of Steve Anchell's The Darkroom Cookbook. The top image was taken at the camera's idea of correct exposure using aperture priority. The second was made +1 with exposure compensation, and the third was +2. Then in LR the second was given -1 "exposure" and the third was given -2 "exposure". There is an obvious colour shift.

Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 19, 2017, 14:20:51
Thanks for the example!
In addition to the loss of color in the background, I see lower contrast and motion blur in the 2nd and 3rd shots, which are probably related.
Did you look at RAW histograms to ensure that no channel was clipped?
If no channel was clipped, then the LR exposure slider is not applying the correct transformation (this has been a question of mine for some time: what is the exposure slider exactly doing?)
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 19, 2017, 18:45:41
A quick note about ETTR. Raw Digger has a nice tutorial about how to determine the maximum amount of exposure compensation to apply for good ETTR which will make best use of your camera's dynamic range. (Of course the method involves Raw Digger, but that's ok. It's a great tool. For the record, no affiliation by me or NG.)

Good ETTR is defined as metering for those highlights for which you want to preserve the most detail followed by the appropriate amount of EV+. The appropriate amount of EV+ will vary for different camera models and different ISO settings. But once you have figured it out for your camera and ISO setting, you will have a solid guide for how much to push the exposure to the right. Less trial & error!! Easier to set up the shot.

[Then, yes, you have to recover the photo in the converter.]
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 19, 2017, 18:47:21
Forgot the link!!

https://www.fastrawviewer.com/blog/how-to-use-the-full-dynamic-range-of-your-camera
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 19, 2017, 19:22:44
There may be some blue channel clipping in the 2-stop version but not in the 1-stop version. 

The LR "exposure" slider changes brightness, but according to Jeff Schewe's book (The Digital Negative: Raw Image Processing in Lightroom Camera Raw and Photoshop, 2nd ed, 2015) it changes the mid-tones more than the extremes.  That would account for its introducing colour shifts. 

It is possible there is a perceptual effect as well.  The Hunt effect is the phenomenon of greater perceived colourfulness of a stimulus with increased luminance and the Stevens effect is the phenomenon of greater perceived contrast with increased luminance, so an overall reduction of brightness would reduce perceived colourfulness and contrast (the Stevens effect is why old B&W movies often seem to us to have too much contrast: they were designed with high contrast because they were intended for viewing at lower luminance levels than we use today; the Hunt effect is why the stained glass in medieval cathedrals, and slides on a light table, have such brilliant colours). 
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 19, 2017, 19:49:08
There may be some blue channel clipping in the 2-stop version but not in the 1-stop version. 

The LR "exposure" slider changes brightness, but according to Jeff Schewe's book (The Digital Negative: Raw Image Processing in Lightroom Camera Raw and Photoshop, 2nd ed, 2015) it changes the mid-tones more than the extremes.  That would account for its introducing colour shifts. 

It is possible there is a perceptual effect as well.  The Hunt effect is the phenomenon of greater perceived colourfulness of a stimulus with increased luminance and the Stevens effect is the phenomenon of greater perceived contrast with increased luminance, so an overall reduction of brightness would reduce perceived colourfulness and contrast (the Stevens effect is why old B&W movies often seem to us to have too much contrast: they were designed with high contrast because they were intended for viewing at lower luminance levels than we use today; the Hunt effect is why the stained glass in medieval cathedrals, and slides on a light table, have such brilliant colours).

I think what the exposure slider 'should' do is apply a linear scaling to the RAW levels before the translation to the color space occurs. Can you determine that from the book?

Why would the perceptual effects apply after you have adjusted the brightness?
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 19, 2017, 20:06:39
Forgot the link!!

https://www.fastrawviewer.com/blog/how-to-use-the-full-dynamic-range-of-your-camera

Thanks for providing that link.
A few years ago I figured out a similar methodology after realizing at what value the camera places a grey card in spot metering, and being puzzled.
I realized that Nikon is not doing what is optimal for data capture, but what probably minimizes the complaints from clueless customers (or those coming from film, who do have a clue).
It is nice to have such a nice writeup for reference.

The article reinforces my gut feeling of what an optimal exposure for digital is.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 20, 2017, 12:52:13
I think what the exposure slider 'should' do is apply a linear scaling to the RAW levels before the translation to the color space occurs. Can you determine that from the book?

Why would the perceptual effects apply after you have adjusted the brightness?

Because the brightness is not reduced equally at all luminances: colours that were only slightly less bright in the original are a lot less bright after adjustment.

No.  Adobe is reluctant to say much about the algorithms, because that is proprietary.  Their bias is that you should look at the picture and adjust the sliders until you like what you see and not worry your cute little head about the details.  The important point is that whatever the "exposure" slider does, it does not change exposure, so that compensating for over-exposure is much more complicated than it looks. 
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 20, 2017, 13:35:00
Nikon is following an ANSI standard. Here is more...

Meters Don't See 18% Gray by Thom Hogan (http://www.bythom.com/graycards.htm)

and...

... It turned out that Bob [Shanebrook] was the man who signed off on the 18% value during the 1979 revision. At that time a number of people at Kodak wanted to change the card to 12.5 % reflectance to simplify the work of photographer who were using the cards. They were all ready to make the change when Ansel Adams got wind of the coming change. As Bob tells the story, Ansel Adams came to Rochester and basically camped out in one of the offices and said he would not leave until they agreed to keep the Gray Card 18% because it matched one of the Zones in his Zone System. To get rid of him the folks at Kodak gave in and left the card 18%. When we revised the card in 1999 we left it 18% because the manufacturing technology was known for producing 18% and there was no budget to do the R & D which would have been necessary to change it. ...

--A clip from Bob Shell's Talk delivered to NECCC on July 12th and 14th, 2002


Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 20, 2017, 16:06:27
Because the brightness is not reduced equally at all luminances: colours that were only slightly less bright in the original are a lot less bright after adjustment.

No.  Adobe is reluctant to say much about the algorithms, because that is proprietary.  Their bias is that you should look at the picture and adjust the sliders until you like what you see and not worry your cute little head about the details.  The important point is that whatever the "exposure" slider does, it does not change exposure, so that compensating for over-exposure is much more complicated than it looks.

For me it is not at all intuitive to think of the transformation in color space, but in the linear RAW-level space it is clear what should happen: the exposure slider should act more or less like a digital post-hoc adjustment of ISO.

On RAW converters that are fully documented (like RawTherapee) this is exactly what the exposure slider does.

This is different from brightness or luminance, which refer to the HSB, HSL, or Lab spaces.
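A minimal sketch of what such a linear raw-space adjustment could look like (the function name is mine, and the white level of 16383 is an assumption for a 14-bit file; real converters work on the mosaic data before demosaicing):

```python
def exposure_adjust(raw, ev, white_level=16383):
    """Scale raw levels linearly by 2**ev (a post-hoc 'ISO' change),
    then clip at the white level. Applied before color-space
    conversion, this preserves channel ratios for unclipped pixels."""
    return [min(max(v * 2.0 ** ev, 0.0), white_level) for v in raw]

levels = [100.0, 1000.0, 8000.0]
print(exposure_adjust(levels, -1))  # [50.0, 500.0, 4000.0]
```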
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 20, 2017, 22:44:24
I'm interested why you think that it is not possible (given the right equipment) to make two nearly identical photograph using different formats. What fundamental limit are you thinking of?

OK, here is a chart. I hope the formulas behind it are accurate...

http://asklens.com/howmuchblur/#compare-1.5x-70mm-f1.7-and-1x-105mm-f2.5-and-1.5x-70mm-f1.4-and-1x-105mm-f1.4-and-1.5x-85mm-f2-on-a-0.9m-wide-subject

I own a 105/2.5 and a couple of 105/2.8 lenses so I can have what's shown. I don't know of any 70/1.7, 1.8 or 1.4 lens. There is a 75/1.4 ASP lens for Leica M but I know of none in Nikon F bayonet.

I can have the same blur circles with a 105/2.5 or 2.8 on FX but not the same perspective with an 85/2.0 on DX. To have the framing of the subject the same I'll have to move back and accept flatter perspective.

If I had a 70 or 75/1.7 or 1.4 I could have similar blur circles but the lens would have to give good results wide open or stopped down less than a stop. To do this the lens would probably have to be aspheric. Would the bokeh be as good as the 105/2.5 or 105/2.8 lenses? The DoF would be different. Maybe not in a bad way. Anyway in a short search I didn't find a fast 70 or 75mm lens in F-Bayonet. If there is it would be priced out of my reach.
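As a rough sanity check on the chart's comparisons (a simplified model: for a matched framing of the same subject, the blur of a distant background scales with the entrance-pupil diameter f/N; the 70/1.7 is the hypothetical lens discussed above):

```python
# Simplified model: for matched framing, distant-background blur scales
# with the absolute aperture (entrance-pupil diameter) f/N.
def absolute_aperture(f_mm, n):
    """Entrance-pupil diameter f/N in mm."""
    return f_mm / n

print(absolute_aperture(105, 2.5))           # 42.0 on FX
print(round(absolute_aperture(70, 1.7), 1))  # ~41.2 on DX: similar blur
print(absolute_aperture(85, 2.0))            # 42.5 -- similar blur, but the
                                             # framing/perspective differ
```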

Back when Nikon didn't have an FX camera, people who had recommended a 105/2.5 for portraits recommended a 50/1.8, 1.4 or 1.2 for DX. The 105mm is a longer lens on DX than on FX; the 50mm on DX is shorter than a 105mm on FX.

The only solution for me for DX was an 85/2.0 as that's what I own. I also own an 85/1.4 AIS but it's lube contaminated.

On DX I can have the blurring I want but not the perspective I want. I can have the perspective I want but not the blurring I want. I can't have both together.

It doesn't make sense to me to compare with crop factors and all. Equivalence doesn't work for me. One should learn what a 75mm lens does on 4x5, what a 105mm lens does on FX, and what a 300/4.0 lens does on DX, and go from there.

Dave Hartman

Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 20, 2017, 23:00:54
I thought my Nikon F2As was mis-calibrated until I read about the 12~13% ANSI standard for meters. All of my Nikons in the film era were calibrated that way.

Anyway it's all Ansel Adam's fault. :)

But what I've been puzzled with is that some people propose 'exposing to the left' for digital.

What I hate is a camera that gives you ETTR and ETTL at the same time, my D2H for example.

I'm not sure the last paragraph made any sense so I clipped it. I'll have another cup of coffee and hope for the best. :)

Dave Hartman
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 22, 2017, 10:41:37
We have covered a lot of ground in this thread. Let me try to give a summary of positions expressed. I will just list the statements, without repeating the arguments for or against a particular thesis. Even though I think that not all of the positions are equally defensible or relevant, I will leave it to the reader to read the full discussion as well as other material and make up his/her mind about it.

My opinion is that all of the technical criticisms have been debunked (that's why I list them separately). However, this has not been actively acknowledged in all cases, and therefore I think that the discussion is not fully settled. In the interest of not being confusing about issues that should, in my opinion, have a clear objective answer, it would be ideal if we could reach some form of consensus.

As for the meta-level issues; I think that no consensus is required since they can be left to personal preference.

Please let me know if you want to add something or if you think that something is unfairly represented.
I will afterwards also add this to the opening post for new people that come across this thread.

Equivalence

Meta-level criticism

Technical criticism (debunked)

Arguments in favor

Photographic dynamic range

Meta-level criticism

Technical criticism (debunked)

Arguments in favor

----------

Re-reading my own answers, I was again reminded that I haven't been completely clear on the distinction between photon noise and read noise. If there is anything to clarify, please ask.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 22, 2017, 12:33:04

  • In order for two pictures to be equivalent, they have to be exactly the same (i.e. the criteria are too restrictive).


The problem is not that things can be "equivalent" despite being only "nearly identical" but that you will not define "nearly identical" otherwise than "what makes the answer come out how I want it to".       

For example: we all know that 35mm on DX is equivalent  = nearly identical to 50mm on FX.  But 35mm x 1.5 = 52mm (to two significant figures).  And the DX sensor is not 24 x 16mm and the FX sensor is not 36 x 24mm.  DX sensors have been as small as 23.1 x 15.4mm (D3000) but are currently 23.5 x 15.7mm (D500 and D7500; the D7200 is 23.5 x 15.6mm), FX is currently 35.9 x 23.9mm.  So the DX crop factor is not 1.5, it is (currently) 1.53. To two significant figures, 35mm x 1.53 = 54mm.

But on FX at 2m 50mm at f/1.4 has the same DoF as 54mm at f/1.8!  So if a one stop difference is so important when DX is compared to FX, why can you ignore a 2/3 stop difference and say 35mm on DX is equivalent to 50mm on FX? 

Another example: if I use the 105/2.5 for a portrait the DX equivalent is 70mm (105/1.5; 105/1.53 = 69mm).  DoF is the same for FX with 105mm at f/2.8 and 3m as for DX with 75mm at f/2.8 at 2.9m.  By what non-arbitrary criterion are the DX framing and perspective not "nearly identical" to the FX framing and perspective?  Of course, if I am using the 70-200/2.8G VRI its actual focal length at "70mm" and 3m is 75mm, which I would never know if someone had not tested it because its successor had marked focus breathing (http://www.bythom.com/nikkor-70-200-VR-II-lens.htm).  Many lenses have actual focal lengths 3% or so different from the values they are designated by, even at infinity: if you can ignore that, how can you justify saying that 75mm is not nearly identical to 70mm?   
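For what it's worth, the arithmetic above can be checked directly from the quoted sensor dimensions (a sketch using sensor diagonals; width ratios give nearly the same figure):

```python
import math

def crop_factor(big, small):
    """Crop factor as the ratio of sensor diagonals, each (width_mm, height_mm)."""
    diag = lambda w, h: math.hypot(w, h)
    return diag(*big) / diag(*small)

cf = crop_factor((35.9, 23.9), (23.5, 15.7))  # current FX vs D500/D7500 DX
print(round(cf, 2))        # 1.53, not the nominal 1.5
print(round(35 * cf, 1))   # a 35mm DX lens maps to roughly 53-54mm on FX
```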
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 22, 2017, 14:53:13
...
As for the meta-level issues; I think that no consensus is required since they can be left to personal preference.

Please let me know if you want to add something or if you think that something is unfairly represented.
I will afterwards also add this to the opening post for new people that come across this thread.

Photographic Dynamic Range (PDR)

Meta-level criticism
  • 1) It is a metric that does not correlate with what human viewers find pleasing.
Feedback over the years from many photographers contradicts this criticism.
  • 2) It should not be called 'dynamic range'
Dynamic range is generically the logarithm of a ratio of a maximum to a minimum; PDR meets this criterion.
  • 3) It is strange that a property of the sensor should be related to output size.
It isn't related to a fixed output size; it's related to visual acuity, which in turn determines a COC, not unlike DOF calculations.
  • 4) (related) It is a normalized measure that removes degrees of freedom, and is therefore confusing.
Not sure why that's confusing provided it is stated (which it is).
  • 5) It is a circular definition that is used to prove a pre-conceived statement.
Don't understand the circularity. There is no pre-conceived statement by the author. Are you referring to the misuse by those who don't understand the measure?
  • 6) Dynamic range is an irrelevant property of camera gear.
I accept that some people think so but feedback from others (on other fora, by email etc.) would indicate otherwise.


----------

Re-reading my own answers, I was again reminded that I haven't been completely clear on the distinction between photon noise and read noise. If there is anything to clarify, please ask.
I added my views above in the quote.

Regarding read noise and photon noise; there is a huge difference.

Read noise is a variance in the accuracy of reading the pixel value; it affects shadow performance and establishes (EDR, DxOMark) or influences (PDR) dynamic range.

Photon noise is a property of light that increases with the amount of light; it becomes relevant at mid to high light level and has nothing to do with dynamic range.
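The distinction can be put in one formula (a standard simplification, assuming Poisson shot noise and a fixed Gaussian read noise combined in quadrature; the 3 e- read-noise figure is an illustrative assumption, not any particular camera's spec):

```python
import math

def total_noise(signal_e, read_noise_e=3.0):
    """Photon (shot) noise sqrt(signal) combined with read noise in quadrature.

    signal_e and read_noise_e are in electrons; read_noise_e=3.0 is assumed.
    """
    return math.hypot(math.sqrt(signal_e), read_noise_e)

# Deep shadow: the fixed read noise dominates and sets the DR floor.
print(round(total_noise(4), 1))      # ~3.6 e-, mostly read noise
# Highlights: photon noise dominates, read noise is negligible.
print(round(total_noise(10000), 1))  # ~100.0 e-, essentially sqrt(10000)
```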
Title: Re: Discussion of 'Equivalence'
Post by: Jack Dahlgren on May 22, 2017, 17:52:00
Quote
Dynamic range is generically the logarithm of a ratio of a maximum to a minimum; PDR meets this criterion.

The issue with coopting an accepted term in the same domain is that it leads to confusion. For example, if we use aperture in the generic sense as an opening to refer to the mount size of a particular camera and then called it "photographic aperture", people would get confused. Certainly the measure is interesting and has some effect, but it would just be odd to overload the term in that way.
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 22, 2017, 19:08:46
The issue with coopting an accepted term in the same domain is that it leads to confusion. For example, if we use aperture in the generic sense as an opening to refer to the mount size of a particular camera and then called it "photographic aperture", people would get confused. Certainly the measure is interesting and has some effect, but it would just be odd to overload the term in that way.
I disagree. I consider the term dynamic range as generic (logarithm of a ratio of high to low) and always needing to be put into context.
The careless use of "dynamic range" (unqualified) is what leads to confusion.
On other (unnamed) boards it's common to distinguish between Engineering Dynamic Range (EDR), my Photographic Dynamic Range (PDR), DxOMark Landscape Dynamic Range, etc. to avoid confusion.
FWIW, on spec sheets EDR is often in dB (more common in engineering, hence the name) and PDR (and the like) are in EV (or photographic stops).
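The unit conversion is mechanical (a sketch using the usual 20·log10 convention for a signal ratio; one stop then works out to about 6.02 dB):

```python
import math

def ratio_to_db(ratio):
    """Dynamic range in dB: 20*log10 of the high-to-low signal ratio."""
    return 20 * math.log10(ratio)

def ratio_to_ev(ratio):
    """The same ratio expressed in EV (photographic stops)."""
    return math.log2(ratio)

# A 4096:1 usable range is 12 stops, or about 72 dB:
print(ratio_to_ev(4096))         # 12.0
print(round(ratio_to_db(4096)))  # 72
```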
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 22, 2017, 20:34:53
The careless use of "dynamic range" (unqualified) is what leads to confusion.

Agreed! I am going to try to be more careful myself about using the terms (EDR, PDR, LDR) correctly.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 23, 2017, 10:10:04
The problem is not that things can be "equivalent" despite being only "nearly identical" but that you will not define "nearly identical" otherwise than "what makes the answer come out how I want it to".       

Ok, I'm confused by this terminology. Is 'nearly identical' related to the perceptual thresholds?
I do not have particular answers in mind. What would those answers be? I think that it is not particularly useful if I proclaim a certain value for perceptual threshold because 1) it might not apply to all viewers, and 2) I think that the conceptual framework of equivalence can be equipped with arbitrary perceptual thresholds, depending on application, i.e. it is a free parameter of the model.

If you fit the parameter to each individual comparison, of course the model will produce nonsense. Maybe that's what you are getting at. But if you set the parameter once for all comparisons, depending on your purposes, you can apply the model and get useful predictions.

For example: we all know that 35mm on DX is equivalent  = nearly identical to 50mm on FX.  But 35mm x 1.5 = 52mm (to two significant figures).  And the DX sensor is not 24 x 16mm and the FX sensor is not 36 x 24mm.  DX sensors have been as small as 23.1 x 15.4mm (D3000) but are currently 23.5 x 15.7mm (D500 and D7500; the D7200 is 23.5 x 15.6mm), FX is currently 35.9 x 23.9mm.  So the DX crop factor is not 1.5, it is (currently) 1.53. To two significant figures, 35mm x 1.53 = 54mm.

But on FX at 2m 50mm at f/1.4 has the same DoF as 54mm at f/1.8!  So if a one stop difference is so important when DX is compared to FX, why can you ignore a 2/3 stop difference and say 35mm on DX is equivalent to 50mm on FX? 

Another example: if I use the 105/2.5 for a portrait the DX equivalent is 70mm (105/1.5; 105/1.53 = 69mm).  DoF is the same for FX with 105mm at f/2.8 and 3m as for DX with 75mm at f/2.8 at 2.9m.  By what non-arbitrary criterion are the DX framing and perspective not "nearly identical" to the FX framing and perspective?  Of course, if I am using the 70-200/2.8G VRI its actual focal length at "70mm" and 3m is 75mm, which I would never know if someone had not tested it because its successor had marked focus breathing (http://www.bythom.com/nikkor-70-200-VR-II-lens.htm).  Many lenses have actual focal lengths 3% or so different from the values they are designated by, even at infinity: if you can ignore that, how can you justify saying that 75mm is not nearly identical to 70mm?

I don't follow all your calculations, in particular those about DOF (did you take into account secondary magnification?).
Are you saying that because of generous evaluation, i.e. "70mm and 75mm yield very similar angle of view that I will consider, for my purposes, to be the same", they are indistinguishable? In practice, there are caveats because of the differences between nominal and actual focal length, focus breathing, distortion, etc. But all caveats aside, 75mm and 70mm are not exactly equivalent because it is a difference that can be measured or noticed (there will be things included in the 70mm shot that are not included in the 75mm shot). Again, if you set your thresholds of 'significantly different' to generous levels, you will simply ignore those measurable differences.

-----

I think that the criticism by Bjørn that you responded to was saying something else, but I may still be misunderstanding him.
His statement was that the list of criteria that are enumerated by Joseph James are too restrictive. To me, this means that if two pictures are the same to some precision, then they must have been shot with the same parameters on the same format under the same conditions, i.e. they must be the same shot.

My view is that because of the dimensionless nature of the digitally recorded images, given a model (any model) of the complete imaging chain (from the scene to the final resized image), and given two final images that are the same to arbitrary precision (even more so if perceptual thresholds are present), you cannot be certain that they were shot with the same parameters, i.e. they might have been shot with different parameters. This allows us (in theory) to get the same shot with different formats and parameters adjusted accordingly.

(If you disagree with this, what would be a model where it is always possible to say that if two images look the same, they must have been shot with the same parameters on the same format?)

In practice, there are limits imposed by the availability of gear and by physics (e.g. diffraction, the maximum theoretical numerical aperture, etc.).
To me, it makes an important difference whether something is possible in theory, but sometimes may be hard to carry out precisely in practice, or whether it is impossible even in theory. YMMV.

Perceptual thresholds make it 'easier' to achieve this, not harder. If viewing conditions are very poor, all images look the same. Even under good but not exceptional viewing conditions, we get away with things like focal lengths not being the same by fractions of a mm, the entrance pupil not being at the same location, minor variations in exposures due to the tolerance of the shutter and aperture mechanism, etc.

Note that this is not the definition of equivalence that Joseph James is using, but a generalization thereof. Joseph James uses quite a simple model of imaging (but not too simple to be meaningless) and subdivides the comparison into a list of criteria, which makes his definition easier to use in practice.

An example for how this principle works was shown in the background blur calculation in Reply #184.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 23, 2017, 12:32:13
    I added my views
above in the quote.

Regarding read noise and photon noise; there is a huge difference.

Read noise is a variance in the accuracy of reading the pixel value; it affects shadow performance and establishes (EDR, DxOMark) or influences (PDR) dynamic range.

Photon noise is a property of light that increases with the amount of light; it becomes relevant at mid to high light level and has nothing to do with dynamic range.

Thanks for adding your views. I fully agree.

Thanks also for reminding us of the difference between read noise and photon noise. I will add some notes to my previous posts where I got momentarily confused about the two.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 23, 2017, 20:41:56
Ok, I'm confused by this terminology. Is 'nearly identical' related to the perceptual thresholds?

No.  It is simply pointing out that "nearly identical" means nothing.  It is like the often heard inanity that humans and chimpanzees are nearly identical because they have 98% (or whatever the number is) of their DNA the same.  But 98% is only "nearly identical" to 100% if you already think chimpanzees and humans are "nearly identical", and if you think humans and chimpanzees are quite different 98% is quite different to 100%.

In science, one common meaning of "equivalent" would be "the same within measurement error".  In the case of lens focal length, eg, the measurement error is +/- 3%, so the precision of DoF calculations cannot be greater than that, so DoF within +/- 3% must be "equivalent".  In everyday photography, some measurements have much larger error - subject distance, eg.  You may blithely say "suppose I am 3m away", but that distance is, in practice, rarely measured at all, let alone to +/- 3%.  And measurement errors accumulate: if the focal length is measured to +/- 3%, and the distance is measured to +/- 3%, the DoF is accurate to +/- 18% - because DoF is proportional to the square of distance and inversely proportional to the square of focal length. (This is leaving aside the issue of significant figures, and the indefensible practice of calculating DoF to four or five significant figures when the CoC has one).

So, is your definition of "equivalent", "within measurement error"?  If so, you will know what all the errors are, so you can tell us. 

Another common meaning of "equivalent" would be that the difference is less than is practically important.  My professional expertise is in medicine, so I will use a medical example.  In the pre-antibiotic era, 3% of previously healthy people with pneumococcal pneumonia died.  With penicillin, none die.  So you have to treat 100/3 = 33 previously healthy people with pneumonia with penicillin to prevent one death.  This is called the "number needed to treat", or NNT.  Everyone can set the NNT they care about for themselves, but most ordinary people regard numbers over a few hundred as not worth bothering about, even for serious outcomes.  So if you have to give 500 people treatment A compared to treatment B to prevent one day off work, the two treatments are "equivalent" (there are plenty of commonly used treatments with NNT to prevent one death in the thousands - 5000 for taking a statin if you are a woman under 50 with no known heart disease, eg).  Thresholds of worthwhileness come in here, in photography as they do in medicine - the difference being that you can't get a D5 on the NHS, like you can a statin, so there is no need for a public consensus. 

So, what is your definition of a meaningful difference, and, if you want everyone else to accept it, where did it come from?  (If it is for your personal use only, you can choose whatever you like - but it would help if you said what it was).  I  am not saying that 70mm and 75mm are not different: I am asking why, if 50 and 54mm are not different, how come 70mm and 75mm are? 
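Taking the DoF ∝ distance²/focal_length² proportionality quoted above at face value, the compounding of independent ±3% errors can be sketched as follows (a worst-case estimate; the exact total depends on which error model one assumes):

```python
# Worst-case propagation of measurement errors through DoF ∝ d**2 / f**2,
# with aperture and CoC held fixed (a simplification of the argument above).
def dof_proportional(d, f):
    return d ** 2 / f ** 2

nominal = dof_proportional(3.0, 0.075)              # 3 m subject, 75 mm lens
worst = dof_proportional(3.0 * 1.03, 0.075 * 0.97)  # both errors unfavourable
print(f"{worst / nominal - 1:+.1%}")                # about +13% in this worst case
```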

Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 23, 2017, 23:21:54
No.  It is simply pointing out that "nearly identical" means nothing.  It is like the often heard inanity that humans and chimpanzees are nearly identical because they have 98% (or whatever the number is) of their DNA the same.  But 98% is only "nearly identical" to 100% if you already think chimpanzees and humans are "nearly identical", and if you think humans and chimpanzees are quite different 98% is quite different to 100%.

In science, one common meaning of "equivalent" would be "the same within measurement error".  In the case of lens focal length, eg, the measurement error is +/- 3%, so the precision of DoF calculations cannot be greater than that, so DoF within +/- 3% must be "equivalent".  In everyday photography, some measurements have much larger error - subject distance, eg.  You may blithely say "suppose I am 3m away", but that distance is, in practice, rarely measured at all, let alone to +/- 3%.  And measurement errors accumulate: if the focal length is measured to +/- 3%, and the distance is measured to +/- 3%, the DoF is accurate to +/- 18% - because DoF is proportional to the square of distance and inversely proportional to the square of focal length. (This is leaving aside the issue of significant figures, and the indefensible practice of calculating DoF to four or five significant figures when the CoC has one).

So, is your definition of "equivalent", "within measurement error"?  If so, you will know what all the errors are, so you can tell us. 

Another common meaning of "equivalent" would be that the difference is less than is practically important.  My professional expertise is in medicine, so I will use a medical example.  In the pre-antibiotic era, 3% of previously healthy people with pneumococcal pneumonia died.  With penicillin, none die.  So you have to treat 100/3 = 33 previously healthy people with pneumonia with penicillin to prevent one death.  This is called the "number needed to treat", or NNT.  Everyone can set the NNT they care about for themselves, but most ordinary people regard numbers over a few hundred as not worth bothering about, even for serious outcomes.  So if you have to give 500 people treatment A compared to treatment B to prevent one day off work, the two treatments are "equivalent" (there are plenty of commonly used treatments with NNT to prevent one death in the thousands - 5000 for taking a statin if you are a woman under 50 with no known heart disease, eg).  Thresholds of worthwhileness come in here, in photography as they do in medicine - the difference being that you can't get a D5 on the NHS, like you can a statin, so there is no need for a public consensus. 

So, what is your definition of a meaningful difference, and, if you want everyone else to accept it, where did it come from?  (If it is for your personal use only, you can choose whatever you like - but it would help if you said what it was).  I  am not saying that 70mm and 75mm are not different: I am asking why, if 50 and 54mm are not different, how come 70mm and 75mm are? 

I think for the uncertainties in measurement similar ideas apply as for the uncertainty in the comparison of images. Both just increase the space of parameters that is equivalent to a certain parameter.

To give you an example, I will consider again the calculation of blur circles that I showed earlier. We saw that equivalent blur circles imply that the ratio of focal length to aperture number (also known as the absolute aperture) is the same (where it is understood that the format is appropriate to achieve the same angle of view as the focal length changes -- this is important, I hope that this does not cause too much confusion, otherwise go back to Reply #184 to read the full example).

I plotted the focal length and aperture number that will produce the same size of the blur circle as 24mm and f/5.6 on DX.

In the upper left, it is assumed that the focal length and aperture number on DX are known exactly (i.e. f=24.00000000...) and that blur circle diameter can be measured to arbitrary precision. Unsurprisingly, the equivalent parameters lie on a line. Anything that is ever so slightly beside the line would not be equivalent. This definition of 'equivalent' would not be very practical, but note that even in this case, there are equivalent parameters that are different from f=24mm, N=5.6!

In the remaining three plots, I introduce uncertainties for the focal length and aperture number and/or the blur circles. The parameters that are equivalent now occupy an entire wedge of the space, so hitting the wedge would be easier. I took 3% measurement errors as an example. The wedge would get wider as the error increases.
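The 'wedge' can be expressed as a simple membership test (a sketch, assuming a single fractional tolerance on the absolute aperture f/N; the function name and 3% default are illustrative):

```python
# Sketch of the 'wedge': parameters equivalent to 24mm f/5.6 on DX share
# the same absolute aperture f/N; a tolerance turns the exact line into a
# band (wedge) of acceptable (f, N) pairs.
def is_equivalent(f_mm, n, ref_f=24.0, ref_n=5.6, tol=0.03):
    """True if the absolute aperture f/N matches the reference within tol."""
    ref = ref_f / ref_n
    return abs(f_mm / n - ref) <= tol * ref

print(is_equivalent(24.0, 5.6))  # True: the reference itself
print(is_equivalent(36.0, 8.4))  # True: same f/N on a 1.5x larger format
print(is_equivalent(36.0, 5.6))  # False: a larger absolute aperture
```

Widening `tol` widens the wedge, exactly as in the plots described above.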
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 24, 2017, 03:50:19
Could some less confusing terms be used rather than EDR and PDR? It seems to me it would be less confusing to reserve "dynamic range" for the image sensor's performance. Obviously cropping half a frame out of a D810's image doesn't change the D810's performance, but enlarging that half frame to various common sizes may cause a practical difference in image quality.

For old timers like myself a Tri-X negative's characteristics don't change if you crop half a frame out of a full frame but if you print that half frame to 8x10 or worse 11x14 the image quality really takes a dive. Fortunately cropping half a frame out of a 36MP full frame isn't so costly nor is shooting with today's DX cameras.

I fear to suggest a different term for PDR but perhaps there should be one, one that won't confuse a fool devil's advocate like me.

Dave Hartman
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 24, 2017, 10:53:16
I think for the uncertainties in measurement similar ideas apply as for the uncertainty in the comparison of images. Both just increase the space of parameters that is equivalent to a certain parameter.

Yes, but the question to be asked in relation to "equivalence" is, what is the relationship between the uncertainty in the size of the blur circles caused by measurement error, and the uncertainty in DoF when you add subject distance with its measurement error, to the difference between DX and FX calculated, as it always is, on the assumption of no measurement error?  My rough calculation suggests the answer is that the difference between DX and FX is within the uncertainty caused by measurement error, taking +/- 3% as a plausible value for each of the errors.  Subject distance is the key measurement, because an error of +/- 3% in a subject distance of 3m would be plausible for an averagely careful person using an ordinary domestic tape measure with the camera on a tripod, but if the distance is guesstimated, or the camera is hand-held, the error would be much bigger. 
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 24, 2017, 13:44:48
Yes, but the question to be asked in relation to "equivalence" is, what is the relationship between the uncertainty in the size of the blur circles caused by measurement error, and the uncertainty in DoF when you add subject distance with its measurement error, to the difference between DX and FX calculated, as it always is, on the assumption of no measurement error?  My rough calculation suggests the answer is that the difference between DX and FX is within the uncertainty caused by measurement error, taking +/- 3% as a plausible value for each of the errors.  Subject distance is the key measurement, because an error of +/- 3% in a subject distance of 3m would be plausible for an averagely careful person using an ordinary domestic tape measure with the camera on a tripod, but if the distance is guesstimated, or the camera is hand-held, the error would be much bigger.

Just to clarify: my calculation in Reply #184 and my plots in Reply #225 are for blur circles of point light sources at infinity. They do not apply for DOF calculations.

My intuition is that it is possible that the difference in DOF between DX and FX is masked by uncertainties when subject distance is not carefully controlled, if the errors are propagated in a very nonlinear way. I would be interested to see your calculation.

Right now I don't fully understand where you want to go with this, but I will follow you to see where we end up.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 25, 2017, 10:16:54

Right now I don't fully understand where you want to go with this, but I will follow you to see where we end up.

Another way to put the point is to imagine a group of photographers with DX cameras, and an equally skilled group with FX cameras. They set out, each on her own, to make photographs of a range of subjects - portraits, buildings, landscapes, nature, etc - almost like real life.  The aim is to have the same framing and perspective on DX as for the corresponding image on FX.  Aperture is up to each photographer.  Because measurement error is inescapable, they rarely succeed in achieving exactly the same framing and perspective for the corresponding DX and FX images.  Because both groups are equally skilled,  the images are equally good, just slightly different.  Because the difference in DoF between FX and DX for identical framing, perspective and aperture is less than the variability of DoF arising from measurement error, some of the DX images have less DoF than some of the corresponding FX images, some have more, and some have the same.   

Of course, if there were a person whose sins were so grievous that their penance was to examine all the images, and the photographers were trying hard to achieve the same framing and perspective, the penitent sinner would, over time, see more FX images with shallower DoF than the corresponding DX image than vice versa.  How many more will depend on two factors.  One is the size of the measurement errors - and as anyone knows who has done anything depending on accurate measurement, whether it is scientific research or carpentry, measurement errors are more common and larger than most people expect. 

The other, far more important, factor, is the nature of the images.  DX with 16mm at f/8 and FX with 24mm at f/8 both have infinite DoF, and near limits of a couple of meters, so the penitent sinner will see no difference for the landscapes or the buildings.  And there will be no discernible difference for full length portraits using a plain backdrop.  Or - although here I am straying beyond the narrow issue of equivalence and into the realm of - gasp - photography - for portraits where the photographer wants the background in focus, as in Annie Leibovitz' brilliant portraits of Queen Elizabeth, where she is referencing the royal portraits by Gainsborough and Thomas Lawrence rather than blindly re-using photographic cliches like shallow DoF.  Or where the photographer, noting that the finest royal portrait of the past 150 years (Lucian Freud's 2001 portrait of Queen Elizabeth) is referencing photographic style - it is the size of a photograph and uses a tight, photographic crop - decides to get a bit closer and/or crop a bit tighter and not have a background at all. 

Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 25, 2017, 13:54:58
Another way to put the point is to imagine a group of photographers with DX cameras, and an equally skilled group with FX cameras. They set out, each on her own, to make photographs of a range of subjects - portraits, buildings, landscapes, nature, etc - almost like real life.  The aim is to have the same framing and perspective on DX as for the corresponding image on FX.  Aperture is up to each photographer.  Because measurement error is inescapable, they rarely succeed in achieving exactly the same framing and perspective for the corresponding DX and FX images.  Because both groups are equally skilled,  the images are equally good, just slightly different.  Because the difference in DoF between FX and DX for identical framing, perspective and aperture is less than the variability of DoF arising from measurement error, some of the DX images have less DoF than some of the corresponding FX images, some have more, and some have the same.   

Well, whether the two groups will get different images will depend on (among other things) what lenses they carry and at what apertures they use them. But what is the point of this? You are introducing a whole bunch of external variables in order to make a point about how the concept of equivalence does not matter. I understand your argument, but I don't think it is a criticism of equivalence. Equivalence is a concept that is based on optics; to see optical effects clearly it is best to work in a controlled environment. First of all, it is about how to set the parameters to get pictures that look the same on different formats, and as a corollary, how to set the parameters if you want them to differ in certain ways. It is not (as you seem to imply with your example) about some claim of FX getting systematically lower DOF when you look at a random sample of photographs*. And if you don't look closely or introduce a bunch of uncertainties and other variables, you might get pictures that look more or less the same despite not setting the parameters exactly right.

*Especially if the aperture is chosen randomly as well. What does random mean in this context?
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 25, 2017, 19:03:12
Well, whether the two groups will get different images will depend on (among other things) what lenses they carry and at what apertures they use them. But what is the point of this? You are introducing a whole bunch of external variables in order to make a point about how the concept of equivalence does not matter. I understand your argument, but I don't think it is a criticism of equivalence. Equivalence is a concept that is based on optics; to see optical effects clearly it is best to work in a controlled environment. First of all, it is about how to set the parameters to get pictures that look the same on different formats, and as a corollary, how to set the parameters if you want them to differ in certain ways. It is not (as you seem to imply with your example) about some claim of FX getting systematically lower DOF when you look at a random sample of photographs*. And if you don't look closely or introduce a bunch of uncertainties and other variables, you might get pictures that look more or less the same despite not setting the parameters exactly right.

*Especially if the aperture is chosen randomly as well. What does random mean in this context?

You have not made the claim that FX is about systematically less DoF (for the same perspective and framing), but that claim is made: "Larger format, longer lenses for the same perspective and framing, more background blurring" (http://nikongear.net/revival/index.php/topic,5942.15.html).  And right off the bat Joseph James says "This essay is about relating different systems [...] Equivalence relates the visual properties of photos from different formats [...] the advantage of a larger sensor system over a smaller sensor system is that the larger sensor system will generally have lenses that have wider aperture (entrance pupil) diameters for a given AOV (diagonal angle of view) than smaller sensor systems, which allows for more shallow DOFs". 

I did not propose comparing a random sample of photographs, I proposed comparing a random sample of photographs that were intended to be equivalent. 

If it is true that equivalence is intended to tell people "how to set the parameters to get pictures that look the same on different formats, and as a corollary, how to set the parameters if you want them to differ in certain ways", it is an abject failure.  The reason is simple: I learn nothing from being told how to change the parameters to take the same photographs, or photographs with some desired difference, on different systems, if I do not already know the parameter settings on at least one system.  What is helpful is to say "DoF is directly proportional to F-number (double the F number = double the DoF); DoF is directly proportional to the square of subject distance (double the subject distance = four times the DoF); DoF is inversely proportional to the square of focal length (double the focal length = one quarter the DoF)". 
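These proportionalities follow from the standard thin-lens depth-of-field approximation, DoF ≈ 2·N·c·s²/f² (valid when the subject distance is well short of the hyperfocal distance), with N the F-number, c the circle of confusion, s the subject distance and f the focal length. A quick sketch checking all three rules; the numeric values are illustrative only, not taken from any post here:

```python
# Approximate depth of field for subject distances well short of the
# hyperfocal distance: DoF ~ 2 * N * c * s^2 / f^2
# N = F-number, c = circle of confusion (mm), s = subject distance (mm),
# f = focal length (mm). All values below are illustrative.

def dof_mm(N, c, s, f):
    return 2 * N * c * s**2 / f**2

base = dof_mm(N=2.8, c=0.03, s=2000, f=50)  # reference setup

# double the F-number -> double the DoF
assert abs(dof_mm(5.6, 0.03, 2000, 50) / base - 2) < 1e-9
# double the subject distance -> four times the DoF
assert abs(dof_mm(2.8, 0.03, 4000, 50) / base - 4) < 1e-9
# double the focal length -> one quarter the DoF
assert abs(dof_mm(2.8, 0.03, 2000, 100) / base - 0.25) < 1e-9
```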

And "setting the parameters exactly right" takes us back to circularity: the only reason to do that is to make it possible to make statements about "equivalence".
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 25, 2017, 20:52:22
You have not made the claim that FX is about systematically less DoF (for the same perspective and framing), but that claim is made: "Larger format, longer lenses for the same perspective and framing, more background blurring" (http://nikongear.net/revival/index.php/topic,5942.15.html).  And right off the bat Joseph James says "This essay is about relating different systems [...] Equivalence relates the visual properties of photos from different formats [...] the advantage of a larger sensor system over a smaller sensor system is that the larger sensor system will generally have lenses that have wider aperture (entrance pupil) diameters for a given AOV (diagonal angle of view) than smaller sensor systems, which allows for more shallow DOFs". 

I do not want to convince anyone that they should think in a certain way or adopt a certain conceptual framework. But I believe that when criticising something, one should at least not take things out of their proper context.

The reality is that different formats are more or less well developed in terms of lens lineup. There is a scarcity of fast primes for DX. To give an example, I have a 50/1.2 from Nikon. This lens is fairly special because of the fast aperture and rendering wide open. I do not find an equivalent lens for DX, which would have to be a 35/0.8 or thereabouts. If I conclude that FX is superior (for me) to DX because there is no 35/0.8 lens, it is not a conclusion that can be deduced from equivalence. The latter only told me that I have to look for a 35/0.8. The closest thing I can get is one of various 35/1.4 lenses. But despite all the measurement errors that occur in casual photography, I will still not be convinced that the 50/1.2 on FX and a 35/1.4 on DX give the same rendering wide open (not only because of DoF, but this is the easiest aspect to compare). In a parallel universe, it could be different, with hundreds of native lenses for DX and almost none for FX.
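For anyone wondering where the "35/0.8 or thereabouts" comes from: it is just the FX lens divided by the DX crop factor of 1.5, which preserves both the diagonal angle of view and the entrance-pupil diameter. A minimal sketch (the function name is illustrative, not from any standard library):

```python
# Map an FX lens to its DX 'equivalent' (same diagonal AOV and same
# entrance-pupil diameter) by dividing both the focal length and the
# F-number by the crop factor. 1.5 is Nikon's DX crop factor.

CROP = 1.5

def dx_equivalent(focal_fx, fnum_fx):
    return focal_fx / CROP, fnum_fx / CROP

focal, fnum = dx_equivalent(50, 1.2)
print(f"{focal:.1f}/{fnum:.1f}")  # 33.3/0.8 -- i.e. 'a 35/0.8 or thereabouts'

# Sanity check: the entrance pupil (focal / F-number) is unchanged.
assert abs(focal / fnum - 50 / 1.2) < 1e-9
```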

Now some people are telling me that I should just accept each format for what it is and not try to do the same things on both formats. Well, fine, but that doesn't change the fact that certain lenses don't have equivalents in other formats, and if I decide that I want to do certain things with my photography, that rules out certain formats for me. To draw that conclusion, I either need to do a lot of experiments, or have some knowledge of optics (the relevant parts of which are neatly summarized in equivalence). By the way, short of owning every possible format there is, I sometimes have to do stuff on a format which is not ideal, and knowledge of optics helps in that situation as well.

The statement by Dave is in a similar vein. The problem that I see is that we often see statements like these without all the proper qualifications, but this is part of informal language/conversations. It can be confusing for people starting out in photography. But if you have to make a decision whether to purchase an FX or DX camera, you have to take lens selection into consideration, and a big part of that is maximum aperture.

If it is true that equivalence is intended to tell people "how to set the parameters to get pictures that look the same on different formats, and as a corollary, how to set the parameters if you want them to differ in certain ways", it is an abject failure.  The reason is simple: I learn nothing from being told how to change the parameters to take the same photographs, or photographs with some desired difference, on different systems, if I do not already know the parameter settings on at least one system.
Well, if you have a general formula for taking a photograph, you might as well program a computer to go out and take pictures.
The reason you give for this supposed failure has nothing to do with that intended purpose.

What is helpful is to say "DoF is directly proportional to F-number (double the F number = double the DoF); DoF is directly proportional to the square of subject distance (double the subject distance = four times the DoF); DoF is inversely proportional to the square of focal length (double the focal length = one quarter the DoF)". 
That is exactly what equivalence is doing. However, if you open a book (or the Wikipedia article) on DoF, you will see that there are a dozen ways to rearrange the formulae depending on what is fixed and what is variable. The average person who takes up photography is not very comfortable manipulating formulae like this. In addition, DoF is confusing and trips up many people, even experienced ones. Equivalence makes a certain decision about what is to be held fixed when going from one format to the other, and gives the optical relationships that remain to be specified.

Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 26, 2017, 00:44:40
DoF is based on the limitations of human vision and the expectations of the viewer. If all the factors are accounted for it should be possible to calculate DoF nicely for one viewer. For 2? For 3? For many? The odds can go down quickly as DoF isn't a hard fact.

If graphs and charts and articles don't help with understanding background blurring, practical experience with 35mm, 6x6 and larger formats should give the photographer an intuitive feeling of what to expect. No experience with film? Then CX, DX and FX or similar formats should do it.

If one has a fast zoom try a little systematic observation. Choose a perspective then zoom to frame. While zooming observe the background blurring. A clipboard and pen will make people watching think one really knows what they are doing. Paper on the clipboard isn't required.

If perspective is the concern, a distance of say 1.9m to 2.1m should be close enough if 2.0m was desired. The distance need not be measured. One can see perspective with their eyes.

Instead of using a prime and "zoom with your feet" one might try setting the perspective with their feet and then zooming to frame the subject. The exact focal length when focused and zoomed isn't important, only that the subject is framed as one likes.

One can select a point of view without lifting the camera to their face. I'm sure I look like a fool bobbing around, but it's much easier to select one's point of view by eye than to hold a monorail view camera to my face. I'm sure my Kardan Color 45S taught me to select my point of view by eye.

I posted charts and graphs hoping they might be helpful. I'm not sure if they were or not.

Dave Hartman

Caution: using a zoom with a variable maximum aperture may cause vertigo. I read it in a magazine.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 26, 2017, 14:54:17

To give an example, I have a 50/1.2 from Nikon. This lens is fairly special because of the fast aperture and rendering wide open. I do not find an equivalent lens for DX, which would have to be a 35/0.8 or thereabouts. If I conclude that FX is superior (for me) to DX because there is no 35/0.8 lens, it is not a conclusion that can be deduced from equivalence. The latter only told me that I have to look for a 35/0.8.


Then it misled you, because you do not have to look for a 35/0.8: you can use the 50/1.2 perfectly well on DX, and the properties you value - rendering and large aperture - will be the same; other properties will be better - vignetting, eg.  Of course, you will have to adapt a little: you can keep the same perspective and DoF and have a tighter crop, or you can have the same framing and move a little further away and therefore have a few centimetres more DoF.  Why is that a problem?  You could even innovate, and use the lens differently. 

Of course, if, for reasons of your own, you particularly like the framing, perspective etc you get with your current subject on FX with a 50mm lens at f/1.2, then you will want FX. As I said in the other thread, you can get any framing, perspective, DoF etc you want with DX, unless your requirements are highly specialised.  An example of highly specialised requirements is wanting exactly what you get on FX with 50mm at f/1.2.  That is fine, and that is a reason you, individually, need FX.  But the fact that you are perfectly happy using the 50/1.2 on FX is also a reason buying a DX camera would never cross your mind: so far from showing why "equivalence" is important, it shows why it is pointless.   

Of course lens selection plays a role in deciding whether DX or FX is a better fit.  If you use wide angle focal lengths at large apertures a lot then you are a natural FX user (people complain because Nikon has not provided wide DX primes, but because DX and FX have the same registration distance, the complexity and size of wide lenses scale closer to actual focal length than to equivalent focal length, so a 14mm DX prime would be very hard to make competitive with the 20/1.8 FX).  Conversely, if you use 500mm a lot you are a natural DX user.   
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 26, 2017, 15:25:29
Then it misled you, because you do not have to look for a 35/0.8: you can use the 50/1.2 perfectly well on DX, and the properties you value - rendering and large aperture - will be the same; other properties will be better - vignetting, eg.  Of course, you will have to adapt a little: you can keep the same perspective and DoF and have a tighter crop, or you can have the same framing and move a little further away and therefore have a few centimetres more DoF.  Why is that a problem?  You could even innovate, and use the lens differently.

The character of the 50/1.2 changes completely when mounted on FX vs. DX. I'm not saying it is not useful on DX; I would even say it is very useful, but for completely different purposes. Moving a few steps backwards does not do much to bridge that vast difference. Cropping a significant chunk from the image circle has profound implications for rendering on temperamental lenses like the 50/1.2. It does not do much on technically perfect, more clinical lenses. Moreover, the difference between the AOV of a normal lens and a short tele is also not small by any means.

buying a DX camera would never cross your mind
This is certainly false. I have owned DX cameras and I will consider them again in the future. I have used the 50/1.2 on DX and it was marvellous, much less temperamental, I wish there were something equivalent for FX 8).

so far from showing why "equivalence" is important, it shows why it is pointless.   

I'm afraid that I cannot give better explanations; we just have to agree to disagree.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 26, 2017, 15:30:15
DoF is based on the limitations of human vision and the expectations of the viewer. If all the factors are accounted for it should be possible to calculate DoF nicely for one viewer. For 2? For 3? For many? The odds can go down quickly as DoF isn't a hard fact.

The perceived absolute amount of DoF is maybe dependent on the viewer. But different viewers would probably agree about changes in DoF, or about which of two images has more or less DoF. For background blur it's even easier to notice changes. Do you agree?
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 26, 2017, 17:23:54
Here is a legitimate question for which the optics of Equivalence can be useful.

I have the Nikkor 200-500 f/5.6E ED VR lens for taking photos of birds.
(Disclaimer: I am not a "bird photographer". I simply like to make a little field record of some of the birds I see when out walking or hiking. :-) )

What are the trade-offs between using this zoomer on my 36MP D810 versus my 24MP D500 versus my 24MP D750??

Leave aside for the moment, the question of which camera body can provide better tracking, auto-focus or frames/second. As it turns out, I've actually successfully used the 200-500/5.6 VR on all 3 cameras for bird shooting. Which is not to say that I don't have my preferences. I do. But I want to better understand what I might be losing/gaining amongst these three camera choices.

For now, I'll leave this as an exercise for the interested reader. (My math books used to use that phrase!) Meanwhile, I'll try to work out an answer and post it later.

Edit:  I had to remove a double post.
Title: Re: Discussion of 'Equivalence'
Post by: JohnMM on May 26, 2017, 17:25:05
Of course, if, for reasons of your own, you particularly like the framing, perspective etc you get with your current subject on FX with a 50mm lens at f/1.2, then you will want FX.

I like the "look" but I can't afford FX. I want to know if I can get the same "look" with DX. How do I find out?
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 26, 2017, 19:41:21
I like the "look" but I can't afford FX. I want to know if I can get the same "look" with DX. How do I find out?

A new D610 costs $1500.  B&H has a D610 in near new condition for $1000, exactly $100 more than a new D7200 at its currently discounted price.  A 50mm f/1.2 is $700 and a 50mm f/1.4D is $300.  You can afford the extra shaving of lens speed but not the FX body?  Well, OK, if you say so. 

However, if you want to know if you can get "the look" on DX I recommend a website (http://www.naturfotograf.com/lens_norm.html), where it says: "There is an endearing slight softness (bokeh) when the lens is deployed on a D1/D2-series camera and shot wide open, but the image even at f/1.2 has plenty of detail. Stopped down in the range f/2.8-f/5.6, image contrast is enhanced, sharpness is very good to excellent, and veiling flare has gone entirely. Quality deteriorates rapidly as expected with the lens stopped down beyond f/8. [...] Image contrast even at f/1.2 is higher on the D3, so pictures come across crisper and appearing sharper with this camera. Focusing the lens on a D3 was easy." 

The 50/1.2 is unique.  There is no 24/1.2.  There is no 85/1.2.  It will not be the same lens on FX as on DX: on FX it is a "normal" lens - ie, focal length is close to the image diagonal - and on DX it is a - loose portrait lens?  Personally, I would have more use for a loose portrait lens with f/1.2 than a normal lens with f/1.2, but to each his own.  I wouldn't pay $400 to use an f/1.2 lens for the contrast, so I think the answer is "Yes if you buy the lens".  However, you are free to disagree - in which case you had better grab that D610 while you can.  Either way, that was easier than reading God knows how many pages of bumph about "equivalence", wasn't it? 

Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 26, 2017, 21:26:13
The perceived absolute amount of DoF is maybe dependent on the viewer. But different viewers would probably agree about changes in DoF, or about which of two images has more or less DoF. For background blur it's even easier to notice changes. Do you agree?

Yes I agree.

I have (or had) 20/80 to 20/50 vision uncorrected; my ophthalmologist gets me to 20/15 corrected. I think 20/20 is pretty standard. 

Dave
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 26, 2017, 21:30:49
I like the "look" but I can't afford FX. I want to know if I can get the same "look" with DX. How do I find out?

To get on DX a look similar to a 105/2.5 on FX you'd need a 70~75/1.4. If you'd be satisfied with a rangefinder camera you are in luck. Leica makes a 75/1.4 ASP lens. :)

Dave
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 26, 2017, 21:53:08
What are the trade-offs between using this zoomer on my 36MP D810 versus my 24MP D500 versus my 24MP D750??

I would not lose sleep over using the 200-500/5.6 on a D500. If you do, you'll be using it because you have the D500 with you or because you need the reach you get with the combination.

[The D750 has an AA filter, so I would generally prefer the D810. The D810 is also a larger camera, so it would balance better with the 200-500/5.6. A grip would be useful for both the D500 and D810.]

Dave Hartman

Does a 300/4.5 have more reach on 35mm than a 300/9.0 on 5x7? Why are some so sensitive to the use of the word "reach" with telephotos and DX v. FX?

I was short on time and added a little more above.
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 26, 2017, 22:07:20
What are the trade-offs between using this zoomer on my 36MP D810 versus my 24MP D500 versus my 24MP D750??
Not speaking to equivalence but assuming you are concerned with reach and would effectively be using the D810 and D750 in DX Crop mode it would appear the D750 would be a better choice than the D810.
The D750 in DX Crop would be very much like the D500 so I think that might come down to whether you would use the D750 without DX Crop mode for flexibility in later cropping.
Chart (http://www.photonstophotos.net/Charts/PDR.htm#Nikon%20D500,Nikon%20D750(DX),Nikon%20D810(DX)):
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 26, 2017, 22:34:45
An hour ago I recalled that when making large prints, Tri-X Pan (not Pro) negatives from 35mm or 6x6/6x7 needed higher-contrast printing paper for the same look as prints from 8x10. As I recall this was mentioned in Ilford literature.

Dave
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 27, 2017, 08:45:35
An hour ago I recalled that when making large prints, Tri-X Pan (not Pro) negatives from 35mm or 6x6/6x7 needed higher-contrast printing paper for the same look as prints from 8x10.

That is because very large prints are intended for longer viewing distances, and higher contrast enhances the impression of sharpness at longer viewing distances.  The 35mm or medium format images had less detail than the 8 x 10 so they needed more contrast to compensate.  Here is an example (from Allen & Triantaphillidou, Manual of Photography): the right hand image has more fine detail but lower contrast.  From normal reading distance the right hand image looks sharper, but from 2m the left hand image looks sharper.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 27, 2017, 09:01:26

I have (or had) 20/80 to 20/50 vision uncorrected; my ophthalmologist gets me to 20/15 corrected. I think 20/20 is pretty standard. 

20/20 means you can see detail at 20 feet that a person with normal visual acuity can also see at 20 feet; 20/80 means they can see detail at 80 feet that you can only see at 20 feet and 20/15 means you can see at 20 feet what they have to be at 15 feet to see.  Outside the US 20/20 is called 6/6. 
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 27, 2017, 11:31:03

Does a 300/4.5 have more reach on 35mm than a 300/9.0 on 5x7? Why are some so sensitive to the use of word "reach" with telephotos and DX v. FX.

Because it is misleading.  A 300mm on 35mm does not have more anything than a 300mm on 5 x 7.  It is a 300mm, full stop.  If you use a 300mm on 5 x 7 then take a 35mm-sized piece out of the middle and print both the same size, did the lens mysteriously acquire more reach?  Then how does it make sense to say that 300mm has more reach just because the 35mm-sized piece of film is in a different camera?

Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 27, 2017, 11:32:57
Even looking at the larger print at the normal viewing distance of the smaller one, the detail looked a bit flat. The print sizes were 8x10, 11x14 and 16x20.

I'm not doubting that extra contrast makes a larger print at a greater viewing distance look sharper. I was once at a print shop where they were printing strips for a highway billboard. They aren't really photographs. They are enhanced significantly, as they must be to appear sharp at a distance. I'd guess the original was airbrushed. These strips of image really looked garish at one's feet. A billboard is an extreme example.

Dave
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 27, 2017, 11:49:35
I don't find the word "reach" misleading. A 300mm lens is a different lens on different formats. On DX (half frame) a 300mm lens is a super telephoto. On FX (full frame) a 300mm is less than "super." I read it in a magazine. :)

Dave
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 27, 2017, 12:25:40
Here is a legitimate question for which the optics of Equivalence can be useful.

What are the trade-offs between using this zoomer on my 36MP D810 versus my 24MP D500 versus my 24MP D750??

The way to think about it is the number of pixels on the subject.  If your subject just fills the frame with 500mm on the D500 you get 20MP on the subject, while with a D810 you get 15.4MP and with a D750 you get 10.3MP.   But if the subject just fills the frame with 200mm on the D500, equivalence says that 300mm on the D810 is the same, and that is wrong, because that 300mm on the D810 gives you 36MP on the subject instead of 20MP.   

The trade-off with using the 200-500 on the D500 vs the D810 is that you get more pixels on the subject with the D500 when you are using the longer end, but fewer pixels on the subject when you are using the shorter end.  The cross-over is between 300 and 350mm. 

Of course, the question is what you would do with all those pixels.  A 13 x 19 print at 300 dpi is about 10MP and at 360 dpi is 11.5MP, so the D750 limits size and/or output resolution at the long end.  One thing you could do with them, which equivalence forgot to mention, is adjust DoF by printing larger and/or at higher resolution.  The D810 and the D500 give you spare pixels to do that at both ends, but the D750 does not give you a lot of spare pixels at the longer end unless you are printing small.   
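The pixels-on-the-subject figures quoted above can be reproduced from the published image dimensions of each body. A quick sketch for the 500mm case, where the subject exactly fills the D500's frame: the FX bodies then capture the subject on a DX-sized patch of their sensors, so pixels-on-subject equals their DX-crop pixel counts (small rounding differences against the figures in the post are just that):

```python
# When the subject exactly fills the DX frame at 500mm on the D500,
# the FX bodies record the same subject on a DX-sized patch of the
# sensor, so pixels-on-subject = the DX-crop image dimensions.
# Dimensions below are the published image sizes for each body.

bodies = {
    "D500 (full DX frame)": (5568, 3712),
    "D810 (DX crop)":       (4800, 3200),
    "D750 (DX crop)":       (3936, 2624),
}

for name, (w, h) in bodies.items():
    print(f"{name}: {w * h / 1e6:.1f} MP on the subject")
# D500 (full DX frame): 20.7 MP on the subject
# D810 (DX crop):       15.4 MP on the subject
# D750 (DX crop):       10.3 MP on the subject
```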
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 28, 2017, 08:37:40
Small correction:  At a fixed distance from a subject, Equivalence for lenses says that 200mm on the D500 and 300mm on the D810 both give the same diagonal angle of view (framing) and same perspective. Equivalence for lenses does not say the two settings are "the same" or that two photos resulting from those settings are "the same".

angle of view = 2*arctan[28.8/(2*200)] = 8.24°
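Running the same formula for both sides of the comparison confirms the match, taking 43.3mm as the FX sensor diagonal and 28.8mm = 43.3/1.5 as the DX-equivalent diagonal used above (this cross-check is illustrative, not from the original post):

```python
import math

# Diagonal angle of view: 2 * arctan(diagonal / (2 * focal length)).
# 28.8mm is the FX diagonal (43.3mm) divided by the 1.5 DX crop factor.

def aov_deg(diag_mm, focal_mm):
    return math.degrees(2 * math.atan(diag_mm / (2 * focal_mm)))

print(f"D500, 200mm: {aov_deg(28.8, 200):.2f} deg")  # 8.24
print(f"D810, 300mm: {aov_deg(43.3, 300):.2f} deg")  # 8.26
```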
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 28, 2017, 14:09:26
Small correction:  At a fixed distance from a subject, Equivalence says that 200mm on the D500 and 300mm on the D810 both give the same diagonal angle of view (framing) and same perspective. Equivalence does not say the two settings are "the same" or that two photos resulting from those settings are "the same".

Well, actually, "Equivalent photos, as opposed to 'equal' photos, are photos that have the same perspective, framing, DOF, shutter speed, brightness, and display dimensions" (http://www.josephjamesphotography.com/equivalence/), which is a good deal more than framing and perspective.  Of course, lots of things are not on the list, and in some cases - aesthetic considerations, eg - that is obviously appropriate.  The point is that output resolution is also not on the list, and the issue is whether that is justifiable. 

One way such things can be justified is as a simplifying assumption - ignoring friction in physics, eg.  Ignoring output resolution is not a simplifying assumption in the way that assuming equal output size is, or standardising output resolution would be.  Output resolution must be ignored if display dimensions must be the same: it is impossible for a 36MP image and a 16MP image to have the same display dimensions and the same display resolution without extensive re-working.

Even standardising output resolution as a simplifying assumption would still need justifying: some simplifying assumptions can be justified - ignoring friction in physics, eg, but some cannot - ignoring informational asymmetry and irrational behaviour in economics, eg.  There has already been discussion about whether the simplifying assumption of the same output size can be justified; IMO it cannot, but opinions can vary.  But we have not been given any reason to justify ignoring the fact that output resolution can be varied independently of output size. That attribute of digital capture is one of its core characteristics, and one of the few ways in which it is genuinely an advance over film.  (Not the least bewildering aspect of the equivalence debate is being accused of being wedded to film-era concepts, then listening to the same people who make that accusation talk about how the FX image has to be "enlarged" less than the DX image to reach 8 x 10).

We are back to circularity: if you do not ignore output resolution equivalence collapses, therefore output resolution must be ignored. 
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 28, 2017, 16:33:49
Nothing speaks against defining a notion of equivalence which includes equal output resolution.
This would constitute a more restrictive notion of equivalence.
It requires down- or up-sampling in order to achieve the same output resolution from cameras with different native resolutions.
In fact, whenever the output is on a digital screen, this is implicit in the "equal display dimension" requirement, since the dimension and the resolution are linked.
For prints, the matter of output resolution and how it relates to the paper and printing method deserves separate studies.
Comparison of camera systems of very different native resolutions is fraught with difficulties that are best resolved on a case-by-case basis, IMHO.

There is no simplifying assumption in the definition of equivalence.
The definition is what it is, and can be changed depending on individual needs.
If one requires a stricter definition with more criteria, one is welcome to add them and specify the use of an alternative definition when communicating conclusions.
However, this narrows the scope of possible comparisons. When there are too many criteria, the comparison would be meaningless since by definition no differences between photographs will occur.
The list of criteria was chosen based on the idea that apples-to-apples comparison of camera performance needs to control for certain variables that would otherwise make a comparison difficult or affect the result in a way that completely changes the performance, making interpretation difficult.
Simplifications occur in the optical theory that describes the choice of shooting parameters in order to achieve equivalent shots.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 28, 2017, 18:36:20
When there are too many criteria, the comparison would be meaningless since by definition no differences between photographs will occur.
The list of criteria was chosen based on the idea that apples-to-apples comparison of camera performance needs to control for certain variables that would otherwise make a comparison difficult or affect the result in a way that completely changes the performance, making interpretation difficult.

Precisely!  Your assumption is still, however, that we have to make comparisons of camera performance, and therefore we have to do whatever is necessary to make comparisons feasible, and, for preference, easy.  It is not obvious to me that we have to make comparisons of sensor performance at all, especially when we are not making comparisons of, eg, AF performance and ergonomics, but we can leave that aside.  The idea that, because including some variables (in this case output resolution) would make comparison difficult or impossible, it is OK to arbitrarily exclude them cannot be right.  Excluding them may make the comparison possible, but it also makes it irrelevant. 

Down-sampling is not a solution, partly because there are different ways to do it and what the software does in any given case is not transparent, but above all because there is no reason to assume the photographer will do it.  As ever, we are back to circularity: the only reason you have advanced why we should accept the assumptions on which equivalence depends is that equivalence depends on them.   
Title: Re: Discussion of 'Equivalence'
Post by: Jack Dahlgren on May 28, 2017, 19:43:56
Precisely!  Your assumption is still, however, that we have to make comparisons of camera performance, and therefore we have to do whatever is necessary to make comparisons feasible, and, for preference, easy.  It is not obvious to me that we have to make comparisons of sensor performance at all, especially when we are not making comparisons of, eg, AF performance and ergonomics, but we can leave that aside.  The idea that, because including some variables (in this case output resolution) would make comparison difficult or impossible, it is OK to arbitrarily exclude them cannot be right.  Excluding them may make the comparison possible, but it also makes it irrelevant. 

 As ever, we are back to circularity: the only reason you have advanced why we should accept the assumptions on which equivalence depends is that equivalence depends on them.

I feel we are getting close to the end of the discussion of circularity.

It is this point exactly - that equivalence leaves out factors which are not equivalent - which makes it clear that it is a false equivalency. A square is not a circle unless we round the corners enough.

It is clear to most experienced photographers that different size formats have different characteristics. Where does the desire to make them equivalent stem from anyway?

We can NOT claim that photography is just about a standardized image format as there are so many other uses that photography is put to. The thousands of different designs of cameras over time attest to this. Fitting them all into a circular bin is a fool's errand.


Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 28, 2017, 20:00:24
It is not clear to me what you mean by 'assumptions'.
Equivalence is a list of 'criteria'. It is a definition. The definition does not assume anything (apart from other definitions, like DOF, perspective, etc.)

After pondering your view of definitions some more, it might actually be that your analysis of definitions is a cultural one, and that might explain some of the misunderstanding.
My bias from my mathematical training is that definitions exist independently, and have no intrinsic meaning or purpose of their own.
As such, they don't need to be justified.
In mathematics, a definition is perhaps motivated, but never justified a priori. It would not make any sense to do it.
Instead, the justification is applied retroactively by the theorems that you can prove about it.
But I'm well aware that the mathematical notion of a definition is a very small class of all possible notions.

----------

Still, I think that certain definitions are more useful than others.

Using a more restrictive set of criteria, e.g. one that includes all of Joseph James' criteria PLUS equal native (not output) resolution, would make it possible to talk about pictures made by the D7000 and the D4 (16MP) with the same perspective, framing, DOF, shutter speed, brightness, and display dimension as equivalent pictures, but not pictures of the D4 and D800, or the D7000 and D500, even though the pictures might have been downsampled to match the output resolution. If you also include the same output resolution, you exclude any form of resampling, which further reduces the set of possible comparisons.

On the other hand, if you do not require resolution as part of the definition of equivalence, you can compare 'equivalent' pictures (in the sense of Joseph James) and study, for example, the effect of having different numbers of pixels. Similarly, you could exclude any of the six criteria because you wish to compare that aspect. For example, you might be interested in comparing the effect of display dimension. You can then compare two images shot with the D4 and D800 at the same output dpi (the D800 shot will be substantially larger).

My opinion is that comparison of sensor performance should not be mixed with different output dimensions. I would, personally, compare sensor performance at equal output size, and then separately study the effects of output dimension when working with the same files. Or, run multiple parallel comparisons at different output sizes.

You are always coming back to the question of why we should compare sensors at all. There is no imperative, but many people are interested in that stuff and we do not need to defend or attack their interests. Comparison of output is in the very nature of photography, assuming you look at your shots after producing them (this might not be true of some ultra-fast press photography where the shot is directly uploaded to the press agency after taking it).
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 28, 2017, 23:25:34
A fly in the ointment regarding comparing one sensor to another is that we never receive totally RAW data in a RAW file. It has always been cooked a little. We can compare them (the sensors) in theory but not in practice.

Dave Hartman
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 28, 2017, 23:48:26
What we desperately need is an apples to apples comparison of apples and oranges.

Dave Hartman
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 29, 2017, 00:25:05
The concept of enlargement is just as valid for digital images as for film images. While the digital image data has no physical size, the original image sensor does, and if we ever look at a representation of the image data, that representation, be it print or monitor, will again have a physical size.

You will need to enlarge the DX image 1.5x that of the FX image for it to achieve the same physical size as the FX image. This is no different from enlarging a half frame 35mm negative as compared to enlarging a full frame 35mm negative.

Dave Hartman
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 29, 2017, 01:22:28
With the right criteria for comparison a 1964 Volkswagen Karmann Ghia is equivalent to a 1964  Ferrari 250 GTO.

Dave Hartman
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 29, 2017, 04:43:00
My bias from my mathematical training is that definitions exist independently, and have no intrinsic meaning or purpose of their own. As such, they don't need to be justified. In mathematics, a definition is perhaps motivated, but never justified a priori. It would not make any sense to do it. Instead, the justification is applied retroactively by the theorems that you can prove about it. But I'm well aware that the mathematical notion of a definition is a very small class of all possible notions.

I think you have correctly identified the concept of "definition" as a source of the confusion over Equivalence. I also trained as a mathematician and, of course, agree with your comments about definitions. Optics and electronics are all based on physics, thus on mathematics, so it makes perfect sense to me to define this concept called Equivalence and then use that definition to compare different camera formats in such a way as to be able to draw some useful conclusions about which might be better for a given photographic task -- and why. The reason the definition of Equivalence is so useful for format comparisons is because you can produce the same framing (angle of view), the same perspective (distance from subject), the same DoF (or diffraction or total amount of light on the sensor), the same exposure time, the same brightness and the same display dimensions with many, many different combinations of cameras and lenses.
Title: Re: Discussion of 'Equivalence'
Post by: pluton on May 29, 2017, 05:40:10
Opinion:  Equivalent, in this context, really should be re-formulated as 'near equivalent', 'rough equivalent' or 'close equivalent'.  In the most common English usage that I am familiar with, the word 'equivalent' means 'the same as', not 'nearly the same as'.
Just a thought.
Title: Re: Discussion of 'Equivalence'
Post by: Andrea B. on May 29, 2017, 05:58:25
Perhaps call it Photographically Equivalent.
Title: Re: Discussion of 'Equivalence'
Post by: bclaff on May 29, 2017, 06:36:15
Perhaps call it Photographically Equivalent.
Perhaps 'visually equivalent'? As in visually indistinguishable.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on May 29, 2017, 10:32:46
There is always a certain tension when adopting words that occur in everyday language for a technical definition. The word 'equivalent', by most dictionary definitions, does not mean 'equal' but something weaker than that. Even in everyday use, it is usually met with a request for specification. For example, if I proclaim 'It is equivalent to have a house or a flat', the natural follow-up question is usually 'equivalent in what sense?', which indicates that the word was not sufficiently precise to make the statement clear. I can then explain, e.g. 'equivalent in the sense that they both give shelter to myself and my belongings.' Once this is clarified, one can then discuss whether this list of criteria is reasonable etc. It is obvious that some people will have criteria for equivalence of housing that are much more restrictive, up to the point where only their own house qualifies (at which point the word 'equivalent' becomes meaningless).

That is, the word 'equivalent' is generic enough to be suitable for technical definitions. This would not be true of 'visually indistinguishable', because that already means something that is fairly precise, and the criteria do not match. Two equivalent images (in the sense of James) do not have to be visually indistinguishable. One example was already given by Les (different resolution), another would be if the sensors are from a very different generation: the older one will generally have more noise. The fact that the comparison is of equivalent images makes it easier to interpret the observed difference; you cannot, for example, say that the images were shot with different amounts of total light, and the noise is due to photon noise. Or that the photos show very different framing, and the noise is more obvious in one shot because of that. It is because the pictures were shot in a way that controls for these nuisance variables.
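The role of total light in the photon-noise argument above can be made concrete with a toy calculation (my own sketch, not from the thread): for a shot-noise-limited image, SNR depends only on the number of photons collected, so equal total light implies equal shot-noise SNR regardless of format.

```python
import math

# Toy sketch: shot noise follows Poisson statistics, so the SNR of a
# measurement of N photons is N / sqrt(N) = sqrt(N).
def shot_noise_snr(photons):
    """Shot-noise-limited SNR for a given photon count."""
    return photons / math.sqrt(photons)

# Equal total light on FX and DX: identical shot-noise SNR.
# Equal exposure per unit area instead: the DX sensor (1/2.25 of the
# FX area, crop factor 1.5) collects 1/2.25 of the photons, so its
# SNR is lower by the crop factor.
fx_snr = shot_noise_snr(900_000)
dx_snr = shot_noise_snr(400_000)  # fx_snr / dx_snr == 1.5
```

This factor of 1.5 is exactly the format-scaling penalty that comparing at 'equivalent ISOs' (same total light) removes, so that any remaining difference points to something other than sensor size.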

When looking at reviews of cameras, it seems to be more or less acknowledged (and practised) that one should compare at equal framing, perspective, DOF/diffraction and viewing size. But many tests compare at equal ISO. This is possible, but when the comparison is between different formats, it requires a quantitative analysis of noise in order to establish whether it is more or less than what is expected. Often, such comparisons are met with quite a bit of criticism along the lines of 'the comparison is not fair'. I think that any comparison that is done in a controlled and transparent way is 'fair', but I think that the interpretation can be facilitated by certain ways of comparing things.

To give a further example: when I see a test that compares the noise performance of an iPhone with that of a DSLR of a similar vintage (at the same framing, perspective, DOF, viewing size) at the same ISO, of course the iPhone will be much noisier, and it would be naive to expect otherwise. If you compare at equivalent ISOs (same amount of total light), I immediately get a rough idea of how bad it is. E.g. if the noise is similar at equivalent ISOs, the iPhone performance is not less than what is anyway expected from the sheer scaling of the sensor, which would lead me to believe that the performance is actually quite impressive. If on the other hand the iPhone shot is more noisy at equivalent ISOs, I cannot say 'look, this is because the sensor is so small', because that has already been controlled for. Instead, there must be something else going on (e.g. a lower quantum efficiency or fill factor, or more read noise) that explains the perceived difference.
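The 'equivalent ISO' used above reduces to crop-factor arithmetic. A minimal sketch of the standard scaling (the function name is my own): focal length and f-number scale with the crop factor, ISO with its square, so that framing, DOF and total light all match at the same shutter speed.

```python
def equivalent_settings(focal_mm, f_number, iso, crop_from, crop_to):
    """Map settings from one format to another so that angle of view,
    DOF, shutter speed and total light are preserved (standard
    equivalence scaling; helper name is my own)."""
    scale = crop_to / crop_from  # crop factor of target relative to source
    return focal_mm / scale, f_number / scale, iso / scale ** 2

# 105mm f/2.5 at ISO 400 on FX (crop 1.0) mapped to DX (crop 1.5):
focal, f_num, iso = equivalent_settings(105, 2.5, 400, 1.0, 1.5)
# roughly 70mm, f/1.7, ISO 178 -- same aperture diameter (42mm),
# hence the same total light over the exposure
```

Note that dividing ISO by the squared crop factor is just the 'same total light' condition restated: the per-area exposure on the smaller sensor is higher by exactly that factor.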

------------

So far, there were two main lines of criticism of the list of criteria included in James' definition:
1) The list is too restrictive, because of either
     a) two images can only be equivalent when they are the same, or
     b) two images can look the same even though they are not equivalent
2) The list is too loose (why is resolution not included? etc.), i.e. two images can look different even though they are equivalent

From this, I would conclude that without further assumptions James's definition is neither necessary nor sufficient for 'visual indistinguishability'.
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 29, 2017, 11:17:12

[...] Optics and electronics are all based on physics, thus on mathematics, so it makes perfect sense to me to define this concept called Equivalence and then use that definition to compare different camera formats in such a way as to be able to draw some useful conclusions about which might be better for a given photographic task -- and why.The reason the definition of Equivalence is so useful for format comparisons is because you can produce the same framing (angle of view), the same perspective (distance from subject), the same DoF (or diffraction or total amount of light on the sensor), the same exposure time, the same brightness and the same display dimensions with many, many different combinations of cameras and lenses.


No one is arguing about whether the square root of -1 "exists", and therefore whether it can be used to draw conclusions about alternating current.  Nor are we arguing about whether, as a term-of-art, equivalence, like "point", can have any restrictions or conditions its creator chooses (but people who wish to do that and be understood are well-advised not to choose ordinary words as terms-of-art, and instead emulate doctors and lawyers who use arcane or non-English words to signal the use of the term-of-art; confusion over whether "equivalent" = "nearly identical" = "equal" = "the same" was inherent in the choice of an ordinary word as term-of-art). 

We are arguing about whether it is true that equivalence allows us "to draw some useful conclusions about which [format] might be better for a given photographic task".  The status of axioms is irrelevant,  because there is a difference between an axiom and an arbitrary condition, and it is arbitrary conditions that are the problem with equivalence. 

There are two kinds of arbitrary condition at play in this discussion. 

One is merely irritating: starting from an arbitrary status quo that habit has led one to suppose has some real significance: saying that if I use a 105mm f/2.5 lens on FX for portraits I need a 75mm f/1.8 lens on DX, and there is no such lens, is as inane as saying that one mile = 1.60934 km so athletics cannot possibly use SI units. 

The other, which undermines the whole equivalence enterprise, is arbitrarily ignoring factors that reverse the allegedly useful conclusions.  It is a legitimate simplification for physicists to ignore friction when teaching mechanics, because they do not pretend that the space shuttle will not get hot on re-entry.  It is not a legitimate simplification for neo-liberal economists to ignore informational asymmetry when discussing markets, because they do pretend that governments do not need to regulate markets, which depends on there being no informational asymmetry.  It is not a legitimate simplification for equivalentisti to ignore the independence of output size and output resolution, because they do conclude that at equal framing and perspective and output size, DX images will have more DoF, which depends on output resolution not being independent of output size.  It is not a legitimate simplification for equivalentisti to ignore the central role of the viewer's evaluation, because they conclude that 85mm f/1.8 on DX is not equivalent to 105mm f/2.5 on FX, which depends on ignoring the fact that viewers have no preference for one over the other.

So, let me set a problem for equivalence.  I want to reproduce the look of David Bailey's 1965 portrait of the Kray twins, Reggie and Ronnie (they were violent criminals, Reggie is on the left, Ronnie on the right; Bailey had grown up in the same neighbourhood and knew them; "I quite liked Reg, even though when he was 19 he slashed my father’s face with a razor. Ron was a basket full of rattlesnakes." https://www.theguardian.com/artanddesign/2015/jul/05/david-bailey-stardust-exhibition-edinburgh-photographer-interview - the whole interview is well worth reading).  The portrait was made with an 80mm f/2.8 on 6 x 6 film.  Does anyone think that knowing what would be equivalent on FX or DX is where I need to start? 

 
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 29, 2017, 13:12:37
It is not a legitimate simplification for equivalentisti to ignore the central role of the viewer's evaluation, because they conclude that 85mm f/1.8 on DX is not equivalent to 105mm f/2.5 on FX, which depends on ignoring the fact that viewers have no preference for one over the other.

Perspective in the human face has an emotional impact on the viewer. The flatter the perspective the more aloof or distant, the rounder the perspective more personal until the perspective appears distorted. A normal conversational distance is not an arbitrary distance. It is informed by social norms and practicality. The social norms are based in part on predatory animal behavior. When an animal with binocular vision looks intently with both eyes at another animal it is frequently thinking about food, sex or combat. I rather expect you will ridicule this for lack of understanding.

Perspective in the human face has an emotional impact on the viewer and the viewer needs no preference for one focal length or format over another to feel the perspective's impact.

Dave Hartman
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on May 29, 2017, 13:15:28
The concept of enlargement is just as valid for digital images as film images. While the digital image data has no physical size the original image censor does and if we ever look at​ a representation of the image data that representation be it print or monitor will again have a physical size.

You will need to enlarge the DX image 1.5x that of the FX image for it to achieve the same physical size as the FX image. This is no different from enlarging a half frame 35mm negative as compared to enlarging a full frame 35mm negative.

No.  The D500 has 20MP.  At 300 dpi, 8 x 10 is 7.2MP and 12 x 18 is 19.4MP.  Because I have spare pixels, I can print both sizes at identical resolution: there is no "enlargement" of the DX image as output size increases until I exceed 12 x 18.  It was different when a D2 only had 4MP: then printing larger was the same as enlarging a negative, because except for tiny prints the only way to print bigger was to lower dpi.   

A D810 has 36MP and a D500 has 20MP, so the sensor elements are slightly smaller on the D500 (pixel pitch 4.2 microns vs 4.9).  Suppose I have the same lens on both: the same object is imaged the same size on both sensors. That image covers 1.2 (4.9/4.2) times as many sensor elements in each direction on the D500 sensor as on the D810 sensor.  If I print the D500 and the D810 images at 300 dpi, the printed objects are 1.2 times larger (not 1.5). But I can print the D500 image at 360 dpi instead of 300 dpi, so that the printed images are the same size for both formats (360/300 = 1.2).  I cannot quite manage it at 12 x 18, which at 360 dpi is 28MP, but if I am allowed to adjust print resolution (and you just try and stop me  ;)) there is no sensor "enlargement" except at the larger print sizes.  For the D5, which has 6.4 micron pixel pitch, I would have to print at 450 dpi instead of 300 dpi to eliminate enlargement (6.4/4.2 = 1.5).   
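The kind of arithmetic in the previous two paragraphs can be sketched in a few lines (my own illustrative helpers, not from the thread; print sizes in inches, pixel pitch in microns):

```python
def required_mp(width_in, height_in, dpi):
    """Megapixels needed to print a given size at a given dpi."""
    return width_in * dpi * height_in * dpi / 1e6

def matching_dpi(base_dpi, coarse_pitch_um, fine_pitch_um):
    """Print dpi for the finer-pitch sensor so that an object imaged by
    the same lens prints the same size as from the coarser-pitch sensor
    at base_dpi."""
    return base_dpi * coarse_pitch_um / fine_pitch_um

# An 8 x 10 at 300 dpi needs 7.2MP; a 12 x 18 needs about 19.4MP,
# just within a 20MP D500's pixel count.
eight_by_ten = required_mp(8, 10, 300)
twelve_by_eighteen = required_mp(12, 18, 300)

# Exact ratio for D810 (4.9 micron) vs D500 (4.2 micron): the D500
# would print at 350 dpi to match the D810 at 300 dpi (the post
# rounds the pitch ratio to 1.2 and uses 360 dpi).
d500_vs_d810 = matching_dpi(300, 4.9, 4.2)
```

The rounding in the post (1.2 rather than 4.9/4.2 = 1.17) is why its dpi figures come out slightly higher than the exact values here.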
Title: Re: Discussion of 'Equivalence'
Post by: David H. Hartman on May 29, 2017, 14:17:55
No.  The D500 has 20MP.  At 300 dpi, 8 x 10 is 7.2MP and 12 x 18 is 19.4MP.  Because I have spare pixels, I can print both sizes at identical resolution: there is no "enlargement" of the DX image as output size increases until I exceed 12 x 18.  It was different when a D2 only had 4MP: then printing larger was the same as enlarging a negative, because except for tiny prints the only way to print bigger was to lower dpi.   

A D810 has 36MP and a D500 has 20MP, so the sensor elements are slightly smaller on the D500 (pixel pitch 4.2 microns vs 4.9).  Suppose I have the same lens on both: the same object is imaged the same size on both sensors. That image covers 1.2 (4.9/4.2) times as many sensor elements in each direction on the D500 sensor as on the D810 sensor.  If I print the D500 and the D810 images at 300 dpi, the printed objects are 1.2 times larger (not 1.5). But I can print the D500 image at 360 dpi instead of 300 dpi, so that the printed images are the same size for both formats (360/300 = 1.2).  I cannot quite manage it at 12 x 18, which at 360 dpi is 28MP, but if I am allowed to adjust print resolution (and you just try and stop me  ;)) there is no sensor "enlargement" except at the larger print sizes.  For the D5, which has 6.4 micron pixel pitch, I would have to print at 450 dpi instead of 300 dpi to eliminate enlargement (6.4/4.2 = 1.5).   

36mm x 10 = 360mm (FX)
24mm x 15 = 360mm (DX)

[I was too tired to write more and hoped the above conveyed what I was thinking: enlarging the 36mm-wide FX frame 10x and the 24mm-wide DX frame 15x gives the same 360mm print width. My previous post may have been ambiguous.]
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on June 01, 2017, 22:05:59
No.  The D500 has 20MP.  At 300 dpi, 8 x 10 is 7.2MP and 12 x 18 is 19.4MP.  Because I have spare pixels, I can print both sizes at identical resolution: there is no "enlargement" of the DX image as output size increases until I exceed 12 x 18.  It was different when a D2 only had 4MP: then printing larger was the same as enlarging a negative, because except for tiny prints the only way to print bigger was to lower dpi.   

A D810 has 36MP and a D500 has 20MP, so the sensor elements are slightly smaller on the D500 (pixel pitch 4.2 microns vs 4.9).  Suppose I have the same lens on both: the same object is imaged the same size on both sensors. That image covers 1.2 (4.9/4.2) times as many sensor elements in each direction on the D500 sensor as on the D810 sensor.  If I print the D500 and the D810 images at 300 dpi, the printed objects are 1.2 times larger (not 1.5). But I can print the D500 image at 360 dpi instead of 300 dpi, so that the printed images are the same size for both formats (360/300 = 1.2).  I cannot quite manage it at 12 x 18, which at 360 dpi is 28MP, but if I am allowed to adjust print resolution (and you just try and stop me  ;)) there is no sensor "enlargement" except at the larger print sizes.  For the D5, which has 6.4 micron pixel pitch, I would have to print at 450 dpi instead of 300 dpi to eliminate enlargement (6.4/4.2 = 1.5).   

Remember that the 'secondary magnification' argument was brought up by Bjørn as an alternative explanation for why images from different formats behave differently. So you are now disagreeing with Bjørn, not me, about the applicability of this concept to digital images. I think it would be best if he himself responded to that.
Title: Re: Discussion of 'Equivalence'
Post by: simsurace on June 01, 2017, 22:11:45
So, let me set a problem for equivalence.  I want to reproduce the look of David Bailey's 1965 portrait of the Kray twins, Reggie and Ronnie (they were violent criminals, Reggie is on the left, Ronnie on the right; Bailey had grown up in the same neighbourhood and knew them; "I quite liked Reg, even though when he was 19 he slashed my father’s face with a razor. Ron was a basket full of rattlesnakes." https://www.theguardian.com/artanddesign/2015/jul/05/david-bailey-stardust-exhibition-edinburgh-photographer-interview - the whole interview is well worth reading).  The portrait was made with an 80mm f/2.8 on 6 x 6 film.  Does anyone think that knowing what would be equivalent on FX or DX is where I need to start?

Nice iconic photograph. No, I don't think that is where you would start. I would first try to find some people/actors who can deliver this kind of impression. But at some point, considerations of focal length and camera position have to come into play if I'm to reproduce the same perspective and framing.

Now, what is your point?
Title: Re: Discussion of 'Equivalence'
Post by: Les Olson on June 05, 2017, 12:13:58
My point is that "equivalence" is not helpful in making photographs.  All I need to know is that 80mm is "normal" on 6 x 6: then I know that I need a "normal" lens - on FX 35mm to 50mm and on DX 24mm to 35mm. There are plenty of lenses in the right range, for FX and DX.  The reason I want that unusual - for portraits - focal length is that I want the unusual perspective, which contributes to the intimidating atmosphere, but I do not need to know exactly how far away Bailey was: I will judge how close to get as I look in the viewfinder, which will depend on, eg, exactly how big my subject's shoulders are, which won't be the same as Ronnie Kray's.  The lighting, the expressions, the clothes and the low point of view are all format independent, and DoF is quite deep, so that is not a problem in any format.   

I do not need to know the "equivalent" focal lengths because I do not want "the same framing and perspective" because I am not trying to copy Bailey's image.  "Equivalence" is only helpful if you are trying to copy images, and copying images is pointless - as well as, in the case of other people's images, not obviously legal. 

If it were answered that in this case the DoF is quite deep, but in others I might want a very shallow DoF, I would ask for an example by an important photographer of an image depending on DoF so shallow that DX could not approximate it.  Very shallow DoF is not, contrary to the impression discussions of equivalence give, a widely used pictorial device; here is an example of creative use of unusually shallow DoF, by Sudek, from Labyrinths, but it is not beyond the reach of DX with ordinary lenses.

Do I really need to spell out an argument for why, if "equivalence" is not helpful in making photographs, it is not helpful in evaluating equipment for making photographs?
 



Title: Re: Discussion of 'Equivalence'
Post by: simsurace on June 08, 2017, 12:45:18
My point is that "equivalence" is not helpful in making photographs.  All I need to know is that 80mm is "normal" on 6 x 6: then I know that I need a "normal" lens - on FX 35mm to 50mm and on DX 24mm to 35mm.   
That is, loosely speaking, a statement about equivalence of focal lengths regarding angle of view (AOV). You are arguing against something that is so deeply ingrained in your thinking that you don't even notice it. All that James and others did was not to stop at AOV, but extend the concept to other important image characteristics.
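The 'normal lens' mapping in the preceding paragraph rests on angle of view, which follows directly from focal length and the format's diagonal. A minimal sketch (my own, all sizes in mm):

```python
import math

def aov_deg(focal_mm, diagonal_mm):
    """Diagonal angle of view, in degrees, for a lens focused at
    infinity: 2 * atan(diagonal / (2 * focal length))."""
    return math.degrees(2 * math.atan(diagonal_mm / (2 * focal_mm)))

# 80mm on 6x6 (approx. 79mm diagonal) gives about 53 degrees;
# 50mm on FX (43.3mm diagonal) about 47 degrees; 35mm on DX
# (28.8mm diagonal) about 45 degrees -- all 'normal', but only
# loosely matched, which is the 'high tolerance' at work.
```

The spread of several degrees across formats is exactly the loose tolerance being discussed: the lenses are equivalent in angle of view only to within the generous bounds of what counts as 'normal'.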

I do not need to know the "equivalent" focal lengths because I do not want "the same framing and perspective" because I am not trying to copy Bailey's image.  "Equivalence" is only helpful if you are trying to copy images, and copying images is pointless - as well as, in the case of other people's images, not obviously legal. 
In your statement above, you already determined equivalent focal length ranges on several formats. You just used a very high tolerance in terms of the resulting FOV.
 
Whether you want to copy the image exactly or just imitate it roughly is a matter of tolerance, and you are quite right to say that there are many other factors at play here (besides the camera setting). Frankly, your problem statement did not precisely state the goal. Why are you now saying that copying images is pointless or illegal? You were the one to set up the problem. To 'reproduce the look' could mean anything from something vaguely reminiscent to an exact copy.

Very shallow DoF is not, contrary to the impression discussions of equivalence give, a widely used pictorial device;
I don't get your statement. It is a feature (or a bug, depending on your viewpoint) of almost all photography that most of everything is out of focus. Whether it is used intentionally as a pictorial device or not, it is simply a reality that we have to live with and work around.

Besides, shallow DOF, or rather, the amount of background blur, is a tool for isolating the subject. It is commonplace in e.g. wildlife photography, where quite often you get a rare opportunity at a certain subject, and there is lots of clutter that is not relevant to the image. You might not consider these examples (which are too numerous to list) important, but here we are again approaching deeply subjective territory.

I don't know whether you expect to settle such subjective and controversial matters, but I understand that you are pushing a certain taste or style of photography. I respect that, but I don't accept it as universal. Therefore I do not like the idea of subjugating technical/scientific concepts to (a particular) taste.

Do I really need to spell out an argument for why, if "equivalence" is not helpful in making photographs, it is not helpful in evaluating equipment for making photographs?
Well, if you want to. In my mind they are almost completely independent.
Why conflate two things if you can enjoy the freedom of thinking in two dimensions?

I like my kitchen knives to be very sharp. Sometimes it makes the food look and taste better, sometimes it is not noticeable.
Finding and perfecting ways to sharpen a knife is completely independent of perfecting culinary skills. I don't decide whether I sharpened the knife well by tasting the final meal, since there are too many factors playing into that. Instead, I inspect and try the blade using one or several standard tests. Having the blade perform well on these tests gives me the confidence I need to have in my tools. I don't subjugate the sharpness test to a specific style of cuisine, because I don't want to be constrained by this choice, and even when a sharp blade is not critical for the success of a meal, it makes the preparation more enjoyable and safer.