NikonGear'23

Gear Talk => Camera Talk => Topic started by: Michael Erlewine on October 27, 2017, 14:55:57

Title: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 27, 2017, 14:55:57
There are a number of threads here on the new Sony A7R3 mirrorless camera, and I even started one myself. It would be nice if we could keep this thread on topic, since that is why I am posting it; more general issues can perhaps go in other threads.

What I would like to discuss is why the A7R3 may be particularly useful to me and the reasons I feel this way. And, of course, I am trolling here for more information on this topic and other photographers with a similar bent.

The first “major” DSLR that I had was the Nikon D1X, sometime in 2001. And I have had almost all of the DSLRs from Nikon since then, at least of the landscape variety. Since I shoot close-up nature photos, I never cared about sports-related cameras, high ISOs, and autofocus.

Anyway, for me, there has been a string of cameras all the way up to Nikon’s recent release of the D850. In my case, it’s always been onward and upward: onward to more and better features, and upward toward sensors with ever more megapixels. The last couple of years have been a climax of sorts, or at least a branching out of options. And of course I was swept up in it all, especially the seemingly endless waiting. I marched through buying (and returning) three medium-format cameras: long ago the Mamiya RZ67 (with eleven lenses), and more recently the Hasselblad X1D and the Fujifilm GFX.

And along in there I also bought and tested out the Pentax K3 and K1, mostly because of their pixel-shift technology. And I had the Sony A7S and A7R. I bought the A7R2, sold it, bought it again and sold it yesterday. I also ordered a copy of the A7R3 yesterday, mainly because of the pixel-shift feature, which brings me to my point in writing this.

Of course, like many of us, I am in the habit of getting cameras with more and better pixels, and without really thinking about it I imagined I would like a 100 Mpx camera or even greater. However, I have recently been having doubts about this after getting the Nikon D850, with its 45.7 Mpx.

I have a very big and fast PC: two GPUs, eight cores, a fast processor, 128 GB of RAM, etc. Even so, I did notice with the new Nikon D850, which has only a modest increase in megapixels, a difference in the computing power required. Keep in mind that I focus-stack, so I often have to process 100 or more large TIFF files in the same batch. This takes time, and with the D850 it takes a little MORE time. Not that much, actually.

However, I can see that when we get 100 Mpx sensors, it will increasingly take more time (and storage). I keep all my stacked layers, so I have many hundreds of thousands of images by now. And this set me to thinking. 

Of course, I have wanted larger sensors, but not just for more megapixels: also for larger photosites that collect more light. That is why I originally purchased a Sony A7S, for more light and larger photosites, or whatever we call them.

By using the Pentax K3 and K1, both of which have pixel-shifting technology, I could see that they provided superior color and the resolution that comes with it, but I was not happy with the way Pentax handled non-native lenses (of which I have a lot), so eventually it was more trouble than it was worth, and the Pentax lenses did not make me happy. I like APO lenses.

So, my point, and perhaps my question to the techsperts out there, is: can we have a discussion about perhaps not yearning for ever-larger sensors, and concentrating more on improving the color and resolution of the smaller sensors we are already using?

I am happy with about 50 Mpx (not less, please), but perhaps I don’t need more. Since I don’t make prints of my images (never have), I only need a size to display on the web or place in an e-book. Typically I use images that are 2048 pixels on the long side for what I post, depending on where I post, of course.

So, I’m wondering if my Nikon D850, which is great by the way, much nicer than I had imagined, along with the new Sony A7R3 (if it works as advertised), might be all that I need. At 42 Mpx, the A7R3 is not much different from the 45.7 Mpx of the D850, and that may be as much as I need.

I am wondering, since I ONLY do still photography on a tripod, whether the pixel-shift technology of the A7R3 may give me the color (most important to me) and the enhanced resolution (however that works), so that instead of forever projecting myself toward larger and larger sensors, I might, at least for a time, be happy with what I have (or will soon have with the A7R3).

I am sure some of you here will have more technical thoughts about this conundrum I am in, either agreeing with me or pointing out something I have not thought of. Thanks for any feedback.

P.S. This is the style of photography I tend to do, this with the D810 if I remember right.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: armando_m on October 27, 2017, 17:41:03
Higher resolution, sure.

Better color? I do not know.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 27, 2017, 17:48:24
Higher resolution, sure.

Better color? I do not know.

By that I mean what we get by having pure color from the pixel-shifting of the four images. What do you call it? We avoid the Bayer interpolation.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on October 27, 2017, 18:21:14
The aim is not to improve the colour or resolution or anything else of the sensor.  The aim is to improve something in the final output: the print, or the computer monitor, if you must.  That raises a whole other world of considerations, including psychophysics and all the Weber-Fechner stuff, the often large gulf between pleasingness and accuracy, and whether your aim is aesthetic or documentary, and if aesthetic, which aesthetic - David Attenborough or Robert Mapplethorpe or Carl Blossfeldt. 
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 27, 2017, 19:04:02
The aim is not to improve the colour or resolution or anything else of the sensor.  The aim is to improve something in the final output: the print, or the computer monitor, if you must.  That raises a whole other world of considerations, including psychophysics and all the Weber-Fechner stuff, the often large gulf between pleasingness and accuracy, and whether your aim is aesthetic or documentary, and if aesthetic, which aesthetic - David Attenborough or Robert Mapplethorpe or Carl Blossfeldt.

Either I don't understand your post or I am missing the point. I am talking about getting purer RGB colors via the pixel-shifting, and nothing else. What we do in post, etc. is individual. Please stay with what I pointed out.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: simsurace on October 27, 2017, 20:32:24
This is already being done, e.g. by Pentax. By shifting the sensor three times in a U shape (left-up-right, for example), you collect two green samples and one red and one blue for each pixel. This is more data, so it cannot be worse than having only one of them and interpolated values for the other two. Are you asking whether the Sony will also do it, or just whether the principle works? I guess that the benefit is mainly seen at very high spatial frequencies close to the Nyquist limit, and the potential for color moiré may be reduced if the camera's AA filter is on the weak side.
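The four-sample scheme described above can be sketched in a few lines of code. This is a toy model, not any camera's actual firmware: it assumes an RGGB filter layout, perfect one-pixel sensor shifts, and a noise-free scene, and it simply shows that after four shifted exposures every pixel has a measured R, a measured B and two measured G values, so no chromatic interpolation is needed.

```python
import numpy as np

# Toy RGGB colour-filter array: channel index sampled at (row % 2, col % 2)
CFA = {(0, 0): 0, (0, 1): 1,   # R G
       (1, 0): 1, (1, 1): 2}   # G B

# One-pixel sensor offsets for the four exposures (a "U" path)
OFFSETS = [(0, 0), (0, 1), (1, 1), (1, 0)]

def capture(scene_rgb, dy, dx):
    """One Bayer exposure with the sensor shifted by (dy, dx):
    each scene pixel is sampled through a different colour filter."""
    h, w, _ = scene_rgb.shape
    frame = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            ch = CFA[((y + dy) % 2, (x + dx) % 2)]
            frame[y, x] = scene_rgb[y, x, ch]
    return frame

def combine(frames):
    """Merge the four shifted exposures. Every pixel ends up with a
    measured R, a measured B and two measured G samples (averaged),
    so no colour value has to be borrowed from neighbouring pixels."""
    h, w = frames[0].shape
    out = np.zeros((h, w, 3))
    for frame, (dy, dx) in zip(frames, OFFSETS):
        for y in range(h):
            for x in range(w):
                ch = CFA[((y + dy) % 2, (x + dx) % 2)]
                if ch == 1:                 # two green samples: average them
                    out[y, x, 1] += frame[y, x] / 2
                else:                       # one red and one blue sample
                    out[y, x, ch] = frame[y, x]
    return out
```

On a noise-free test scene the merged output reproduces the scene's RGB exactly, which is the point of the exercise; in a real camera the two green samples differ by noise, and averaging them is one place the often-cited colour improvement comes from.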
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 27, 2017, 21:04:10
This is already being done, e.g. by Pentax. By shifting the sensor three times in a U shape (left-up-right, for example), you collect two green samples and one red and one blue for each pixel. This is more data, so it cannot be worse than having only one of them and interpolated values for the other two. Are you asking whether the Sony will also do it, or just whether the principle works? I guess that the benefit is mainly seen at very high spatial frequencies close to the Nyquist limit, and the potential for color moiré may be reduced if the camera's AA filter is on the weak side.

No, I expect the A7R3 pixel-shifting to be identical to the Pentax version, but perhaps smoother if there is any benefit from doing the collating in post rather than in-camera. My point (and perhaps not a question) is that the benefits are not necessarily to be found in ever-larger sensors, but in getting better color and perhaps resolution out of something like 50 Mpx. I don't need giant prints, but rather finer-looking images. I was looking for comments on that.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Frank Fremerey on October 27, 2017, 21:23:56
With pixel shift you get more spatial resolution.

With larger pixels you get more color information (better statistics = higher differentiation)

Do I understand your question correctly: you want a multishot camera, as was done in earlier digital medium-format backs, going to quarter-pixel resolution, but with fewer pixels, to make them bigger and get more color differentiation?

Then my answer would be to use a Multishot Sensor:

https://www.youtube.com/watch?v=7N135xTbrZI

With a 16-shift multishot you only need a very small number of very big pixels to achieve a 50 MP result, namely 50/4 = 12.5 MP...
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 27, 2017, 21:35:14
I guess it is hard to be understood. All I was saying is that after trying the pixel-shift technology on the Pentax K3 and then the K1, I got what I considered medium-format-like results. If the A7R3 is similar (which it promises to be) and I get MF results again, then I don't need the sensor to be larger than around 50 Mpx to be happy, ESPECIALLY since I have a lot of APO lenses that could probably work pretty well on the A7R3. I do notice more than a slight difference between Bayer interpolation and pixel-shift. I was looking for support for this view, or reasons why we must endlessly move on to larger sensors.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Jack Dahlgren on October 28, 2017, 06:05:47
Michael,

I have a Nikon Df, which has a 24x36 sensor with 16 Mpx, and the use of a Canon 5DSR, which has the same size of sensor and 50 Mpx. For posting on the web the Nikon delivers better colors and a broader range. The Canon ultimately has more resolution, but if you are shrinking to 2000 px most of that is lost, and color and tonality become more important.

For what I do it does not make sense to chase more pixels or larger sensors. Life is too short to spend it wishing for something I don’t have and being dissatisfied with what I do have. For you it seems like chasing that hardware is interesting and important to you. I wish you peace in your journey.

My point is that we all seek something different out of our photography so our desires for different cameras will vary. I think you will need to keep seeking, no matter what new thing comes out.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Ethan on October 28, 2017, 09:33:29
Please correct me if I misunderstood:

You want a super-duper type of pixel-shift sensor to give you a super-duper rendition of colours.
Never mind that whatever light source you are using will affect the final colour of the image; you do not print your images, and instead reduce them to 2000-ish pixels on the long end to publish online and in e-books, which means converting to JPEG.

So you are starting with an RGB NEF and converting, by downsizing, to an sRGB JPEG, and you wish to protect the colour rendition? Correct?

I mean, seriously, you start with a Wagyu to end up with a Bolognese!!! And you wonder why the taste is lost?!?

At least start with the ProPhoto colour space, which is larger than Adobe RGB, which in turn is larger than the sRGB of a JPEG; but you do not wish to discuss or consider the processing side, or the colour gamut and psychoshit of it.

Let me rephrase: you want a camera that will give you the result you want with minimal post-processing, as you are already investing too much time in the shooting set-up and subsequent stacking.

Please, when you find the Valhalla camera and sensor and tech to go with it, do let me know.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 28, 2017, 09:48:02
Please correct me if I misunderstood:

You want a super-duper type of pixel-shift sensor to give you a super-duper rendition of colours.
Never mind that whatever light source you are using will affect the final colour of the image; you do not print your images, and instead reduce them to 2000-ish pixels on the long end to publish online and in e-books, which means converting to JPEG.

So you are starting with an RGB NEF and converting, by downsizing, to an sRGB JPEG, and you wish to protect the colour rendition? Correct?

I mean, seriously, you start with a Wagyu to end up with a Bolognese!!! And you wonder why the taste is lost?!?

At least start with the ProPhoto colour space, which is larger than Adobe RGB, which in turn is larger than the sRGB of a JPEG; but you do not wish to discuss or consider the processing side, or the colour gamut and psychoshit of it.

Let me rephrase: you want a camera that will give you the result you want with minimal post-processing, as you are already investing too much time in the shooting set-up and subsequent stacking.

Please, when you find the Valhalla camera and sensor and tech to go with it, do let me know.

That's not it at all. It must be my fault that I am not able to communicate on forums like this one, and there is no point in endlessly repeating myself. It's tiresome.

I use ProPhoto RGB in Photoshop, like many people. I enjoy looking at an image at full resolution, but most (or many) websites don't support large files or have a size limit, so I limit the size for online posting. I do all this for my own enjoyment and share very few images online via photo-related forums.

This kind of discussion, where my original point is not only NOT understood but also brings forth no helpful response, is why I don't post on these forums as much as I used to. It is like whack-a-mole. There is no communication, much less discussion to speak of. No offense, but such a tone is not helpful or on point.

It's just easier to post a few images and let it go at that.

Here is an image I took today using the D850 with the Nikon "O" (CRT) lens, just for fun.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on October 28, 2017, 13:56:07
Either I don't understand your post or I am missing the point. I am talking about getting purer RGB colors via the pixel-shifting, and nothing else.

But what do you mean by "more RGB pure colours"? 

If you mean you want to avoid the channel-contamination that causes low-ISO noise in monochrome areas like blue sky, pixel shift won't do it.  Your best bet with current technology is a 3CCD sensor, with prisms separating the light into red, green and blue and separate sensors for each.

If you mean "more, pure" - i.e., an expanded colour space - you can't get away from the question of how you want your human observers to perceive the image.  The electromagnetic spectrum is continuous: there is no such thing as "colour" in the absence of a human observer.  The only possible understanding of phrases like "accurate colour" or "better colour" is "human observers perceive the colour of the photograph as being the same as when they looked at the real thing" or "people like it better".  Then, you can't get away from the fact that there is such a thing as the "least noticeable difference": differences smaller than that are not perceived, and the least noticeable difference, in many parameters, depends on the background level (a given increase in light intensity may be obvious when the light is low but undetectable when the light is bright).  You cannot ask whether a new and/or improved sensor will produce a perceptible change in the print without defining which aspects of the print - brightness, colour saturation, etc. - you are interested in and what the current levels of those parameters are.

If you mean more saturated colours or purer hues, the situation is even more complicated, because, as well as the least-noticeable-difference issue, the tristimulus colour system assumes that the three stimuli - hue, luminance and chromaticity - do not interact, and they do.  Brighter colours are perceived as more saturated and of purer hue, and more saturated colours are perceived as brighter.

The upshot is that there is no such thing as truth in colour.  That is why you also can't get away from the aesthetic choice: it is not a matter of lying or telling the truth, it is a matter of which lie.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 28, 2017, 15:15:51
But what do you mean by "more RGB pure colours"? 

I mean what I wrote: the four-shot technique of Red, Green, Blue (with two Greens), compared to the Bayer-interpolated result, nothing more and nothing less. What is hard to understand about that?
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Frank Fremerey on October 28, 2017, 16:13:46
I mean what I wrote: the four-shot technique of Red, Green, Blue (with two Greens), compared to the Bayer-interpolated result, nothing more and nothing less. What is hard to understand about that?
Look at the video I linked to understand that you need a 12.5 MP sensor with huge 8-micron pixels and a 16-shot mode with full native color information for every pixel.

The point is also to work on the lighting setup to get better color response and contrast. You are already at the high end, so better lighting will cost you.

Another question is which company will deliver such a camera, better than the superb D850. Look at my natural-light shots from the botanical gardens, with no stacking...
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 28, 2017, 16:26:13
Look at the video I linked to understand that you need a 12.5 MP sensor with huge 8-micron pixels and a 16-shot mode with full native color information for every pixel.

The point is also to work on the lighting setup to get better color response and contrast. You are already at the high end, so better lighting will cost you.

Another question is which company will deliver such a camera, better than the superb D850. Look at my natural-light shots from the botanical gardens, with no stacking...

What video is that, please? And I am talking here about the pixels in the D850 and shifting them, not some other sensor.

I'm referring to pixel-shift stacking, not to no stacking at all. Let's talk about that, please.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on October 28, 2017, 16:49:46
I mean what I wrote, the four-shot technique of Red, Green, Blue (withtwo Greens) compared to the Bayer interpolated, nothing more and nothing less. What is hard to understand about that?

What is hard to understand is what avoiding interpolation - if, indeed, the pixel-shift procedure does - has to do with "more RGB pure" colours ("more" or "pure" in what possible sense?).  I realise Sony has talked about "unprecedented colour accuracy", but that is BS for psychophysical reasons.

A few years ago Sony had another sensor that avoided interpolation, which they called Active Pixel Color Sampling (APCS).  The colour filters were on tiny wheels and could be moved so that each photosite successively captured blue, red and green images.  Apart from the obvious problem with exposure duration, those filters were no more perfectly wavelength-selective than the current ones are, so there was nothing pure about the channels then, and there will be nothing pure about the channels with pixel shift.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Bruno Schroder on October 28, 2017, 16:55:19
I guess it is hard to be understood.

Obviously, but I remember how you said, several years ago, that you had to learn patience. This is a test :)

I think you are right to expect more resolution and better colours. From a pure data-acquisition standpoint, replacing an interpolation with a direct measurement will, by definition, provide a better result, except in the practically infeasible case where the result of the interpolation is equal to the measured value.

Your style is the perfect use case for this technique.

Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 28, 2017, 16:58:19
Obviously, but I remember how you said, several years ago, that you had to learn patience. This is a test :)

I think you are right to expect more resolution and better colours. From a pure data-acquisition standpoint, replacing an interpolation with a direct measurement will, by definition, provide a better result, except in the practically infeasible case where the result of the interpolation is equal to the measured value.

Your style is the perfect use case for this technique.

At last, a response I can understand and that addresses my question. Perhaps I used the wrong terms, but if you go and read about pixel-shifting, they show red, blue, and two-green results that are more "pure" (use your own words) than the familiar Bayer interpolation schemes, which are just that: interpolations. That's it.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Frank Fremerey on October 28, 2017, 19:21:48
What video is that, please? And I am talking here about the pixels in the D850 and shifting them, not some other sensor.

I'm referring to pixel-shift stacking, not to no stacking at all. Let's talk about that, please.

the video I linked in answer #7
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 29, 2017, 12:27:35
the video I linked in answer #7

OK, thanks. So with the Sony pixel shift we are getting double the resolution of the sensor without enlarging the sensor, correct? And the colors are actual, or "true," and not hypothetical. Is that correct?
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on October 29, 2017, 12:52:30
From a pure data-acquisition standpoint, replacing an interpolation with a direct measurement will, by definition, provide a better result, except in the practically infeasible case where the result of the interpolation is equal to the measured value.

There is another case where it won't make any difference: where there isn't any data. 

The idea that the world has "real" colours, which you can capture data about, which can be more or less accurate, is just wrong.  The colour we see does not exist outside our heads: it is purely subjective.  There is no such thing as objectively "better colour" or "more accurate colour", only subjectively better or more accurate colour.   

Of course, although there is no "colour" in the real world there is light of different wavelengths, and the wavelengths are data. But a silicon-based detector can't measure wavelength: we have no access to that data.  We can use an RGB system and, by experiment, develop an algorithm that reproduces the effect of light of a particular wavelength on the RGB system in our retina.  The catch is that the colour filters do not transmit light of only one colour.  The filter means that the quantum efficiency of a pixel with a (say) green filter is less for red and blue light than for green light, but it is not zero.  The same number of captured photons could result from pure green light, or from less intense green light plus some blue and/or red light.  You have to find a way of working out, or guessing, what it probably was.

If you had a measured RGB value for every pixel you could calculate it directly, since you know the spectral transmission of each filter.  That would get rid of interpolation.  That is not what you get with pixel shift. With pixel shift you get a value from the pixel's neighbour with a different-coloured filter when it looked (you hope) at the same part of the image.  You are still interpolating: assigning the pixel a value for the colours it did not measure based on its neighbours, which did measure those colours.  That is no different from the Bayer mosaic for detail bigger than four pixels, so for detail on that scale pixel shift won't change the RGB values.  Detail smaller than two pixels is not resolved in either case.  So there is a window where you might get some benefit, if the sensor shift is precise enough, and you have a really good tripod and a lens with no lateral colour.

It is reasonable to ask whether you get different answers by looking at a pixel's neighbours compared to pixel shift.  We will have to wait and see, although you certainly won't where the detail is either larger than eight pixels or smaller than two pixels. 
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Frank Fremerey on October 29, 2017, 12:54:33
A 12-megapixel sensor with a 16-shot multishot will deliver 48 megapixels, and the collected photon events will be 16 times those of a single shot. The color fidelity should be significantly higher, though.

BUT

this is currently a theoretical piece of hardware. Quality is also dependent on implementation details.

An A7S2 with a 16-shot pixel-shift multishot would be your dream come true... Write a letter to Sony. It would be great for all table-top shooters!
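The arithmetic behind the 16-shot numbers above can be checked in a few lines. This is back-of-envelope only, and it assumes the two things the thread takes for granted: that half-pixel shifts double the linear resolution (so 4x the pixel count), and that shot-noise SNR grows with the square root of the total photons collected.

```python
import math

base_mpx = 12.0   # native sensor resolution in MP (the hypothetical A7S2-class sensor)
shots = 16        # 4 full-pixel shifts for colour x 4 half-pixel positions

output_mpx = base_mpx * 4           # 2x linear resolution = 4x the pixel count
photon_gain = shots                 # 16 exposures accumulate 16x the photon events
snr_gain = math.sqrt(photon_gain)   # shot-noise SNR scales as sqrt(photons)

print(output_mpx)   # 48.0 -- the "48 megapixels" in the post
print(snr_gain)     # 4.0
```

The same arithmetic read backwards gives Frank's earlier "50/4" remark: to end up at roughly 50 MP you only need a 50/4 = 12.5 MP sensor of very large pixels.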
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 29, 2017, 13:20:38
I can follow what Les Olson points out, that ultimately it is subjective and in our minds. However....

Having actually used pixel-shifting a lot, in both the Pentax K3 and K1, I say that seeing is believing. You can say what you think, but I have SEEN the difference between Bayer interpolation and pixel-shift, over and over.

So, we can talk until the cows come home, or we can agree that they are two different takes on, or kinds of, interpolation, but to me this just muddies the waters. And I did not start this thread to litigate the difference out of the discussion.

The pixel-shifted colors, however you spell it, are IMO (which is all I have) quite a bit better than the standard Bayer interpolation scheme we are used to. My original question had to do with the value of going after ever-larger sensors as opposed to getting better color and resolution out of existing-size sensors, as in the A7R3. And pixel-shifting, to my eyes, seems to offer this.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Ilkka Nissilä on October 29, 2017, 15:47:48
How does use of flash work with pixel-shift technology? What about the effects of flickering artificial lights (100/120Hz)?
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 29, 2017, 15:58:48
How does use of flash work with pixel-shift technology? What about the effects of flickering artificial lights (100/120Hz)?

Any change of light is not good. I can't remember exactly, but I thought I saw somewhere that there is a box you can check to equalize the light, though I am not sure for which camera.

It fits my regime, but when I used the Pentax cameras outside, light was a real problem. It is a first step.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Frank Fremerey on October 29, 2017, 16:35:06
Natural light on an overcast day, or on a sunny day behind a huge diffuser shade, gives the best light for your kind of work. Artist's light, northern light.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 29, 2017, 16:41:03
Natural light on an overcast day, or on a sunny day behind a huge diffuser shade, gives the best light for your kind of work. Artist's light, northern light.

It is the variability of the light that is the problem with pixel-shifting, not the kind.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Bruno Schroder on October 29, 2017, 17:26:26
Think data acquisition. If you change a parameter during sampling, your data loses coherency and cannot be correctly aggregated.

Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Frank Fremerey on October 29, 2017, 19:09:53
It is the variability of the light that is the problem with pixel-shifting, not the kind.

The variability of the light is smallest with the lighting I named.

With Kino Flo-style lighting the color changes over time.

With professional strobes you would need 16 flash cycles for one picture?!
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on October 30, 2017, 10:08:55
The pixel-shifted colors, however you spell it, are IMO (which is all I have) quite a bit better than the standard Bayer interpolation scheme we are used to. My original question had to do with the value of going after ever-larger sensors as opposed to getting better color and resolution out of existing-size sensors, as in the A7R3. And pixel-shifting, to my eyes, seems to offer this.

The fact that colour is all in your head does not mean it is not real, to paraphrase Professor Dumbledore.  It just means that when you say that the colours you get from a camera with pixel shift are "better", you are responding to something about the colours.  The colours you get from a camera are no more a "true" representation of the real world than the colours you see.  The light reflected from a flower is not made up of RGB, so the colour the camera "sees" is all in its head - its image processing engine - in exactly the same way as the colour you see is all in your head. It is just not true that the colours a pixel shift camera sees are purer in the sense of being less a result of the choices made by the people who designed the image processing engine.

The colours in the camera (or a monitor or a printer) are made up of saturation, luminance and hue.  The hue is the RGB values.  That is all there is.  So when you say colours are "better" it can only be because one (or more) of those things - saturation, luminance or hue, is very slightly different.  The saturation and the luminance and the RGB values can be anything you like, so any camera can reproduce, perfectly, the output of any other camera.  So when you say your choice is between larger sensors and pixel-shift you are missing a third option: adjust the output of your current camera to match what it is you like about the colours you get from pixel shift cameras.

Of course, it may be less work just to buy the camera whose output you prefer, but what happens if you don't like the same colours for every type of scene, or if a client's taste differs from yours?  You still need to know which of the colour parameters is driving the preference so you can adjust the output to match it.  You need to work out whether you are David Attenborough or Robert Mapplethorpe, artistically speaking, because colour tastes are driven by that choice.

The same applies to resolution. You may want more resolution for the same reason as astronomers and microscopists want more resolution, but if you want more resolution to give the impression of greater sharpness you are going down the same rabbit hole as with colour accuracy. "Sharpness" is like "better colour": it is all in your head.  However, just as the perception of better colour has a basis in particular values of luminance and saturation and so on, perceived sharpness has a basis in measurable image parameters: resolution, but also acutance, contrast and colour (red areas, for example, are perceived as less sharp).  To get the result you want you have to know what is driving perceived sharpness in that image, and if you do you can get the result you want with any camera.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 30, 2017, 10:29:49
The fact that colour is all in your head does not mean it is not real, to paraphrase Professor Dumbledore.  It just means that when you say that the colours you get from a camera with pixel shift are "better", you are responding to something about the colours.  The colours you get from a camera are no more a "true" representation of the real world than the colours you see.  The light reflected from a flower is not made up of RGB, so the colour the camera "sees" is all in its head - its image processing engine - in exactly the same way as the colour you see is all in your head. It is just not true that the colours a pixel shift camera sees are purer in the sense of being less a result of the choices made by the people who designed the image processing engine.

The colours in the camera (or a monitor or a printer) are made up of saturation, luminance and hue.  The hue is the RGB values.  That is all there is.  So when you say colours are "better" it can only be because one (or more) of those things - saturation, luminance or hue, is very slightly different.  The saturation and the luminance and the RGB values can be anything you like, so any camera can reproduce, perfectly, the output of any other camera.  So when you say your choice is between larger sensors and pixel-shift you are missing a third option: adjust the output of your current camera to match what it is you like about the colours you get from pixel shift cameras.

Of course, it may be less work just to buy the camera whose output you prefer, but what happens if you don't like the same colours for every type of scene, or if a client's taste differs from yours?  You still need to know which of the colour parameters is driving the preference so you can adjust the output to match it.  You need to work out whether you are David Attenborough or Robert Mapplethorpe, artistically speaking, because colour tastes are driven by that choice.

The same applies to resolution. You may want more resolution for the same reason as astronomers and microscopists want more resolution, but if you want more resolution to give the impression of greater sharpness you are going down the same rabbit hole as with colour accuracy. "Sharpness" is like "better colour" - it is all in your head.  However, just as the perception of better colour has a basis in particular values of luminance and saturation and so on, perceived sharpness has a basis in measurable image parameters: resolution, but also acutance, contrast and colour (red areas are perceived as less sharp, for example).  To get the result you want you have to know what is driving perceived sharpness in that image, and if you do you can get the result you want with any camera.

I'm not going to repeat myself. Thanks for the explanation, but I get nothing from it but a lot of words. Sorry. You are telling me stuff I already know, but still not saying anything I feel is useful. That's just me. I have been singing the song that sharpness is dependent on color for a decade at least. Perhaps there are others out there who can engage with what I wrote and communicate with me better. You want me to hear you, but I feel you never heard what I said; you just used my question as a way to go on and on.

I am having the same discussion on LuLa, but with reasonable answers.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: BW on October 30, 2017, 11:19:18
Have you tried shouting your questions into the woods? Seems like that's your only remaining option.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Ilkka Nissilä on October 30, 2017, 13:30:28
If there are fine details which vary in colour at high spatial frequencies in the subject then the Bayer method in itself may not reproduce the color variations at the pixel level as accurately as the pixel-shift technique does.

Also simply white text on black background shows how a Bayer sensor with no AA filter creates a lot of artifacts which pixel shift seems to be able to eliminate (assuming conditions are static enough). Here is a section of a screen grab from dpreview's studio comparison.
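Ilkka's worst case can be simulated directly. In this minimal numpy sketch (hypothetical RGGB layout, static scene and perfect one-pixel shifts assumed), a subject whose colour flips every pixel is captured once through a Bayer mosaic with a deliberately crude demosaic, and once with four shifted exposures; the shifted capture reconstructs the scene exactly, while the single capture produces false colour:

```python
import numpy as np

# Worst-case subject: colour flips every pixel (red/blue vertical stripes).
H, W = 8, 8
scene = np.zeros((H, W, 3))
scene[:, 0::2, 0] = 1.0   # even columns pure red
scene[:, 1::2, 2] = 1.0   # odd columns pure blue

# RGGB colour-filter array: which channel each photosite measures.
cfa = np.empty((H, W), dtype=int)
cfa[0::2, 0::2] = 0  # R
cfa[0::2, 1::2] = 1  # G
cfa[1::2, 0::2] = 1  # G
cfa[1::2, 1::2] = 2  # B

def expose(dy, dx):
    """One exposure with the CFA shifted by whole pixels over a static scene."""
    c = np.roll(np.roll(cfa, dy, axis=0), dx, axis=1)
    values = np.take_along_axis(scene, c[..., None], axis=2)[..., 0]
    return values, c

# --- Pixel shift: four exposures give a real R, G and B sample at every pixel.
shifted = np.zeros_like(scene)
for dy, dx in [(0, 0), (0, 1), (1, 1), (1, 0)]:
    values, c = expose(dy, dx)
    np.put_along_axis(shifted, c[..., None], values[..., None], axis=2)

# --- Single Bayer shot + deliberately crude demosaic: each missing channel is
#     borrowed from the pixel one column to the left (real demosaicers are far
#     smarter, but they still have to guess the unmeasured channels).
values, c = expose(0, 0)
bayer = np.zeros_like(scene)
np.put_along_axis(bayer, c[..., None], values[..., None], axis=2)
for ch in range(3):
    measured = (c == ch)
    bayer[..., ch] = np.where(measured, bayer[..., ch],
                              np.roll(bayer[..., ch], 1, axis=1))

print("pixel-shift RMS error:", np.sqrt(np.mean((shifted - scene) ** 2)))
print("single-shot RMS error:", np.sqrt(np.mean((bayer - scene) ** 2)))
```

The pixel-shifted reconstruction is exact; the single-shot one shows exactly the kind of false colour visible in the white-on-black text crops.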
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 30, 2017, 13:41:58
If there are fine details which vary in colour at high spatial frequencies in the subject then the Bayer method in itself may not reproduce the color variations at the pixel level as accurately as the pixel-shift technique does.

Also simply white text on black background shows how a Bayer sensor with no AA filter creates a lot of artifacts which pixel shift seems to be able to eliminate (assuming conditions are static enough). Here is a section of a screen grab from dpreview's studio comparison.

Thank you! This, then, is what I thought we would be talking about by now. My own eyes proved to me that the pixel-shifting in the Pentax K3 and K1 was a step finer than I could achieve with my Nikon D810. I could see the difference, but the implementation of other Pentax features (like the convenience of non-native lenses) was just not there, and many of the Pentax native lenses were, well, not so good IMO. I even went to great lengths to find lenses that I could use on the K1, including the Cosina Voigtlander 90mm APO Macro and even a rare copy of the Voigtlander 125mm APO-Lanthar in Pentax mount, and others. It was easy to see that there was a future in pixel-shifting. I just needed to wait for a better implementation of it. The Sony A7R3 may be it; if not, I will return it and sell my Voigtlander 65mm Macro in E-mount.

I also have a special rig for the Cambo Actus that will let me use the A7R3 with some of my cherished lenses like the APO El Nikkor 105mm, and many others.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: simsurace on October 30, 2017, 14:59:06
If there are fine details which vary in colour at high spatial frequencies in the subject then the Bayer method in itself may not reproduce the color variations at the pixel level as accurately as the pixel-shift technique does.

Also simply white text on black background shows how a Bayer sensor with no AA filter creates a lot of artifacts which pixel shift seems to be able to eliminate (assuming conditions are static enough). Here is a section of a screen grab from dpreview's studio comparison.

Thanks Ilkka, this is a pretty impressive demonstration!
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on October 30, 2017, 20:47:37
If there are fine details which vary in colour at high spatial frequencies in the subject then the Bayer method in itself may not reproduce the color variations at the pixel level as accurately as the pixel-shift technique does.

I am done trying to explain this, so I will just ask a question: we measure resolution of high spatial frequencies by MTF, so why can't you show us the MTFs for pixel shift vs no pixel shift for a coloured target - instead of black and white lines, red/green, red/blue, green/blue?
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 30, 2017, 21:42:18
I am done trying to explain this, so I will just ask a question: we measure resolution of high spatial frequencies by MTF, so why can't you show us the MTFs for pixel shift vs no pixel shift for a coloured target - instead of black and white lines, red/green, red/blue, green/blue?

I'm done responding to you too, and to your telling folks what they have to do. My suggestion is that you actually try pixel-shift and compare it to other DSLR quality images and thus see for yourself. That's what this thread was supposed to be about. Then, I would like to hear what you have found.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: simsurace on October 31, 2017, 13:20:34
I am done trying to explain this, so I will just ask a question: we measure resolution of high spatial frequencies by MTF, so why can't you show us the MTFs for pixel shift vs no pixel shift for a coloured target - instead of black and white lines, red/green, red/blue, green/blue?
Why are you asking for this?
What is your prediction?
Does it change the interpretation of the obvious occurrence of massive artifacts in the Bayer shot that are absent from the pixel-shifted shot?
Are you familiar with Nyquist's theorem and its relationship with Bayer sensors?
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on October 31, 2017, 15:39:57
Why are you asking for this?
What is your prediction?
Does it change the interpretation of the obvious occurrence of massive artifacts in the Bayer shot that are absent from the pixel-shifted shot?
Are you familiar with Nyquist's theorem and its relationship with Bayer sensors?

The claim is that pixel shift will give better resolution of fine coloured detail. Isn't an MTF curve or MTF50 for a coloured target the obvious way to show that?   
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Ilkka Nissilä on October 31, 2017, 16:54:18
Measuring line pairs at different distances is not a good way to characterize the 2D imaging performance of a system when the real-world objects to be delineated are not lines but 2D random patterns of small variations in colour.

I think the best way to judge the accuracy of fine detail is to drop high contrast dots at random locations of the test target, photograph it and take a sum-of-squared-errors between RGB values of the target and the image and then use that as a measure of the quality of the imaging. Distortion would be heavily punished in this metric though so perhaps that's not what people want to use (if the spatial accuracy of the image is not a priority).
However, distortion can be measured and corrected prior to testing.

Perhaps a metric of false color can be devised that is less sensitive to distortion than MSE.

Line pairs of homogeneous colour allow interpolation to patch gaps in the matrix. There should be some contrast loss though (which reduces MTF values). I think coloured resolution targets are not commonly used because it is harder to make high-contrast high-resolution targets in colour. However, in my opinion a different metric should be used for the accuracy of colour of pixel-level detail. I'll think about it.
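One possible shape for the metric Ilkka is describing, as a hedged sketch: compare block-averaged chromaticities instead of per-pixel RGB, so that sub-block geometric distortion is mostly forgiven while colour errors are not. The function name and block size here are my own invention, not any established standard:

```python
import numpy as np

def false_colour_score(target, image, block=4):
    """Mean squared chromaticity error, pooled over block x block tiles.

    Averaging chromaticity over small tiles makes the score tolerant of
    small geometric distortion, while false colour still gets punished.
    Purely illustrative.
    """
    def chroma(img):
        s = img.sum(axis=2, keepdims=True) + 1e-9
        c = img / s                         # per-pixel chromaticity
        h, w, _ = c.shape
        c = c[:h - h % block, :w - w % block]
        return c.reshape(h // block, block, w // block, block, 3).mean(axis=(1, 3))
    return float(np.mean((chroma(target) - chroma(image)) ** 2))
```

A one-pixel shift of a fine stripe pattern barely moves the score, while swapping two colour channels moves it a lot - which is the intended behaviour.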
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Ilkka Nissilä on October 31, 2017, 17:25:05
False color is a kind of cross-talk between the colors. A system with false color isn't properly characterized by images taken of targets of a specific color, since the result is garbling of the channel not being measured as well. I guess the whole k-space analysis fails because the RGB MTF changes if the target is shifted by one pixel. It's just not the right tool for characterizing this kind of imaging artefact.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: simsurace on October 31, 2017, 18:01:57
The claim is that pixel shift will give better resolution of fine coloured detail. Isn't an MTF curve or MTF50 for a coloured target the obvious way to show that?
We are talking about detail at frequencies very close to the Nyquist limit. MTF is probably less than 50% at that frequency.
Anyway, with differences so obvious to the naked eye, what more do you expect to learn from an MTF curve?
The Bayer color artifacts are not nice, but currently there is a steep price to pay for avoiding them (either light loss for stacked sensors, or the steadiness required for the pixel-shift feature).
If there is a lot of light and the pixel shift is fast enough, maybe you can capture the four images so quickly that the problem of movement is minimized...
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Bruno Schroder on October 31, 2017, 19:22:29
The claim is that pixel shift will give better resolution of fine coloured detail. Isn't an MTF curve or MTF50 for a coloured target the obvious way to show that?
Does this mean that you do not agree that a direct measure is better than interpolation?
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 31, 2017, 19:30:05
Does this mean that you do not agree that a direct measure is better than interpolation?

My point exactly, but Les wants to put them on the same level. I don't get it.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Bruno Schroder on October 31, 2017, 19:49:04
Not exactly about the same topic but the theory is applicable here as well: https://www.lensrentals.com/blog/2017/10/the-8k-conundrum-when-bad-lenses-mount-good-sensors/

From the conclusion: For an exceptionally good lens stopped down a bit, then the system MTF is almost entirely dependent on the camera. Raise the resolution of the camera and the image improves dramatically.

Michael uses only the finest lenses available. He not only works with sharp lenses, but also with the best APO-corrected ones. In his case, and his style of photography, any increase in sensor resolution will have a positive impact on the system MTF (lens + sensor).

If you have not looked at his work, he photographs static subjects in the studio under static lighting. Movement and duration of the process have no impact here.

Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on October 31, 2017, 20:01:29
Not exactly about the same topic but the theory is applicable here as well: https://www.lensrentals.com/blog/2017/10/the-8k-conundrum-when-bad-lenses-mount-good-sensors/

From the conclusion: For an exceptionally good lens stopped down a bit, then the system MTF is almost entirely dependent on the camera. Raise the resolution of the camera and the image improves dramatically.

Michael uses only the finest lenses available. He not only works with sharp lenses, but also with the best APO-corrected ones. In his case, and his style of photography, any increase in sensor resolution will have a positive impact on the system MTF (lens + sensor).

If you have not looked at his work, he photographs static subjects in the studio under static lighting. Movement and duration of the process have no impact here.

Here is an example of what I do; many are taken outside in summer, but inside in winter, because we are in northern Michigan.

Taken with the Nikon D850 and the APO El Nikkor 105mm lens.

Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on November 01, 2017, 08:57:36
Does this mean that you do not agree that a direct measure is better than interpolation?

But pixel shift is not a direct measure: it does not measure RGB values for each pixel. It is, just like the Bayer array, assigning each pixel the values for RGB it did not measure based on RGB values in its neighbours.

It is not true that a direct measurement is always better than an interpolation: an interpolation between accurate measurements can be much more accurate than an inaccurate direct measurement.  In the case of pixel shift the question is how accurate the shift is.  There must be both systematic and random error in how far the sensor moves.  How big is it? There is no variation in the distance between pixels in the Bayer array, so it is perfectly possible that pixel shift will be worse than Bayer interpolation.  (Plus, of course, since we are talking about microns, it really is not good enough to say "Oh, Michael's in the studio so he doesn't have to worry about subject movement").
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on November 01, 2017, 09:24:35
But pixel shift is not a direct measure: it does not measure RGB values for each pixel. It is, just like the Bayer array, assigning each pixel the values for RGB it did not measure based on RGB values in its neighbours.

It is not true that a direct measurement is always better than an interpolation: an interpolation between accurate measurements can be much more accurate than an inaccurate direct measurement.  In the case of pixel shift the question is how accurate the shift is.  There must be both systematic and random error in how far the sensor moves.  How big is it? There is no variation in the distance between pixels in the Bayer array, so it is perfectly possible that pixel shift will be worse than Bayer interpolation.  (Plus, of course, since we are talking about microns, it really is not good enough to say "Oh, Michael's in the studio so he doesn't have to worry about subject movement").

Les Olson: Have you ever actually used pixel-shift? If so, which camera and on what?
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: simsurace on November 01, 2017, 09:38:27
But pixel shift is not a direct measure: it does not measure RGB values for each pixel. It is, just like the Bayer array, assigning each pixel the values for RGB it did not measure based on RGB values in its neighbours.
My understanding is that it assigns each pixel RGGB values. One of them comes from the unshifted position, the other three from shifts in a U-shaped pattern. It is the same as if you left the photosites stationary and moved the color filter array on top of them. You would make four measurements, one in each channel.
I'm assuming that stationarity of the camera and subject is given and that the sensor movement error is significantly smaller than a pixel.
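How sensitive the result is to that assumption can be sketched with a 1-D toy model: a made-up sinusoidal scene, a nominal one-pixel shift with random mechanical error, and software that re-aligns assuming the shift was exact. The error magnitudes are illustrative only, not a claim about any real camera's actuator:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(64, dtype=float)

def scene(p):
    """Fine detail: a sinusoid with a 3-pixel period."""
    return np.sin(2 * np.pi * p / 3.0)

def mean_alignment_error(shift_error_sd, trials=200):
    """Mean RMS error when software assumes an exact 1 px shift that wasn't."""
    errs = []
    for _ in range(trials):
        actual = 1.0 + rng.normal(0.0, shift_error_sd)  # mechanical error
        measured = scene(x + actual)      # what the shifted exposure records
        assumed = scene(x + 1.0)          # what re-alignment assumes it was
        errs.append(np.sqrt(np.mean((measured - assumed) ** 2)))
    return float(np.mean(errs))

for sd in [0.0, 0.02, 0.1]:
    print(f"shift error sd = {sd:4.2f} px -> mean RMS error {mean_alignment_error(sd):.3f}")
```

With 3-pixel-period detail, even a few hundredths of a pixel of shift error shows up in the numbers, which is presumably why the manufacturers specify static subjects and sturdy support.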

It is not true that a direct measurement is always better than an interpolation: an interpolation between accurate measurements can be much more accurate than an inaccurate direct measurement.  In the case of pixel shift the question is how accurate the shift is.  There must be both systematic and random error in how far the sensor moves.  How big is it? There is no variation in the distance between pixels in the Bayer array, so it is perfectly possible that pixel shift will be worse than Bayer interpolation.  (Plus, of course, since we are talking about microns, it really is not good enough to say "Oh, Michael's in the studio so he doesn't have to worry about subject movement").

Your argument would certainly have merit a priori (i.e. before having seen any images from pixel shifted sensors), and these are certainly among the challenges that the engineers implementing a pixel shift feature are/were facing.
I would assume that in Michael's studio it is possible to achieve the same degree of stability as in dpreview's studio.
I do not know whether this fully accounts for what Michael is seeing, as there might be other differences between the cameras and their processing pipeline that contribute to the perception that color rendition has improved.
Nevertheless, my guess is that artifacts like those seen in Ilkka's example are especially problematic for stacking, because they make it more difficult for the software to match features to a pixel-level precision. So while the artifacts themselves may be somewhat smeared out and hidden by the stacking process, the resulting contrast may also be lower than in the case where very few artifacts are present and very precise matching is possible. This could have a perceptible effect on sharpness, but to be really sure one would need to do quite elaborate testing.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on November 01, 2017, 09:48:03
Raise the resolution of the camera and the image improves dramatically.

Michael uses only the finest lenses available. He not only works with sharp lenses, but also with the best APO corrected. In his case, and his style of photography, any increase in sensor resolution will have a positive impact on the system MTF (lens + sensor).

Actually, the conclusion was "system MTF changes significantly with different sensors, but at higher resolutions a diminishing return is seen." [Emphasis added]

The perceived sharpness of a print is heavily influenced by detail at about 10 lp/mm.  For an FX sensor, ignoring output resolution, and to make it easy to read the graphs, say 75 lp/mm at the sensor.  So, if you want to know how your prints will look, MTF at 75 lp/mm is a better metric than extinction MTF.  On those graphs, 6K - roughly 24MP - gives MTF at 75 lp/mm = 0.65 and 8K - roughly 40MP - gives just under 0.7, and the lens only value, which you cannot ever quite reach, is 0.75.

If you prefer, you can look at extinction resolution, conventionally defined as MTF50.  In that case, going from 24MP to 40MP takes you from 100 lp/mm to 110 lp/mm.  On a back-of-the envelope calculation, you need well over 100MP to get close to the lens only MTF50.

Of course, those are contrast MTF curves, and you might want to think about what MTF50 means in terms of colour resolution.
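The diminishing-returns shape falls out of the fact that system MTF is (approximately) the product of lens and sensor MTF. A rough sketch with invented curves - the lens response and pixel-aperture model below are illustrative stand-ins, not the article's measured data:

```python
import numpy as np

def lens_mtf(f):
    # invented smooth lens response (f in lp/mm)
    return np.exp(-(f / 260.0) ** 1.5)

def sensor_mtf(f, pitch_mm):
    # pixel-aperture response; np.sinc(x) = sin(pi*x)/(pi*x)
    return np.abs(np.sinc(f * pitch_mm))

f = 75.0  # lp/mm, the print-relevant frequency discussed above
print(f"lens only: {lens_mtf(f):.3f}")
for mp in (24, 40, 100):
    px_wide = np.sqrt(mp * 1e6 * 3 / 2)   # 3:2 FX frame, 36 mm wide
    pitch = 36.0 / px_wide                # pixel pitch in mm
    print(f"{mp:3d} MP: system MTF at 75 lp/mm = {lens_mtf(f) * sensor_mtf(f, pitch):.3f}")
```

Each added megapixel buys less than the last, and the lens-only value is an asymptote you never reach.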
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on November 01, 2017, 11:32:49
My understanding is that it assigns each pixel RGGB values. One of them comes from the unshifted position, the other three from shifts in a U-shaped pattern. It is the same as if you would leave the photosites stationary and move the color filter array on top of that. You would make four measurements, one in each channel.


No, it's not the same.  When you move the micro-lenses - Sony's Active Pixel Colour Sensor - the pixel sees the same bit of the scene each time, but through a different coloured micro-lens.  In pixel shift the pixel sees a different bit of the scene each time through the same coloured micro-lens.  Then in software you "shift" the pixels back so they all line up.  In the APCS small inaccuracies don't matter, but in pixel shift, they do. Plus, it may not be the case that the software manipulation of pixel "position" is free, informationally.

Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on November 01, 2017, 13:22:19

So while the artifacts themselves may be somewhat smeared out and hidden by the stacking process, the resulting contrast may also be lower than in the case where very few artifacts are present and very precise matching is possible. This could have a perceptible effect on sharpness, but to be really sure one would need to do quite elaborate testing.

Or one could just move the little square around the DPR test shot, since why anyone would think white-on-black text was the way to test for resolution of fine colour detail is beyond me.  Just to the left of the B&W text is a tiny embroidery of the Beatles.  There is a lot of very fine detail with abrupt colour transitions. There may be a tiny bit more detail in the K1 images with pixel shift on compared to off, but the D850 is better than both (look especially at the trousers).  If you go up to the reels of thread immediately above, where there is detail within larger areas of the same colour, you see the same thing: pixel shift may improve the K1 images a tiny bit, but the D850 is clearly better.

As for the artifacts, comparing the white on black text with the black on white and the lower contrast versions is illuminating.

Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Frank Fremerey on November 01, 2017, 13:30:37
But pixel shift is not a direct measure: it does not measure RGB values for each pixel. It is, just like the Bayer array, assigning each pixel the values for RGB it did not measure based on RGB values in its neighbours.

It is not true that a direct measurement is always better than an interpolation: an interpolation between accurate measurements can be much more accurate than an inaccurate direct measurement.  In the case of pixel shift the question question is how accurate the shift is.  There must be both systematic and random error in how far the sensor moves.  How big is it? There is no variation in the distance between pixels in the Bayer array, so it is perfectly possible that pixel shift will be worse than Bayer interpolation.  (Plus, of course, since we are talking about microns, it really is not good enough to say "Oh, Michael's in the studio so he doesn't have to worry about subject movement").

For color accuracy, massive oversampling of static subjects means deepening the FWC of virtual pixels, which get a much more direct measurement of the actual color at the position of the shift.

Color accuracy is what Michael is longing for, and a 16-shift would mean that the final picture is made from 16 times as many photon events as a single shot.

In my imagination that means we have a 16-fold full-well capacity per final pixel and a much higher proportion of direct measurement, meaning a real 16-bit measurement.
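The photon-counting part of this is easy to check statistically. A sketch with pure Poisson shot noise (read noise and the actual in-camera combining method are ignored; the numbers here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
mean_photons = 1000.0   # per pixel, per exposure (illustrative)
n_pixels = 100_000

single = rng.poisson(mean_photons, n_pixels).astype(float)
stack16 = rng.poisson(mean_photons, (16, n_pixels)).mean(axis=0)

print("single shot relative noise:", single.std() / single.mean())
print("16-shot stack relative noise:", stack16.std() / stack16.mean())
```

Sixteen times the photon events cuts shot noise by sqrt(16) = 4, i.e. about two extra bits of genuinely measured signal per pixel - a real improvement, though not sixteen bits' worth.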

Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on November 01, 2017, 13:42:46
The Nikon D850 does not offer pixel-shift, so I fail to understand why it is being referenced. The new Sony A7R3 has pixel-shift and early reports seem to agree that the new A7R3 has greater DR than the older A7R2. If so, then pixel-shift on the A7R3 may well be very useful for more accurate color (to my eyes) than with Bayer interpolation. I'm not a betting person, but I would bet that the A7R3 may make a greater splash in this regard than we expect, given the right lenses.

I sold my A7R2 (and a host of batteries) the day the A7R3 was announced with pixel-shift. I hope Sony does not mess it up; I have used Sony video cameras for decades and trust them a lot.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Frank Fremerey on November 01, 2017, 15:22:30
The Nikon D850 does not offer pixel-shift, so I fail to understand why it is being referenced. The new Sony A7R3 has pixel-shift and early reports seem to agree that the new A7R3 has greater DR than the older A7R2. If so, then pixel-shift on the A7R3 may well be very useful for more accurate color (to my eyes) than with Bayer interpolation. I'm not a betting person, but I would bet that the A7R3 may make a greater splash in this regard than we expect, given the right lenses.

I sold my A7R2 (and a host of batteries) the day the A7R3 was announced with pixel-shift. I hope Sony does not mess it up; I have used Sony video cameras for decades and trust them a lot.

The A7R3 has a 4-way pixel shift, so the final image is made from four times as many photon events as a single shot with the same sensor.

Is the shifting done with the aim of measuring all three basic colors (green twice) at the accurate position for non-static subjects (a 1-pixel shift)?
Or is it done mainly to increase spatial resolution (a 1/2-pixel shift)?
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on November 01, 2017, 15:37:42
For color accuracy, massive oversampling of static subjects means deepening the FWC of virtual pixels, which get a much more direct measurement of the actual color at the position of the shift.

Color accuracy is what Michael is longing for, and a 16-shift would mean that the final picture is made from 16 times as many photon events as a single shot.

Anyone who cared could just go back to the DPR test shot and download the RAW file for the K1 with and without pixel shift, process identically, zoom in on the color-checker card and get the RGB values for each colour area with and without pixel shift. 

"Colour accuracy" is still meaningless, but the RGB values must be different - otherwise the pixel shift is doing nothing to colour at all. 
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on November 01, 2017, 16:14:02
Having done many pixel-shift images with the Pentax K3 and K1, not to mention endless thousands of retouching images with the various modes of focus-stacking, I don’t care how you describe it: there is a significant difference between shifted and unshifted images, IMO (and in my experience), in favor of the shifted images.

Questions about “APO,” which has no standard definition, and terms like acutance, resolution, and other terms that we use to describe the interplay of color with “sharpness” (another vague term) are not at issue here. That conversation will go on perhaps forever.

All the tests I have seen, performed, and read about point out that there is a difference between pixel-shift and traditional Bayer images. It is that DIFFERENCE I have tried to refer to, not to stir up all the armchair philosophers or theoreticians out there, but to contact those who actually have used pixel-shift. They are the ones I would like to talk with, folks who actually have experimented with pixel-shifting, where the rubber meets the road, so to speak. Perhaps there are none on this forum!

And my inquiry was about how far up the road of ever-larger sensors we go until we have “enough” of whatever we are looking for in “resolution,” etc., to be satisfied. Perhaps never. And I am not interested in printing out images, either.

However, my hunch is there is a point of diminishing returns between what I need to get in the way of color that pleases me, along with sharpness, correction, etc. and... sensor size. Pixel-shift has, IMO, helped to limit (for me) the size of sensor I need. In other words, I believe I can get more with less... as to the sensor size that I used to imagine. 
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: David H. Hartman on November 02, 2017, 02:35:04
I may be sticking my foot deep into my mouth, but please indulge me...

If one is down sampling to 2000 pixels on the long side isn't the process used to down sample more important than the pixel or near pixel level performance of the camera, its sensor and its image processor?

Dave the ignorant
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Ethan on November 02, 2017, 06:08:41
I may be sticking my foot deep into my mouth, but please indulge me...

If one is down sampling to 2000 pixels on the long side isn't the process used to down sample more important than the pixel or near pixel level performance of the camera, its sensor and its image processor?

Dave the ignorant

As another "ignorant", this is what I tried to underline to Michael.

He is starting with a super-duper scanned file, processing it in the ProPhoto colour space, and dropping it down to a compressed sRGB JPEG at a max of 2000 px on any side!

What would be the point of having the best resolution if you are dropping down to a compressed JPEG file?

But hey what do we know, we are all ignorant.

Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Ethan on November 02, 2017, 06:32:16
Michael, in view of your learned experience in focus stacking and pixel shifting, could you explain the effect of downscaling to a smaller file and additionally converting to an sRGB colour space in a JPEG format?

1- Down scaling
2- sRGB
3- Jpeg

You are arguing continuously about the top end and dismissing any comment which does not echo your thinking. Maybe it is time for you to explain what happens at the lower end: how much file information is destroyed, and how pixel shift will not be affected, or only marginally affected?


Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: charlie on November 02, 2017, 07:11:27
It seems to me that it is you who "are arguing continuously", Ethan.

As far as I can tell, whatever Michael is doing on the top end is translating through to the bottom end, so to speak. The level of detail in many of his sRGB JPEG images is among the highest you'll find anywhere; I welcome you to prove me wrong.

Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: David H. Hartman on November 02, 2017, 07:15:08
I'm very sure that, for post-processing and his own viewing, Michael is using a monitor capable of Adobe RGB, and that sRGB is for the hoi polloi.

If stacking and retouching are done in ProPhoto, downsampled to 2000 pixels, and then converted to conform to his monitor's capability, that would be quite reasonable to me. Converting to sRGB for Web viewing is, I think, still a necessary evil because of the monitors of the many.

Dave Hartman who aspires to be a has been

---

...The level of detail in many of his sRGB jpg images is among highest you'll find anywhere...

Down sampling is a black art.

Just as every cop is a criminal
And all the sinners saints
As heads is tails
Just call me Lucifer
'Cause I'm in need of some restraint
(Who who, who who)

--Jagger,  Richards
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: David H. Hartman on November 02, 2017, 07:30:28
Oops!
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Erik Lund on November 02, 2017, 07:57:44
Seems like some of you guys are missing the crux here; this is not only about posting a JPG online or a PDF, it's also about the process and about what's possible.


So leave out the snarky remarks and try to stay on the subject of the thread. Thanks ;)



It is up to each of us what to use the output file for; many of us here can benefit from knowing the above details and put them to use for our output files, in whatever format we or our customers need or desire.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on November 02, 2017, 08:44:18
Les Olson: Have you ever actually used pixel-shift? If so, which camera and on what?

Yes, I have used it - but not in a camera.  Some scanners have staggered-pixel arrays that use the same principle.  Others have an option to do multiple scans with each slightly displaced.  Any review can tell you how much difference it makes. 
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Ilkka Nissilä on November 02, 2017, 09:14:32
What would be the point in having the best resolution if you are dropping to a Jpeg compressed file??????

Aliasing and false color also affect the downsampled image if these processes are allowed to happen in the first place, and an image which contains aliasing, color crosstalk, etc. does not contain enough information to correct for these problems in post. The only way to get a correct image is to sample at a high enough frequency so that there is no aliasing in the first place.
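A minimal numerical sketch of this point (my own illustration, using a 1-D signal rather than an image): once a signal is sampled below the Nyquist rate, the alias is sample-for-sample identical to a genuine lower frequency, so no post-capture processing can recover the original.

```python
import numpy as np

fs = 100.0           # sampling rate
f_low = 35.0         # below Nyquist (fs / 2 = 50): sampled correctly
f_high = 65.0        # above Nyquist: aliases down to fs - 65 = 35

t = np.arange(0, 1, 1 / fs)
well_sampled = np.sin(2 * np.pi * f_low * t)
undersampled = np.sin(2 * np.pi * f_high * t)

# The 65-cycle signal yields exactly the same samples (up to sign) as a
# genuine 35-cycle signal -- once captured, the information is gone and no
# amount of post-processing can tell the two apart.
print(np.allclose(undersampled, -well_sampled))  # True
```

This is why the anti-alias filtering has to happen before the sensor samples the image, not after.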
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Bjørn Rørslett on November 02, 2017, 12:10:57
... The only way to get a correct image is to sample at a high enough frequency so that there is no aliasing in the first place.

A basic truth that obviously cannot be repeated often enough. Pixel shifting cannot solve the aliasing issue on its own; one still needs sufficient resolution for the task at hand.

Also worth keeping in mind is the Law of Diminishing Returns.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on November 02, 2017, 12:32:26
Aliasing and false color also affect the downsampled image if these processes are allowed to happen in the first place, and an image which contains aliasing, color crosstalk, etc. does not contain enough information to correct for these problems in post. The only way to get a correct image is to sample at a high enough frequency so that there is no aliasing in the first place.

No. You can't reconstruct the un-aliased image from the aliased image, but you can make the aliased image identical to the un-aliased image or any other image you like.  And if the answer is that you can't if you don't know what the true un-aliased image looked like, you are back with the fallacy that there is a "true colour" that you don't have access to but want to reproduce. 

You do have access to the true colour: you look at the original.  No image can be identical to what you saw without processing: the sensor and the eye have different spectral sensitivities, the brain and the image processing engine have different "algorithms" (the brain's is actually closer to the Bayer process) and the monitor and the (say) flower have different spectral illuminance.  Given all those factors, the un-aliased image is not likely to be closer to what you saw, and if it is closer, it is no easier to bring it from "close" to "the same". 

We have been here many times before: a manufacturer introduces a feature, and suddenly the aspect of the image it purports to "improve" is all anyone can think about.  When "full frame" sensors appeared people discovered that shallow DoF was incredibly important, when AF fine tune appeared they discovered that "back focus" was rampant, when live-view histograms appeared they discovered that you can't expose correctly without ETTR, and now when pixel shift is in the news they find - or purport to find - horrendous aliasing everywhere. 

Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: simsurace on November 02, 2017, 13:14:59
No. You can't reconstruct the un-aliased image from the aliased image, but you can make the aliased image identical to the un-aliased image or any other image you like.  And if the answer is that you can't if you don't know what the true un-aliased image looked like, you are back with the fallacy that there is a "true colour" that you don't have access to but want to reproduce. 

You do have access to the true colour: you look at the original.  No image can be identical to what you saw without processing: the sensor and the eye have different spectral sensitivities, the brain and the image processing engine have different "algorithms" (the brain's is actually closer to the Bayer process) and the monitor and the (say) flower have different spectral illuminance.  Given all those factors, the un-aliased image is not likely to be closer to what you saw, and if it is closer, it is no easier to bring it from "close" to "the same". 

We can set up an imaging pipeline and study the distortion incurred from the source to the final reproduction. Distortion metrics (i.e. measures of how far apart two images are) vary according to the application, and some of them will be very sensitive to the Bayer artifacts, some less.

Maybe you are suggesting that a meaningful metric is one that does not care about Bayer artifacts, but I would disagree because they are so plainly obvious. Maybe you can clarify this point.

I don't follow the argument about "making it like any other image". Are you suggesting retouching?
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on November 02, 2017, 14:42:34

I don't follow the argument about "making it like any other image". Are you suggesting retouching?

There is no such thing as "colour" in the real world.  There are wavelengths of light in the real world, but colour is a subjective phenomenon.  When you put a prism in sunlight and separate the various wavelengths, red is around 680nm, yellow is around 590nm and green is around 550nm.  When you mix red light and green light you get yellow light - but the 680nm and the 550nm photons do not meld together and make 590nm photons. Red light and green light make yellow light only because our visual system responds so that a mixture of 680nm and 550nm photons looks the same as 590nm photons.  The colour we see has nothing to do with the wavelengths themselves: if you mix infrared at 800nm and ultraviolet at 400nm you don't see yellow.

Our visual system uses an RGB system for hue. There is no RGB in the real world.  The photons in the aerial image formed by the lens are not just RGB, they cover the whole range of wavelengths (except those filtered out by glass).  The RGB only comes into existence when you put coloured filters over the photosites.  The filters are not very selective: the red filter transmits quite a lot of yellow and green, and the green filter transmits quite a lot of blue, yellow and red (there is a graph at https://micro.magnet.fsu.edu/primer/digitalimaging/cmosimagesensors.html about half way down).  In the example they give, light at 585nm = yellow gives you exactly equal R and G and a little bit of B.  So, whenever you have R and G big and equal and B small you say "yellow".  Obviously, other mixtures of wavelengths and intensities could give you R and G big and equal and B small, and you would call that "yellow" as well.  The catch is that our retinal RGB photoreceptors do not have the same spectral response as the RGB sensels, so light that we would see as a different colour the camera may see as the same, or as yet another colour.  That happens whether the RGB values are all measured by the same pixel - as with a Foveon sensor - or interpolated from the Bayer mosaic.
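The red-plus-green-makes-yellow point can be sketched numerically. This is my own toy model, not anything from a real camera: channel sensitivities are simple Gaussians standing in for real CFA curves, and spectra are narrow Gaussian lines. Solving a small linear system finds a red/green mixture that the "sensor" cannot distinguish from a pure 590nm yellow - a metamer.

```python
import numpy as np

wl = np.arange(400.0, 701.0)                      # visible range, 1 nm steps

def line(center, width=5.0):
    """A narrow spectral line, modelled as a Gaussian over wavelength."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

def sensitivity(center, width=30.0):
    """A toy Gaussian channel sensitivity (stand-in for a real CFA curve)."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

R, G, B = sensitivity(600), sensitivity(540), sensitivity(460)

def response(spectrum):
    """RGB triple: each channel integrates spectrum times its sensitivity."""
    return np.array([(spectrum * c).sum() for c in (R, G, B)])

yellow = response(line(590))                      # a pure 590 nm "yellow"

# Solve for amounts of 680 nm (red) and 550 nm (green) light that give the
# same R and G responses as the 590 nm line.
M = np.array([[response(line(680))[0], response(line(550))[0]],
              [response(line(680))[1], response(line(550))[1]]])
a, b = np.linalg.solve(M, yellow[:2])
mix = response(a * line(680) + b * line(550))

print(np.allclose(mix[:2], yellow[:2]))           # True: identical R and G
print(mix[2] < 0.01 * yellow[0], yellow[2] < 0.01 * yellow[0])  # B negligible in both
```

Two physically different spectra, one "colour" as far as this sensor is concerned; and a sensor with different curves would pair spectra differently, which is the mismatch between eye and camera described above.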

So however you get your RGB values some tinkering has to go on to get the camera output to look right, and however good the designers of the image processing engine are at colour tinkering, it won't look right every time.  The bad news is, there are only a few things you can tinker with: RGB values, lightness, brightness and saturation.  The good news is, you can tinker with them all you like.  So, if your camera makes your buttercup look ever so slightly greenish, you can turn down the G until it looks just right.  It does not matter whether that ever-so-slight greenish tinge is due to aliasing or your monitor being mis-calibrated or just the way the image processor works.
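The "turn down the G" tinkering above is, at its simplest, a per-channel gain. A toy illustration (my own, not from the post; the function name and numbers are made up for the example):

```python
import numpy as np

def apply_channel_gains(rgb, gains):
    """Scale each of R, G, B by its gain and clip to the 0-255 range."""
    return np.clip(np.asarray(rgb, dtype=float) * gains, 0, 255).round().astype(int)

buttercup = [240, 230, 40]                          # slightly greenish yellow
corrected = apply_channel_gains(buttercup, [1.0, 0.95, 1.0])
print(corrected.tolist())                           # [240, 218, 40]
```

The adjustment neither knows nor cares whether the greenish tinge came from aliasing, a mis-calibrated monitor, or the image processor - which is the point being made.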
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: simsurace on November 02, 2017, 18:41:10
There is no such thing as "colour" in the real world...
One could argue that human vision is part of the 'real world' and hence color is part of the real world, but all of this does not really matter. All I'm saying is that we may look at the imaging chain below and compare RGB image_in and RGB image_out for different cameras and different input images, leaving the other arrows unchanged (or, alternatively, optimizing the conversion to minimize the difference between the two images either on an image-per-image basis, or the average difference across the different input images). This scheme may be applied to any kind of imaging chain comparison regardless of the physics of color or anything else that you mentioned.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: David H. Hartman on November 03, 2017, 08:28:56
Down sampling with bicubic interpolation causes soft edges on fine detail, and down sampling from approximately 8000 pixels to 2000 pixels involves a significant reduction of data. I don't see how differences seen at pixel level in the original image can endure to be seen after significant down sampling. Different procedures used to down sample can give more or less apparent detail in the down-sampled image. A dive directly from 8000 to 2000 pixels will not give the best results unless the software has a hidden algorithm at work. My software does not, so I down sample in stages with sharpening between each step.
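A sketch of that staged workflow (my own construction with Pillow, not Dave's actual software; the sharpening amounts are illustrative, and the example is scaled down from the 8000-to-2000 case to keep it light):

```python
from PIL import Image, ImageFilter

def staged_downsample(img, target_width):
    """Halve the image until the next halving would undershoot the target,
    sharpening lightly between steps, then resize to the exact target."""
    while img.width // 2 > target_width:
        img = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
        img = img.filter(ImageFilter.UnsharpMask(radius=1, percent=50, threshold=2))
    return img.resize(
        (target_width, round(img.height * target_width / img.width)),
        Image.LANCZOS,
    )

big = Image.new("RGB", (800, 600))       # stand-in for an 8000 x 6000 original
small = staged_downsample(big, 200)
print(small.size)                        # (200, 150)
```

Interleaving resize and sharpen steps gives the control described above: each sharpening pass works on detail at the current scale before the next reduction throws data away.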

Dave Hartman
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Erik Lund on November 03, 2017, 08:45:04
Yes, 'Bicubic Sharper' downsampling in several steps for reduction in image pixel size helps preserve details.
Often it's surprisingly good at retaining clarity and details IMHO
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: David H. Hartman on November 03, 2017, 08:56:25
I don't use bicubic with sharpening as the sharpening  in my software is too strong. That's why I alternate between sharpening and down sampling. This gives me significant control. Newer software may be more flexible than mine.

Dave
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: David H. Hartman on November 03, 2017, 10:15:42
Curiosity:

How many here have 4K monitors?  Please raise your hands.

Any 8K monitors?

How about technologically disadvantaged souls like myself with only a 1080P?

Dave
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Erik Lund on November 03, 2017, 11:02:16
In a new thread; Monitor - Please :)
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Ilkka Nissilä on November 03, 2017, 11:06:28
Down sampling with bicubic interpolation causes soft edges on fine detail, and down sampling from approximately 8000 pixels to 2000 pixels involves a significant reduction of data. I don't see how differences seen at pixel level in the original image can endure to be seen after significant down sampling. Different procedures used to down sample can give more or less apparent detail in the down-sampled image. A dive directly from 8000 to 2000 pixels will not give the best results unless the software has a hidden algorithm at work. My software does not, so I down sample in stages with sharpening between each step.

Why not? To get the correct image, the blurring should occur before the image hits the sensor that samples it. Post-capture blurring doesn't achieve the same result.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on November 03, 2017, 11:16:49
All I'm saying is that we may look at the imaging chain below and compare RGB image_in and RGB image_out for different cameras and different input images, leaving the other arrows unchanged (or, alternatively, optimizing the conversion to minimize the difference between the two images either on an image-per-image basis, or the average difference across the different input images). This scheme may be applied to any kind of imaging chain comparison regardless of the physics of color or anything else that you mentioned.

Of course you can and that can be very useful in comparing image chains. 

What you can't do is get outside the RGB-based imaging chain, despite what people here keep pretending.  You could measure the wavelengths of light reflected off a (say) flower, and you could measure the wavelengths of light in the image formed by a lens (they won't be the same, because no glass is colour neutral and every glass is not-colour-neutral in its own way) but you cannot say that one set of RGB values generated in response to those wavelengths is more "accurate" than another.

What is being said here is "I have a theory about images and if the theory is true Image A should be better - although I am carefully sliding over what "better" means - and I am going to define any differences between Image A and Image B as Image A being better, which confirms my theory." (We had a discussion a while ago about circular arguments: well, here's another one). 
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: David H. Hartman on November 03, 2017, 11:22:24
I brought up the monitor because it could be the weak link, just as down sampling surely is. If a drop from 8000 to 4000 is all that's needed, that would be an improvement. If no down sampling is needed, that would be ideal.

I guess this thread is about theory. I thought it was practical. 2000 pixels on a side has confused me. I'll leave the thread. 

Sorry...
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on November 03, 2017, 11:58:52
I brought up the monitor because it could be the weak link, just as down sampling surely is. If a drop from 8000 to 4000 is all that's needed, that would be an improvement. If no down sampling is needed, that would be ideal.

I guess this thread is about theory. I thought it was practical. 2000 pixels on a side has confused me. I'll leave the thread. 

Sorry...

I am the OP. This thread IS about practical experience with pixel-shifting, but it was hijacked (as many threads here on NikonGear now are) by armchair philosophers who IMO just want to talk about theory. Perhaps no one has any actual experience with pixel shifting, but I for one am tired of seeing thread after thread turn into theory rather than practice. This used to be THE practice forum. LuLa is now much more practical than here, IMO, if I am allowed to have an opinion and not have it considered unfriendly. I am not unfriendly, just focused on practical stuff. And I like some theory too, but in proportion.

I wish we had an area for practice questions and discussions. Almost all the threads I start these days turn into this.  Certainly there is a place for theory and "blue sky," but IMO this was never meant to be that. Is there anything we can do to get the practice back into this forum? Certainly the founders (nfoto, Erik, Andrea, etc.) were (and still are) practical and practice oriented.

I am still waiting for a practical discussion on pixel-shifting, without this thread being turned into "perhaps it is this way and perhaps it is that way."

Perhaps photography is changing into deciding how many angels can fit on the head of a pin. I hope not. And that's my two cents, meant to be constructive.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Erik Lund on November 03, 2017, 12:44:13
I hear what you are saying Michael. Thank you! Don't forget JA and Jakov as co-founders! ;) Both even more the practical kind!


Please consider that people here come from different backgrounds, some definitely more theoretical than others, some feel comfortable with theory,,,


My best advice is to overlook the theoretical replies and focus on the practical since that's your preference, we are not in a position to discard theory from certain threads on the site, theory is fundamental for understanding and explaining most things,,, but yes on an other level,,, 


In this thread, I for one enjoy the theoretical comments, I don't always understand, but I can relate to it,,,


Sorry I have not used Pixel Shift nor MF sensor cameras - It is not usable/economical for my current field of photography,,,
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on November 03, 2017, 12:55:08
I hear what you are saying Michael. Thank you! Don't forget JA and Jakov as co-founders! ;) Both even more the practical kind!


Please consider that people here come from different backgrounds, some definitely more theoretical than others, some feel comfortable with theory,,,


My best advice is to overlook the theoretical replies and focus on the practical since that's your preference, we are not in a position to discard theory from certain threads on the site, theory is fundamental for understanding and explaining most things,,, but yes on an other level,,, 


In this thread, I for one enjoy the theoretical comments, I don't always understand, but I can relate to it,,,


Sorry I have not used Pixel Shift nor MF sensor cameras - It is not usable/economical for my current field of photography,,,

I hear you. I didn't know JA and Jakov were co-founders. Sorry. I will try to overlook theory unless it also relates to the practice, but in this thread there are pretty much no practical users. Lloyd Chambers and I have talked about this, and he has actually tested pixel-shift, as I have. Maybe it is just too early in the game for this. Thanks.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on November 03, 2017, 15:12:54
I hear what you are saying Michael. [...]

My best advice is to overlook the theoretical replies and focus on the practical since that's your preference, we are not in a position to discard theory from certain threads on the site, theory is fundamental for understanding and explaining most things,,, but yes on an other level,,, 

Steady on.  I will not be put in the wrong. 

Michael's original post said: "I am wondering [...] whether the pixel-shift technology of the A7R3 may give me the color (most important to me) and the enhanced resolution (however that works), so that [...] I might (at least for a time) be happy with what I have (or will soon have with the A7R3)?"

How is that a practical question?  This is a man asking us to predict his state of mind when he owns a camera that will not be on sale for a month and that for the feature he is fixated on requires software that as of now is at "pre-beta" (https://www.sonyalpharumors.com/sony-also-announced-new-imaging-edge-software-suite/).  But we can't have any theoretical discussion or talk about how it might be this or it might be that?! 

He went on: "I am sure some of you here will have more technical thoughts about this conundrum I am in, either agreeing with me or pointing out something I have not thought of."

That is, straightforwardly, an invitation to discuss technical issues.  It is not OK to whine and snarl because the invitation was taken up.

 

Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: JKoerner007 on November 03, 2017, 15:37:30
Steady on.  I will not be put in the wrong. 

Michael's original post said: "I am wondering [...] whether the pixel-shift technology of the A7R3 may give me the color (most important to me) and the enhanced resolution (however that works), so that [...] I might (at least for a time) be happy with what I have (or will soon have with the A7R3)?"

How is that a practical question?  This is a man asking us to predict his state of mind when he owns a camera that will not be on sale for a month and that for the feature he is fixated on requires software that as of now is at "pre-beta" (https://www.sonyalpharumors.com/sony-also-announced-new-imaging-edge-software-suite/).  But we can't have any theoretical discussion or talk about how it might be this or it might be that?! 

He went on: "I am sure some of you here will have more technical thoughts about this conundrum I am in, either agreeing with me or pointing out something I have not thought of."

That is, straightforwardly, an invitation to discuss technical issues.  It is not OK to whine and snarl because the invitation was taken up.


+1

The irony is, this is a Nikon forum, and Nikon doesn't have pixel shift, so how many here are going to be well-experienced in its usage?

Till now (as far as I am aware) only Pentax carried the technology ...

Even more ironic, Michael has used the technology, and is the very "person with experience" (in precisely his own style of shooting) ... and therefore has the answers to his own questions.

The only remaining questions are: "What will the pixel-shift tech of the Sony A7RIII be like, and will it render better color than the Nikon D850 Michael already owns and is happy with?"

Since no one owns the Sony yet, how can anyone provide an in-depth, practical response?

Theoretical discussion is the only option, other than not to respond at all.

If we attempt to discuss theory, perhaps a re-read of Bjørn's suggestion, "Also worth keeping in mind is the Law of Diminishing Returns," is the most relevant.

How many photos must we take, stack/shift, and combine ... with how many different camera/lens combinations ... before we can be happy with what we do?

Certainly, Michael has produced some exquisitely-rendered images ... so is the Sony pixel-shift/stack + adapter really going to make a difference over a D850 stack?

I honestly doubt it; in fact, the D850 has already been shown to have better base-ISO DR.

At some point, it pays just to be happy, rather than forever chasing a rainbow of perfection ... that can never be caught.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on November 03, 2017, 15:39:24
Steady on.  I will not be put in the wrong. 

Michael's original post said: "I am wondering [...] whether the pixel-shift technology of the A7R3 may give me the color (most important to me) and the enhanced resolution (however that works), so that [...] I might (at least for a time) be happy with what I have (or will soon have with the A7R3)?"

How is that a practical question?  This is a man asking us to predict his state of mind when he owns a camera that will not be on sale for a month and that for the feature he is fixated on requires software that as of now is at "pre-beta" (https://www.sonyalpharumors.com/sony-also-announced-new-imaging-edge-software-suite/).  But we can't have any theoretical discussion or talk about how it might be this or it might be that?! 

He went on: "I am sure some of you here will have more technical thoughts about this conundrum I am in, either agreeing with me or pointing out something I have not thought of."

That is, straightforwardly, an invitation to discuss technical issues.  It is not OK to whine and snarl because the invitation was taken up.

 

I get it. Probably my mistake. I thought I clearly stated, in the first sentence or two, what I wanted.

IMO, you mistake technical (practice) for theoretical. But you understand now, right? Let's talk on THIS thread about using pixel-shift on those cameras WE have that implement it. Of course, post your more theoretical material on another thread. No harm done, just please stay on topic as I have described it in this post. A hands-on discussion please, at least some. 
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on November 03, 2017, 15:43:34
+1

The irony is, this is a Nikon forum, and Nikon doesn't have pixel shift, so how many here are going to be well-experienced in its usage?

Till now (as far as I am aware) only Pentax carried the technology ...

Even more ironic, Michael has used the technology, and is the very "person with experience" (in precisely his own style of shooting) ... and therefore has the answers to his own questions.

The only remaining questions are: "What will the pixel-shift tech of the Sony A7RIII be like, and will it render better color than the Nikon D850 Michael already owns and is happy with?"

Since no one owns the Sony yet, how can anyone provide an in-depth, practical response?

Theoretical discussion is the only option, other than not to respond at all.

If we attempt to discuss theory, perhaps a re-read of Bjørn's suggestion, Also worth keeping in mind is the Law of Diminishing Returns, is the most relevant.

How many photos must we take, stack/shift, and combine ... with how many different camera/lens combinations ... before we can be happy with what we do?

Certainly, Michael has produced some exquisitely-rendered images ... so is the Sony pixel-shift/stack + adapter really going to make a difference over a D850 stack?

I honestly doubt it; in fact, already the D850 has shown to have better Base ISO DR.

At some point, it pays just to be happy, rather than forever chasing a rainbow of perfection ... that can never be caught.


Jack, this is not just a Nikon forum; that's just where it started out. It is for all kinds of cameras, etc. If I'm the problem, I will be glad to leave this thread that I started and did my best to make clear. It's only getting cloudier, IMO.

And your comments, most of them, are just avoiding the thread intent even more. LOL. I will find another venue for these kinds of questions. Later.






Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: JKoerner007 on November 03, 2017, 15:53:46

Jack, this is not just a Nikon forum; that's just where it started out. It is for all kinds of cameras, etc. If I'm the problem, I will be glad to leave this thread that I started and did my best to make clear. It's only getting cloudier, IMO.

And your comments, most of them, are just avoiding the thread intent even more. LOL. I will find another venue for these kinds of questions. Later.

Michael, I actually think you have the most relevant experience to your own question (having used the technology for your style of shooting) ... and, with your order of the new Sony, are poised to be in the best position to answer your own questions about the new offering.

From what I gather (sorry to theorize), the pixel-shift would not benefit a wildlife photographer at all, but only a studio photographer of static subjects.

Will it be better than the D850 for what you do? Only you will be able to provide that answer ... so let us know.

Cheers.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: simsurace on November 03, 2017, 15:59:40
you cannot say that one set of RGB values generated in response to those wavelengths is more "accurate" than another.
It is a problem if you look at absolute values, but not relative values. If two adjacent pixels have very different RGB values when they should be roughly the same (e.g. white), we may speak of color artifacts. This may be generalized.
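The relative-value idea can be made concrete. This is one possible formalization (my own, not a proposal from the thread): score colour artifacts by how much adjacent pixels of a region that "should" be uniform disagree in RGB.

```python
import numpy as np

def adjacent_pixel_artifact_score(img):
    """Mean absolute RGB difference between horizontally and vertically
    adjacent pixels; 0 for a perfectly uniform patch, larger with more
    artifacts such as Bayer colour moiré on what should be flat colour."""
    img = np.asarray(img, dtype=float)
    dh = np.abs(np.diff(img, axis=1)).mean()   # horizontal neighbours
    dv = np.abs(np.diff(img, axis=0)).mean()   # vertical neighbours
    return (dh + dv) / 2

uniform = np.full((8, 8, 3), 200.0)            # a clean near-white patch
tinted = uniform + np.tile([[[5, -5, 0]], [[-5, 5, 0]]], (4, 8, 1))  # row-alternating tint

print(adjacent_pixel_artifact_score(uniform))  # 0.0
print(adjacent_pixel_artifact_score(tinted) > 0)  # True
```

The metric only compares a pixel to its neighbours, so it sidesteps the "true colour" question entirely: it flags disagreement within the image, not deviation from some external reference.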
------------------------
Regarding the discussion on the appropriateness of theory; my view is that the original question was very vague (which is not a bad thing as such) and did not indicate what the OP's level of understanding was. My approach was to outline possible ways in which pixel-shift might lead to improvements in imaging (in a very narrow sense outlined above). I'm not sure whether that was helpful to the OP, but since reading my posts is optional and free, I don't see any harm done.
If I were only interested in practical discussions, I would clearly not participate in this thread, as I do not own and have never owned a camera with pixel shift, nor do I plan to own one in the foreseeable future or find it particularly practical for my work. I imagine that Michael could benefit slightly from the feature, based on what I know about his work, in the way I tried to outline above. The theoretical discussion between Les, myself, and others arises primarily because (as I understand it) he claims that there is no rigorous way in which the potential benefits of pixel-shift may be measured. I'm trying to understand his claim, but so far I fail.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on November 03, 2017, 16:03:22
It is a problem if you look at absolute values, but not relative values. If two adjacent pixels have very different RGB values when they should be roughly the same (e.g. white), we may speak of color artifacts. This may be generalized.
------------------------
Regarding the discussion on the appropriateness of theory; my view is that the original question was very vague (which is not a bad thing as such) and did not indicate what the OP's level of understanding was. My approach was to outline possible ways in which pixel-shift might lead to improvements in imaging (in a very narrow sense outlined above). I'm not sure whether that was helpful to the OP, but since reading my posts is optional and free, I don't see any harm done.
If I were only interested in practical discussions, I would clearly not participate in this thread, as I do not own and have never owned a camera with pixel shift, nor do I plan to own one in the foreseeable future or find it particularly practical for my work. I imagine that Michael could benefit slightly from the feature, based on what I know about his work, in the way I tried to outline above. The theoretical discussion between Les, myself, and others arises primarily because (as I understand it) he claims that there is no rigorous way in which the potential benefits of pixel-shift may be measured. I'm trying to understand his claim, but so far I fail.

Well, I've made myself quite clear by now. LOL. I will keep my own counsel until someone arrives who wants to discuss how to best use this technique. No problem.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on November 03, 2017, 18:20:36
+1

The irony is, this is a Nikon forum, and Nikon doesn't have pixel shift, so how many here are going to be well-experienced in its usage?

Till now (as far as I am aware) only Pentax carried the technology ...

There is more experience than you might think, though not with consumer cameras.

The principle underlying pixel shift is not complicated.  I have a feeling not everyone gets how it does what it does, so here is how it works.


You have a 4 x 4 array, with a detail in the top right corner that cannot be resolved: that pixel will be pale grey and all the rest white. If you move the sensor half a pixel across, down, and across again in the other direction, the small black detail has moved to all four corners of its pixel, and when you combine the four images that pixel will be black and the rest will be white. Detail resolved. If you move by a third of a pixel you can resolve even smaller detail.  Of course, it isn't real resolution: the detail is now four times its true size, and it was always there, you just could not be sure it wasn't noise.  Unless it was noise, of course.

I am showing this so people can convince themselves that if you move a whole pixel, as in the Pentax and, as far as we know, the Sony, you lose contrast resolution - all four pixels are now pale grey.

The pixel shift principle is used in some scanners, which have staggered arrays - two lines of pixels offset half a pixel.  It doesn't make much difference, if any.

The principle is also used in sensors for machine-vision cameras because it allows you to use a lower MP and therefore cheaper sensor.  In that application the sensor is cooled to -15 degrees so dark noise is (almost) eliminated so you are less likely to be resolving noise, and the lenses used can be highly corrected because they only have to work at a single object distance.  Even under those circumstances the effect is hard to see with four images - you have to take 32 or 64 to get a convincing improvement.

Whole pixel shift means that each pixel-sized area of the image has a measured RGB value (not "each pixel has a measured RGB value").  The idea that this will improve resolution because the Bayer filter, being 2 x 2 pixels, means you are sampling at half the rate, is wrong.  There are good theoretical reasons why it should be wrong, but we don't have to discuss them.  We can just look at the effect of removing the Bayer filter, which has been done for the P45 Phase One sensor (the AA and IR filters were removed as well).  The effect on resolution was imperceptible.
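The half-pixel case in the 4 x 4 example can be put in code. This is a toy numpy model of my own construction (not from the post): a monochrome 4 x 4 sensor whose pixels each average a 2 x 2 patch of an 8 x 8 scene, captured four times with half-pixel offsets and perfect registration, no noise.

```python
import numpy as np

SENSOR = 4           # sensor is SENSOR x SENSOR pixels
SCENE = 8            # scene sampled at twice the sensor pitch

def capture(scene, dy, dx):
    """One exposure with the sensor offset by (dy, dx) scene pixels, i.e.
    half a sensor pixel. Each sensor pixel averages the 2x2 scene patch
    under it. np.roll wraps at the edges, harmless here (empty borders)."""
    shifted = np.roll(scene, (-dy, -dx), axis=(0, 1))
    return shifted.reshape(SENSOR, 2, SENSOR, 2).mean(axis=(1, 3))

scene = np.zeros((SCENE, SCENE))
scene[3, 4] = 1.0    # a single sub-pixel detail

# Interleave four half-pixel-shifted captures into one double-density grid.
combined = np.empty((SCENE, SCENE))
for dy in (0, 1):
    for dx in (0, 1):
        combined[dy::2, dx::2] = capture(scene, dy, dx)

# A single capture only says "somewhere in a 2x2 scene patch"; the combined
# grid pins the detail down to the overlap of the four patches that saw it.
ys, xs = np.nonzero(combined)
print(ys.min(), ys.max(), xs.min(), xs.max())   # 2 3 3 4: tight around (3, 4)
print(capture(scene, 0, 0).max())               # 0.25: the pale-grey pixel
```

Note what the model also shows: every sample is still a 2 x 2 average, so the detail is never rendered at full contrast (0.25, the "pale grey") - the sampling grid gets denser, but each sample's blur is unchanged, matching the "it isn't real resolution" caveat above.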
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on November 03, 2017, 20:55:16
If two adjacent pixels have very different RGB values when they should be roughly the same (e.g. white), we may speak of color artifacts.
------------------------
He claims that there is no rigorous way in which the potential benefits of pixel-shift may be measured. I'm trying to understand his claim, but so far I fail.

But how do you know what they "should be"?  The only way you can know is if you have eliminated picture detail - e.g. by photographing a white screen.

There is a perfectly rigorous way to measure the effects of pixel shift: compare the resulting image to the original. However, if by "rigorous" you mean something not involving human subjectivity, there is no such method as far as colour is concerned because colour is fundamentally subjective.

To make this point practical let me ask you to look at the Mars images at https://www.jpl.nasa.gov/news/news.php?feature=6989  Do you think the colours are correct?  Obviously, the question is nonsense.  But it is no more nonsense than if you are looking at one of Michael's flower images: you have no idea what the "correct" colour is. 

Edit: If you mean is there a rigorous way to define the effects - i.e., leaving out the claim that they are good - of pixel shift, of course there is: look at the RGB values.  I pointed out a while ago that anyone who cares can download the files from DPR and see what the RGB values for the K1 or the K3 with and without pixel shift are.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Erik Lund on November 03, 2017, 21:59:09
Steady on.  I will not be put in the wrong. 

Michael's original post said: "I am wondering [...] whether the pixel-shift technology of the A7R3 may give me the color (most important to me) and the enhanced resolution (however that works), so that [...] I might (at least for a time) be happy with what I have (or will soon have with the A7R3)?"

How is that a practical question?  This is a man asking us to predict his state of mind when he owns a camera that will not be on sale for a month, and whose feature he is fixated on requires software that as of now is at "pre-beta" (https://www.sonyalpharumors.com/sony-also-announced-new-imaging-edge-software-suite/).  But we can't have any theoretical discussion, or talk about how it might be this or it might be that?!

He went on: "I am sure some of you here will have more technical thoughts about this conundrum I am in, either agreeing with me or pointing out something I have not thought of."

That is, straightforwardly, an invitation to discuss technical issues.  It is not OK to whine and snarl because the invitation was taken up.

 


Put you in the wrong?

I have no intention of such; re-read what I wrote.
Then calm down, or this will end in moderation.
Thanks
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Erik Lund on November 03, 2017, 22:01:13
+1

The irony is, this is a Nikon forum, and Nikon doesn't have pixel shift, so how many here are going to be well-experienced in its usage?

Till now (as far as I am aware) only Pentax carried the technology ...

Even more ironic, Michael has used the technology, and is the very "person with experience" (in precisely his own style of shooting) ... and therefore has the answers to his own questions.

The only remaining questions are, "What will the pixel-shift tech of the Sony A7RIII be like, and will it render better color than the Nikon D850 Michael already owns and is happy with?"

Since no one owns the Sony yet, how can anyone provide an in-depth, practical response?

Theoretical discussion is the only option, other than not to respond at all.

If we attempt to discuss theory, perhaps a re-read of Bjørn's suggestion - "Also worth keeping in mind is the Law of Diminishing Returns" - is most relevant.

How many photos must we take, stack/shift, and combine ... with how many different camera/lens combinations ... before we can be happy with what we do?

Certainly, Michael has produced some exquisitely-rendered images ... so is the Sony pixel-shift/stack + adapter really going to make a difference over a D850 stack?

I honestly doubt it; in fact, the D850 has already been shown to have better base-ISO DR.

At some point, it pays just to be happy, rather than forever chasing a rainbow of perfection ... that can never be caught.


This is not a NIKON forum, all brands welcome!
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Erik Lund on November 03, 2017, 22:08:37
A moderation comment to all, no one in particular;


You don’t have to convince everybody to agree with you.


To disagree is perfectly OK.


On NikonGear we are here to learn and have fun!
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: OCD on November 03, 2017, 23:08:05
Thank you Erik, your moderation is appreciated.

Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: JKoerner007 on November 03, 2017, 23:33:39
This is not a NIKON forum, all brands welcome!

Perhaps a re-name to "All Brands Gear"?  ;)

Though I've seen a couple of Sony-folks ... can't say I've seen a single Canon user post.

Pretty sure the theme title is a filter toward Nikon aficionados ...
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Bjørn Rørslett on November 03, 2017, 23:56:14
The site name is for historical reasons. We have no intention of changing it.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Hugh_3170 on November 04, 2017, 02:18:31
Well, the Olympus OMD E-M5 MkII has certainly had pixel-shift high resolution on offer for some time now, as have certain Pentax and Sony models (as already noted in the various posts herein), so it is not restricted to just one manufacturer any more.  I suspect that those manufacturers that use in-body sensor-shift image stabilisation should be better placed to introduce pixel-shift higher-resolution options.

EDIT:  The newer Olympus OMD E-M1 Mk II also has pixel shift high resolution modes;  Olympus first introduced the feature on the E-M5 MkII.

EDIT:  Those interested in the outcomes obtained from the Olympus pixel shift high resolution implementations should Google Robin Wong and Ming Thein and take a look at their reviews and field tests of the Olympus E-M5 MkII and the E-M1 MkII.  I cannot add further as my E-M1 is the MkI version that doesn't have the pixel shift feature.


+1

The irony is, this is a Nikon forum, and Nikon doesn't have pixel shift, so how many here are going to be well-experienced in its usage?

Till now (as far as I am aware) only Pentax carried the technology ...

............................................................................................
............................................................................................
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: JKoerner007 on November 04, 2017, 14:35:32
Well, the Olympus OMD E-M5 MkII has certainly had pixel-shift high resolution on offer for some time now, as have certain Pentax and Sony models (as already noted in the various posts herein), so it is not restricted to just one manufacturer any more.  I suspect that those manufacturers that use in-body sensor-shift image stabilisation should be better placed to introduce pixel-shift higher-resolution options.

EDIT:  The newer Olympus OMD E-M1 Mk II also has pixel shift high resolution modes;  Olympus first introduced the feature on the E-M5 MkII.

Thanks for the correction.

Honestly never paid attention to Olympus, so the Pentax K1 was my first awareness of pixel shift. Seems to have extremely limited application, regardless, and it's not an exciting enough difference for me to switch brands (or even buy a single camera) in order to achieve it.

Here is an example (http://www.fredmiranda.com/forum/topic/1514495/3#14240662) of pixel-shift limitations in real world use (look at the flags).

Even under the best conditions, I have never seen a K1/Olympus image I didn't think I could take with my D810; but there are many images I could take with my D810 that I could not take (due to lens selection limitation, as well as others) with a K1.

Now that Sony has it, I still don't like their lens limitations either. And, even where the glass choices are comparable, you could never take a landscape shot (with running water, moving trees/leaves) using pixel shift, so it still seems more academic than practical, for me at least.

For those who do absolutely static shots, it may produce a slightly-better effect, I don't know, but I think Nikon's in-body-stacking has more real-world use than pixel-shift. Not for extreme macro, but definitely for those happy with AF lenses, from close to landscape, in-body stacking seems a more useful overall function than pixel-shift.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Michael Erlewine on November 04, 2017, 17:58:32
JKoerner writes:

"For those who do absolutely static shots, it may produce a slightly-better effect, I don't know, but I think Nikon's in-body-stacking has more real-world use than pixel-shift. Not for extreme macro, but definitely for those happy with AF lenses, from close to landscape, in-body stacking seems a more useful overall function than pixel-shift."

For your use, Jack, quick stacks in the field make great sense. You can capture what otherwise you could not and have it all in focus. I have done decades of field work, so I understand. And the motion of the wind, not to mention, critters is a real handicap that the new D850 can help to overcome.

Partially because of age, but also because of inclination (they may be related! LOL.), I no longer chase after or travel to remote areas to capture photos. I have morphed into a “found” photographer, photographing whatever is around in summer and in winter, dragging whatever flowers, plants, etc. into my little studio and photographing them.

For me, the built-in Nikon focusing does not look very interesting, although should Nikon come out with a fast, sharp, highly corrected macro lens with autofocus (or someone else does), I could be perhaps persuaded.

Pixel-shift to me looks very usable, just as the Nikon stacking feature does to you.

The problem with the new internal focus-stacking from Nikon is the same thing we stub our toe on all the time, the fact that Nikon autofocus lenses tend to be not as well corrected, etc. as some manual lenses, Nikon or other.

Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: David H. Hartman on November 05, 2017, 01:11:02
After a positive PM from Michael I'll venture a question to those with pixel shift experience.

From time to time I've had a problem with either the red or blue channel maxing out where the other channel and the green channel are fine. The Bayer array has one red, two green, and one blue sensor. I don't recall this problem with the green channel.

I'm thinking of a photograph of a garden with small pink flowers. They were an almost solid one-tone pink (255R, xxxG, xxxB) in the photo. I had a struggle in Nikon Capture NX-D and Photoshop to get an acceptable, but not really up to my standard, photo. Can pixel shifting help with this problem, or can pixel shifting combined with other post-processing help?

Dave Hartman
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: JKoerner007 on November 05, 2017, 02:25:03
For your use, Jack, quick stacks in the field make great sense. You can capture what otherwise you could not and have it all in focus. I have done decades of field work, so I understand. And the motion of the wind, not to mention, critters is a real handicap that the new D850 can help to overcome.

Agreed, Michael.



Partially because of age, but also because of inclination (they may be related! LOL.), I no longer chase after or travel to remote areas to capture photos. I have morphed into a “found” photographer, photographing whatever is around in summer and in winter, dragging whatever flowers, plants, etc. into my little studio and photographing them.

Lol, understood.

My own studio use is limited to 3:1 macro, and above, which cannot really be done in the field ... esp. without a flash.

For me, neither the Nikon in-body stacking, nor pixel-shift, are answers to ultra-macro (my only studio work).

The in-body stacking (based on what I read) is too crude, and not fine enough, for high-mag macros ... so I have bought a WeMacro rail for this ultra-precise purpose ... as it is adjustable in 1 μm units in either direction.

Pixel-shift seems like it's just a superfluous consideration to add to a high-mag stack.



For me, the built-in Nikon focusing does not look very interesting, although should Nikon come out with a fast, sharp, highly corrected macro lens with autofocus (or someone else does), I could be perhaps persuaded.

Understood. That said (and back to an earlier Olympus reference), I've seen some superb in-body-stacked field macro images from Olympus cameras ... that, while not using 'highly-corrected' lenses ... still used pretty darned good lenses that rendered fantastic field-derived stacks that would likely not have been possible with a highly-corrected lens. (Or even with a StackShot/WeMacro device fitted for the field.) The subjects captured weren't really 1:1, more like 1:2, but the quick in-body stacks really improved the presentation.

Right now, I am satisfied with my D810 and CV 125, as the subtle focus ring adjustments allow me to achieve a 25 to 50-stack image, fairly quickly, and with acceptable precision, just due to the enormous and precise focus throw.



Pixel-shift to me looks very usable, just as the Nikon stacking feature does to you.

I can see why: for you and your specialty, delving into pixel-shift would make far more sense. Am interested to see what you think, when all is said and done.



The problem with the new internal focus-stacking from Nikon is the same thing we stub our toe on all the time, the fact that Nikon autofocus lenses tend to be not as well corrected, etc. as some manual lenses, Nikon or other.

I think Nikon will come out with a 200mm FL-Micro one of these days ... and that may be pretty nice. Pure speculation, though.

Also, on long-Nikkor super-teles (which are about as well-corrected as any Zeiss Otus), there may be stacking options with the D850 to where (as you like to do) a wildlife photographer can set a 200, 300, 400, or 500 mm at f/2.8 (to f/4), and multi-stack a fairly-stationary telephoto shot, quickly and effectively, in-body... and thereby achieve tremendous DOF detail on the subject ... while yet still enjoying totally creamy background bokeh due to the comparatively-fast aperture at that focal length. Obviously, this couldn't be done with a bird in flight, but perhaps a perfectly-stationary one (or a bobcat 'freezing' on the hunt, a lion, buffalo, etc.). Lots of room, and applications, to apply this feature to ... again, including landscape.



You can capture what otherwise you could not and have it all in focus. I have done decades of field work, so I understand. And the motion of the wind, not to mention, critters is a real handicap that the new D850 can help to overcome.

Precisely 8)



Partially because of age, but also because of inclination (they may be related! LOL.), I no longer chase after or travel to remote areas to capture photos. I have morphed into a “found” photographer, photographing whatever is around in summer and in winter, dragging whatever flowers, plants, etc. into my little studio and photographing them.

For me, the built-in Nikon focusing does not look very interesting, although should Nikon come out with a fast, sharp, highly corrected macro lens with autofocus (or someone else does), I could be perhaps persuaded.

Pixel-shift to me looks very usable, just as the Nikon stacking feature does to you.

Hey, we all have our interests, as well it should be.

And different features will make more, or less, sense accordingly.

Cheers.
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Roland Vink on November 05, 2017, 20:54:25
From time to time I've had a problem with either the red or blue channel maxing out where the other channel and the green channel is fine. The Bayer array has one red, two green and one blue sensors. I don't recall this problem with the green channel.

I'm thinking of a photograph of a garden with small in the photo pink flowers. They were an almost solid one tone pink (255R, xxxG,  xxxB) in the photo. I had a struggle in Nikon Capture NX-D and Photoshop to get an acceptable but not really up to my standard photo. Can pixel shifting help with this problem or can pixel shifting and other post processing help.

Dave Hartman
If the red channel is maxed out on a single Bayer-array image, wouldn't it still be maxed out on a pixel-shifted RGBG image?  Of course, the red channel is no longer interpolated - you have red information at every pixel - but it would still be saturated, so surely the problem would remain (this is only my educated guess ... I don't have experience with pixel shifting...)
The only solution is to lower the ISO to increase your dynamic range, or reduce your exposure to prevent the channel from saturating. But there is no free lunch, lower ISOs mean longer shutter speeds which could result in camera shake or motion blur, underexposing risks more noise in the shadows.
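The point that saturation survives pixel shift can be seen with a toy clipping model (the full-well level and the numbers below are invented for illustration):

```python
import numpy as np

FULL_WELL = 1.0                          # toy sensor clips at this level

# A red channel driven past full well: all four pixel-shifted exposures
# of the same static patch clip identically.
true_red = 1.4
frames = np.clip(np.full(4, true_red), 0.0, FULL_WELL)
print(frames.mean())                     # 1.0 - combining cannot recover 1.4

# Below saturation the story is different: averaging shifted frames
# does beat down noise around the true value.
rng = np.random.default_rng(1)
noisy = np.clip(0.7 + rng.normal(0.0, 0.05, 4), 0.0, FULL_WELL)
print(noisy.mean())                      # close to 0.7
```

Once a channel has clipped, the information is gone before any combining happens, so the only remedies are the ones given above - less exposure or lower ISO.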

Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: David H. Hartman on November 05, 2017, 21:29:08
If the red channel is maxed out on a single Bayer-array image, wouldn't it still be maxed out on a pixel shifted RGBG image?  Of course, the red channel is no longer interpolated, you have red information at every pixel, but would still be saturated, so surely the problem would remain (this is only my educated guess ... I don't have experience with pixel shifting...)
The only solution is to lower the ISO to increase your dynamic range, or reduce your exposure to prevent the channel from saturating. But there is no free lunch, lower ISOs mean longer shutter speeds which could result in camera shake or motion blur, underexposing risks more noise in the shadows.

No free lunch! Oh dear, one can hope.

I don't recall this single-channel maxing being a problem with the green channel. I wondered if it had anything to do with the two green cells in the array, and whether interpolation of the red or blue channel increased the problem.

I've taken to keeping the blinkies set to Red and switching them to Blue if I suspect there might be a problem. I suspect ISO is often the problem, even at 800 or 1000. The D800 isn't a great high-ISO camera. I'm pressed for time, but I found the photo I was thinking of was shot at ISO 1000. It looks overcast, so color saturation would be high while shadows are close to non-existent.

Thanks for the reply,

Dave

Anyone else?
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Roland Vink on November 05, 2017, 22:07:25
Again, I can only offer an educated guess, but I would assume if the green channel was saturated it wouldn't matter whether there were one cell or two per array - you'd just get two maxed-out green cells instead of one (unless the two greens are different, one having a dark green filter to capture highlights, the other a pale green filter to capture shadows, but I don't think this is the case). There must be other reasons why red and blue tend to max out first - the type of filters used, or maybe saturated greens are less common in nature? Having two green cells does allow the noise to be averaged out and lowered, so it helps with dynamic range and shadow capture, but I don't think it helps with highlights.

As Dave said, anyone else?
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: Les Olson on November 06, 2017, 08:50:53
I wondered if interpolation of the red or blue channel increased the problem.


Interpolation can cause problems when there is channel imbalance, but it appears as noise, and the green channel is the least likely to suffer.  Blue is most likely to suffer, because the red and green filters are more selective at that wavelength, so if you have pure blue light the green and red channels are severely under-exposed and therefore noisy. That noise can leak into the blue channel when demosaicing occurs, so you get low-ISO noise (typically in blue sky). The filters are least selective in the green-yellow region, so with green light the red and blue channels are not severely under-exposed and so not noisy.   

With pixel shift the red channel will still be under-exposed and noisy in blue light.
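The channel-imbalance argument can be sketched with shot noise alone (the filter transmission numbers below are invented for illustration - real CFA dyes differ):

```python
import numpy as np

rng = np.random.default_rng(2)
photons = 10_000                  # pure blue light arriving at each photosite

# Invented transmissions of each filter for blue light: the blue filter
# passes most of it, green a little, red almost none.
transmission = {"B": 0.80, "G": 0.10, "R": 0.02}

snrs = {}
for name, t in transmission.items():
    counts = rng.poisson(photons * t, size=100_000)   # Poisson shot noise
    snrs[name] = counts.mean() / counts.std()
    print(name, round(snrs[name], 1))

# Shot-noise SNR scales as sqrt(signal): roughly 89 (B), 32 (G), 14 (R).
# The starved red and green channels are far noisier, and a demosaic that
# mixes them into neighbouring output pixels leaks that noise - which pixel
# shift, whole or fractional, does nothing to prevent.
```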
Title: Re: Pixel-Shifting Vs. Larger Sensors
Post by: schwett on November 13, 2017, 02:17:01
interesting how these things always get so heated! us boys with our toys :)

here's my take on pixel shift from some brief experience with it.

in order to really resolve a lot more detail, the pixels have to be smaller. shifting larger pixels around does have some effect since the pixel isn't a perfect field of light-gathering goodness, and fine details which otherwise would have disappeared are captured.

a better application for pixel shift, in my opinion, as michael suggests, would be using the shift on still subjects to simulate a "true" rgb sensor. the shifting actually moves a red pixel onto a green pixel and then a blue pixel onto the red pixel. this is like the old color wheel sensors. great for still subjects, useless for anything else.

since each final pixel in a bayer-interpolated image is in fact the result of data from adjacent pixels, we already have a bit of the same problem that the "shifting larger pixels" approach entails. our 45.7mp d850 images aren't really quite that good, which is evident when compared on a pixel by pixel basis to a "true" color image.

it's interesting technology. not too interesting to me since my shooting needs vary widely and often include movement. for still subjects with a still camera, it's definitely good for a bit of a bump in apparent "resolution." what we really need is a device which records the wavelength and intensity of light at each pixel. :o imagine that!