Author Topic: Pixel-Shifting Vs. Larger Sensors  (Read 21367 times)

Roland Vink

  • NG Member
  • *
  • Posts: 1525
  • Nikon Nerd from New Zealand
    • Nikon Database
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #105 on: November 05, 2017, 22:07:25 »
Again, I can only offer an educated guess, but I would assume that if the green channel were saturated it wouldn't matter whether there was one cell or two per array - you'd just get two maxed-out green cells instead of one (unless the two greens are different, one with a dark green filter to capture highlights and the other with a pale green filter to capture shadows, but I don't think this is the case). There must be other reasons why red and blue tend to max out first - the type of filters used, or maybe saturated greens are less common in nature? Having two green cells does allow the noise to be averaged out and lowered, so it helps with dynamic range and shadow capture, but I don't think it helps with highlights.
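To put a rough number on that averaging point, here's a small numpy sketch (my own toy model, with made-up well depth and noise figures): averaging two green readings cuts random noise by about the square root of two, but an averaged pair of clipped cells is still clipped.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy numbers, not real sensor data: a deep-shadow green signal with
# Gaussian read noise, clipped at an assumed full-well level.
full_well = 1000
signal = 20
read_noise = 5.0
n = 100_000

g1 = np.clip(rng.normal(signal, read_noise, n), 0, full_well)
g2 = np.clip(rng.normal(signal, read_noise, n), 0, full_well)

print(g1.std())               # ~5.0: noise of a single green cell
print(((g1 + g2) / 2).std())  # ~3.5: averaging two cells gains ~sqrt(2)

# Highlights: averaging two saturated cells is still saturated.
print((full_well + full_well) / 2)  # 1000, no headroom recovered
```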

As Dave said, anyone else?

Les Olson

  • NG Member
  • *
  • Posts: 502
  • You ARE NikonGear
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #106 on: November 06, 2017, 08:50:53 »
I wondered whether interpolation of the red or blue channel increased the problem.


Interpolation can cause problems when there is channel imbalance, but it appears as noise, and the green channel is the least likely to suffer. Blue is most likely to suffer, because the red and green filters are more selective at blue wavelengths, so under pure blue light the green and red channels are severely under-exposed and therefore noisy. That noise can leak into the blue channel during demosaicing, so you get low-ISO noise (typically in blue sky). The filters are least selective in the green-yellow region, so with green light the red and blue channels are not severely under-exposed and so not noisy.

With pixel shift, the red channel will still be under-exposed and noisy in blue light.
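A quick numerical sketch of that mechanism (my own toy model; the filter transmissions are invented, not measured from any real sensor): under pure blue light the red and green channels collect far fewer photons, their shot-noise-limited SNR collapses, and demosaicing then mixes those noisy values into neighbouring output pixels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented filter transmissions for pure blue light (illustrative only).
transmission = {"red": 0.02, "green": 0.10, "blue": 0.90}
photons = 1000        # photons per photosite before the colour filter
read_noise = 3.0
n = 100_000

for name, t in transmission.items():
    # Photon shot noise (Poisson) plus Gaussian read noise.
    samples = rng.poisson(photons * t, n) + rng.normal(0, read_noise, n)
    print(f"{name}: mean {samples.mean():6.1f}, SNR {samples.mean()/samples.std():5.1f}")

# red and green come out severely under-exposed with poor SNR; demosaicing
# that interpolates across those channels leaks their noise into the output,
# e.g. as low-ISO noise in blue sky.
```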

schwett

  • NG Member
  • *
  • Posts: 73
  • You ARE NikonGear
    • photos
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #107 on: November 13, 2017, 02:17:01 »
interesting how these things always get so heated! us boys with our toys :)

here's my take on pixel shift from some brief experience with it.

in order to really resolve a lot more detail, the pixels have to be smaller. shifting larger pixels around does have some effect, since the pixel isn't a perfect field of light-gathering goodness (the active area doesn't cover the full pixel pitch), and fine details which would otherwise have disappeared into the gaps are captured.
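here's a little 1-d numpy toy of that fill-factor effect (all numbers invented): if the active area covers only half the pixel pitch, a half-pixel shift samples detail that the first exposure never saw.

```python
import numpy as np

# toy 1-d model, numbers invented: each photosite's active area covers only
# the first half of its pitch, and a thin bright line happens to fall in
# the dead zone between active areas.
pitch = 10                       # sub-samples per pixel
aperture = 5                     # active area: half the pitch (50% fill)
scene = np.zeros(40 * pitch)
scene[7::pitch] = 1.0            # a fine bright line inside every dead zone

def expose(offset):
    # integrate over each photosite's active area, shifted by offset sub-samples
    starts = np.arange(0, len(scene) - pitch, pitch) + offset
    return np.array([scene[s:s + aperture].mean() for s in starts])

print(expose(0).max())           # 0.0 - the unshifted shot misses the line
print(expose(pitch // 2).max())  # 0.2 - the half-pixel shift picks it up
```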

a better application for pixel shift, in my opinion, and as michael suggests, would be using the shift on still subjects to simulate a "true" rgb sensor. the shifting moves the array so that a red photosite, a green photosite, and a blue photosite each sample the same scene position in turn. this is like the old color wheel sensors. great for still subjects, useless for anything else.
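as a sketch of that idea (my own toy, not any manufacturer's actual pipeline), here's a 4-shot rggb simulation in numpy: shift the mosaic by one photosite between shots and every scene position ends up measured through red, green and blue filters, so no demosaicing is needed.

```python
import numpy as np

def bayer_shot(scene, dy, dx):
    # one exposure: an rggb mosaic of the scene shifted by (dy, dx) photosites
    h, w, _ = scene.shape
    shifted = np.roll(scene, (-dy, -dx), axis=(0, 1))  # circular shift keeps it simple
    mosaic = np.empty((h, w))
    mosaic[0::2, 0::2] = shifted[0::2, 0::2, 0]  # r sites
    mosaic[0::2, 1::2] = shifted[0::2, 1::2, 1]  # g sites
    mosaic[1::2, 0::2] = shifted[1::2, 0::2, 1]  # g sites
    mosaic[1::2, 1::2] = shifted[1::2, 1::2, 2]  # b sites
    return mosaic

rng = np.random.default_rng(2)
scene = rng.random((4, 4, 3))  # a perfectly still subject, even dimensions
shots = {(dy, dx): bayer_shot(scene, dy, dx) for dy in (0, 1) for dx in (0, 1)}

# reassemble: for each pixel and channel, pick the shot in which that scene
# position fell under the matching colour filter
recon = np.zeros_like(scene)
h, w, _ = scene.shape
for y in range(h):
    for x in range(w):
        for chan, (py, px) in enumerate([(0, 0), (0, 1), (1, 1)]):  # r, g, b parities
            dy, dx = (y - py) % 2, (x - px) % 2
            recon[y, x, chan] = shots[(dy, dx)][y - dy, x - dx]

print(np.allclose(recon, scene))  # True: full rgb at every site, no interpolation
```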

since each final pixel in a bayer-interpolated image is in fact the result of data from adjacent pixels, we already have a bit of the same problem that the "shifting larger pixels" approach entails. our 45.7mp d850 images aren't really quite that good, which is evident when they're compared pixel by pixel to a "true" color image.
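a quick way to see how much is interpolated (rggb layout assumed): only a quarter of the pixels measure red, half measure green, and a quarter measure blue; everything else is filled in from neighbours.

```python
import numpy as np

# how much of a bayer file is actually measured per channel (rggb layout)
h = w = 6   # any even size
red = np.zeros((h, w), bool);   red[0::2, 0::2] = True
green = np.zeros((h, w), bool); green[0::2, 1::2] = True; green[1::2, 0::2] = True
blue = np.zeros((h, w), bool);  blue[1::2, 1::2] = True

for name, mask in [("red", red), ("green", green), ("blue", blue)]:
    print(f"{name}: {mask.mean():.0%} measured, {1 - mask.mean():.0%} interpolated")
# red: 25% measured, green: 50%, blue: 25%; so a 45.7mp bayer capture holds
# far less than 45.7mp of full-colour information
```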

it's interesting technology, though not of much use to me since my shooting needs vary widely and often include movement. for still subjects with a still camera, it's definitely good for a bit of a bump in apparent "resolution." what we really need is a device which records the wavelength and intensity of light at each pixel. :o imagine that!