Author Topic: Pixel-Shifting Vs. Larger Sensors  (Read 30066 times)

Ethan

  • NG Supporter
  • **
  • Posts: 208
  • You ARE NikonGear
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #60 on: November 02, 2017, 06:32:16 »
Michael, in view of your learned experience in focus stacking and pixel shifting, could you explain the effect of downscaling to a smaller file and then converting to the sRGB colour space in JPEG format?

1- Downscaling
2- sRGB
3- JPEG

You are arguing continuously about the top end and dismissing any comment that does not echo your thinking. Maybe it is time for you to explain what happens at the lower end: how much file information is destroyed, and whether pixel shift will be unaffected or only marginally affected?



charlie

  • NG Member
  • *
  • Posts: 587
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #61 on: November 02, 2017, 07:11:27 »
It seems to me that "you are arguing continuously", Ethan.

As far as I can tell, whatever Michael is doing on the top end is translating through to the bottom end, so to speak. The level of detail in many of his sRGB JPEG images is among the highest you'll find anywhere; I welcome you to prove me wrong.


David H. Hartman

  • NG Member
  • *
  • Posts: 2790
  • I Doctor Photographs... :)
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #62 on: November 02, 2017, 07:15:08 »
I'm quite sure that for post-processing and his own viewing Michael uses a monitor capable of Adobe RGB, and that sRGB is for the hoi polloi.

If stacking and retouching are done in ProPhoto RGB, the result downsampled to 2000 pixels and then converted to match his monitor's capability, that would seem quite reasonable to me. Converting to sRGB for web viewing is, I think, still a necessary evil because of the monitors of the many.
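
For concreteness, a rough sketch of that pipeline in Python with Pillow; the file names and ICC profile paths are placeholders, not anything from this thread, and a real ProPhoto master would normally stay in 16 bit right up to this final conversion.

# Illustrative sketch only: stack/retouch in ProPhoto RGB elsewhere, then
# downsample and convert to sRGB for web viewing. Profile paths and file
# names below are assumptions.
from PIL import Image, ImageCms

PROPHOTO_ICC = "ProPhotoRGB.icm"   # assumed path to a ProPhoto RGB profile
SRGB_ICC = "sRGB.icc"              # assumed path to an sRGB profile

img = Image.open("stacked_master.tif").convert("RGB")   # 8-bit for simplicity

# Downsample so the long edge is 2000 px, using a high-quality resampler.
scale = 2000 / max(img.size)
img = img.resize((round(img.width * scale), round(img.height * scale)),
                 Image.LANCZOS)

# Convert from the editing space to sRGB for "the monitors of the many".
to_srgb = ImageCms.buildTransform(PROPHOTO_ICC, SRGB_ICC, "RGB", "RGB")
img = ImageCms.applyTransform(img, to_srgb)

img.save("web_copy.jpg", quality=92)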

Dave Hartman, who aspires to be a has-been

---

...The level of detail in many of his sRGB JPEG images is among the highest you'll find anywhere...

Downsampling is a black art.

Just as every cop is a criminal
And all the sinners saints
As heads is tails
Just call me Lucifer
'Cause I'm in need of some restraint
(Who who, who who)

--Jagger, Richards
Beatniks are out to make it rich
Oh no, must be the season of the witch!

David H. Hartman

  • NG Member
  • *
  • Posts: 2790
  • I Doctor Photographs... :)
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #63 on: November 02, 2017, 07:30:28 »
Oops!
Beatniks are out to make it rich
Oh no, must be the season of the witch!

Erik Lund

  • Global Moderator
  • **
  • Posts: 6529
  • Copenhagen
    • ErikLund.com
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #64 on: November 02, 2017, 07:57:44 »
It seems some of you are missing the crux here: this is not only about posting a JPEG or a PDF online, it's also about the process and about what's possible.


So leave out the snarky remarks and try to stay on the subject of the thread. Thanks ;)



It is up to each of us what to use the output file for. Many of us here can benefit from knowing the details above and can put them to use for our output files, in whatever format we or our customers need or desire.
Erik Lund

Les Olson

  • NG Member
  • *
  • Posts: 502
  • You ARE NikonGear
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #65 on: November 02, 2017, 08:44:18 »
Les Olson: Have you ever actually used pixel-shift? If so, which camera and on what?

Yes, I have used it - but not in a camera. Some scanners have staggered-pixel arrays that use the same principle. Others have an option to do multiple scans, each slightly displaced. Any review can tell you how much difference it makes.

Ilkka Nissilä

  • NG Member
  • *
  • Posts: 1714
  • You ARE NikonGear
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #66 on: November 02, 2017, 09:14:32 »
What would be the point in having the best resolution if you are dropping to a JPEG-compressed file?

Aliasing and false color also affect the downsampled image if these processes are allowed to happen in the first place, and an image which contains aliasing, color crosstalk, etc. does not contain enough information to correct for these problems in post. The only way to get a correct image is to sample at a high enough frequency so that there is no aliasing in the first place.
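
A tiny numerical illustration of why (purely a sketch in Python/NumPy, not from anyone's workflow here): once detail finer than the Nyquist limit of the sampling grid has been recorded, its samples are literally identical to those of a coarser pattern, so no later processing can tell which one was really in front of the camera.

import numpy as np

# Sensor grid: 64 samples per unit length, so the Nyquist limit is 32 cycles/unit.
x = np.arange(64) / 64

f_scene = 45                            # detail finer than the Nyquist limit
f_alias = 64 - f_scene                  # 19 cycles/unit, what the samples suggest

high = np.sin(2 * np.pi * f_scene * x)
low = -np.sin(2 * np.pi * f_alias * x)  # sign flip comes from the folding

print(np.allclose(high, low))           # True: the two patterns give identical
                                        # samples, so nothing downstream can
                                        # recover which one was actually there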

Bjørn Rørslett

  • Fierce Bear of the North
  • Administrator
  • ***
  • Posts: 8252
  • Oslo, Norway
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #67 on: November 02, 2017, 12:10:57 »
... The only way to get a correct image is to sample at a high enough frequency so that there is no aliasing in the first place.

A basic truth that obviously cannot be repeated often enough. Pixel shifting cannot solve the aliasing issue on its own; one still needs sufficient resolution for the task at hand.

Also worth keeping in mind is the Law of Diminishing Returns.

Les Olson

  • NG Member
  • *
  • Posts: 502
  • You ARE NikonGear
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #68 on: November 02, 2017, 12:32:26 »
Aliasing and false color also affect the downsampled image if these processes are allowed to happen in the first place, and an image which contains aliasing, color crosstalk, etc. does not contain enough information to correct for these problems in post. The only way to get a correct image is to sample at a high enough frequency so that there is no aliasing in the first place.

No. You can't reconstruct the un-aliased image from the aliased image, but you can make the aliased image identical to the un-aliased image or any other image you like. And if the objection is that you can't do that without knowing what the true un-aliased image looked like, you are back at the fallacy that there is a "true colour" that you don't have access to but want to reproduce.

You do have access to the true colour: you look at the original.  No image can be identical to what you saw without processing: the sensor and the eye have different spectral sensitivities, the brain and the image processing engine have different "algorithms" (the brain's is actually closer to the Bayer process) and the monitor and the (say) flower have different spectral illuminance.  Given all those factors, the un-aliased image is not likely to be closer to what you saw, and if it is closer, it is no easier to bring it from "close" to "the same". 

We have been here many times before: a manufacturer introduces a feature, and suddenly the aspect of the image it purports to "improve" is all anyone can think about. When "full frame" sensors appeared, people discovered that shallow DoF was incredibly important; when AF fine-tune appeared, they discovered that "back focus" was rampant; when live-view histograms appeared, they discovered that you can't expose correctly without ETTR; and now that pixel shift is in the news, they find - or purport to find - horrendous aliasing everywhere.


simsurace

  • NG Member
  • *
  • Posts: 835
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #69 on: November 02, 2017, 13:14:59 »
No. You can't reconstruct the un-aliased image from the aliased image, but you can make the aliased image identical to the un-aliased image or any other image you like. And if the objection is that you can't do that without knowing what the true un-aliased image looked like, you are back at the fallacy that there is a "true colour" that you don't have access to but want to reproduce.

You do have access to the true colour: you look at the original.  No image can be identical to what you saw without processing: the sensor and the eye have different spectral sensitivities, the brain and the image processing engine have different "algorithms" (the brain's is actually closer to the Bayer process) and the monitor and the (say) flower have different spectral illuminance.  Given all those factors, the un-aliased image is not likely to be closer to what you saw, and if it is closer, it is no easier to bring it from "close" to "the same". 

We can set up an imaging pipeline and study the distortion incurred from the source to the final reproduction. Distortion metrics (i.e. measures of how far apart two images are) vary according to the application; some will be very sensitive to Bayer artifacts, others less so.

Maybe you are suggesting that a meaningful metric is one that does not care about Bayer artifacts, but I would disagree, because they are so plainly obvious. Maybe you can clarify this point.
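
To make "distortion metric" concrete, the simplest one is plain mean squared error and the PSNR derived from it; here is a quick sketch in Python with made-up file names (metrics such as SSIM or a colour-difference formula would weight Bayer artifacts quite differently).

import numpy as np
from PIL import Image

# Reference and test renderings of the same scene; the file names are placeholders.
ref = np.asarray(Image.open("reference_render.png"), dtype=np.float64)
test = np.asarray(Image.open("camera_render.png"), dtype=np.float64)

mse = np.mean((ref - test) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")

print(f"MSE:  {mse:.2f}")
print(f"PSNR: {psnr:.2f} dB")   # higher means closer to the reference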

I don't follow the argument about "making it like any other image". Are you suggesting retouching?
Simone Carlo Surace
suracephoto.com

Les Olson

  • NG Member
  • *
  • Posts: 502
  • You ARE NikonGear
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #70 on: November 02, 2017, 14:42:34 »

I don't follow the argument about "making it like any other image". Are you suggesting retouching?

There is no such thing as "colour" in the real world. There are wavelengths of light in the real world, but colour is a subjective phenomenon. When you put a prism in sunlight and separate the various wavelengths, red is around 680nm, yellow is around 590nm and green is around 550nm. When you mix red light and green light you get yellow light - but the 680nm and the 550nm photons do not meld together and make 590nm photons. Red light and green light make yellow light only because our visual system responds in such a way that a mixture of 680nm and 550nm photons looks the same as 590nm photons. The colour we see has nothing to do with the frequencies themselves: if you mix near-infrared at 800nm and violet at 400nm you don't see yellow.

Our visual system uses an RGB system for hue. There is no RGB in the real world. The photons in the aerial image formed by the lens are not just RGB; they cover the whole range of wavelengths (except those filtered out by glass). The RGB only comes into existence when you put coloured filters over the photosites. The filters are not very selective: the red filter transmits quite a lot of yellow and green, and the green filter transmits quite a lot of blue, yellow and red (there is a graph at https://micro.magnet.fsu.edu/primer/digitalimaging/cmosimagesensors.html about half way down). In the example they give, light at 585nm (yellow) gives you exactly equal R and G and a little bit of B. So, whenever you have R and G large and equal and B small, you say "yellow". Obviously, other mixtures of wavelengths and intensities could give you R and G large and equal and B small, and you would call those "yellow" as well. The catch is that our retinal RGB photoreceptors do not have the same spectral response as the RGB sensels, so light that we would see as a different colour the camera may see as the same, or as yet another colour. That happens whether the RGB values are all measured at the same pixel - as with a Foveon sensor - or interpolated from the Bayer mosaic.

So, however you get your RGB values, some tinkering has to go on to get the camera output to look right, and however good the designers of the image processing engine are at colour tinkering, it won't look right every time. The bad news is that there are only a few things you can tinker with: RGB values, lightness, brightness and saturation. The good news is that you can tinker with them all you like. So, if your camera makes your buttercup look ever so slightly greenish, you can turn down the G until it looks just right. It does not matter whether that ever-so-slight greenish tinge is due to aliasing, or your monitor being mis-calibrated, or just the way the image processor works.
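
A literal version of "turn down the G", as a sketch in Python; the file names and the 3% gain are invented for illustration, and strictly the scaling should be applied to linear data rather than gamma-encoded values, but the principle is the same.

import numpy as np
from PIL import Image

img = np.asarray(Image.open("buttercup.jpg"), dtype=np.float64)

gains = np.array([1.00, 0.97, 1.00])    # per-channel multipliers: pull green back 3%
corrected = np.clip(img * gains, 0, 255).astype(np.uint8)

Image.fromarray(corrected).save("buttercup_corrected.jpg", quality=95)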

simsurace

  • NG Member
  • *
  • Posts: 835
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #71 on: November 02, 2017, 18:41:10 »
There is no such thing as "colour" in the real world...
One could argue that human vision is part of the 'real world' and hence color is part of the real world, but all of this does not really matter. All I'm saying is that we may look at the imaging chain below and compare the RGB image_in and RGB image_out for different cameras and different input images, leaving the other arrows unchanged. (Alternatively, we could optimize the conversion to minimize the difference between the two images, either on an image-by-image basis or as the average difference across the different input images.) This scheme may be applied to any kind of imaging chain comparison, regardless of the physics of color or anything else that you mentioned.
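
A bare-bones sketch of that comparison in Python, just to show the shape of the experiment; the two "camera chains", the metric and the test images are all placeholders, not real simulations.

import numpy as np

def distortion(img_in, img_out):
    # Mean squared error as a stand-in for whatever metric one prefers.
    return np.mean((img_in.astype(float) - img_out.astype(float)) ** 2)

def chain_a(img):
    # Placeholder: pretend this camera + conversion reproduces the input faithfully.
    return img

def chain_b(img):
    # Placeholder: pretend this camera + conversion quantizes more coarsely.
    return (img // 8) * 8

# Stand-in for a set of test scenes fed through both chains.
test_images = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
               for _ in range(10)]

for name, chain in (("chain A", chain_a), ("chain B", chain_b)):
    avg = np.mean([distortion(img, chain(img)) for img in test_images])
    print(f"{name}: average distortion {avg:.3f}")
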
Simone Carlo Surace
suracephoto.com

David H. Hartman

  • NG Member
  • *
  • Posts: 2790
  • I Doctor Photographs... :)
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #72 on: November 03, 2017, 08:28:56 »
Downsampling with bicubic interpolation softens the edges of fine detail, and downsampling from approximately 8000 pixels to 2000 pixels involves a significant reduction of data. I don't see how differences visible at pixel level in the original image can survive to be seen after significant downsampling. Different downsampling procedures give more or less apparent detail in the result. A dive directly from 8000 to 2000 pixels will not give the best results unless the software has a hidden algorithm at work. Mine does not, so I downsample in stages with sharpening between each step.
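
For illustration, a staged downsample-and-sharpen loop might look like this in Python with Pillow; the step factor, sharpening strength and file names are arbitrary assumptions, not an exact recipe.

from PIL import Image, ImageEnhance

img = Image.open("master_8000px.tif").convert("RGB")
target_width = 2000

while img.width > target_width:
    # Reduce by no more than ~1.6x per step, and don't overshoot the target.
    new_width = max(target_width, int(img.width / 1.6))
    new_height = round(img.height * new_width / img.width)
    img = img.resize((new_width, new_height), Image.LANCZOS)
    img = ImageEnhance.Sharpness(img).enhance(1.15)   # mild sharpening after each step

img.save("web_2000px.jpg", quality=92)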

Dave Hartman
Beatniks are out to make it rich
Oh no, must be the season of the witch!

Erik Lund

  • Global Moderator
  • **
  • Posts: 6529
  • Copenhagen
    • ErikLund.com
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #73 on: November 03, 2017, 08:45:04 »
Yes, 'Bicubic Sharper' downsampling in several steps helps preserve detail when reducing image pixel size.
Often it's surprisingly good at retaining clarity and detail, IMHO.
Erik Lund

David H. Hartman

  • NG Member
  • *
  • Posts: 2790
  • I Doctor Photographs... :)
Re: Pixel-Shifting Vs. Larger Sensors
« Reply #74 on: November 03, 2017, 08:56:25 »
I don't use bicubic with sharpening, as the sharpening in my software is too strong. That's why I alternate between sharpening and downsampling; this gives me significant control. Newer software may be more flexible than mine.

Dave
Beatniks are out to make it rich
Oh no, must be the season of the witch!