Having made many pixel-shift images with the Pentax K3 and K1, not to mention spending endless hours retouching thousands of focus-stacked images, I don’t care how you describe it: in my opinion (and experience) there is a significant difference between shifted and unshifted images, in favor of the shifted ones.
Questions about “APO” (which has no standard definition), and terms like acutance, resolution, and the other words we use to describe the interplay of color with “sharpness” (another vague term), are not at issue here. That conversation will probably go on forever.
All the tests I have seen, performed, and read about point to a difference between pixel-shift and traditional Bayer images. It is that DIFFERENCE I have been trying to refer to, not to stir up all the armchair philosophers or theoreticians out there, but to reach those who have actually used pixel-shift. They are the ones I would like to talk with: folks who have actually experimented with pixel-shifting, where the rubber meets the road, so to speak. Perhaps there are none on this forum!
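For anyone wondering where that difference comes from, here is a toy numpy sketch of the basic idea (my own illustration, not Pentax’s actual processing, and the 4×4 scene and RGGB layout are assumptions for the demo): a single Bayer exposure records only one color channel at each photosite and must interpolate the other two, while four exposures shifted by one photosite each let every site record R, G, and B directly.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((4, 4, 3))        # toy scene: the "true" RGB at every point

pattern = np.array([[0, 1],          # RGGB Bayer filter pattern: 0=R, 1=G, 2=B
                    [1, 2]])

def expose(shift_y, shift_x):
    """One exposure with the sensor shifted by whole photosites.
    Each scene point (y, x) is recorded through whichever filter
    color now sits over it."""
    out = np.zeros((4, 4))
    ch = np.zeros((4, 4), dtype=int)
    for y in range(4):
        for x in range(4):
            c = pattern[(y + shift_y) % 2, (x + shift_x) % 2]
            ch[y, x] = c
            out[y, x] = scene[y, x, c]
    return out, ch

# Single Bayer frame: only ONE of the three channels is actually measured
# at each site; the other two must be guessed by demosaicing.
single, single_ch = expose(0, 0)
measured = np.zeros((4, 4, 3), dtype=bool)
measured[np.arange(4)[:, None], np.arange(4)[None, :], single_ch] = True
assert measured.sum() == 16          # 1 real sample per site, not 3

# Four pixel-shifted frames: over the 2x2 shift cycle, every site sees
# the full RGGB pattern, so all three channels are measured directly.
full = np.full((4, 4, 3), np.nan)
for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    frame, ch = expose(dy, dx)
    for y in range(4):
        for x in range(4):
            full[y, x, ch[y, x]] = frame[y, x]

assert not np.isnan(full).any()      # all 3 channels measured everywhere
assert np.allclose(full, scene)      # no interpolation was needed
print("pixel shift recovers true RGB at every photosite")
```

In a real camera there is also noise averaging across the four frames, and any subject or camera motion between exposures breaks the reconstruction, but the no-demosaic sampling above is the core of why shifted files look different.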
And my question was how far up the road of ever-larger sensors we go before we have “enough” of whatever we are looking for in “resolution,” etc., to be satisfied. Perhaps never. And I am not interested in printing images, either.
However, my hunch is that there is a point of diminishing returns between the color that pleases me, along with sharpness, correction, etc., and sensor size. Pixel-shift has, IMO, helped limit (for me) the size of sensor I need. In other words, I believe I can get more with less, at least compared to the sensor size I used to imagine I needed.