I read this on another forum: "The Pentax K-1 has sensor-shift image stabilization, allowing movements of up to 1.5 mm in any direction — so the sensor can be within a space of 36+1.5+1.5 by 24+1.5+1.5, or 39×27 mm. That means the minimum image circle diameter to avoid problems is 47.4 mm."
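Just to check the arithmetic in that quote, here is a quick sketch (the 1.5 mm maximum shift and the 36×24 mm sensor size come from the quote; the 43.3 mm figure is simply the diagonal of an unshifted full-frame sensor):

```python
import math

def image_circle_diameter(width_mm, height_mm, shift_mm=0.0):
    """Diagonal of the area the sensor can occupy when it may shift
    by up to shift_mm in any direction (so +shift_mm on every side)."""
    w = width_mm + 2 * shift_mm
    h = height_mm + 2 * shift_mm
    return math.hypot(w, h)

# Nominal full-frame image circle with no shift: ~43.3 mm
print(round(image_circle_diameter(36, 24), 1))        # 43.3

# With the quoted 1.5 mm shift range: ~47.4 mm, matching the quote
print(round(image_circle_diameter(36, 24, 1.5), 1))   # 47.4
```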
A 1.5 mm off-center shift is not so large that it would ruin a photograph; I usually leave much more margin for error around the edges of my composition than that. And in any case, the compositional error that results from in-camera VR use is likely to be much smaller than that.
There is no way typical full-frame lenses have the roughly 4 mm of extra image-circle diameter (about 2 mm of extra radius) that the full shift range would require. On APS-C the situation is different: the sensor is not only lighter (and thus more easily moved about), but the user is often mounting lenses that cover a full-frame sensor, which creates room for a much larger range of compensation than is available with a full-frame sensor, unless of course one mounts medium-format lenses on the camera.
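To illustrate the APS-C point with rough numbers (the 23.5×15.6 mm sensor size is an assumption for a typical APS-C camera, and the comparison is against the nominal 43.3 mm full-frame image circle):

```python
import math

# Assumed typical APS-C sensor dimensions (mm) and nominal full-frame image circle (mm)
APSC_W, APSC_H = 23.5, 15.6
FF_CIRCLE = math.hypot(36, 24)               # ~43.3 mm

apsc_diagonal = math.hypot(APSC_W, APSC_H)   # ~28.2 mm

# Radial slack between the APS-C sensor corners and the full-frame image circle
slack_mm = (FF_CIRCLE - apsc_diagonal) / 2
print(round(slack_mm, 1))                    # ~7.5 mm of shift before a corner leaves the circle
```

That figure is the slack in the corner direction; the exact limit depends on the direction of the shift, but it is clearly far more room than the full-frame case has.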
This video shows Sony's version and its range:
https://www.youtube.com/watch?v=-Ncye37e6xM

Since in a DSLR there is no point in moving the sensor while viewing through the optical viewfinder, the sensor should start at zero shift just before the shutter opens and only move during the exposure (at that point there is no error between the sensor position and what the viewfinder showed). The aim of the stabilization is to keep the image content steady on the sensor plane, so any framing error should be small. How much start-up time the system needs, I don't know; if it is substantial, there could be a slight framing error, but it is not likely to be anywhere near the limit of the system's travel, since at that point it could no longer move in all directions to correct for movement.
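As a rough sketch of why the framing error should stay small: for pitch/yaw shake, the image shifts at the sensor plane by approximately f·tan(θ), so that is also about how far the sensor has to travel during the exposure to cancel it (the 50 mm focal length and 0.1° of shake below are purely illustrative numbers, not anything measured):

```python
import math

def sensor_shift_mm(focal_length_mm, shake_deg):
    """Approximate sensor travel needed to cancel a pitch/yaw rotation
    of shake_deg during the exposure: image displacement ~= f * tan(theta)."""
    return focal_length_mm * math.tan(math.radians(shake_deg))

# Illustrative: 0.1 degrees of shake during the exposure with a 50 mm lens
print(round(sensor_shift_mm(50, 0.1), 3))   # ~0.087 mm, far below the 1.5 mm travel limit
```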
The drawback that I can see with in-camera VR combined with an OVF is that the user cannot optimize their hand-holding to work together with the VR system to produce the best overall stability of the image: the user has to try to keep the viewfinder steady without the help of in-camera VR, and the VR system then has to correct the residual movement on its own, without the co-operation of the user. When using a lens with optical VR, the stabilization is active during viewing, and at least my hand-holding technique adapts to the VR available: I hold the view steady but don't try to compensate for the micro-vibrations that I would try to compensate for with an unstabilized lens. So the system might not work together with the photographer as optimally as it can when the viewfinder shows the effect of the stabilization. But it remains to be seen how effective the system is.