If you plan on carrying out a set of tests using multiple lens and camera samples, you also need to rotate the photographers taking the shots, since photographer variability is likely to be much greater than lens sample variability in hand-held shooting. The number of shots made with each combination of body+lens+photographer+shutter speed should be at least 20, both to get meaningful data and to reduce the effect of the randomness of hand holding on the mean and other statistics. The image should be focused using live view, and the distance to the target should be chosen so that fore-and-aft movement of the lens does not affect image sharpness appreciably. The photographer should not know which lens sample they are using. This is not an easy thing to do, and there are many ways to do it wrong.
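To make the sample-size point concrete, here is a minimal simulation sketch in Python. All of the numbers (the sharpness "scores", the 0.3 lens effect, the shot-to-shot spread) are hypothetical placeholders, not measured data; the only point is that when shot-to-shot hand-holding variability is larger than the lens-to-lens difference, the standard error of the mean at 5 shots swamps the effect, while at 20 shots it starts to become resolvable.

```python
# Minimal simulation sketch (all numbers hypothetical, not measured data):
# why ~20 shots per body+lens+photographer+shutter-speed combination matter
# when shot-to-shot hand-holding variability dwarfs the lens-to-lens difference.
import numpy as np

rng = np.random.default_rng(42)

LENS_EFFECT = 0.3      # assumed small sharpness difference between two lens samples
HANDHELD_SIGMA = 1.0   # assumed large shot-to-shot spread from hand holding

def mean_and_se(n_shots, lens_offset):
    """Mean sharpness score and its standard error for one shooting session."""
    shots = rng.normal(loc=5.0 + lens_offset, scale=HANDHELD_SIGMA, size=n_shots)
    return shots.mean(), shots.std(ddof=1) / np.sqrt(n_shots)

for n in (5, 20):
    mean_a, se_a = mean_and_se(n, 0.0)
    mean_b, se_b = mean_and_se(n, LENS_EFFECT)
    print(f"n={n:2d}: lens A {mean_a:.2f} +/- {se_a:.2f}, lens B {mean_b:.2f} +/- {se_b:.2f}")
```

With the assumed spread, the standard error is roughly 0.45 at 5 shots versus roughly 0.22 at 20, which is why a handful of shots per combination tells you very little about the lens itself.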
If Nikon could make EFCS work without M-UP mode and put it in more cameras, this problem would be solved for all lenses, not just the 300mm PF.
The expectation of sharpness under the null hypothesis (the lens works as advertised) apparently varies from person to person. If the distribution of outcomes is fairly wide, it takes more data to reject the null hypothesis. But instead of running many experiments under the same conditions, one can run a few control experiments to exclude confounding factors. For instance, the photographer's shakiness can be controlled for by putting the lens on a table or a tripod, and by shooting at speeds even slower than 1/125s. Especially the latter, i.e. shooting at 1/60s and getting results that are basically as sharp as at 1/500s or higher, removes almost any doubt that the issue is present. Or take the instance of Chris putting my lens on his D810 and getting a tack-sharp result on the very first shot, something he had never seen with his own lens sample. You would have to hold very low (and rather convoluted) expectations of VR's efficacy to put that down to chance.
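The "wider distribution needs more data" point can be made quantitative with the standard textbook two-sample sample-size formula. This is generic statistics, not anything specific to this lens, and the effect size and spreads below are purely hypothetical.

```python
# Textbook two-sample sample-size sketch (generic statistics, hypothetical numbers):
# the wider the spread of sharpness outcomes, the more shots it takes to reject
# the null hypothesis that the lens performs as advertised.
import math

Z_ALPHA = 1.96   # two-sided 5% significance level
Z_BETA = 0.84    # 80% power

def shots_needed(sigma, effect):
    """Approximate shots per condition: n = 2 * ((z_a + z_b) * sigma / effect)^2."""
    return math.ceil(2 * ((Z_ALPHA + Z_BETA) * sigma / effect) ** 2)

for sigma in (0.5, 1.0, 2.0):
    print(f"spread sigma={sigma}: ~{shots_needed(sigma, effect=1.0)} shots per condition")
```

Doubling the spread quadruples the number of shots needed, which is exactly why controls that remove sources of spread (a table, a tripod, very slow shutter speeds) are so much cheaper than brute-force repetition under the same conditions.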
I think this sort of data is fairly convincing, even if it is not up to the highest scientific standards. It is about as much as I'm willing to do, although I could certainly do better if I wanted to. It is Nikon's job to do the more detailed research. Not only would it be highly inefficient for the user base to do it, since we have no access to the VR algorithms or the various firmwares to do any debugging, but it would also be unsolicited, unpaid work done in our spare time, instead of doing creative work with a lens we paid for and expect to simply function.