I propose the following method to evaluate the resizing/re-sharpening objectively:
You can download this image of a Siemens star here:
https://www.dropbox.com/s/hijmrelhpaesijt/Siemens.png?dl=0
I prepared the file to have the same resolution as a D800 file.
The red circles show equal increments of spatial frequency.
The innermost circle is at Nyquist, the next at 1/2 Nyquist, the next at 1/3 Nyquist, and so on.
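If you would rather generate such a target yourself instead of downloading mine, here is a rough NumPy/Pillow sketch. The D800 pixel dimensions (7360 x 4912) are real, but the number of spoke cycles and the ring thickness are my own assumptions; the downloadable file will differ in those details.

```python
import numpy as np
from PIL import Image

W, H = 7360, 4912      # Nikon D800 pixel dimensions
K = 144                # spoke cycles per revolution (assumed; the real chart may differ)

y, x = np.mgrid[0:H, 0:W].astype(np.float32)
cx, cy = (W - 1) / 2, (H - 1) / 2
theta = np.arctan2(y - cy, x - cx)
r = np.hypot(x - cx, y - cy)

# Sinusoidal star: along a circle of radius r there are K cycles, so the local
# frequency is K / (2*pi*r) cycles/pixel. It reaches Nyquist (0.5 cycles/pixel)
# at r = K/pi, and 1/n Nyquist at r = n*K/pi.
star = (127.5 + 127.5 * np.cos(K * theta)).astype(np.uint8)
rgb = np.stack([star, star, star], axis=-1)

# Mark the 1/n Nyquist circles in red, like the downloadable chart.
r_nyq = K / np.pi
for n in range(1, 9):
    ring = np.abs(r - n * r_nyq) < 2
    rgb[ring] = [255, 0, 0]

Image.fromarray(rgb, mode="RGB").save("siemens_generated.png")
```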
Everything inside the innermost circle can be ignored. It looks nice, but it is just false detail. The ideal pattern being represented would have infinitely thin rays as you approach the center, but because the file has a finite number of pixels, the rays cannot be resolved inside that circle.
There is also a little false detail outside the innermost circle (but inside the second), because the pixel grid is horizontal/vertical while the rays are slanted. This is very relevant in practice, since real-world fine detail is rarely aligned exactly horizontally or vertically.
It would therefore be wise to set our expectations of the finest representable detail a bit below Nyquist.
You can test your algorithms on this file and see what happens to the detail during resizing. Let's say you resize to 920 px width (a factor of 8). The Nyquist frequency of the resized image now falls on circle no. 8. Look at the detail just outside circle no. 8: depending on the resizing method, it will be more or less contrasty. After re-sharpening, its contrast should be higher (ideally as high as the contrast further out), but not so high that additional false patterns appear.
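For the resizing/re-sharpening comparison itself, here is a minimal Pillow sketch (assuming a recent Pillow with the Resampling enum). The unsharp-mask percent and threshold values are only illustrative starting points for the tuning described here, not recommendations.

```python
from PIL import Image, ImageFilter

src = Image.open("Siemens.png")   # the downloaded test chart
target_w = src.width // 8         # 7360 / 8 = 920 px
target_h = src.height // 8

# Compare a few resampling filters; Nyquist of the small image falls on circle no. 8.
for name, resample in [("nearest", Image.Resampling.NEAREST),
                       ("bilinear", Image.Resampling.BILINEAR),
                       ("lanczos", Image.Resampling.LANCZOS)]:
    small = src.resize((target_w, target_h), resample=resample)
    small.save(f"siemens_{name}.png")

    # Re-sharpen: radius in pixels, percent = strength, threshold = 0.
    sharpened = small.filter(ImageFilter.UnsharpMask(radius=0.5, percent=150, threshold=0))
    sharpened.save(f"siemens_{name}_sharpened.png")
```

Open the saved pairs and compare the spoke contrast just outside circle no. 8.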
This should give you a way to fine-tune your sharpening strength/radius independently of image content.
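If you want that fine-tuning to be fully objective rather than judged by eye, you could measure the contrast on a thin annulus just outside the relevant circle. Here is a sketch of that idea, assuming you know (or measure) the pixel radius of circle no. 8 in the resized file; the 50 px radius and the file names are just placeholders tied to the sketch above.

```python
import numpy as np
from PIL import Image

def ring_contrast(path, radius_px, width_px=4):
    """Michelson-style contrast on a thin annulus around the star centre.

    radius_px should sit just outside the circle that marks Nyquist of the
    resized image (circle no. 8 for an 8x reduction).
    """
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(x - (w - 1) / 2, y - (h - 1) / 2)
    ring = img[np.abs(r - radius_px) < width_px / 2]
    # Percentiles instead of min/max keep the measurement robust to stray
    # pixels from the red marker circles.
    lo, hi = np.percentile(ring, [2, 98])
    return (hi - lo) / (hi + lo)

for f in ["siemens_lanczos.png", "siemens_lanczos_sharpened.png"]:
    print(f, round(ring_contrast(f, radius_px=50), 3))
```

Sweep the sharpening radius/strength and pick the settings where this ring contrast approaches the contrast measured further out, without overshooting.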
I haven't given this much thought, but I would guess that a sharpening radius of 0.5 will be close to ideal, since it will operate at the Nyquist frequency of the resized image.