Author Topic: The downsizing challenge - Part 1: The Problem  (Read 13386 times)

Andrea B.

  • Technical Adviser
  • *
  • Posts: 1671
Re: The downsizing challenge - Part 1: The Problem
« Reply #15 on: May 20, 2016, 19:29:43 »
Andrea Test Three: Here is a conversion and resize of the converted TIF using Photo Mechanic's default sharpening.
I usually think that Photo Mechanic's sharpening is a bit harsh.

Do these look better or worse than the preceding?







Andrea B.

  • Technical Adviser
  • *
  • Posts: 1671
Re: The downsizing challenge - Part 1: The Problem
« Reply #16 on: May 20, 2016, 19:33:48 »
So, let's see your refinement of Resize & Sharpening using your favorite methods,
now that we've seen my first three "basic" tests:

  • conversion of NEF & resizing to 800 pix with no sharpening from Capture NX-D
  • resizing of converted TIF to 800 pix with no sharpening using Photo Mechanic
  • resizing of converted TIFs to 800 pix with "default" sharpening using Photo Mechanic
You can perform your own NEF conversion or just use the converted TIFs.

jhinkey

  • Just Trying To Do My MF Nikkors Justice
  • NG Member
  • *
  • Posts: 262
  • You ARE NikonGear
Re: The downsizing challenge - Part 1: The Problem
« Reply #17 on: May 20, 2016, 19:53:06 »
I have similar downsizing issues with files from my D800 and A7RII - my solution is to do light capture sharpening in ACR, then at full size make another sharpening pass with smart sharpen (very light), downsize in Photoshop (bicubic), then make another light sharpening pass viewing the image at the final size, adjusting the sharpening parameters to just make the image "crisp" w/o going overboard.

Straight down-sampling/down-sizing always leaves the image looking soft...
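The multi-pass recipe above (light sharpen at full size, bicubic downsize, final light sharpen at output size) can be sketched in Python with Pillow. This is a generic approximation, not the exact ACR/Smart Sharpen steps described; the radii and percentages are placeholder values one would tune by eye at the final viewing size:

```python
from PIL import Image, ImageFilter

# Resampling enum in Pillow >= 9.1, module-level constant in older versions.
BICUBIC = getattr(Image, "Resampling", Image).BICUBIC

def downsize_with_sharpening(img, target_width):
    """Light sharpen at full size, bicubic downsize, light output sharpen."""
    # Pass 1: very light sharpening at full resolution.
    img = img.filter(ImageFilter.UnsharpMask(radius=1.0, percent=50, threshold=0))
    # Bicubic downsize, preserving the aspect ratio.
    target_height = round(img.height * target_width / img.width)
    img = img.resize((target_width, target_height), BICUBIC)
    # Pass 2: light sharpening judged at the final viewing size.
    return img.filter(ImageFilter.UnsharpMask(radius=0.6, percent=80, threshold=0))

full = Image.new("RGB", (3600, 2400), (120, 140, 90))  # stand-in for a real frame
small = downsize_with_sharpening(full, 800)
print(small.size)
```

The point of the two separate passes is the same as in the Photoshop workflow: the first pass restores detail lost to demosaicking at capture resolution, while the second compensates for the softening introduced by the resample itself.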
PNW Landscapes, My Kids, & Some Climbing

Alaun

  • NG Supporter
  • **
  • Posts: 422
  • You ARE NikonGear
Re: The downsizing challenge - Part 1: The Problem
« Reply #18 on: May 20, 2016, 20:24:12 »
Here are the pics with my standard workflow
(first the RAW (not standard): in ACR, reduce the highlights by 100, reduce LoCA, nothing else)

-convert to sRGB
-then: first sharpening (via high-pass filter) with 0.8 pixel and soft blending
-downsize bicubic (soft edges) to 1200
-second sharpening (via high-pass filter) with 1 pixel and soft blending
-save JPG at the highest level (12) (usually I reduce quality until the size is below 500k, but that seemed to cause too many JPEG artefacts here; even at 12, most leaves look like little squares)
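Mechanically, the high-pass-filter sharpening step amounts to taking the difference between the image and a blurred copy and blending a fraction of it back in (Photoshop's high-pass layer plus a soft blend mode is one way of doing this). A minimal 1-D sketch of that arithmetic, with the assumption that a simple 3-tap box blur stands in for the real filter:

```python
def box_blur(signal):
    """3-tap box blur with edge replication (stand-in for a Gaussian)."""
    padded = [signal[0]] + list(signal) + [signal[-1]]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3
            for i in range(1, len(padded) - 1)]

def highpass_sharpen(signal, opacity=0.5):
    """Add back a fraction of the high-pass residue (signal minus blur)."""
    blurred = box_blur(signal)
    return [s + opacity * (s - b) for s, b in zip(signal, blurred)]

# An edge from dark (100) to light (200): sharpening steepens it,
# producing a small undershoot on the dark side and overshoot on the light side.
edge = [100, 100, 100, 200, 200, 200]
print(highpass_sharpen(edge))
```

The `opacity` parameter plays the role of the blend strength; the "soft blending" in the workflow above corresponds to keeping it modest so the over/undershoot at edges stays subtle.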

The Df seems a bit brighter; the 750 and 810 seem very similar.

 
Werner Droste

the solitaire

  • NG Member
  • *
  • Posts: 624
Re: The downsizing challenge - Part 1: The Problem
« Reply #19 on: May 20, 2016, 20:48:29 »
Andrea Test Three: Here is a conversion and resize of the converted TIF using Photo Mechanic's default sharpening.
I usually think that Photo Mechanic's sharpening is a bit harsh.

Do these look better or worse than the preceding?

Too much sharpening for this subject with this method, in my opinion. Doing that with one of our dog pictures would make the dog appear to have pig's bristles instead of hair.

I was amazed by the results from your first test, though.
Buddy

simsurace

  • NG Member
  • *
  • Posts: 835
Re: The downsizing challenge - Part 1: The Problem
« Reply #20 on: May 20, 2016, 23:03:42 »
Any help on how to prepare a file for upload to Facebook so that you end up with a representation of the file that appears sharp is greatly appreciated.

I think Facebook does a lot of compression, so the safest bet is probably to give them files which are already at the correct size and compressed. This approach gives you control over the compression and ensures more predictable results. But I think the interaction with Facebook algorithms makes this a somewhat more challenging problem than just resizing for your own hosting.
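One way to keep that control is to do the resize and the JPEG compression yourself before uploading, so the site has less reason to re-process the file. A sketch in Python with Pillow; the 1200 px width and quality 80 are illustrative values, not Facebook's documented limits:

```python
import io

from PIL import Image

# Resampling enum in Pillow >= 9.1, module-level constant in older versions.
LANCZOS = getattr(Image, "Resampling", Image).LANCZOS

def prepare_for_upload(img, target_width=1200, quality=80):
    """Resize to a known width, then JPEG-compress at a chosen quality."""
    target_height = round(img.height * target_width / img.width)
    resized = img.resize((target_width, target_height), LANCZOS)
    buf = io.BytesIO()
    resized.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

photo = Image.new("RGB", (4000, 3000), (80, 110, 160))  # stand-in image
jpeg_bytes = prepare_for_upload(photo)
print(len(jpeg_bytes), "bytes")
```

Delivering the file already at display size and already compressed means the only step left to the host is (at worst) a re-encode, rather than a resample plus a re-encode.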
Simone Carlo Surace
suracephoto.com

simsurace

  • NG Member
  • *
  • Posts: 835
Re: The downsizing challenge - Part 1: The Problem
« Reply #21 on: May 20, 2016, 23:05:27 »
I am experiencing the same issue with downsizing. Images don't appear sharp. 99% of the time I don't have to sharpen my images at all (when processing the high res), so for me it can't be an issue with how I sharpened.

Could you please elaborate? How do you know that you don't have to sharpen if your result is unsharp? That seems like a contradiction in itself.
Simone Carlo Surace
suracephoto.com

simsurace

  • NG Member
  • *
  • Posts: 835
Re: The downsizing challenge - Part 1: The Problem
« Reply #22 on: May 20, 2016, 23:10:58 »
Simone S, a question about conversion: I'm thinking that in this first trial, I'll use the conversion software supplied by the manufacturer because it preserves in-camera settings. There will be no edits. The output will initially be a TIF so that different resizer tools can be tested. I'll add the converted TIFs to the Dropbox cache with the NEFs. Does this sound OK?
Whatever you use, please give us the full-res image just before resizing happens, so everyone has the same starting point. Raw conversion is another can of worms, so let's leave that out of the equation. I think for massive resizes the small details will not matter anyway (e.g. exactly how the image has been de-mosaicked; these things happen at very high spatial frequencies, none of which can be represented in the final image).

Simone S, a question about resize size: What size shall we agree on for posting a resized photo? 800, 1000, 1200 pixels width?
I think any of these should be OK. The person uploading the picture can state his/her desired target size so we have a healthy variety of target sizes.
Simone Carlo Surace
suracephoto.com

simsurace

  • NG Member
  • *
  • Posts: 835
Re: The downsizing challenge - Part 1: The Problem
« Reply #23 on: May 20, 2016, 23:17:03 »
I have similar downsizing issues with files from my D800 and A7RII - my solution is to do light capture sharpening in ACR, then at full size make another sharpening pass with smart sharpen (very light), downsize in Photoshop (bicubic), then make another light sharpening pass viewing the image at the final size, adjusting the sharpening parameters to just make the image "crisp" w/o going overboard.

Straight down-sampling/down-sizing always leaves the image looking soft...

Thanks! Do you find that the result changes if you leave out the capture sharpening in ACR and the full size Smart Sharpen pass?
Simone Carlo Surace
suracephoto.com

simsurace

  • NG Member
  • *
  • Posts: 835
Re: The downsizing challenge - Part 1: The Problem
« Reply #24 on: May 20, 2016, 23:29:46 »
Andrea, thanks for participating!

- Which one of your resizes do you find best?
- Are you seeing the differences between the D810 file vs. the two others that you were talking about in Scotland? If yes, could you try to describe them?

I find almost no difference between the resizeNoEdit and PhoMechTifResizeNoShrp version for either of the cameras, whereas the sharpened Photo Mechanic version seems oversharpened to me. I don't find any big difference between the D810 vs. the other two. But maybe I will after you tell me where to look.
Simone Carlo Surace
suracephoto.com

John Geerts

  • NG Supporter
  • **
  • Posts: 9364
  • Photojournalist in Tilburg, Netherlands
    • Tilburgers
Re: The downsizing challenge - Part 1: The Problem
« Reply #25 on: May 21, 2016, 00:56:46 »
I think Facebook does a lot of compression, so the safest bet is probably to give them files which are already at the correct size and compressed. This approach gives you control over the compression and ensures more predictable results. But I think the interaction with Facebook algorithms makes this a somewhat more challenging problem than just resizing for your own hosting.
Yes. Never make a file too large for Facebook. 1200 pixels is the max in my experience; otherwise you seem to lose control.

beryllium10

  • NG Member
  • *
  • Posts: 269
Re: The downsizing challenge - Part 1: The Problem
« Reply #26 on: May 21, 2016, 05:53:09 »
One observation which may be relevant to some of this discussion. Sharpening, and especially evaluating sharpness, got more difficult for Mac users when Apple put 'retina' screens on their laptops (and now, I think, one of their iMacs). The high-res screens seem to me to make almost all photos look more punchy - sharper and higher in contrast at small scales - than they would on screens of lower pixel density. Presumably the intention is to make cell-phone photos look as good on your Mac as they do on the bright, sharp and contrasty screen of your cell phone. However, I find that I cannot reliably evaluate sharpness on such a screen. Photos that looked great on my retina laptop screen often look noticeably unsharp on conventional screens, and remain slightly unsharp even when downsized.

I would also add that trying to get around the problem by "pixel doubling" - enlarging to 200% so that each pixel in the photo covers 4 pixels on the retina screen - does not work for me. I don't find the 200% retina-screen image equivalent to viewing at 100% on a conventional screen. When I updated my laptop about a year ago, this forced me to do all photo sorting, evaluation and processing on an external monitor. For some, this may be affecting assessment of sharpness in the full-sized image and in the down-sizing procedure.

Andrea, your 3 initial photos (in reply #8) look fine to me.  The 3 downsized by Photo Mechanic look to me unpleasantly over-sharpened - i.e. distracting, over-contrasty detail at small scales, which prevents me seeing the photo as a whole.  My eyes dance around the photos trying to see what's behind the distracting detail.

Werner (Alaun), your 3 look more natural.  Perhaps they could even take a little more unsharp masking (especially the middle one, 1200_810_8171), but it's a matter of taste.  With your procedure nothing about them looks unnatural or distracting on my screen.

I agree with John and previous posters that if you know the size at which your output will be displayed, you should downsize to those exact pixel dimensions. Otherwise your photo will be re-rasterized to those dimensions by someone else's rendering algorithm, over which you have no control.

Cheers, John

charlie

  • NG Member
  • *
  • Posts: 587
Re: The downsizing challenge - Part 1: The Problem
« Reply #27 on: May 21, 2016, 06:31:48 »
Any help on how to prepare a file for upload to Facebook so that you end up with a representation of the file that appears sharp is greatly appreciated.

I've read that PNGs have less chance of suffering compression artifacts than JPGs on Facebook, and that uploading at the max size, 2048px, helps. But I think you sort of take what you can get with them. Your girlfriend is not alone in experiencing random low-quality images on that site.

the solitaire

  • NG Member
  • *
  • Posts: 624
Re: The downsizing challenge - Part 1: The Problem
« Reply #28 on: May 21, 2016, 09:44:09 »
Charlie, Simone, John and John, thank you for the advice given so far. Downsizing to 2048 and providing a JPG at 80% or a PNG will be the next things we will try.

Since my girlfriend uploads quite a few pictures I am not sure whether downsizing in small steps in Photoshop is a viable option because that would slow down the workflow considerably.

Does anyone have experience with Irfanview for batch downsizing?
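IrfanView's built-in batch conversion dialog can do exactly this kind of job; for anyone who would rather script it, the same batch downsize can be sketched in Python with Pillow. The folder names and the 2048/80% settings below are just placeholders matching the plan above:

```python
from pathlib import Path

from PIL import Image

# Resampling enum in Pillow >= 9.1, module-level constant in older versions.
LANCZOS = getattr(Image, "Resampling", Image).LANCZOS

def batch_downsize(src_dir, dst_dir, target_width=2048, quality=80):
    """Downsize every .jpg in src_dir to target_width, saving into dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    count = 0
    for path in sorted(Path(src_dir).glob("*.jpg")):
        with Image.open(path) as img:
            target_height = round(img.height * target_width / img.width)
            img.resize((target_width, target_height), LANCZOS).save(
                dst / path.name, format="JPEG", quality=quality)
        count += 1
    return count
```

Called as, say, `batch_downsize("originals", "web")`, it processes a whole folder in one pass, which addresses the workflow-speed concern without stepping through each image in Photoshop.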
Buddy

John Geerts

  • NG Supporter
  • **
  • Posts: 9364
  • Photojournalist in Tilburg, Netherlands
    • Tilburgers
Re: The downsizing challenge - Part 1: The Problem
« Reply #29 on: May 21, 2016, 10:09:06 »
Charlie, Simone, John and John, thank you for the advice given so far. Downsizing to 2048 and providing a JPG at 80% or a PNG will be the next things we will try.
Facebook's max pixel size is 1,600 or something (they change it regularly). I would suggest making the file smaller, around the 1200-1400 region, when using it for FB. I never mess with the percentages (because I don't know what actually is 'changed') and always keep it at max.

Good old 'View NX2', but also 'FastStone' and DxO, are good at resizing, but I prefer CC for most of the resizes because of the control you have when downsizing.