"A GPU should always be way faster since it's all about parallel tasks."
As a general statement, this is unfortunately not correct. There are many factors that determine the cross-over point at which a parallel system becomes faster than a serial system.
To name a few:
1) Amdahl's law: the serial fraction of the workload caps the achievable overall speedup, no matter how fast the parallel part runs (see the sketch after this list).
2) The relative speed difference between the serial processor and the parallel processor. Ron has a very fast serial CPU and a relatively slow parallel GPU. This ratio pushes the cross-over point, at which executing the parallel part on the GPU reduces overall execution time, further out.
3) The efficiency of the implementation.
4) The communication overhead between CPU and GPU. As the GPU does not have an embedded serial CPU, it always needs to ask the CPU for further work or send data back to the CPU for further processing.
...and many more factors play into this.
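To make point 1 a bit more concrete, here is a minimal Python sketch of Amdahl's law (the 0.8 parallel fraction and the 100x GPU speedup are purely illustrative numbers, not measurements from any real system):

def amdahl_speedup(parallel_fraction, parallel_speedup):
    # Overall speedup when only a fraction of the work is parallelized;
    # the serial remainder (1 - parallel_fraction) always runs at 1x.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / parallel_speedup)

# Even a 100x faster GPU cannot lift the overall speedup above 1/0.2 = 5x
# when 20% of the task stays serial:
print(amdahl_speedup(0.8, 100.0))  # ~4.8x
print(amdahl_speedup(0.8, 1e9))    # ~5.0x, the hard ceiling

In other words, the serial part, not the GPU, quickly becomes the limit.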
In the end, what counts is minimizing wall-clock time. For example:
If Ron's system with his OC CPU finishes a task in 20 seconds (CPU only) and adding the GTX 750 as GPU option does not change the result, it is still 20 seconds.
If someone else has a very slow Celeron CPU that takes 100 seconds for the same task and adds a super-fast GPU like the GTX 980, the overall improvement could be 4x (as an example), giving 25 seconds. Great from a speedup perspective in itself, but still slower in overall wall-clock time than the first option.
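Put as a small calculation (using the illustrative numbers from above, not benchmarks):

def wall_clock(cpu_seconds, gpu_speedup=1.0):
    # Overall runtime when the GPU accelerates the task by a given factor.
    return cpu_seconds / gpu_speedup

print(wall_clock(20.0))        # fast OC CPU, GTX 750 adds nothing -> 20 s
print(wall_clock(100.0, 4.0))  # slow Celeron + GTX 980, 4x faster -> 25 s
# A 4x speedup on the slow system still loses on wall-clock time.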
rgds, Andy