There is a fundamental sonic difference between perfectly capturing the sound of a distorted guitar and allowing the guitar to distort the recording.
Many recording engineers do not understand the difference.
One good example might be found in the difference between amplifying a square wave and creating a square wave.
In the average amplifier, if you input a musical signal with previously clipped peaks (where the loudest parts have already been flat-topped into something akin to a square wave), you will hear those peaks as rendered distortion, the same at any volume level, without upsetting the amplifier or adding any extra distortion components.
Contrast that with the same musical signal before it has been clipped. This time we turn up the input level high enough that the amplifier itself clips the peaks. This sounds fundamentally different from the rendered distortion because, while the clipped signal might look similar on a scope, the amplifier is now generating its own signatures: pushing the limits of its linearity, spewing harmonics, saturating its feedback loops, and stressing its power supply.
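To make the contrast concrete, here is a rough numerical sketch. The 440 Hz test tone, the hard_clip and tanh-based toy_amp stage, and the drive settings are illustrative assumptions, not a model of any real amplifier. It measures how much distortion the amplifier stage itself adds in each case: almost none when the input arrives already clipped and the stage stays in its linear region, and a great deal when a clean input is driven hard enough to saturate the stage.

```python
import numpy as np

# Toy illustration of the contrast above. The tanh "amplifier" and the
# numbers are illustrative assumptions, not a model of any real circuit.
fs = 48_000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)   # clean 440 Hz tone standing in for the guitar

def hard_clip(x, limit):
    """Flat-top the peaks, standing in for distortion baked in upstream."""
    return np.clip(x, -limit, limit)

def toy_amp(x, drive):
    """Crude amplifier stage: gain followed by soft saturation once headroom runs out."""
    return np.tanh(drive * x)

def distortion_added(inp, out):
    """Fraction of output energy not explained by a simple linear gain on the input."""
    gain = np.dot(out, inp) / np.dot(inp, inp)   # least-squares gain fit
    residual = out - gain * inp
    return np.sum(residual**2) / np.sum(out**2)

# Case 1: the signal arrives already clipped; the amp stays in its (nearly) linear region.
pre_clipped = hard_clip(clean, 0.3)
case1_out = toy_amp(pre_clipped, drive=1.0)

# Case 2: a clean signal driven hard enough that the amplifier itself flattens the peaks.
case2_out = toy_amp(clean, drive=6.0)

print("amp-added distortion, pre-clipped input :", distortion_added(pre_clipped, case1_out))
print("amp-added distortion, clean input driven:", distortion_added(clean, case2_out))
# Both outputs look flat-topped on a scope, but only the second case
# contains distortion products the amplifier itself created.
```

Both outputs would look similarly squared-off on a scope, yet only the hard-driven case contains products the amplifier manufactured on its own, which is the signature described above.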
Perfectly capturing a pre-distorted signal sounds very different from the act of distorting it.
While the differences may seem academic, understanding the mechanisms at work will go a long way toward making better recordings.
Paul McGowan