Which audio quality is better, 44KHz 1441kbps or 48KHz 320 kbps?

The sample rate of digital audio needs to be high enough to cover the audible spectrum, usually rated as 20Hz to 20kHz. The 44.1kHz standard used for CDs was a convenient figure chosen to allow this: the sampling rate must exceed the Nyquist limit, meaning it must be more than twice the highest frequency in the band (2 × 20kHz = 40kHz, so 44.1kHz clears it with a little margin for the anti-aliasing filter).
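To put a quick number on the Nyquist point, here is a small Python sketch (the function name and band limit are just illustrative, not from any standard library):

```python
# Sketch: check a sample rate against the Nyquist requirement for the
# nominal 20Hz-20kHz audio band. Names here are illustrative only.
AUDIO_BAND_TOP_HZ = 20_000  # highest frequency we want to capture

def nyquist_ok(sample_rate_hz: float, max_freq_hz: float = AUDIO_BAND_TOP_HZ) -> bool:
    """True if the sample rate strictly exceeds twice the top frequency."""
    return sample_rate_hz > 2 * max_freq_hz

print(nyquist_ok(44_100))  # True: 44.1kHz > 40kHz
print(nyquist_ok(48_000))  # True
print(nyquist_ok(32_000))  # False: 32kHz only covers up to 16kHz
```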

44.1kHz doesn’t fit well with film and TV frame rates, so a second standard arose using 48kHz as the sample clock rate. The difference in quality terms is negligible, but 48kHz conveniently allows an integral number of audio samples per video frame.
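You can see the awkwardness with simple division. A sketch using Python's fractions module (the NTSC rate 30000/1001 ≈ 29.97fps is included for completeness; the denominator of each result tells you how many frames you need before the sample count comes out whole):

```python
from fractions import Fraction

# Sketch: audio samples per video frame at common frame rates.
# A denominator of 1 means an exact whole number of samples per frame.
for sample_rate in (44_100, 48_000):
    for fps in (24, 25, 30, Fraction(30_000, 1_001)):  # last one is NTSC ~29.97
        per_frame = Fraction(sample_rate) / Fraction(fps)
        print(f"{sample_rate}Hz @ {float(fps):.3f}fps -> {per_frame} "
              f"(whole samples every {per_frame.denominator} frame(s))")
```

At 48kHz the integer frame rates all divide cleanly (2000, 1920 and 1600 samples per frame), and even NTSC resolves every 5 frames; 44.1kHz is messy at film's 24fps and needs a 100-frame block at NTSC rate.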

The bit rate is a different matter. 1411kbps (the 1441 in the question looks like a transposition) is the rate of an uncompressed CD data stream: 44,100 samples/s × 16 bits/sample × 2 channels = 1,411,200 bits/s. 320kbps is a great deal less data, despite the higher sample rate, so you can instantly see that some data has been lost: that stream has been through lossy data compression to slim it down.
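The arithmetic, spelled out:

```python
# Sketch: the uncompressed CD bit rate from first principles.
sample_rate = 44_100  # samples per second, per channel
bit_depth = 16        # bits per sample
channels = 2          # stereo

bits_per_second = sample_rate * bit_depth * channels
print(bits_per_second)         # 1411200
print(bits_per_second / 1000)  # 1411.2 kbps

# Roughly how much the 320kbps stream has been slimmed down:
print(round(bits_per_second / 320_000, 1))  # 4.4
```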

In general terms, lossy data compression is only a good idea on the final delivery leg of audio distribution. If you chain lossy compressions you can get some pretty unattractive results, and if you mix two data-compressed streams together you will create artefacts, because the assumptions the codec made in the first place are no longer valid.
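The generational-loss point can be illustrated with a toy model. Real codecs are far more sophisticated than this, but coarse requantisation is a crude stand-in for a lossy encode, and the level counts below are arbitrary illustrative choices, not real codec parameters:

```python
import math

def lossy_pass(samples, levels):
    """Quantise each sample to steps of 1/levels -- a crude 'lossy encode'."""
    return [round(s * levels) / levels for s in samples]

def rms_error(a, b):
    """Root-mean-square difference between two equal-length signals."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# A plain sine wave as the source material.
original = [math.sin(2 * math.pi * n / 100) for n in range(1000)]

# One encode straight to the final delivery quality...
once = lossy_pass(original, 36)

# ...versus a chain of five re-encodes ending at the same quality.
chained = original
for levels in (40, 39, 38, 37, 36):
    chained = lossy_pass(chained, levels)

print(rms_error(original, once))     # error after a single encode
print(rms_error(original, chained))  # the chain accumulates more error
```

In this toy model the chained version ends up measurably worse than a single encode at the same final quality, which is the same reason you keep compression for the last leg only.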

Thus 44.1kHz 1411kbps is likely to be the better quality and, being uncompressed, a more useful stream than the 48kHz 320kbps one.

Chris Woolf
