Software-induced jitter

You will find many claims on audio forums about improved sound quality from disabling as many processes as possible, using a different media player, playing from memory, or using different protocols such as ASIO, WASAPI, or Kernel Streaming.


Some of these claims are plausible. Using a bit-perfect audio path, avoiding resampling, dithering, etc., can improve the sound.
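Whether a path is bit perfect can be verified with a null test: record a digital loopback, align the capture with the source, and check that the samples are identical. A minimal sketch, assuming the third-party numpy and soundfile packages; the file names are placeholders:

```python
import numpy as np
import soundfile as sf

def is_bit_perfect(source_path, capture_path):
    """Null test: align the capture with the source, then check that
    every sample in the overlap is identical."""
    src, _ = sf.read(source_path, dtype="int32", always_2d=True)
    cap, _ = sf.read(capture_path, dtype="int32", always_2d=True)
    s = src[:, 0].astype(np.float64)
    c = cap[:, 0].astype(np.float64)
    probe = s[:min(len(s), 48000)]            # first ~second of the source
    # Capture must be at least as long as the probe for this crude alignment
    lag = int(np.argmax(np.correlate(c, probe, mode="valid")))
    n = min(len(src), len(cap) - lag)
    return bool(np.array_equal(cap[lag:lag + n], src[:n]))

# print(is_bit_perfect("source.wav", "capture.wav"))  # placeholder file names
```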


If we assume a bit-perfect audio path, it is a bit of a mystery why using another media player or another bit-transparent protocol can have an impact on sound quality.
Using other software will not, as if by magic, improve the quality of the clock driving the DA conversion. This clock has its own intrinsic jitter; software cannot lower it.
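To put numbers on clock jitter: the standard textbook approximation for the SNR ceiling imposed by sampling a sine of frequency f with RMS jitter t_j is SNR = -20·log10(2π·f·t_j). A quick sketch:

```python
import math

def jitter_limited_snr_db(f_hz, jitter_rms_s):
    """SNR ceiling from sampling a sine at f_hz with RMS jitter jitter_rms_s."""
    return -20 * math.log10(2 * math.pi * f_hz * jitter_rms_s)

# A 10 kHz tone with 250 ps RMS clock jitter:
print(jitter_limited_snr_db(10_000, 250e-12))  # ~96 dB, roughly 16-bit territory
```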


The task of the software is to put the hardware to work.

Press play and the software tells the hard disk to read a certain file. This requires the disk to spin and the head to move.
Likewise, decoding the audio requires the processor to do its work, store the results in memory, and spool them to the audio device.

In the end, all commands issued by the software result in actions by the hardware.
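In code, a player is essentially a loop that keeps this chain busy: read from disk, decode, spool buffers to the device. A minimal sketch, assuming the third-party soundfile and sounddevice packages:

```python
import soundfile as sf    # decodes the file; the disk reads happen here
import sounddevice as sd  # hands buffers to the audio device driver

def play(path, blocksize=4096):
    """Read, decode, and spool a file to the default audio device."""
    with sf.SoundFile(path) as f:                           # disk: open the file
        with sd.OutputStream(samplerate=f.samplerate,
                             channels=f.channels,
                             dtype="float32") as stream:
            while True:
                block = f.read(blocksize, dtype="float32")  # CPU: decode a chunk
                if len(block) == 0:                         # end of file
                    break
                stream.write(block)                         # spool to the device

# play("track.flac")  # "track.flac" is a placeholder
```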


The idea behind software-induced jitter is that the load on the system, or a specific load pattern (e.g. constant processing versus burst processing), generates EMI, RFI, and/or ripple on the power rails that might affect the DA conversion by inducing sample rate jitter.
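The hypothesis is at least testable: generate a controlled load pattern while measuring the analog output. A minimal sketch of a bursty CPU load generator (the duty cycle and period are arbitrary placeholders):

```python
import multiprocessing as mp
import time

def burst_load(duty=0.5, period=0.1, seconds=60):
    """Alternate busy-spinning and sleeping to create a bursty CPU load."""
    t_stop = time.perf_counter() + seconds
    while time.perf_counter() < t_stop:
        t_busy = time.perf_counter() + duty * period
        while time.perf_counter() < t_busy:
            pass                          # burn cycles
        time.sleep((1 - duty) * period)   # idle for the rest of the period

if __name__ == "__main__":
    # One worker per core; run the audio measurement while this executes.
    workers = [mp.Process(target=burst_load) for _ in range(mp.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```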

Power saving schemes

One of the recommendations for optimizing sound quality is to disable power saving.
I did an analog loopback recording (sound card out into sound card in) with RMAA, with the PSU in place and without.

Toshiba Satellite with PSU in place

Toshiba Satellite without PSU, battery only

This is really horrible. Switching to battery power does give you a clean DC source, but at the same time power saving kicks in, generating a lot of distortion.
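The artifacts are easy to see in a spectrum of the loopback recording. A minimal sketch of such an analysis, assuming the third-party numpy and soundfile packages (RMAA of course does far more); the file name is a placeholder:

```python
import numpy as np
import soundfile as sf

def spectrum_db(path):
    """Windowed FFT magnitude spectrum, normalized to the peak bin."""
    x, fs = sf.read(path, always_2d=True)
    x = x[:, 0] * np.hanning(len(x))      # first channel, Hann window
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs, 20 * np.log10(mag / mag.max() + 1e-12)

# freqs, db = spectrum_db("loopback_on_battery.wav")
# Power-saving artifacts show up as spurious tones well above the noise floor.
```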


This is perhaps the most vulnerable setup, as the onboard audio is fully exposed to everything going on inside the PC.

One can imagine that with an outboard DAC using asynchronous USB combined with galvanic isolation, the results might be completely different.

This is one of the reasons why it is hard to say whether a tweak will work or not: our systems differ substantially.


Archimago [1] did some interesting tests: run a jitter test, then repeat it with the CPU and the GPU running at 100%.
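The classic stimulus for this kind of measurement is the Dunn J-test: a high-level tone at fs/4 combined with a one-LSB square wave at fs/192, designed to provoke data-correlated jitter. A sketch of generating such a signal; Archimago's exact test chain differs, and the tone level here is a placeholder:

```python
import numpy as np

def jtest_signal(fs=48000, seconds=10, bits=16):
    """J-test-style stimulus: tone at fs/4 plus a one-LSB square wave at fs/192.
    At fs/192 the square wave period is always exactly 192 samples."""
    n = np.arange(int(fs * seconds))
    tone = 0.5 * np.sin(np.pi * n / 2)           # fs/4: one cycle per 4 samples
    lsb = ((n // 96) % 2) * 2.0 ** (1 - bits)    # toggles every 96 samples
    return (tone + lsb).astype(np.float32)

# import soundfile as sf
# sf.write("jtest_48k.wav", jtest_signal(), 48000, subtype="PCM_16")
```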

Adaptive mode USB (system load not specified)


Adaptive mode USB with 100% CPU/GPU


Some additional noise shows up between 8 and 9 kHz.


Asynchronous mode USB (system load not specified)


Asynchronous mode USB with 100% CPU/GPU

It is almost impossible to spot a difference.


References
  1. MEASUREMENTS: Adaptive AUNE X1, Asynchronous "Breeze Audio" CM6631A USB, and Jitter - Archimago's Musings