djoptix
Senior Retro Guru
Neil said:
Put that in context of the cable inside your PC, betwixt hard drive and motherboard/controller.
Then tell me there's sense in spending lots of money on digital cables.
That's a special case - the cables are incredibly short and hence have hardly any capacitance to speak of, and don't have to reject interference, so they can be cheap and small.
Neil said:
Look at the type of cable that's used for networking.
IT doesn't have the budget (typically, these days) for expensive, overpriced cables, and it shifts the same types of 0s and 1s at easily the same speeds (probably much, much more).
Cat5 can be used for very long runs because of its balanced pairs, which have a very tight and controlled twist. This makes it excellent at rejecting interference, but it's not ideal for audio: it's flimsy, would suffer from handling noise, and it's difficult to terminate into audio connectors. Plus, once you're transmitting a lot of audio you have to start paying attention to bend radii when installing, which is tedious.

Nevertheless you're right that it does do a very good job of transmitting a digital audio signal like SPDIF or AES. There's masses of error correction and redundancy built into network equipment, so traffic still flows well over Cat5 despite the errors. The difference for audio traffic is that you have a finite amount of time to do error correction and reconstruct the signal, because it needs to make it out of the speakers by a certain time. That's why digital cable has to be properly specified, terminated and shielded.

For instance, I've just pinged my router, which is being a bit temperamental. Usually it takes under 2ms for a response, but sometimes it spikes up to around 20ms. In the context of network traffic this is completely unnoticeable, but in an audio context a sudden 20ms delay in the data getting through would be disastrous. Of course the cable hasn't actually introduced a delay; it's just that the error correction has taken this long to spit out meaningful values from a signal which has been compromised.
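To put that 20ms spike in perspective, here's a back-of-envelope calculation. The 48kHz sample rate and 256-sample buffer are my own assumptions (typical of pro audio), not figures from the thread:

```python
# How much audio does a 20 ms stall swallow, versus a typical playback buffer?
SAMPLE_RATE = 48_000                              # Hz, assumed pro-audio rate
spike_ms = 20                                     # the ping spike from the example
samples_missed = SAMPLE_RATE * spike_ms // 1000   # samples that miss their deadline

buffer_samples = 256                              # assumed low-latency buffer size
buffer_ms = buffer_samples / SAMPLE_RATE * 1000   # how long that buffer lasts

print(samples_missed)        # 960 samples
print(round(buffer_ms, 2))   # 5.33 ms
```

In other words, a 20ms stall is nearly four whole buffers of audio gone, which you would hear as a glitch; the network stack, with no deadline, just shrugs it off.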
Shamelessly borrowing from a recent Sound On Sound article:
Sound On Sound said:
This shows the result of sending an AES3 signal through 2m of Canford Audio’s DFT digital audio cable. What this graph is showing is a series of sections of digital audio data, all overlaid over one another. The top, horizontal blue line represents the high data value (binary 1), and the lower blue line represents the low data value (binary 0). Individual data bits obviously exist only at one of these two levels, and the divisions between one data bit and the next are represented by the vertical blue lines.
As you can see, with this digital audio cable the source signal is being received still looking very square and clean, and each individual ‘bit cell’ (also referred to as the ‘eye’ pattern) is clearly identifiable and very ‘open.’ The small red rectangles within each bit cell represent the minimum eye pattern area that must be kept clear for the receiver to be able to interpret the data reliably — in effect, this is the ‘decision zone’, where the receiver has to decide whether the data is high or low for that specific data bit.
Sound On Sound said:
The second graph shows exactly the same thing, but this time with the signal passing through 2m of standard professional star-quad microphone cable. The rising and falling edges between each bit cell have obviously been slowed down dramatically, and follow a ragged curve rather than the crisp up or down switching seen in the DFT cable. This is due entirely to the capacitance of the cable storing, and then releasing over time, the electrical energy from each change of signal voltage. While the damage is very obvious, it’s still not disastrous and the data can still be received reliably — but you can certainly see how the eye pattern is starting to ‘close up’. Now imagine what would happen if you tried to send a signal down many tens of metres of mic cable! In practice, it would become unrecoverable after about 15 to 25m.
The critical aspect of this waveform damage is that the rising and falling edges of each bit cell are used to define the bit-clock transitions, and the distortions caused by cable capacitance can therefore introduce clock jitter.
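A rough RC sketch of what that second graph is showing. All the numbers here are assumptions for illustration (110Ω AES3 source/termination impedance, roughly 150pF/m for star-quad mic cable, AES3 at 48kHz); the point is just that the slowed edge becomes a big fraction of a bit cell:

```python
# Treat the cable as a simple RC low-pass and compare the resulting edge
# rise time with the shortest feature in an AES3 signal. Assumed figures.
R = 110.0                       # ohms: AES3 nominal source/termination impedance
C = 150e-12 * 2.0               # farads: assumed ~150 pF/m of mic cable over 2 m
tau = R * C                     # RC time constant
rise_10_90 = 2.2 * tau          # classic 10-90% rise time of an RC edge

bit_period = 1 / 3.072e6        # AES3 at 48 kHz runs at 3.072 Mbit/s
half_cell = bit_period / 2      # shortest feature: the mid-cell '1' transition

print(f"rise time  {rise_10_90 * 1e9:.0f} ns")   # ~73 ns
print(f"half cell  {half_cell * 1e9:.0f} ns")    # ~163 ns
```

With these assumed figures the edge takes around 73ns to swing, against a 163ns half bit cell: that's the eye visibly starting to close. Scale the capacitance up to tens of metres of mic cable and the edges never settle within a cell, which is the "unrecoverable" case above.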
Sound On Sound said:
The diagram above shows how the data is conveyed in the S/PDIF and AES3 interface formats. The audio data ones are conveyed by switching the signal between high and low (or vice versa) midway through the bit period (i.e. sending the 3MHz square wave), while audio data zeros are conveyed by remaining at a constant level for the entire bit period (sending the 1.5MHz square wave). This arrangement is actually very similar to the way standard linear timecode works — although that system operates by switching between 1kHz and 2kHz signals, giving rise to the characteristic warble. This kind of channel code arrangement (often called a Bi-Phase Mark Code) has the advantage that the recovered data is not affected by the overall signal polarity — all that matters is whether the signal transitions midway through a bit cell period or not, rather than whether it is a high or low voltage at any particular time. However, as the diagram shows, the amount of clock jitter is directly affected by the audio data content to some degree, because of the different cable-capacitance charging and discharging times that occur.
Ideally, the receiving circuitry will be able to remove this interface jitter, but not all devices manage this equally well (as shown in the investigations into A-D converter clock-recovery above), and if the embedded clocks are to be used as a clock reference (as is common practice in D-A converters, for example), then this interface jitter can become part of the overall system’s clock jitter, potentially resulting in reduced D-A (or A-D) performance.
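The Bi-Phase Mark Code arrangement described in that quote is easy to sketch in code. This is a minimal illustrative encoder/decoder of my own (not any product's implementation): a transition at every bit-cell boundary, an extra mid-cell transition for a one, and decoding that only asks whether the two halves of a cell differ — which is exactly why polarity doesn't matter:

```python
def bmc_encode(bits, level=0):
    """Encode a bit sequence as Biphase Mark Code half-cell levels.

    The line transitions at every bit-cell boundary; a '1' transitions
    again mid-cell, while a '0' holds its level for the whole cell.
    """
    halves = []
    for b in bits:
        level ^= 1              # boundary transition (happens for every bit)
        halves.append(level)
        if b:
            level ^= 1          # extra mid-cell transition marks a '1'
        halves.append(level)
    return halves

def bmc_decode(halves):
    """A bit is '1' iff the two halves of its cell differ."""
    return [1 if halves[i] != halves[i + 1] else 0
            for i in range(0, len(halves), 2)]

data = [1, 0, 1, 1, 0, 0, 1]
line = bmc_encode(data)
assert bmc_decode(line) == data
# Polarity-insensitive: a fully inverted signal decodes identically.
assert bmc_decode([1 - h for h in line]) == data
```

Swap the two wires of the balanced pair and every level inverts, but the transitions stay where they were, so the decoder is none the wiser.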
----------
Despite my rant, I wasn't saying that spending huge amounts of cash on digital interconnects is justified. My point (as drcarlos has said) is that if your cable meets the right standards for digital audio transmission, it will deliver identical results to a special, super-shielded, hand-welded-by-virgins Monster cable costing £1000 a foot. But you do have to get the specification right in the first place. Nevertheless (Neil) you are right about Cat5: it is used to transmit massive amounts of audio data over large distances with great success, such as in the up-and-coming Dante protocol (http://en.wikipedia.org/wiki/Dante_(networking)). However, even Dante struggles when using standard network interfaces to pass audio, and works much better with its own kit.
Line-level audio is much more tolerant of cable differences, and IMHO your interconnects make little difference as long as they don't have too much capacitance and are well terminated.
Speaker cable is even more tolerant: as long as its resistance doesn't exceed about 5% of the rated impedance of the system, it makes no discernible difference.
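As a worked example of that 5% rule of thumb (the 8Ω load and 2.5mm² conductor cross-section are assumed figures for illustration, not from the thread):

```python
# How far can speaker cable run before its resistance eats 5% of an 8 ohm load?
RHO_CU = 1.68e-8                 # ohm*m, resistivity of copper
speaker_ohms = 8.0               # assumed nominal speaker impedance
budget = 0.05 * speaker_ohms     # 0.4 ohm allowed for the whole cable loop

area_m2 = 2.5e-6                 # assumed 2.5 mm^2 conductors
r_per_metre = RHO_CU / area_m2   # resistance of one conductor, ohms per metre
max_run = budget / (2 * r_per_metre)   # factor of 2: out and back

print(f"max run ~{max_run:.0f} m")     # ~30 m
```

With those assumptions the 0.4Ω loop budget allows a run of roughly 30m, which is why ordinary, sensibly sized copper is perfectly fine at domestic distances.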