(07-21-2020, 09:18 PM)MartinB Wrote: I am surprised to not read any recommendation about standardization of sampling rates and number of bits per sample.
From my simple understanding, differences in sampling rates should lead to additional effort and artefacts during resampling.
Any hints / comments / experiences ?
Reducing latency by increasing the sampling rate to extreme values like 96 kHz does not make sense to me
TL;DR: you're right, it doesn't make sense. However, the people doing it did have a good reason.
Here's my understanding. (As a real-time software engineer and home recordist for 40 years, I'm a certified nerd. Most of what I say is correct, even!)
First, the latency we're measuring here is audio interface round-trip latency, and has nothing to do with the network. The network is important for JK, but it's not what we see as our latency when we measure it in JK. (I'm new here, and perhaps there's a way to show latency for each member in a jam, and THAT would include network latency.)
But restricting ourselves to audio interface latency here, the main cause of this latency is buffering: storing sample points in a buffer that is passed between the software and hardware. If my sample rate is 44100 samples per second and my buffer is 44100 samples long, then my latency going from hardware to software (audio input) is going to be 1 second. (Yeah, that's an oversimplification.) And it'll be another second going from software to hardware on output, for a total "round-trip" latency (in my computer) of 2 seconds.
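To make that arithmetic concrete, here's a minimal sketch of the buffer-latency formula described above (the function name and the one-input-buffer-plus-one-output-buffer assumption are mine, just for illustration):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """One-way latency contributed by a single audio buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

# The deliberately exaggerated example from the text:
# a 44100-sample buffer at 44100 samples/second.
one_way = buffer_latency_ms(44100, 44100)   # 1000.0 ms
round_trip = 2 * one_way                    # input buffer + output buffer = 2000.0 ms
print(one_way, round_trip)
```

In reality drivers often use several small buffers in a ring rather than one big one, so treat this as the back-of-the-envelope version, not a driver model.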
There are two ways to reduce this buffering latency. The first is to use smaller buffers. The second is to use a higher sample rate.
Generally the best way to reduce latency is to use smaller buffers. There's a limit to that related to software engineering/architecture. But if the buffer size is set to some constant by something, then the only other option is to increase the sample rate. However, you're right that this is a nutty idea in this context. Fortunately, not quite as nutty as one might think, due to how JK software works.
I'm new here, so some of this may be wrong, but my understanding is that JK software "normalizes" the audio from whatever your soundcard produces, and it uses the same audio format for everyone. That's a good thing. Now, if we knew what format it likes best and used that in our computers, it would make JK's job easier. I believe it uses 48 kHz (48000 Hz). It apparently uses floating point internally, so 16- vs 24-bit samples won't matter much; either way it's pretty much the same conversion. And it compresses the audio (like MP3) so that it doesn't use too much bandwidth. I have no guess what compression scheme it uses, and don't really care.
Note that there's a limit to how low the buffering latency can go, related to how long the computer's attention might be on other stuff it can't interrupt to handle your audio. Usually the longest pole in that tent is "PCI bus latency," and you can google for apps to measure it. IIRC a popular old one is "pcilat.exe" if you're on Windows. When you try to go lower than your machine can handle, you get pops and clicks, because the CPU was busy with other stuff when it needed to handle your audio buffer. It doesn't hurt anything other than your ears, so it's fine to try. Also, for jamming, an occasional pop or click may be better than always having too much latency. It's a tradeoff that you get some control over.
Finally, what I suspect is the real reason people use higher sample rates: to pass the audio setup test. If we don't pass, we don't get to play! That test uses a fixed buffer size (440 samples!). So, we cheat to pass the exam.
But wait! I lied. It doesn't really use a fixed value. It uses a fixed DEFAULT value. I never would have guessed, but while that "add audio equipment" panel that tests our hardware is up, the "Manage" button on the panel behind is still active! And we can use that to reduce the buffer size, all the way down to 1 ms (it uses time rather than number of samples, which is a good idea). So, the main reason for using higher sample rates disappears, poof.
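You can see the "cheat" in the numbers. With the default buffer fixed at 440 samples, the only way to shrink the time that buffer represents is to push more samples through per second (the loop below is just the same latency formula applied at a few common rates):

```python
# One-way latency of a fixed 440-sample buffer at various sample rates.
for rate_hz in (44100, 48000, 96000):
    ms = 1000.0 * 440 / rate_hz
    print(f"{rate_hz} Hz: {ms:.2f} ms per buffer")
# 44100 Hz ~ 9.98 ms, 48000 Hz ~ 9.17 ms, 96000 Hz ~ 4.58 ms
```

So at 96 kHz the test sees less than half the buffer latency it sees at 44.1 kHz, even though nothing about your setup is actually "better." Once you know the buffer size itself is adjustable, that trick is pointless.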
If that higher sample rate propagated all the way through the network, it would be really bad. Fortunately it doesn't. But it still adds a lot of unnecessary processing for your CPU, and you don't even get the benefit of better audio quality (because JK normalizes it to its preferred format, and extra quality is silently discarded.)
Regarding artefacts, I'd wager that the compression method's artefacts would overwhelm resampling artefacts. You get bonus points for spelling it that way in this context. I had to add it to my spell checker just now.