07-23-2020, 04:45 AM
This is my simple picture of the process:
- The audio interface repeatedly measures the loudness and encodes each measurement as a number with x bits. The more bits you use, the finer you can resolve the loudness range. A single measurement is called a sample.
- The number of samples taken per second (the sampling rate) determines how well you can capture the frequencies in the music. According to the Nyquist-Shannon sampling theorem, the sampling rate has to be at least twice the highest frequency you want to capture (the limit of hearing is about 20 kHz).
- Different standards for bits per sample and sampling rate exist, e.g. CDs use 16 bit and 44.1 kHz; in other common areas of application, 24 or 32 bit at 48 kHz are used.
- For transmission the samples are sent in packets, i.e. they are buffered in frames. The size of a frame can be given either as the number of samples per frame or as the frame length in milliseconds (ms). A frame of 100 samples at a sampling rate of 48 kHz has a length of (100 / 48) ms, i.e. about 2 ms.
- The amount of information transmitted per second is roughly independent of the frame size and (without compression) is given by the number of bits per sample times the number of samples per second. 24-bit samples at 48 kHz give 24 bit * 48 kHz, i.e. a little more than 1 Mbit/s or roughly 0.15 MByte/s per channel (which has to be doubled for stereo, etc.).
- Lowering the frame size obviously reduces latency, but increases the chance of packets not arriving in the correct order.
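To make the arithmetic in the bullets above concrete, here is a small Python sanity check using the example numbers (100-sample frames, 24 bit, 48 kHz):

```python
# Sanity check of the frame-length and bitrate arithmetic above.
bits_per_sample = 24
sampling_rate_hz = 48_000
samples_per_frame = 100

# Frame length: samples per frame divided by samples per second, in ms.
frame_length_ms = samples_per_frame / sampling_rate_hz * 1000
print(f"frame length: {frame_length_ms:.2f} ms")  # ~2.08 ms

# Uncompressed data rate per channel: bits/sample * samples/second.
bitrate_bps = bits_per_sample * sampling_rate_hz
print(f"per channel: {bitrate_bps / 1e6:.3f} Mbit/s")        # 1.152 Mbit/s
print(f"           = {bitrate_bps / 8 / 1e6:.3f} MByte/s")   # 0.144 MByte/s
print(f"stereo:      {2 * bitrate_bps / 1e6:.3f} Mbit/s")    # 2.304 Mbit/s
```

Note that the frame size drops out of the bitrate entirely; it only affects how the same data is chopped up for the network.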
My question did not refer to the individual settings of the audio interface, but to potential problems arising when the members of a group use different settings. The resampling required when different sampling rates are used may be even more problematic with short frame sizes.
48 kHz to me seems to be a reasonable choice for a standard sampling rate.
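Just to illustrate what resampling between mismatched rates involves, here is a toy linear-interpolation resampler in Python. This is only a sketch of the idea; real clients use proper band-limited resamplers, and the function name is my own invention:

```python
# Toy sample-rate conversion by linear interpolation, to illustrate what
# must happen when group members use different sampling rates.
# (Illustrative only; real resamplers are band-limited and more complex.)

def resample_linear(samples, src_rate, dst_rate):
    """Resample a list of samples from src_rate to dst_rate (Hz)."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate        # fractional position in the source
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[j + 1] if j + 1 < len(samples) else samples[j]
        out.append(a * (1 - frac) + b * frac)
    return out

# A 10 ms frame at 44.1 kHz (441 samples) becomes 480 samples at 48 kHz:
frame_44k1 = [0.0] * 441
print(len(resample_linear(frame_44k1, 44_100, 48_000)))  # 480
```

The shorter the frames, the less context the resampler has at the frame boundaries, which hints at why mixed rates and short frames are an awkward combination.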