In electronics and telecommunications, jitter is the deviation of a signal's actual timing from its expected or ideal timing, often measured relative to a reference clock.
Jitter is a significant, and usually undesired, factor in the design of almost all communications links. It is typically caused by noise and disturbances in the system, such as thermal noise, power supply variations, loading conditions, device noise, and interference from adjacent circuits.
Jitter can be quantified in the same terms as all time-varying signals, e.g., root mean square (RMS) or peak-to-peak displacement. Also like other time-varying signals, jitter can be expressed in terms of spectral density.
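As a concrete illustration, the following sketch (in Python, using hypothetical time interval error measurements) computes RMS and peak-to-peak jitter from a set of samples giving the measured deviation of each signal edge from its ideal position; the function names and sample values are illustrative only.

```python
import math

def rms_jitter(tie_samples):
    """RMS jitter of time interval error (TIE) samples, in the same units as the input."""
    n = len(tie_samples)
    mean = sum(tie_samples) / n
    # RMS about the mean; for a zero-mean reference this equals the standard deviation.
    return math.sqrt(sum((x - mean) ** 2 for x in tie_samples) / n)

def peak_to_peak_jitter(tie_samples):
    """Peak-to-peak jitter: spread between the earliest and latest observed edge deviation."""
    return max(tie_samples) - min(tie_samples)

# Hypothetical TIE measurements in picoseconds (deviation of each edge from its ideal position).
tie_ps = [1.2, -0.8, 0.3, 2.1, -1.7, 0.9, -0.4, 1.5]
print(f"RMS jitter:          {rms_jitter(tie_ps):.2f} ps")
print(f"Peak-to-peak jitter: {peak_to_peak_jitter(tie_ps):.2f} ps")
```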
Jitter period is the interval between two times of maximum effect (or minimum effect) of a signal characteristic that varies regularly with time. Jitter frequency, the more commonly quoted figure, is its inverse. ITU-T G.810 classifies jitter frequencies below 10 Hz as wander and frequencies at or above 10 Hz as jitter.
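As a small worked example of this convention, the sketch below (Python; the helper name and inputs are hypothetical) derives the jitter frequency as the inverse of the jitter period and applies the 10 Hz boundary from ITU-T G.810.

```python
def classify_timing_deviation(period_s):
    """Classify a periodic timing deviation as jitter or wander per ITU-T G.810.

    period_s: jitter period in seconds (interval between successive maxima of the deviation).
    """
    frequency_hz = 1.0 / period_s  # jitter frequency is the inverse of the jitter period
    return ("wander" if frequency_hz < 10.0 else "jitter"), frequency_hz

print(classify_timing_deviation(1.0))    # 1 s period  -> ('wander', 1.0 Hz)
print(classify_timing_deviation(0.001))  # 1 ms period -> ('jitter', 1000.0 Hz)
```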
Jitter may be caused by electromagnetic interference and crosstalk with carriers of other signals. Jitter can cause a display monitor to flicker, affect the performance of processors in personal computers, introduce clicks or other undesired effects in audio signals, and cause loss of transmitted data between network devices. The amount of tolerable jitter depends on the affected application.
For clock jitter, there are three commonly used metrics: absolute jitter, the absolute difference between a clock edge's actual position in time and its ideal position; period jitter, the difference between any one clock period and the ideal or average clock period; and cycle-to-cycle jitter, the difference in duration between any two adjacent clock periods.
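A minimal sketch of how these three metrics could be computed from measured clock edge timestamps is shown below (Python; the edge timestamps and the assumption of a known ideal period are illustrative, not a standardized measurement procedure).

```python
def clock_jitter_metrics(edge_times, ideal_period):
    """Compute absolute, period, and cycle-to-cycle jitter from measured clock edges.

    edge_times: rising-edge timestamps in seconds; ideal_period: nominal clock period in seconds.
    """
    t0 = edge_times[0]
    # Absolute jitter: deviation of each edge from where an ideal clock would place it.
    absolute = [t - (t0 + i * ideal_period) for i, t in enumerate(edge_times)]
    # Period jitter: deviation of each measured period from the ideal period.
    periods = [b - a for a, b in zip(edge_times, edge_times[1:])]
    period = [p - ideal_period for p in periods]
    # Cycle-to-cycle jitter: difference between adjacent measured periods.
    cycle_to_cycle = [b - a for a, b in zip(periods, periods[1:])]
    return absolute, period, cycle_to_cycle

# Hypothetical 100 MHz clock (10 ns ideal period) with small timing errors, in seconds.
edges = [0.0, 10.2e-9, 19.9e-9, 30.1e-9, 40.0e-9]
abs_j, per_j, c2c_j = clock_jitter_metrics(edges, 10e-9)
print("absolute:      ", abs_j)
print("period:        ", per_j)
print("cycle-to-cycle:", c2c_j)
```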
In telecommunications, the unit used for the above types of jitter is usually the unit interval (UI) which quantifies the jitter in terms of a fraction of the transmission unit period. This unit is useful because it scales with clock frequency and thus allows relatively slow interconnects such as T1 to be compared to higher-speed internet backbone links such as OC-192. Absolute units such as picoseconds are more common in microprocessor applications. Units of degrees and radians are also used.
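For example, the same absolute jitter represents very different fractions of a unit interval at different line rates. The sketch below (Python, using the nominal T1 and OC-192 line rates) converts 10 ps of jitter to UI for both links.

```python
def jitter_in_ui(jitter_seconds, bit_rate_bps):
    """Express absolute jitter as a fraction of the unit interval (UI = 1 / bit rate)."""
    unit_interval = 1.0 / bit_rate_bps
    return jitter_seconds / unit_interval

jitter = 10e-12  # 10 ps of absolute jitter
for name, rate in [("T1", 1.544e6), ("OC-192", 9.95328e9)]:
    print(f"{name}: 10 ps = {jitter_in_ui(jitter, rate):.6f} UI")
# The same 10 ps is a negligible fraction of a T1 unit interval (about 648 ns)
# but roughly 10% of an OC-192 unit interval (about 100 ps).
```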
[Figure: the normal distribution, in which one standard deviation from the mean accounts for about 68% of the set, two standard deviations for about 95%, and three standard deviations for about 99.7%.]
If jitter has a Gaussian distribution, it is usually quantified using the standard deviation of this distribution, which for a zero-mean distribution is equivalent to an RMS measurement. Often, however, the jitter distribution is significantly non-Gaussian, for example when the jitter is caused by external sources such as power supply noise. In these cases, peak-to-peak measurements may be more useful. Many efforts have been made to meaningfully quantify distributions that are neither Gaussian nor have a meaningful peak level; all have shortcomings, but most are good enough for engineering work. Note that the reference point for jitter is typically defined such that the mean jitter is zero.
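The sketch below (Python, with synthetic data) illustrates why the choice of metric matters: for unbounded Gaussian jitter the RMS value is stable while the observed peak-to-peak grows with the length of the measurement, whereas a bounded source, such as sinusoidal jitter coupled from a power supply, has a well-defined peak-to-peak value. The amplitudes and sample counts are arbitrary.

```python
import math
import random

random.seed(0)

def rms(samples):
    return math.sqrt(sum(x * x for x in samples) / len(samples))

# Gaussian (random) jitter, zero mean, 1 ps RMS: peak-to-peak grows with record length.
for n in (1_000, 100_000):
    gauss = [random.gauss(0.0, 1.0) for _ in range(n)]
    print(f"Gaussian, N={n}: RMS = {rms(gauss):.2f} ps, "
          f"peak-to-peak = {max(gauss) - min(gauss):.2f} ps")

# Bounded sinusoidal jitter (e.g., coupled from a switching power supply), 2 ps amplitude:
# its peak-to-peak is capped at 4 ps no matter how long the measurement runs.
sine = [2.0 * math.sin(2 * math.pi * k / 1000) for k in range(100_000)]
print(f"Sinusoidal: RMS = {rms(sine):.2f} ps, peak-to-peak = {max(sine) - min(sine):.2f} ps")
```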
In computer networking, jitter can refer to packet delay variation, the variation (statistical dispersion) in the delay of the packets.
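As an illustration, packet delay variation can be estimated from per-packet send and receive timestamps. The sketch below (Python, with hypothetical timestamps) uses a running estimator of the same form as the RTP interarrival jitter calculation in RFC 3550, which is one common way this variation is reported; the function name and data are illustrative.

```python
def interarrival_jitter(send_times, recv_times):
    """Running estimate of packet delay variation, following the form of the RTP
    interarrival jitter in RFC 3550: J += (|D| - J) / 16, where D is the change in
    relative transit time between consecutive packets."""
    j = 0.0
    for i in range(1, len(send_times)):
        # Difference in one-way transit time between packet i and packet i-1.
        d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        j += (abs(d) - j) / 16.0
    return j

# Hypothetical timestamps in milliseconds: packets sent every 20 ms, received with varying delay.
sent     = [0.0, 20.0, 40.0, 60.0, 80.0]
received = [5.0, 27.0, 44.0, 68.0, 83.0]
print(f"Interarrival jitter estimate: {interarrival_jitter(sent, received):.2f} ms")
```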