In a previous article, I looked at the construction of anechoic chambers and their relevance to measuring the frequency response and various distortions of speakers. I’m now going to compare some of the different ways of measuring frequency response and their pros and cons.
I am not going to discuss what constitutes a good target response, merely how accurate the results might be. I’m not going to get too technical either, but it’s a hefty topic, so the discussion is going to be split into a couple of articles, beginning at the beginning.
Swept sine wave method
At one time, this was the only viable way of measuring frequency response or, more accurately, the steady-state frequency response. This diagram illustrates a basic set-up.
The oscillator can generate a pure tone anywhere in the audio band. The signal is amplified and fed to a speaker, whose output is picked up by a high-quality microphone and recorded. Gradually sweep across the bandwidth of interest (traditionally 20Hz to 20kHz, being the range of human hearing) with an electrical signal at a fixed voltage and you can plot the output onto paper. You have to be careful not to sweep through the frequencies too rapidly, otherwise the speaker doesn’t have enough time to reach its steady state.
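For readers who like to experiment, the stepped-sine idea is easy to sketch in software. In this illustrative Python snippet (not any real measurement rig), a simple one-pole low-pass filter stands in for the speaker and microphone chain; we drive it with a pure tone at each frequency, discard the settling time so the system has reached its steady state, and record the output level in dB:

```python
import math

FS = 48_000  # sample rate, Hz (assumed for this simulation)

def one_pole_lowpass(x, fc=1_000.0):
    """Toy stand-in for the speaker under test: a one-pole low-pass
    that rolls off above fc. A real measurement would of course use
    the physical speaker and microphone in the chamber."""
    a = 1.0 - math.exp(-2.0 * math.pi * fc / FS)
    y, prev = [], 0.0
    for s in x:
        prev += a * (s - prev)
        y.append(prev)
    return y

def level_db(freq, system, settle=0.1, window=0.2):
    """Drive the system with a pure tone, wait for steady state,
    then return the output level in dB relative to the input level."""
    n_settle, n_win = int(settle * FS), int(window * FS)
    x = [math.sin(2 * math.pi * freq * n / FS) for n in range(n_settle + n_win)]
    y = system(x)[n_settle:]                    # discard the settling transient
    rms = math.sqrt(sum(s * s for s in y) / len(y))
    return 20.0 * math.log10(rms / (1.0 / math.sqrt(2.0)))

# Step through the band one tone at a time, as the swept-sine rig did on paper
for f in (100, 1_000, 8_000):
    print(f"{f:>6} Hz: {level_db(f, one_pole_lowpass):+6.2f} dB")
```

The `settle` parameter is the digital analogue of not sweeping too quickly: if you measure before the transient has died away, the reading is wrong.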
This photo from our archives shows a typical measuring setup of the time (late 1960s). The device with the round dial behind the plotter is a beat frequency oscillator (BFO). These produced very pure tones and, although strictly not necessary for simple frequency response plots, were essential when measuring distortion. The response being plotted looks a little smoother than normal, being plotted on half (vertical) scale paper to reduce the size of wiggles and perhaps with the pen on ‘slow’ response. It was probably intended for publicity!
Here is a sample of a full scale plot of one of our old speakers (808). The measurement was made in the large BRE chamber mentioned in the Anechoic Chamber article, so the bass response is fairly accurate.
For simple frequency response measurements, it was common to use an input voltage of 2.83V (equivalent to 1 watt into an 8Ω load) and put the microphone at a distance of 1 metre. The flat portion of the graph would then come out at the sensitivity of the speaker if everything were properly calibrated. Later it was realised that putting the microphone only 1 metre away was probably not far enough; you could be measuring significantly nearer some drivers than others, leading to incorrect balancing. A distance of 2 metres became more common and this is what was used for the measurement shown above. It’s closer anyway to a typical listening distance. As for sensitivity, all you have to do is recalibrate for the 6dB lost in level as the distance is doubled.
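The arithmetic behind these conventions is straightforward, and this short Python sketch checks both figures quoted above: that 2.83V into an 8Ω load dissipates 1 watt, and that doubling the microphone distance in free field costs about 6dB:

```python
import math

voltage, impedance = 2.83, 8.0        # the traditional test signal and load
power = voltage**2 / impedance        # P = V^2 / R
print(f"Power: {power:.2f} W")        # -> 1.00 W, hence '1 W at 1 m' sensitivity

# Inverse-square law: SPL falls by 20*log10(d2/d1) dB in free field,
# so moving the microphone from 1 m to 2 m loses about 6 dB
loss_db = 20 * math.log10(2.0 / 1.0)
print(f"Loss from 1 m to 2 m: {loss_db:.2f} dB")   # -> 6.02 dB
```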
If you add a narrow band filter to the equipment, you can measure harmonic distortion (2nd, 3rd, 4th etc.). You set the filter to the required multiple of the fundamental frequency fed in and it will track the sweep. Here’s a plot of the 2nd harmonic distortion of the 808:
and the 3rd harmonic:
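A modern digital equivalent of that analogue tracking filter is to measure the level of each harmonic with a filter tuned to the appropriate multiple of the test tone. This illustrative Python sketch (a simulation, not the method used for the 808 plots) uses a single-frequency DFT bin as the "filter" and a made-up cubic nonlinearity as a stand-in for a distorting driver:

```python
import math

FS = 48_000  # sample rate, Hz (assumed for this simulation)

def bin_level(signal, freq):
    """Single-frequency DFT bin: a digital stand-in for the analogue
    tracking filter tuned to one harmonic of the test tone."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / FS) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / FS) for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n       # amplitude of that component

# A made-up 'speaker' nonlinearity: mild cubic soft clipping
fund = 1_000.0
n = FS // 10                                  # 0.1 s: a whole number of cycles
x = [math.sin(2 * math.pi * fund * i / FS) for i in range(n)]
y = [s - 0.1 * s**3 for s in x]               # odd nonlinearity -> 3rd harmonic

f1 = bin_level(y, fund)
for k in (2, 3):
    pct = 100.0 * bin_level(y, k * fund) / f1
    print(f"harmonic {k}: {pct:.2f}% of fundamental")
```

Because the chosen nonlinearity is odd-symmetric, it produces a 3rd harmonic but essentially no 2nd, which is exactly the kind of distinction the tracking filter lets you see.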
Of course, all you got from this type of measurement was a series of plots on pieces of paper. And it was all steady state. It gave you no idea why speakers with very similar response plots sounded so different from one another. The answer lay in differentiating between the various causes of irregularities, or deviations from flat, in the response. It made a difference whether a wiggle was caused by an underlying resonance or by a reflection or diffraction off sharp edges in the structure. A way of measuring transient behaviour was needed.
We have become used to seeing waterfall plots of speakers in reviews. These days, with various digital measurement techniques, they are relatively easy to produce and show how a speaker’s output builds and decays as the signal starts and stops. But it is instructive to look back at the origin of such plots. If you are a serious engineer or enthusiast, I heartily recommend you read a paper by D E L Shorter of the BBC Research Department entitled “The Development of High-Quality Monitoring Loudspeakers: A Review of Progress”. It dates from 1958 – yes, 54 years ago – and looks delightfully old fashioned, being written on a typewriter and using cycles per second instead of the modern hertz. You can download it in pdf format here: http://downloads.bbc.co.uk/rd/pubs/reports/1958-31.pdf
In his day, Shorter was a significant figure in the industry, and the BBC Research Department at Kingswood Warren was a haven of talent: both Spencer Hughes (Spendor) and Dudley Harwood (Harbeth) started out working under Shorter at the BBC before going on to found their own speaker companies.
In this paper, Shorter describes a method of using interrupted tone bursts to realise a rather crude waterfall plot as shown in this extract:
It just goes to show that many of the ideas we think of as new in audio are not new at all; we have merely found new and better ways of doing things.
During the 1970s, engineers started to turn their attention to phase response. A cynic might say this was because B&K introduced a phase meter, allowing engineers to measure acoustic phase conveniently for the first time. A combination of amplitude and phase response is necessary to fully define frequency response, but you really need the two in some form of combined measurement; taken separately, they are of limited use. Nevertheless, the phase meter’s arrival did usher in a brief flirtation with trying to get a flat acoustic phase response from speakers, and our DM6 was one such model. It eventually became apparent that compromises were being made in more important areas of performance to achieve a flat phase response, and it fell out of fashion.
Things began to come together when computers became readily available and with them a new way of measuring – the impulse response. We’ll talk about that in another article.
Mike Gough, Senior Product Manager.