This article looks at some of the issues with measurement uncertainty at the high frequencies associated with IEC 60601-2-2 measurements, and the solutions offered by MEDTEQ. It is provided in a Q&A format.
Are oscilloscopes (scopes) and high voltage probes traceable at the frequency associated with ESU measurements?
Almost certainly no, although it is always worth checking the specifications and calibration records. Most scopes and HV probes have accuracy specifications which only apply at "low frequency", typically less than 1kHz. Calibration normally follows the specification, so it only covers this low frequency region. Calibration at both high voltage and high frequency is an extremely difficult area, and even the national standards laboratories have limited coverage (e.g. NIST is limited to 100Vrms/1MHz).
Most scopes and probes are designed to prioritise bandwidth and time base accuracy over accuracy in the vertical axis (voltage). They are not designed to provide stable, laboratory accuracy at high frequency, so traceable calibration would not make sense.
But the markings on the scope and probe show 100MHz?
This is the "3dB bandwidth", which is the frequency where the error reaches 29%. This error is obviously too high for normal laboratory tests, so the figure may seem to have little practical meaning. In theory a laboratory could calculate the 1% bandwidth, around 14MHz for a 100MHz probe/scope. However, this assumes a simple, flat first order response, which is not true - in order to achieve a wide bandwidth, various forms of adjustable compensation are applied, resulting in effectively different measurement systems at different frequencies. It's not an exact science, and most manufacturers target a flatness of around ±0.5dB for each component, which is reasonable considering the difficulties. Even so, this still allows a fairly high error of around ±10% for the full system (scope + probe), which exceeds the recommendation in IEC 60601-2-2. It should also be clear that flatness in the passband is rarely part of the declared or "guaranteed" specifications. Traceability is a function of both the points of calibration and the equipment specifications, since it is impractical to calibrate equipment over the full range of use. Or to put it another way: if equipment is used outside of the specified range, it should be calibrated at the points of interest, i.e. for IEC 60601-2-2, calibration should be performed at 400kHz.
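As a rough illustration (assuming the ideal first-order response which, as noted, real probes do not exactly follow), the 29% and 14MHz figures for a 100MHz system can be checked with a few lines of Python:

```python
import math

def gain_error(f, f3db):
    """Magnitude error of an ideal first-order response at frequency f."""
    return 1.0 - 1.0 / math.sqrt(1.0 + (f / f3db) ** 2)

# At the 3dB point the reading is low by about 29%
print(round(gain_error(100e6, 100e6) * 100, 1))   # 29.3

# Frequency where the error falls to 1% for a 100MHz system
f3db = 100e6
f_1pct = f3db * math.sqrt(1.0 / 0.99 ** 2 - 1.0)
print(round(f_1pct / 1e6, 1))                     # 14.2 (MHz)
```

The same formula shows why the "headline" bandwidth is a poor guide to laboratory accuracy: the usable 1% region is only about one seventh of the 3dB figure, even under this idealised assumption.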
Does the same issue apply to DMMs (digital multimeters)?
No - good quality DMM manufacturers declare not only the headline accuracy, but also the frequency range over which that accuracy is maintained. Within this range, DMM measurement systems are typically designed to be inherently flat, with no frequency compensation. As such, calibration at 50Hz, for example, can be considered representative of 500Hz, 5kHz and 50kHz if these frequencies are within the specified range.
In contrast, probes and scopes employ various compensation schemes which take over at higher frequencies. In HV probes for example, the measurement system at 50Hz is completely different to the system used at 50kHz. As such, it is not possible to say that measurement at low frequency is representative of higher frequencies.
This is not to say that scopes and probes are bad; it is extremely difficult to provide a flat passband over the wide range of voltages and frequencies which users demand, and in many cases 10% is fine. Perhaps the only complaint is that the way specifications are written can easily lead a user to assume the low frequency specifications also apply at mid-range frequencies in the passband.
What if the probe compensation adjustment has been done?
Compensation adjustment is important to keep the accuracy in the ±0.5dB (~6%) range. But it is an approximate method, typically performed at 1kHz only, and not suitable for establishing legal traceability. Many scopes and probes exhibit different errors at different frequencies. Compensation is also usually capacitance based, which is much more susceptible to temperature, time, frequency and voltage than a resistor based system. Compensation also depends on the combination of the probe with the particular scope, channel and cable used.
Are there other issues?
Apart from the basic frequency response, there are a number of other issues which can affect accuracy. Most oscilloscopes are only 8 bit in the first place, which is just 256 steps from the top to the bottom of the screen (a typical 4.5 digit DMM has 40,000 steps). Scopes also have 2-3 LSBs of noise. This means, for example, that on a scope set to 2kV/div (±8kV range), the noise will be around 200V. This alone is 5% of a 4000Vp waveform, on top of the bandwidth issues above. From experience, dc offsets in the scope and probe (in particular, active probes) can easily add another 2-3%, and are often unstable with time. This makes peak measurements at best a rough indication in the 10-15% range for a typical set up.
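The resolution and noise figures above follow from simple arithmetic (assuming the common 8-division screen):

```python
# 8-bit scope at 2kV/div, 8 divisions on screen (±8kV range)
full_scale = 16000.0       # V, top to bottom of the screen
steps = 2 ** 8             # 256 vertical steps
lsb = full_scale / steps   # 62.5 V per step

noise_lsbs = 3             # upper end of the typical 2-3 LSB noise
noise = noise_lsbs * lsb   # ~190 V of noise

peak = 4000.0              # Vp waveform under test
print(round(noise / peak * 100, 1))   # ~4.7% of reading from noise alone
```

Compare this with the 4.5 digit DMM: 40,000 steps gives a resolution over 150 times finer than the scope's 256.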
For rms measurements, the scope sample rate and sample length need to be sufficient for an accurate calculation. As a general rule of thumb, the record should contain at least 10 cycles for a low crest factor waveform, while 20 cycles or more may be needed for high crest factors. However, capturing too many cycles can force a slower sample rate so that information is lost. Some scopes offer "cycle rms", which eliminates this problem, but the algorithm for detecting a "cycle" should be checked, as the software can be confused by pulse waveforms.
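The effect of the sample window can be sketched numerically. The settings below (400kHz sine, 100MS/s) are illustrative assumptions, not figures from the text; the point is that a window covering whole cycles gives the exact rms, while a partial cycle biases the result:

```python
import math

def window_rms(freq, f_sample, n_samples):
    """RMS of a unit-amplitude sine computed over a fixed sample window."""
    samples = [math.sin(2 * math.pi * freq * n / f_sample) for n in range(n_samples)]
    return math.sqrt(sum(s * s for s in samples) / n_samples)

true_rms = 1 / math.sqrt(2)
# 400kHz sine sampled at 100MS/s (250 samples per cycle):
# 2500 samples = 10 whole cycles, 575 samples = 2.3 cycles
for n in (2500, 575):
    err = (window_rms(400e3, 100e6, n) - true_rms) / true_rms
    print(n, f"{err * 100:+.2f}%")   # whole cycles ~0%; partial cycles ~1% high
```

This is exactly the error "cycle rms" is meant to remove, provided the cycle detection itself is reliable.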
Why is it so difficult?
The technical issues with accurate rms measurement increase as a function of Vrms² × frequency. Wide bandwidth requires low resistance to minimise the impact of stray capacitance and other effects (e.g. the standard 50Ω), but low resistance is not practical at high voltage due to the power and heat (try putting 100Vrms on a 50Ω resistor). Another problem is that the main use of scopes is for diagnostics and timing, hence the widespread use of 8 bit resolution, which is more than enough for those tasks. As scopes went digital, the temptation obviously existed to add various measurement functions such as peak and rms. But these software functions and the associated measurement systems were never really scrutinised by the industry as to whether they provide reliable measurements with traceable accuracy.
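To make the "100Vrms on a 50Ω resistor" point concrete:

```python
# Power dissipated in a termination: P = V^2 / R
v_rms = 100.0   # Vrms
r = 50.0        # ohms, standard wideband termination
power = v_rms ** 2 / r
print(power, "W")   # 200.0 W - far beyond any practical measurement resistor
```

At the kV levels of ESU testing the problem scales with V², which is why wideband 50Ω techniques cannot simply be carried over to high voltage.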
Why can MEDTEQ offer accurate, traceable measurement?
The MEDTEQ equipment tackles a relatively small range, up to 1MHz and up to 1200Vrms, with analogue based designs (compared to the 100MHz/20kVrms offered by probes/scopes). This smaller area is challenging but not impossible.
A first target was to develop a reliable 1000:1 divider for use in the MEDTEQ HFIT (High Frequency Insulation Tester). After many years of experimenting with chains of SMD resistors, it was finally established that the upper limit for a resistive divider without compensation is around 300Vrms for 0.1% error at 500kHz (11MHz 3dB bandwidth). Beyond this, the competing constraints of power dissipation and bandwidth cannot be met. This conclusion was based on literature, experiments to determine the stray capacitance of mounted SMD parts, modelling and tests.
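The shape of those competing constraints can be sketched with a first-order model. The stray capacitance value below is an assumption for illustration only (the actual figures were established by the experiments mentioned above):

```python
import math

c_stray = 1e-12          # F, assumed effective stray capacitance (illustrative)
f3db_required = 11e6     # Hz, needed for 0.1% error at 500kHz (first-order model)

# Bandwidth limit f3db = 1 / (2*pi*R*C) sets the maximum usable resistance
r_max = 1.0 / (2 * math.pi * f3db_required * c_stray)
print(f"R <= {r_max / 1e3:.1f} kOhm")        # ~14.5 kOhm

# Power that resistance must then dissipate at various input voltages
for v in (300.0, 1200.0):
    print(f"{v:6.0f} Vrms -> {v ** 2 / r_max:6.1f} W")
```

Under these assumptions, 300Vrms already demands several watts in a small divider chain, and 1200Vrms would require around 100W - illustrating why the uncompensated resistive approach stops around 300Vrms.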
To handle the higher voltages up to 7200Vp/1200Vrms associated with the HFIT 7.0, a chain of "mica" capacitors is used. In general, capacitors are unreliable in measurement systems, being susceptible to voltage, frequency, temperature and time. However, mica capacitors are known for their stability under a wide range of conditions. Experience with this design, including units returned for periodic calibration, has given confidence that this solution is better than 0.5% from 320-460kHz (the range of HFIT use).
To calibrate the divider, specially developed HF meters have been used. It is planned to release a range of meters around mid-2017 under the name "HFVM" (high frequency voltmeter). Two HFVMs are used: one with a range of 200V, and a second with a range of 200mV. A "home made" high frequency amplifier, driven by a digital function generator, provides around 150Vrms at the test frequencies of 320kHz/400kHz/460kHz. An oscilloscope monitors the test waveform for harmonic distortion and confirms it is <0.3%.
Although the current HFIT 7.0 contains only the 1000:1 divider, the HFIT 8.0 (under preliminary release) also includes internal metering using the HFVM design. The preliminary release versions were validated and performed well within the 2% design specification when compared against external reference methods, over a wide range of voltages and crest factors.
How are the HFVMs designed?
Inside the HFVM, rms voltage is derived using the "explicit" method, which separates the measurement into three stages: square -> mean -> square root. This method is known to have a relatively wide bandwidth, and the core device used for squaring the waveform has a 40MHz bandwidth. The explicit method has a complication in the wide dynamic range at the output of the squaring stage. Fortunately this complication is unrelated to the frequency response, and so could be investigated and solved at low frequency. In contrast, modern DMMs use an "implicit" (feedback) rms/dc solution which does not suffer from the wide dynamic range, but has a limited frequency response.
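The explicit method is simple to state in code (this is the mathematical definition only, not the analogue implementation):

```python
import math

def explicit_rms(samples):
    """Explicit rms: square -> mean -> square root."""
    squared = [s * s for s in samples]    # square stage
    mean = sum(squared) / len(squared)    # averaging (mean) stage
    return math.sqrt(mean)                # square-root stage

# Unit-amplitude sine over whole cycles -> rms = 1/sqrt(2)
samples = [math.sin(2 * math.pi * n / 1000) for n in range(10000)]
print(round(explicit_rms(samples), 4))    # 0.7071
```

The dynamic range complication is visible in the square stage: a 100:1 range of input voltages becomes a 10000:1 range at the squarer's output, which is demanding for analogue circuitry but, as noted, independent of frequency.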
For peak detection, MEDTEQ has invented a new method which has the equivalent accuracy of a 13 bit scope running at 100MHz, but without the complication. It is a simple method which can be thought of as a hybrid between hardware and software, with an algorithm that searches for the peak rather than measuring it. A key benefit of this approach is that it does not use any diodes, as are common in peak detection; diodes are difficult to characterise at high frequency, with parameters such as reverse recovery time, leakage and capacitance influencing the result.
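The actual method is not published, so purely as an illustrative sketch: one way a detector can "search" for a peak rather than measure it is a hardware comparator against an adjustable threshold (e.g. from a DAC), with software binary-searching the threshold to 13-bit resolution. This is an assumption about the general idea, not MEDTEQ's circuit:

```python
import math

def peak_by_search(trips, full_scale, bits=13):
    """Binary-search the highest threshold the waveform still crosses.
    trips(threshold) models a hardware comparator: True if any part
    of the waveform exceeds the threshold."""
    max_code = (1 << bits) - 1
    lo, hi = 0, max_code
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if trips(mid * full_scale / max_code):
            lo = mid          # peak is at or above this threshold
        else:
            hi = mid - 1      # peak is below this threshold
    return lo * full_scale / max_code

# Simulated 4000Vp waveform on an 8kV range
wave = [4000 * math.sin(2 * math.pi * n / 500) for n in range(5000)]
result = peak_by_search(lambda th: any(s > th for s in wave), 8000.0)
# result lands within one 13-bit step (~1V) of the true 4000V peak
```

The attraction of such a scheme is that the comparator only ever answers "above or below", so the accuracy depends on the threshold source rather than on capturing and digitising the fast peak itself.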
How is rms traceability provided?
The HFVMs are calibrated against a Fluke 8845A 6.5 digit meter, itself calibrated by JQA. They are tested at a "comfortable" frequency such as 1kHz or 10kHz where the Fluke has high accuracy specifications and there are no questions about traceability.
To verify the bandwidth, the HFVM is validated against thermal methods, using MEDTEQ's HBTS, which has 0.01°C resolution and can be set up to detect errors as small as 0.1% deviation from 50Hz to 500kHz. A home made device called the "HFCAL" provides a stabilised output within 0.1% from 5kHz to 500kHz (the HFCAL itself is not calibrated, as its stability is confirmed by thermal methods). A typical set up uses a 50.00Vrms output which is connected to the Fluke, the HFIT, and a resistor designed to have around a 20°C rise. The resistor is kept in a specially designed thermal block which can detect rms voltage changes of 0.1% or better, as verified at low frequency, again using the Fluke as the traceable reference. The Fluke is then removed, and the frequency adjusted to 10, 20, 50, 100, 300, 400 and 500kHz. Via the temperature observed by the HBTS, the rms output of the HFCAL can be monitored as being stable within 0.2%. The indicated values of the HFVM are also observed and confirmed to be within 50.00Vrms ±0.1%.
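A quick sanity check shows why 0.01°C resolution is sufficient to resolve a 0.1% voltage change. The V² step is standard physics (heater power scales as the square of rms voltage) rather than a figure from the text:

```python
# Steady-state temperature rise above ambient is proportional to heater
# power, and power is proportional to Vrms^2.
v_change = 0.001     # a 0.1% change in rms voltage
temp_rise = 20.0     # deg C design rise of the sense resistor
delta_t = temp_rise * ((1 + v_change) ** 2 - 1)
print(round(delta_t, 3), "deg C")   # ~0.04 deg C, vs 0.01 deg C resolution
```

A 0.1% voltage change therefore produces roughly a 0.04°C shift at a 20°C rise, four times the HBTS resolution.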
Due to the complexity of working at high frequency, the HFVM is also designed to be inherently accurate (for both peak and rms), which means it works "as assembled" without any adjustments. All potentially limiting factors such as resistor tolerances, op-amp slew rate, stray parameters and the like are designed to be negligible. The design bandwidth is >11MHz, the point calculated to give 0.1% error at 500kHz. No frequency compensation is applied. This focus on an inherently accurate design is considered important as it ensures the thermal validation above is genuinely independent. It would not be acceptable if, for example, the HFVM included compensation at high frequency that was first adjusted using a thermal method, and later verified by a similar method.
The thermal experiments above have been repeated many times in both ad-hoc and formal settings, and the HFVM itself has undergone several iterations. Despite the different configurations, components and resistors, the design has given a reliable 0.2% bandwidth of 500kHz, which is far more accurate than required for IEC 60601-2-2 testing.
Is MEDTEQ traceability legal?
Yes and no. Traceability itself is provided as described above. However, a medical device manufacturer would be using MEDTEQ as a subcontractor (or supplier) for the purpose of regulations such as the medical device directive. This is fine as long as some kind of assessment is performed to confirm that MEDTEQ is an appropriate supplier. Recently, HFIT calibration/shipping reports have been provided with an explanation of how traceability is provided. In the future it is planned to provide a detailed validation report, similar to what might be expected for software validation of test equipment.
What about laboratory ISO 17025 traceability with accreditation?
No. In addition to traceability itself, ISO 17025 requires quality assurance, management systems, insurance and audits by an accreditation agency, which MEDTEQ cannot offer. Calibration agencies are being contacted for support in developing a special program for the high frequency, high voltage area.
In the background it has to be kept in mind that virtually no accredited agencies work in the high voltage/high frequency region. As such, there may be two options available:
- Refer to the calibration of the high voltage probes and scopes. This is the current practice and accepted for the time being. But keep in mind the potential for real world errors, especially from poorly adjusted high voltage probes, equipment settings and the other issues mentioned above. Use the MEDTEQ devices (e.g. the HFIT divider) as a cross check and investigate any errors.
or
- Discuss with the lab QA department/accreditation agencies for a special allowance until such time as traceable calibration is provided by ISO 17025 accredited agencies. In pure regulation and accreditation terms, only traceability is required. There will always be some unusual situations where accredited 3rd party ISO 17025 calibration simply does not exist. While internal procedures or accreditation agencies may expect ISO 17025 calibration, exceptions to the rule should be allowed as long as they are properly documented (e.g. the basis for traceability).