The basic idea behind essential performance is that some things are more important than others. In a world of limited resources, regulations and standards should focus on the important stuff rather than try to cover everything. A device might have literally thousands of discrete “performance” specifications, from headline items such as measurement accuracy through to mundane details like how many items an alarm log can record. And there can be hundreds of tests proving a device meets specifications in both normal and fault conditions: clearly it’s impossible to check every specification during or after each one of these tests. We need some kind of filter to say: OK, for this particular test, it’s important to check specifications A, B and F, but not C, D, E and G.
Risk seems like a great foundation on which to decide what is really “essential”. But it is a complicated area, and the “essential performance“ approach in IEC 60601-1 is doomed to fail because it oversimplifies it to a single rule: "performance ... where loss or degradation beyond the limits ... results in an unacceptable risk".
A key point is that using acceptable risk as the criterion is, well, misleading. Risk is in fact the gold standard, but in practice it gets messy because of a bunch of assumptions hiding in the background. Unless you are willing to tease out these hidden assumptions, it’s very easy to get lost. For example, most people would assume that the correct operation of an on/off switch does not need to be identified as “essential performance”. Yet if the switch fails, the device fails to treat, monitor or diagnose as expected, which is a potential source of harm. But your gut is still saying … nah, it doesn’t make sense - how can an on/off switch be considered essential performance? The first hidden assumption is that the switch will rarely fail - instinctively we know that modern switches are sufficiently reliable that they are not worth checking, the result of decades of evolution in switch design. And second, although there is a potential for harm, the probability that a failure actually leads to harm is generally low: in most cases the harm is not immediate and there is time to get another device. These two factors combined are the hidden assumptions that - in most cases - mean a simple on/off switch is not considered essential performance.
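To make those hidden assumptions concrete, here is a toy calculation treating risk as probability of harm multiplied by a severity weight. All numbers and component names are invented for illustration; they are not taken from any standard, device specification or field data.

```python
# Toy illustration only: probabilities and severity weights are invented.

# Assumed probability that a component failure leads to harm (per use).
P_HARM = {
    "mature on/off switch": 1e-7,  # decades of design evolution; a failed
                                   # switch usually just delays treatment
    "novel dosing sensor": 1e-3,   # immature technology; failure may pass
                                   # unnoticed and directly affect therapy
}

# Assumed severity weight of the resulting harm (ordinal scale 1..5).
SEVERITY = {
    "mature on/off switch": 1,     # delay; another device can be fetched
    "novel dosing sensor": 4,      # wrong dose delivered to the patient
}

def risk_score(component: str) -> float:
    """Toy risk metric: probability of harm times severity weight."""
    return P_HARM[component] * SEVERITY[component]

for component in P_HARM:
    print(f"{component}: {risk_score(component):.1e}")
# The switch lands orders of magnitude below the sensor, matching the gut
# feeling that correct switch operation is not "essential performance".
```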
In practice, what is important is highly context driven: you can't derive it purely from the function. Technology A might be susceptible to humidity, technology B to mechanical wear, technology C might be so well established that spot checks are reasonable. Under waterproof testing, function X might be important to check, while under EMC testing function Y is far more susceptible.
Which means that simply deriving a list of what is "essential performance" out of context makes absolutely no sense.
In fact, a better term to use might be "susceptible performance", decided and documented on a test-by-test basis, taking into account:
the technology used (the degree to which it is well established and reliable)
susceptibility of the technology to the particular test
the relationship between the test condition and expected normal use (e.g. reasonable, occasional, rare, extreme)
the severity of harm if the function fails
Note this is still fundamentally risk based: the first three parameters are associated with probability, and the last with severity. That said, it is not practical to analyse the risk in detail for each parameter, specification or test: there are simply too many parameters, and most designs have large margins, so that only a few areas are likely to be sensitive in a particular test. Instead, we need to assume the designer of the device is sufficiently qualified and experienced to know the potentially weak points in the design, and to develop suitable methods, including proxies, to detect if a problem has occurred. Note also that IEC 60601-1 supports the idea of “susceptible performance” in that Clause 4.3 states that only functions/features likely to be impacted by the test need to be monitored. The mistake is that the initial list of “essential performance” is drawn up independently of the test. A sketch of how such a test-by-test decision might be recorded is shown below.
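This is purely illustrative - the field names and example entries are hypothetical, not taken from IEC 60601-1 - but it shows how the decision and rationale could live in the test report rather than in one global list:

```python
# Hypothetical sketch: a structured record of a "susceptible performance"
# decision, made and documented per test rather than as one global list.
from dataclasses import dataclass

@dataclass
class SusceptibilityDecision:
    test: str                  # e.g. "IPX1 waterproof", "EMC immunity"
    function: str              # the performance specification considered
    technology_maturity: str   # "well established" ... "novel"
    susceptibility: str        # how exposed the function is to this test
    test_vs_normal_use: str    # "reasonable", "occasional", "rare", "extreme"
    severity_if_failed: str    # severity of harm if the function fails
    monitor: bool              # check this function during/after the test?
    rationale: str             # documented in the report for this test

example = SusceptibilityDecision(
    test="IPX1 waterproof",
    function="flow rate accuracy",
    technology_maturity="novel pump head",
    susceptibility="high",                 # fluid path runs near the sensor
    test_vs_normal_use="occasional",
    severity_if_failed="serious",
    monitor=True,
    rationale="Sensor cavity sits directly under the drip path",
)
```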
The standard also covers performance under abnormal and fault conditions. This is conceptually different to “susceptible performance”, as devices are typically not expected to continue performing to specification under abnormal conditions. Rather, manufacturers are expected to include functions or features that minimise the risk associated with out-of-specification use: these could be called “performance RCMs” - risk control measures associated with performance under abnormal conditions. A common example is a home use thermometer, which blanks the temperature display when the battery falls to levels that might impact reliable performance. Higher risk devices may use system monitoring, independent protection, alarms, redundant systems and even back-up power. Since these are risk control measures, they can be referenced from the risk management file and assessed independently of “susceptible performance”. Performance RCMs can be tricky, as they pull into focus the issue of what is “practical”: many conditions are easy to detect, but many others are not; those that are not detected may need to be written up as risk/benefit if the risk is significant.
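A minimal sketch of the thermometer example shows how simple such an RCM can be. The cutoff voltage and display format here are invented for illustration, not taken from any real device specification:

```python
# Minimal sketch of a "performance RCM": blank the display when battery
# voltage falls below a level where readings may no longer be reliable.
LOW_BATTERY_CUTOFF_V = 2.4  # assumed: below this, accuracy is unproven

def display_output(temperature_c: float, battery_v: float) -> str:
    """Return what the display should show given the battery state."""
    if battery_v < LOW_BATTERY_CUTOFF_V:
        return "---"  # blank the reading rather than risk a wrong one
    return f"{temperature_c:.1f}"

assert display_output(37.2, 3.0) == "37.2"
assert display_output(37.2, 2.1) == "---"
```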
Returning to “susceptible performance”, there are a few complications to consider:
First, "susceptible performance" presumes that, in the absence of any particular test condition, general performance has already been established - for example, by bench testing in a base condition such as 23°C, 60% RH, with no special stress conditions (water ingress, electrical/magnetic, mechanical etc.). Currently IEC 60601-1 has no general clause establishing what could be called "basic performance" prior to starting stress tests like waterproof, defib, EMC and so on. This is a structural oversight in the standard: allowing each test to focus on the parameters likely to be affected by that test only makes sense if the other parameters have already been confirmed.
Second, third party test labs are often involved, and the CB scheme has set rules that test labs need to cover everything. As such, there is reasonable reluctance to consider true performance, for fear of exposing manufacturers to even higher costs and throwing test labs into testing they are not qualified to perform. This needs to be addressed before embedding too much performance in IEC 60601-1. Either we need to get rid of test labs (not a good idea), or structure the standards in a way that allows test labs to separate out the generic tests they are competent to perform from specialised tests, together with practical ways to handle those specialised aspects when they cross over into generic testing (such as an IPX1 test).
Third, for well established technology (such as diagnostic ECGs, dialysis, infusion pumps) it is in the interests of society to establish standards for performance. As devices become popular, more manufacturers get involved; standardisation helps users be sure of a minimum level of performance and protects against poor quality imitations. This driver can range from very high risk devices through to mundane low risk devices. But the nature of standards is such that it is very difficult to be comprehensive: for example, monitoring ECGs have well established standards with many performance tests, yet common features like ST segment analysis are not covered by IEC 60601-2-27. The danger is that using defined terms like “essential performance” when a performance standard exists can mislead people into thinking the standard covers all critical performance, when in fact it only covers those aspects that have been around long enough to warrant standardisation.
Finally, IEC 60601-1 has special requirements for PEMS (programmable electrical medical systems), the applicability of which can depend critically on what is defined as essential performance. These requirements can be seen as special design controls, similar to what would be expected for Class IIb devices in Europe. They are not appropriate for lower risk devices, and again using the criterion of “essential performance” to decide when they apply creates more confusion.
Taking these into account, it is recommended to revert to a general term "performance", and then consider five sub-types (illustrated in a sketch after the list):
Basic performance: performance according to manufacturer specifications, labelling, public claims or risk controls, or which can be reasonably inferred from the intended purpose of the medical device. Irrespective of whether there are requirements in standards, the manufacturer should have evidence of meeting this basic performance.
Standardised performance: requirements and tests for performance for well established medical devices published in the form of a national or international standard.
Susceptible performance: subset of basic and/or standardised performance to be monitored during a particular test, decided on a test-by-test basis, taking into account the technology, the nature of the test, the severity if a function fails and other factors as appropriate, with the decisions and rationale documented or referenced in the report associated with the test.
Critical performance: subset of basic and/or standardised performance which, if it fails, can lead to significant direct or indirect harm with high probability; this includes functions which deliver or extract energy, liquids, radiation or gases to or from the patient in a potentially harmful way; devices which monitor vital signs with the purpose of providing alarms for emergency intervention; and other devices with a similar risk profile (Class IIb devices in Europe can be used as a guide). Aspects of critical performance are subject to additional design controls as specified in Clause 14 of IEC 60601-1.
Performance RCMs: risk control measures associated with performance under abnormal conditions, which may include prevention by inherent design (such as physical design), prevention of direct action (blanking a display, shutting off an output), indication, alarms and redundancy as appropriate.
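The sketch below tags a few hypothetical specifications with these sub-types; the spec names, tags and comments are examples only, not a prescribed scheme. Note that a single specification can carry several tags at once:

```python
# Illustrative only: tagging performance specifications with the proposed
# sub-types for a hypothetical ECG monitor.
from enum import Enum, auto

class PerformanceType(Enum):
    BASIC = auto()         # manufacturer specs, labelling, public claims
    STANDARDISED = auto()  # covered by a published performance standard
    SUSCEPTIBLE = auto()   # selected for monitoring in a particular test
    CRITICAL = auto()      # high severity/probability; extra design controls
    RCM = auto()           # risk control for abnormal conditions

spec_tags = {
    "heart rate accuracy": {PerformanceType.BASIC,
                            PerformanceType.STANDARDISED,  # e.g. IEC 60601-2-27
                            PerformanceType.CRITICAL},
    "ST segment analysis": {PerformanceType.BASIC},        # claimed, but not
                                                           # in the standard
    "low battery display blanking": {PerformanceType.RCM},
    "alarm log capacity": {PerformanceType.BASIC},
}
# SUSCEPTIBLE tags would be assigned per test (e.g. in an EMC test report),
# not in this global list - that is the whole point of the distinction.
```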
Standards should then be structured in a way that allows third party laboratories to be involved without necessarily taking responsibility for performance evaluation that is outside the laboratory's competence.