The Frequency of Quality Control in Clinical Laboratory

Quality control in the clinical laboratory is used to detect, reduce, and correct deficiencies in the internal analytical process. It is performed before results are released to the patient and improves the overall quality of the results a laboratory reports.

Quality control is a measure of accuracy and precision: it shows how well the same result is reproduced over time and under different conditions. At a minimum, quality control in the clinical laboratory is run at the start of each shift and whenever results appear suspect.
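To make these terms concrete, here is a minimal Python sketch, with invented measurement values and an assumed target value, showing how accuracy is commonly summarized as bias against a target and precision as the coefficient of variation of repeated QC measurements:

```python
import statistics

# Hypothetical repeated measurements of one control material; the values and the
# assigned target below are invented purely for illustration.
qc_values = [101.2, 99.8, 100.5, 102.1, 98.9, 100.7, 101.4, 99.5]
target = 100.0  # assumed assigned value for this control

mean = statistics.mean(qc_values)
sd = statistics.stdev(qc_values)

bias_pct = (mean - target) / target * 100  # accuracy: how far the average sits from the target
cv_pct = sd / mean * 100                   # precision: how tightly repeated results cluster

print(f"mean={mean:.2f}  SD={sd:.2f}  bias={bias_pct:+.2f}%  CV={cv_pct:.2f}%")
```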

Fig: Quality Control in Clinical Laboratory

Quality control material should have the same matrix as patient specimens, taking into account properties such as turbidity, viscosity, color, and composition. It must be easy to use and show minimal vial-to-vial variability; otherwise that variability could be mistaken for systematic error in the instrument or method.

It should also be stable for long periods and available in quantities large enough for a single batch to last about a year. Liquid controls are more convenient than lyophilized controls.

Interpretation of QC Data

Interpretation of QC data can combine statistical and graphical approaches. The Levey-Jennings chart presents the data visually, making it an easy way to detect random error as well as trends or shifts in calibration.

What are Control Charts?

A control chart is an approach to studying variation in a process in order to improve economic efficiency; it provides a structured way to monitor that variation.

It is also known as a process-behavior chart or “Shewhart chart”, and it is the standard tool for assessing variation in a process.

In essence, a control chart is a more specific form of run chart.

It is one of the seven basic quality control tools –

  1. Pareto chart
  2. Histogram
  3. Control chart
  4. Check sheet
  5. Cause-and-effect diagram
  6. Scatter diagram
  7. Flowchart.

Control charts help prevent unnecessary process adjustments, provide information about process capability, and offer diagnostic data.

What is the Levey-Jennings Chart?

It is a graphical tool that plots quality control data to give a visual indication of whether a laboratory test is performing well; each point's distance from the mean is measured in standard deviations. The chart is named after S. Levey and E.R. Jennings, who in 1950 suggested using Shewhart's individual control chart in the clinical laboratory.
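As a rough illustration of how the chart is built, the sketch below (all QC values invented) establishes a mean and standard deviation from historical control results and expresses each new control value as its distance from the mean in SD units, which is what a Levey-Jennings plot shows over time:

```python
import statistics

# Historical QC results for one analyte and one control level (values invented).
qc_history = [4.1, 4.3, 4.0, 4.2, 4.4, 3.9, 4.1, 4.6, 4.2, 4.0]

mean = statistics.mean(qc_history)
sd = statistics.stdev(qc_history)

# Levey-Jennings limit lines are drawn at the mean and at +/-1, 2 and 3 SD.
limits = {f"{k:+d} SD": round(mean + k * sd, 2) for k in (-3, -2, -1, 1, 2, 3)}

def z_score(value):
    """Distance of a new control value from the established mean, in SD units."""
    return (value - mean) / sd

new_control_value = 4.8  # today's control result (invented)
print(f"mean={mean:.2f}  SD={sd:.2f}  limits={limits}")
print(f"today's control plots at {z_score(new_control_value):+.2f} SD")
```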

Rules such as the Westgard rules are applied to decide whether the patient results from a run can be released once the controls have been measured. The Westgard rules are numerical decision rules and are commonly used to analyze Shewhart and Levey-Jennings chart data.

Westgard rules help define specific performance limits for a given assay and can detect both systematic and random errors. They are programmed into automated analyzers to determine when an analytical run should be rejected.

The rules must be applied properly so that true errors are detected while false rejections are kept to a minimum. Rules applied to hematology and high-volume chemistry instruments, in particular, must generate low false-rejection rates.
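For illustration only, here is a simplified sketch of two commonly cited Westgard rules, 1-3s and 2-2s, applied to control values already expressed as SD distances from the mean; a real implementation would apply the full multi-rule set across all control levels:

```python
# Simplified versions of two Westgard rules, operating on control values that have
# already been converted to SD distances from the mean (z-scores).

def rule_1_3s(z_scores):
    """Reject if any single control value exceeds +/-3 SD (suggests random error)."""
    return any(abs(z) > 3 for z in z_scores)

def rule_2_2s(z_scores):
    """Reject if two consecutive values exceed 2 SD on the same side (suggests systematic error)."""
    return any(
        (z_scores[i] > 2 and z_scores[i + 1] > 2) or
        (z_scores[i] < -2 and z_scores[i + 1] < -2)
        for i in range(len(z_scores) - 1)
    )

run_z_scores = [0.4, -1.1, 2.3, 2.6, 0.8]  # invented values for one analytical run
if rule_1_3s(run_z_scores) or rule_2_2s(run_z_scores):
    print("Reject the run and investigate before releasing patient results")
else:
    print("Run is in control; patient results can be released")
```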

Quality Control Frequency of Laboratory Testing

How often should QC be run?

We all know that labs need to perform quality control every day. But is there a right frequency of quality control in the clinical laboratory? Is running quality control once a day really sufficient? What is the right frequency for running quality control samples in your lab?

Several factors come into play when deciding the ideal frequency of quality control for the assays in your laboratory.

Here are the questions you should ask –

  • Which assays are more stable?
  • Which tests have higher impact and which have a higher risk in case of erroneous results?
  • What is the frequency of QC evaluations?
  • How many samples are you running from one QC evaluation to another?

You will get the right answers by asking the right questions.

Which assays are more stable?

Naturally, some assays work better than others and deliver consistent results. Others do not perform consistently: they are less stable and carry a higher risk of error.

Labs should determine which assays are more stable than others and make sure the quality control frequency for each assay reflects that.

By using a peer-group reporting program, you can compare your performance against peers and support method validation. This way, labs can check their accuracy over time.

Which tests have higher impact and risk for erroneous results?

Quality control should be run more often for tests that carry higher risk, because an erroneous result poses a greater risk to the patient's health.

It is very important that results are both reliable and accurate. A test is considered high risk, and its QC should be run more often, when it has the following characteristics –

  • The result is acted on quickly.
  • It can support a clinician's decision in isolation.
  • There are detrimental consequences if the result is wrong.
  • It is performed on a sample that is painful or difficult to collect.

How many specimens are you running between QC evaluations?

Let's consider an example. Two labs, 'Lab A' and 'Lab B', both run QC every morning. Is that sufficient?

Lab A tests 10 samples per day; Lab B tests 1,000 samples per day. Is once-daily QC still adequate for both? Suppose the test system develops an error after 50% of the samples have been tested. Neither lab would detect the QC failure until the next morning.

Both would have released erroneous patient results and would need to re-evaluate every sample run since the last successful QC event. Lab B would have to repeat up to 1,000 patient samples, a huge waste of both time and resources.

Ideally, patient samples should be run in batches of roughly 50 to 100, each batch starting and ending with a QC evaluation. This ultimately saves money and time and reduces the risk of harm.

What is the duration between evaluations of QC?

Let's take another example. Both Lab A and Lab B change their quality control strategy to run QC every 100 samples. That is a good decision for Lab B, because QC will now run more often and the risk to patients is reduced.

It is not such a good move for Lab A. Suppose an error occurs after 50 samples have been run. Lab B will detect the issue the same day and can investigate it before any erroneous results are released.

For Lab A, however, that error would occur on the fifth day of patient testing but would not be detected until day 10. That could spell disaster: faulty results would be released, incurring cost, causing misdiagnosis, and adversely affecting patient care.

Hence, it is important to consider both the time between QC evaluations and the number of samples tested in between. The interval between QC events should be shorter than the time within which corrective action needs to be taken; this is key to choosing the right QC frequency.
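A back-of-the-envelope sketch of the Lab A / Lab B comparison (the daily volumes and the 100-sample QC interval come from the example above; the worst-case assumption that every sample since the last passing QC may need repeating is a simplification):

```python
# Worst-case exposure for the two labs in the example. If an error starts just
# after a passing QC event, every sample run before the next QC event may need
# repeating, and the detection delay equals the time those samples take to run.

labs = {"Lab A": 10, "Lab B": 1000}  # daily patient volumes from the example

def worst_case(daily_volume, samples_between_qc):
    samples_at_risk = samples_between_qc
    days_to_detection = samples_between_qc / daily_volume
    return samples_at_risk, days_to_detection

for name, volume in labs.items():
    daily_qc = worst_case(volume, samples_between_qc=volume)  # QC once per day
    per_100 = worst_case(volume, samples_between_qc=100)      # QC every 100 samples
    print(f"{name}: daily QC -> up to {daily_qc[0]} repeats, {daily_qc[1]:.1f} day delay; "
          f"per-100 QC -> up to {per_100[0]} repeats, {per_100[1]:.1f} day delay")
```

Run as written, this reproduces the argument in the text: per-100 QC cuts Lab B's exposure from 1,000 samples to 100, while for Lab A it stretches the detection delay to roughly ten days.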

Running QC at the end of a batch gives confidence that the results produced since the previous QC evaluation are reliable; running QC at the start of the next batch confirms that the test system is in control before more samples are run.

Importance of Individual Quality Control In Clinical Laboratory

Quality control is the process of detecting analytical errors in the laboratory setting. It is used to ensure the accuracy and reliability of test results so that patients receive the best possible care.

Poor performance leads to treatment delays, misdiagnosis, and higher costs due to retesting. Hence, it is essential to ensure that results are both reliable and accurate.

Using a single multi-analyte third-party control helps you consolidate your existing controls and saves you from buying separate controls for each lab system. Fewer controls to purchase and stock means fewer controls to manage and fewer vendors to deal with.

To reduce costs, many labs choose pooled sera as a substitute for third-party controls.

There are several issues with this approach:

  • Stability is not validated in the way it is for third-party controls.
  • Analyte levels are likely to fall within the normal patient range, making it difficult to assess performance at abnormal levels.
  • Higher infection risk: pooled sera may not have been tested at the individual-donor level for diseases such as hepatitis and HIV.
  • No regular long-term supply: stable third-party controls with a longer shelf life allow long-term QC monitoring.
  • Third-party controls also help detect shifts when the calibrator or reagent lot changes.

Many reagent manufacturers assign control values based on very limited data, which produces unrealistic ranges with large standard deviations. Sometimes the target ranges are so wide that a lab never falls outside them regardless of its actual performance.

In addition, manufacturers often use the same raw materials in their reagents and their quality controls. That is why independent, multi-method, multi-analyzer data is so important.

Third-party QC helps lower the risk of bias and provides a proper, independent assessment of performance.

Benefits of Third Party Controls to Detect Errors in Laboratory

Here are some case studies that illustrate the role of third-party controls in detecting reagent, instrument, and procedural errors.

Reagent Errors

  • Problem – Customers reported poor recovery for several analytes in an immunoassay control, while the reagent manufacturer's own QC was performing as expected.
  • Resolution – On investigation, the reagent manufacturer confirmed it had notified users of a change to the reagent formulation, and a shift with the controls was to be expected. The reagent manufacturer's own controls showed no issue, so the problem might have gone unnoticed had a third-party control not been tested alongside the internal quality control.

Instrument Errors

  • Problem – A customer reported bilirubin results with Human Assayed Multi-sera running roughly 25% low on a weekly basis, while the reagent manufacturer's control, run every day, did not show low results.
  • Resolution – After reviewing the control data, EQA samples were analyzed and also recovered around 25% low. The customer took this information back to the manufacturer, which confirmed that the low results were caused by a manufacturing defect.
  • Problem – A customer reported thyroglobulin results with the immunoassay control that were four times higher on one analyzer than on other systems, while the manufacturer's QC did not show the same issue.
  • Resolution – EQA data for the platform was reviewed and confirmed a large shift in results compared with other instruments. When the manufacturer was contacted, it sent a copy of a bulletin describing a positive bias with certain reagent batches.

The instrument manufacturer's own control showed no issue, so the problem might have gone unnoticed had a third-party control not been tested in addition to the manufacturer's quality control.

Are Third Party Controls Needed for Regulatory Reasons?

Third-party controls are not strictly required by many regulatory standards, but regulatory bodies in different parts of the world do recommend them. For example –

NATA, the National Association of Testing Authorities, Australia, recommends using independent controls rather than pooled sera or manufacturer-provided controls, in line with Australian Standard 4633 for medical testing.

The Clinical and Laboratory Standards Institute (CLSI) recommends using third-party controls. The QC material should differ from the calibrator so that the QC procedure provides an independent assessment of the measurement procedure's performance, including its calibration.

Determining the Impact of QC Testing Frequency on the Quality of Reported Results

The probability of rejecting an analytical run that contains a significant out-of-control error condition is the typical measure used to evaluate QC testing.

However, this run-rejection probability is not affected by changes in the frequency of QC testing, so a different measure is needed to determine the impact of QC frequency.

A better measure is the expected increase in the number of reported patient results produced while an out-of-control error condition is present. Unlike run-rejection probability, this measure does respond to changes in QC testing frequency.

This measure can be derived for various laboratory testing modes and out-of-control error conditions, and it shows the worst-case increase in each case. A laboratory can therefore design QC strategies that limit the number of erroneous results reported.

To determine the impact of QC testing, it is vital to go beyond run-rejection probability. A measure based on the expected increase in erroneous reported results has a dual benefit: it can assess the impact of changes to QC testing frequency, and it focuses on the quality of reported results rather than the quality of laboratory batches.
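As a purely illustrative toy model (the detection probability and sample counts below are invented), the sketch shows why this expected-number measure responds to QC frequency even though the chance of flagging the error at any single QC event stays the same:

```python
# Toy model: an out-of-control condition starts, on average, halfway between two
# QC events and is caught with a fixed probability at each subsequent QC event.
# The expected number of potentially unreliable results reported before detection
# therefore scales with the number of samples run between QC events.

def expected_unreliable_results(samples_between_qc, p_detect_per_qc):
    # Half an interval before the first QC chance, plus (1/p - 1) further whole
    # intervals on average if the error is missed (geometric expectation).
    extra_intervals = (1 / p_detect_per_qc) - 1
    return samples_between_qc * (0.5 + extra_intervals)

for samples_between_qc in (1000, 100, 50):
    n = expected_unreliable_results(samples_between_qc, p_detect_per_qc=0.9)
    print(f"QC every {samples_between_qc:>4} samples -> "
          f"~{n:.0f} potentially unreliable results before detection")
```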

What is CLIA Individualized Quality Control Plan (IQCP)?

The CLIA IQCP is a risk-based, well-developed, objective approach to performing QC testing, tailored to a laboratory's unique testing. The IQCP may be used under CMS regulations.

Where applicable, the IQCP can replace the existing EQC (Equivalent Quality Control) process, with the aim of reducing external QC requirements and laboratory costs.

The IQCP is a lab-specific plan rather than a one-size-fits-all standard plan. Product and manufacturer details, needs, and the specific setting all influence IQCP development.

However, the CLIA requirements and concepts do not change, and labs may choose to continue with their existing QC approach to demonstrate compliance.

The IQCP comprises three key components –

  • Risk Assessment
  • Quality Control Plan
  • Quality Assessments

The Quality Control Plan (QCP)

The Quality Control Plan (QCP) is a complete strategy that encompasses all control processes used to lower residual risk, together with approaches to detect errors through both monitoring and prevention. The QCP addresses potential risks before they cause failures.

The plan describes the resources, practices, and procedures needed to control the quality of the test. Using risk profiles, labs can determine which tests need a QCP and can group tests with similar risk profiles. The plan also explains how quality performance will be tracked over time once it is implemented.

Quality Assessment

Quality assessment surveys the performance of the quality control plan through consistent laboratory monitoring. Monitoring is not a one-time review of the data; it also involves trending over time.

Here are some of the common examples-

  • Proficiency testing results.
  • Review of quality control.
  • Patient test review.
  • Competency tests.
  • Test turnaround times.
  • Specimen rejection rates.
  • Preventive/corrective action.

Why Use Lab Me Analytics for Quality Control?

Lab Me can quickly interpret your blood test results and help forecast future disease risk. It can also alert you to emerging issues early. With Lab Me Analytics, you are never left confused by the letters and numbers on a report.

You can easily track your CBC results in real time and compare and interpret them quickly. The application summarizes the key details and provides information to help your doctor.

Sometimes every second matters and there is no time to wait for busy doctors and follow-up appointments. You can interpret your lab work instantly, compare it with previous tests, and share it with your friends and family.

 


About Kate Patterson

Kate Patterson is a communication specialist and writer at Lab Me Analytics. She has been researching medical technology and machine learning for the past five years, conducting interviews with experts and users, and identifying best practices. She has a degree in journalism and public relations and a strong passion for disruptive medical technology.