Prescreening LCMS standards and QCs
Posted: Fri Oct 02, 2020 1:51 am
by vickig
Is anyone "prescreening" cal standards and QCs prior to including them in a GLP analysis? Can this be done in R&D mode? We'd like to make sure they were prepared correctly before processing and analyzing them with our GLP samples, but our QA thinks this data needs to be included in the regulated run in order to justify rejecting them if needed.
Re: Prescreening LCMS standards and QCs
Posted: Thu Oct 08, 2020 11:21 am
by lmh
Caveat: I'm not really a regulatory person. But I'm inclined to agree with your QA people:
Firstly, the accuracy of your method depends on the standard, so everything that's been done to the standard ought to be recorded. I'm guessing the argument here is that it's only a screen, so the actual numbers are never used: all we're doing is throwing away solutions that we think we made up incorrectly, and the standards we don't throw away have a concentration that depends entirely on how they were made, which is the part of the procedure that is documented and recorded.

Actually this is a fallacy. If we throw away standards that fall outside a particular window, and our screening method is biased, it will reject more standards on one side of the correct value than the other. In effect, the standards we let through will be aligned to the (biased) window of the screening method. So if we were aiming for a 1.000 µM standard, we might find that on average our standards come out around 1.010 µM, because instead of accepting anything from 0.990 to 1.010, our biased pre-screen is accepting anything from 1.000 to 1.020. And if the concentration of the standard we ultimately use can depend on the pre-screen it was subjected to, then the pre-screen affects the accuracy of the final result, is part of the method that led to that result, and needs to be documented and included.
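If you want to see that effect concretely, a quick simulation sketch does it. This is just Python/numpy I'm using to illustrate the point, not anything from a real method; the preparation scatter, the screen bias, and the acceptance window are all made-up numbers:

[code]
import numpy as np

rng = np.random.default_rng(0)

TARGET = 1.000        # intended concentration, uM
PREP_SD = 0.008       # assumed preparation scatter, uM (illustrative)
SCREEN_BIAS = -0.010  # assumed screen bias: reads 0.010 uM low (illustrative)
WINDOW = 0.010        # screen accepts readings within +/- WINDOW of TARGET

# True concentrations of many independently prepared standards
true_conc = rng.normal(TARGET, PREP_SD, size=100_000)

# What the biased pre-screen reports for each standard
screen_reading = true_conc + SCREEN_BIAS

# Keep only the standards the pre-screen says are in spec
accepted = true_conc[np.abs(screen_reading - TARGET) <= WINDOW]

print(f"mean of all preparations:      {true_conc.mean():.4f} uM")
print(f"mean of accepted preparations: {accepted.mean():.4f} uM")
[/code]

Run it and the accepted standards average noticeably above 1.000 µM even though nothing about the preparation changed; the screen's bias has been written into the surviving standards.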
Secondly, any good method documents its own reliability. If an instrument regularly fails to produce results within the specifications you expect from the method you're running, that indicates either the instrument isn't appropriate for the method or the instrument is faulty, both of which are things anyone looking at the final result needs to know. Likewise, if your preparation of standards is so unreliable that they often fall out of specification, you probably want to know about it! In the extreme case, your standards are more-or-less random and it's your screening window that determines the entire calibration of the method, an extreme version of point 1 above. (And if your standards very rarely fall outside the expected window, then the pre-screen isn't useful anyway.)
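The same sort of toy simulation shows both ends of that argument. Again this is just an illustration: the screen here is idealized in that it measures the true concentration without error, and the 10% and 0.3% scatters are numbers I picked out of the air:

[code]
import numpy as np

rng = np.random.default_rng(1)

TARGET = 1.000   # intended concentration, uM
WINDOW = 0.010   # pre-screen acceptance window, +/- uM (illustrative)

preps = {
    "sloppy prep (10% scatter)":   rng.normal(TARGET, 0.100, size=100_000),
    "careful prep (0.3% scatter)": rng.normal(TARGET, 0.003, size=100_000),
}

for label, conc in preps.items():
    # Idealized screen: measures the true concentration exactly
    accepted = conc[np.abs(conc - TARGET) <= WINDOW]
    print(f"{label}: {100 * accepted.size / conc.size:.1f}% pass, "
          f"SD of accepted = {accepted.std():.4f} uM")
[/code]

With the sloppy prep only a few percent pass, and the spread of the survivors is set almost entirely by the window (they're nearly uniform across it, so their SD is roughly the window divided by sqrt(3)): the screen is doing the calibrating. With the careful prep essentially everything passes, so the screen tells you nothing you didn't already know.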
Thirdly, and related to point 2, regulators get very, very unhappy if they see anything that looks like "testing into specification": repeating a preparation and re-testing until you get the value you want. They might see pre-screening as testing-the-standard-into-specification rather than preparing a standard you expect to be correct. In effect, they may feel you're just making bad standard after bad standard until you hit one that looks right, instead of making a good standard. You'd need some sort of evidence to demonstrate that this isn't the case.