Recommended Calibration Method

Discussions about GC and other "gas phase" separation techniques.

23 posts Page 1 of 2
Goal: Determine the response factor of my analyte gas to determine mole % in samples.

Proposed methods: external calibration or Standard Addition.

Short background: The gas is not readily available at ≥99.95 mole % from any vendor. We purchase this gas from various sellers, process it to clean it up, and send it to a 3rd-party vendor to certify its purity. Sometimes it's beneficial to know the purity mid-processing, and I would like to calibrate our GC-MS properly to do this. The mole % range we typically work with and see is 98.00 to 99.95, and 99.60 is the minimum we can send to production.

My initial thought is to use several of the certified samples we already sent to our 3rd-party vendor, run them through our GC-MS, and create a calibration curve from those. I'm skeptical of that idea because I know I can send two samples of gas from the same batch, on the same day, and receive a variation in purity (typically +/- 0.05 mole %), but it might just be the nature of the beast... and I also feel it leaves a lot of room for error because I don't know whether all of those samples were run under the same conditions, on the same GC, etc...

The standard additions method would still require me to use a high purity sample that has been certified from our 3rd party vendor.

All comments, questions, and recommendations are gladly welcomed!
Is your project super secret? Can you share what you're actually trying to determine and how (the mechanics of your sampling, etc.)? I'm not completely clear on what you're saying here but I have a feeling that you might be expecting too much from your analysis.

People around here sometimes fall into the trap of thinking that analytical measurements are absolute. They are not. There is always error involved. We do our best to minimize the errors, but even in the perfect situation there is still error in the measurement. I've seen situations where we've told a customer that there's 12 ppm of something in the sample. The next time they submit, it comes back at 14 ppm, the sky starts to fall, and they think something's wrong with the process. In reality, the 12 is not different from the 14 - within our ability to measure it.
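To make that concrete, here's a quick sketch with assumed numbers (the 1.5 ppm replicate standard deviation is made up for illustration):

```python
import math

# Assumed measurement standard deviation at this level, ppm.
sigma = 1.5
r1, r2 = 12.0, 14.0  # the two reported results, ppm

# The difference of two independent measurements has a standard
# deviation of sqrt(2) * sigma.
diff_sigma = math.sqrt(2) * sigma
z = abs(r1 - r2) / diff_sigma  # z ≈ 0.94, well under 2

# At z < 2 the two results are statistically indistinguishable
# at roughly 95 % confidence.
```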
rb6banjo wrote:
Is your project super secret? Can you share what you're actually trying to determine and how (the mechanics of your sampling, etc.)? I'm not completely clear on what you're saying here but I have a feeling that you might be expecting too much from your analysis.

People around here sometimes fall into the trap of thinking that analytical measurements are absolute. They are not. There is always error involved. We do our best to minimize the errors, but even in the perfect situation there is still error in the measurement. I've seen situations where we've told a customer that there's 12 ppm of something in the sample. The next time they submit, it comes back at 14 ppm, the sky starts to fall, and they think something's wrong with the process. In reality, the 12 is not different from the 14 - within our ability to measure it.


I can provide more detail!

I'm trying to determine the mole % of bromotrifluoromethane (Halon 1301) in various samples on a daily basis. But I'm not sure which calibration method I should use to obtain a response factor for the Halon 1301.

For sampling, I intend to flash-vaporize the compressed liquid into an evacuated gas bulb or Tedlar bag to atmospheric pressure, then use a gas-tight syringe for injection.
Would it be easier to quantify the impurities and subtract that from 100% to obtain the %purity of the major component?
The past is there to guide us into the future, not to dwell in.
So, are you actually analyzing for the impurities in the Halon 1301? If yes, what are they? With gases, you really don't have a "matrix" problem (no partitioning between gas and condensed phases). An external calibration should work for you as long as you calibrate near and around the concentrations of the impurities.

I do a lot of headspace sampling above condensed phases (liquids and solids). The matrix has a huge effect on how the analytes partition into the gaseous headspace. Standard addition is essential for me because of this. I use techniques like solid-phase microextraction and stir-bar-sorptive extraction to get my analytes out of the matrix. They are very matrix-dependent techniques.
James_Ball wrote:
Would it be easier to quantify the impurities and subtract that from 100% to obtain the %purity of the major component?


James, that's a good suggestion. We have 12 common impurities that typically range from 0.01 to 0.25 mole % of the samples. I would have to calibrate for each impurity, right? That seems to leave more room for error, no?
rb6banjo wrote:
So, are you actually analyzing for the impurities in the Halon 1301? If yes, what are they? With gases, you really don't have a "matrix" problem (no partitioning between gas and condensed phases). An external calibration should work for you as long as you calibrate near and around the concentrations of the impurities.

I do a lot of headspace sampling above condensed phases (liquids and solids). The matrix has a huge effect on how the analytes partition into the gaseous headspace. Standard addition is essential for me because of this. I use techniques like solid-phase microextraction and stir-bar-sorptive extraction to get my analytes out of the matrix. They are very matrix-dependent techniques.


No, the impurities are relatively unimportant to us. We are primarily interested in the Halon 1301 composition in the sample.

Would you say standard addition is not preferred because I do not have a matrix issue?
I'm just having trouble visualizing how you analyze what is essentially a "pure" material by the method of standard addition unless you're looking for the impurities. Let's say I have ethanol and I know that methanol is the only unwanted impurity. I calibrate for methanol and get the ethanol by difference. I can't picture how I'd add more ethanol to what is already essentially pure ethanol and use the method of standard addition to get it. I could use the method of standard addition to get the methanol.

Generally, when employing the method of standard addition correctly, you have to add the standard at a concentration that is greater than that of the analyte in the sample and still within the linear dynamic range of the detector. When you've already got a pure material, you can't do that: your spike is too close to the analyte concentration, and you can't get outside the error of the measurement.
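A minimal numeric sketch of that argument, assuming a linear detector response (the response factor k, the true concentration, and the spike size are all invented for illustration):

```python
# Standard addition: C0 = dC * R0 / (R1 - R0), where dC is the spike
# concentration and R0, R1 are the responses before and after spiking.
k = 1000.0   # assumed detector response, area counts per mole %
c0 = 99.5    # true analyte concentration, mole %
dc = 0.5     # the largest spike possible before exceeding 100 mole %

r0 = k * c0           # response of the unspiked sample
r1 = k * (c0 + dc)    # response of the spiked sample

# With perfect data, standard addition recovers c0 exactly:
c0_est = dc * r0 / (r1 - r0)

# But a mere 0.1 % high reading on the spiked run wrecks the result,
# because the denominator (R1 - R0) is tiny for a near-pure sample:
r1_noisy = r1 * 1.001
c0_noisy = dc * r0 / (r1_noisy - r0)  # wildly off from 99.5
```

The instability comes entirely from dividing by R1 − R0, which for a near-pure material is barely larger than the measurement noise.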
chemengineerd wrote:
Goal: Determine the response factor of my analyte gas to determine mole % in samples.

Proposed methods: external calibration or Standard Addition.

Short background: The gas is not readily available at ≥99.95 mole % from any vendor. We purchase this gas from various sellers, process it to clean it up, and send it to a 3rd-party vendor to certify its purity. Sometimes it's beneficial to know the purity mid-processing, and I would like to calibrate our GC-MS properly to do this. The mole % range we typically work with and see is 98.00 to 99.95, and 99.60 is the minimum we can send to production.

My initial thought is to use several of the certified samples we already sent to our 3rd-party vendor, run them through our GC-MS, and create a calibration curve from those. I'm skeptical of that idea because I know I can send two samples of gas from the same batch, on the same day, and receive a variation in purity (typically +/- 0.05 mole %), but it might just be the nature of the beast... and I also feel it leaves a lot of room for error because I don't know whether all of those samples were run under the same conditions, on the same GC, etc...

The standard additions method would still require me to use a high purity sample that has been certified from our 3rd party vendor.

All comments, questions, and recommendations are gladly welcomed!


Determining the purity of high purity materials is a specialised area. How does the 3rd party do it ?

Peter
Peter Apps
Here's what I'm talking about. This is the derivation that demonstrates why the method of standard addition works. In the end, if R1 is too close to R0, you end up dividing by what is essentially zero.

https://1drv.ms/i/s!AkH-uI0tnY5Ledwl9_2xNd9znjg

You can't determine pure substances by the method of standard addition.
With either calibration method, the mole % range you are looking for is near the degree of uncertainty of the measuring instrument itself when the level is that high. If you took 100 mole % halon and injected it three times, you would get a different area count under the peak each time; the spread in those areas is the lowest uncertainty possible for the analysis method (you should really use 3× the standard deviation of the readings for the true uncertainty).

If you look at something that is 0.2 mole % and you have a 10 % uncertainty window, your numbers would vary from 0.18 to 0.22 mole %, which when subtracted from 100 % gives a much smaller total uncertainty.
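The arithmetic behind that, using the same numbers:

```python
# Measuring the impurity and subtracting from 100 %:
impurity = 0.20          # mole %
rel_u = 0.10             # 10 % relative uncertainty on the impurity
abs_u_impurity = impurity * rel_u   # +/- 0.02 mole % absolute
purity = 100.0 - impurity           # 99.80 mole %
# The +/- 0.02 mole % absolute uncertainty carries over directly
# to the purity figure.

# Measuring the main component directly, even at an optimistic 0.1 %
# relative uncertainty, is about five times worse in absolute terms:
direct_u = purity * 0.001           # ~ +/- 0.10 mole %
```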

However, yes, you would need to calibrate for the impurities separately for the most accurate determination of purity. I know some places that will just inject an approximately pure substance, take the total area of all peaks found, subtract the area of all unwanted peaks, and then divide the subtracted area by the total area to get an area % number, which they assume equals % purity. This is the quick and dirty way of assessing purity, and many places use it. If tenths of a mole % are crucial, I would look at calibrating for and quantifying each impurity, if they are known compounds.
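A sketch of that quick-and-dirty area % calculation (peak names and areas are invented for illustration):

```python
# Integrated peak areas from a single run of the near-pure gas.
peaks = {
    "halon_1301": 985000.0,
    "impurity_a": 1200.0,
    "impurity_b": 800.0,
    "impurity_c": 500.0,
}

total = sum(peaks.values())
# "Unwanted" peaks here are everything except the main component.
unwanted_area = total - peaks["halon_1301"]

# Area % purity: (total - unwanted) / total.
area_pct = 100.0 * (total - unwanted_area) / total  # ~99.75 area %
```

Note this assumes every component has the same detector response per unit amount, which is exactly why it is only a rough estimate.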
The past is there to guide us into the future, not to dwell in.
rb6banjo wrote:
Here's what I'm talking about. This is the derivation that demonstrates why the method of standard addition works. In the end, if R1 is too close to R0, you end up dividing by what is essentially zero.

https://1drv.ms/i/s!AkH-uI0tnY5Ledwl9_2xNd9znjg

You can't determine pure substances by the method of standard addition.


I can't view this link at work for one reason or another. But I'll check it out when I get home, thank you!
Peter Apps wrote:
chemengineerd wrote:
Goal: Determine the response factor of my analyte gas to determine mole % in samples.

Proposed methods: external calibration or Standard Addition.

Short background: The gas is not readily available at ≥99.95 mole % from any vendor. We purchase this gas from various sellers, process it to clean it up, and send it to a 3rd-party vendor to certify its purity. Sometimes it's beneficial to know the purity mid-processing, and I would like to calibrate our GC-MS properly to do this. The mole % range we typically work with and see is 98.00 to 99.95, and 99.60 is the minimum we can send to production.

My initial thought is to use several of the certified samples we already sent to our 3rd-party vendor, run them through our GC-MS, and create a calibration curve from those. I'm skeptical of that idea because I know I can send two samples of gas from the same batch, on the same day, and receive a variation in purity (typically +/- 0.05 mole %), but it might just be the nature of the beast... and I also feel it leaves a lot of room for error because I don't know whether all of those samples were run under the same conditions, on the same GC, etc...

The standard additions method would still require me to use a high purity sample that has been certified from our 3rd party vendor.

All comments, questions, and recommendations are gladly welcomed!


Determining the purity of high purity materials is a specialised area. How does the 3rd party do it ?

Peter


I called one of the two vendors that do this type of testing, and they told me they use standard addition for calibration. Whether that was standard addition for the impurities or for the analyte was unclear. But from what I'm gathering here, they probably meant standard addition calibration for the impurities.
James_Ball wrote:
With either calibration method, the mole % range you are looking for is near the degree of uncertainty of the measuring instrument itself when the level is that high. If you took 100 mole % halon and injected it three times, you would get a different area count under the peak each time; the spread in those areas is the lowest uncertainty possible for the analysis method (you should really use 3× the standard deviation of the readings for the true uncertainty).

If you look at something that is 0.2 mole % and you have a 10 % uncertainty window, your numbers would vary from 0.18 to 0.22 mole %, which when subtracted from 100 % gives a much smaller total uncertainty.

However, yes, you would need to calibrate for the impurities separately for the most accurate determination of purity. I know some places that will just inject an approximately pure substance, take the total area of all peaks found, subtract the area of all unwanted peaks, and then divide the subtracted area by the total area to get an area % number, which they assume equals % purity. This is the quick and dirty way of assessing purity, and many places use it. If tenths of a mole % are crucial, I would look at calibrating for and quantifying each impurity, if they are known compounds.


The most accurate determination of purity is crucial to us. So it seems like I will be moving forward with calibrating for the impurities.

Would you prefer one method over the other for daily analysis? I'd prefer not to calibrate every single day... or for every single sample. I'm assuming I can run a known amount of an impurity several times, calculate a response factor from the average peak areas and concentration of injected impurity, then repeat this calibration every few weeks.
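The scheme described above can be sketched like this (all concentrations and areas are hypothetical):

```python
# Calibration: replicate injections of a certified impurity standard.
standard_conc = 0.10                       # mole %, known impurity level
standard_areas = [5030.0, 4985.0, 5012.0]  # replicate peak areas

# Response factor = mean area per unit concentration.
rf = (sum(standard_areas) / len(standard_areas)) / standard_conc

# Later, quantify the same impurity in an unknown sample
# without recalibrating:
sample_area = 7450.0
sample_conc = sample_area / rf   # mole % of this impurity
```

How long the response factor stays valid between calibrations is something to verify empirically, e.g. by running a check standard each day.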
You will be working over a very narrow range of main component content, so narrow that you can assume that the detector response is linear against content.

If you run multiple replicates of a standard with known content, you get a result for response per unit content. If you then run multiple replicates of your unknowns, you get a result for the content of the unknown. If there is bias, it will be approximately the same for standards and samples and so will cancel out, and the uncertainty in the bias will be accounted for by the uncertainty due to random variation in the analysis (injection volume repeatability, temperatures, pressures, etc.). You reduce the uncertainty in the response factor and the result by running multiple replicates and taking means - the standard deviation of the mean is the standard deviation of individual results divided by the square root of the number of replicates. The uncertainty of your result adds to the uncertainty in the content of the standard.
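A small sketch of the replicate-averaging point, with invented replicate results:

```python
import math
import statistics

# Six replicate purity results for the same sample, mole %.
replicates = [99.61, 99.58, 99.64, 99.60, 99.57, 99.62]

n = len(replicates)
sd = statistics.stdev(replicates)     # spread of individual results
sem = sd / math.sqrt(n)               # standard deviation of the mean

# Averaging six replicates shrinks the uncertainty of the mean
# by a factor of sqrt(6) relative to a single injection.
```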

Peter
Peter Apps