What is leak detection?
Leak detection is the process of testing for leakage in a vessel. The item being tested can vary in size, shape, or function, but the overarching goal is always the same: identify the presence of a leak. Additional information that may be sought includes the leak's location and size.
How is leak detection performed?
There are many ways to detect a leak. The choice of method often depends on the expected size of the leak, which determines the sensitivity required of a given test, as well as on testing costs and throughput requirements. Common leak detection methods include:
Pressure or Vacuum Decay – Using this method, the item being tested is either put under pressure or brought under vacuum. The pressure or vacuum is then monitored to see how it changes over time. Using this information, one can identify the leak rate. This testing tends to have a lower cost and is most suitable for large leak rates. As the target leak rate gets smaller, throughput using this method will be reduced.
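The relationship between the observed pressure change and the leak rate can be sketched with a simple calculation. This is an illustrative example, not a complete test procedure: the function name and the input values are assumptions, and a real test would also account for temperature effects.

```python
# Sketch of a pressure-decay leak-rate calculation (illustrative values).
# Leak rate Q = V * dP / dt, where V is the internal volume of the test
# item, dP the observed pressure drop, and dt the test duration.

def pressure_decay_leak_rate(volume_cc, pressure_drop_atm, duration_s):
    """Return the leak rate in atm*cc/s from a pressure-decay test."""
    return volume_cc * pressure_drop_atm / duration_s

# Example: a 500 cc vessel loses 0.001 atm over a 60 s test window.
rate = pressure_decay_leak_rate(500.0, 0.001, 60.0)
print(f"Leak rate: {rate:.2e} atm*cc/s")  # ~8.33e-03 atm*cc/s
```

Note how the formula also shows why small leak rates reduce throughput: for a fixed volume and instrument resolution, a smaller dP per unit time means a longer test is needed to resolve the leak.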
Tracer Gas – Using this method, the item being tested is subjected to tracer gas – helium and hydrogen are common examples. The item is then connected to a leak detector which is purpose-built for measuring tracer gas concentration. Using the detector measurement, the leak rate can be determined. This testing tends to have higher cost due to a more complex detection system and the need to provide tracer gas. This method is suitable for smaller leak rates. Helium leak detection is often performed when the leak rates being measured are extremely small. Where leak rates are somewhat larger, costs can be reduced by using tracer gas at a lower concentration.
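When a diluted tracer charge is used, the detector only sees the tracer fraction of the total leak, so the reading is commonly scaled up by the tracer concentration. A minimal sketch, with assumed values and an illustrative function name:

```python
# If the tracer gas is diluted (e.g., 10% helium in nitrogen), the detector
# measures only the tracer portion of the leak. Dividing the measured tracer
# leak rate by the tracer fraction estimates the total leak rate.

def total_leak_rate(measured_tracer_rate, tracer_fraction):
    """Scale a measured tracer leak rate up to the total leak rate."""
    return measured_tracer_rate / tracer_fraction

# Detector reads 1.0e-7 atm*cc/s of helium from a 10% helium charge:
print(total_leak_rate(1.0e-7, 0.10))  # 1e-06 atm*cc/s total
```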
Direct Measurement – Sometimes the leaking fluid can be directly measured. This is common for leak detection on refrigeration systems. Using equipment which is purpose-built to detect the target refrigerant, one can measure the refrigerant leak rate.
How do I know my equipment is working properly?
Depending on the type of equipment being used, the means of verifying functionality will vary. In most cases, equipment is validated using standards. A standard is simply a device that has been independently verified to produce a known result. With leak detection, a calibrated leak is often used to verify the equipment is performing properly. The frequency of verification will depend on the user's requirements. In the most stringent cases, a calibrated leak is referenced for every unknown item tested. In less extreme cases, the calibrated leak can be referenced much less frequently.
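The verification step above amounts to comparing the detector's reading against the standard's certified value and flagging any deviation beyond an accepted tolerance. A minimal sketch, assuming an illustrative 10% tolerance (the function name and values are hypothetical):

```python
# Sketch of verifying a leak detector against a calibrated leak standard.
# The detector reading is compared to the standard's certified value; if
# the deviation exceeds a chosen tolerance, the equipment needs attention.

def verify_detector(measured, certified, tolerance=0.10):
    """Return True if measured is within +/- tolerance (fractional) of certified."""
    return abs(measured - certified) <= tolerance * certified

print(verify_detector(2.1e-8, 2.0e-8))  # True: reading is 5% high
print(verify_detector(2.5e-8, 2.0e-8))  # False: reading is 25% high
```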
I don't want ANY leaks!
Often it is not as simple as saying "I don't want any leaks". Rather, one must specify at which point a given leak rate becomes unacceptable, also known as a reject limit. Reject limits are largely driven by the implication of a given leak in a given application. For example, let's assume a refrigeration system has a joint that is leaking refrigerant at a rate of 0.01 oz/year. This equals a leak rate of ~2.2 × 10⁻⁶ atm cc/s at 25 °C. If we assume the refrigeration system in question contains 14 oz of refrigerant when fully charged, after 25 years this leak will leave our system with 98% of the original charge. This leak is unlikely to be of concern to us. Now let's assume that instead of a refrigeration system, we have found the same leak (2.2 × 10⁻⁶ atm cc/s at 25 °C) in a pacemaker. It is critical that this implantable device perform flawlessly for its user. A leak of this size is about 7000 times larger than the acceptable rate. Thus, the application of the item being tested must provide context for what is and what is not an acceptable leak rate.
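The refrigeration arithmetic above can be checked directly. This short calculation reproduces the 98%-remaining figure from the stated numbers (0.01 oz/year leak, 14 oz charge, 25 years):

```python
# Reproducing the refrigeration example: a 0.01 oz/year leak from a
# 14 oz charge, evaluated after 25 years.

leak_rate_oz_per_year = 0.01
charge_oz = 14.0
years = 25

remaining_oz = charge_oz - leak_rate_oz_per_year * years
fraction = remaining_oz / charge_oz
print(f"Remaining charge: {remaining_oz} oz ({fraction:.0%})")  # 13.75 oz (98%)
```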