Can 3D Engineering Solutions find porosity as small as 0.2 mm?

Recently, 3D Engineering Solutions was asked whether we could detect 0.2 mm porosity in a customer's casting.

Finding your 0.2mm porosity should not be an issue.

We use Perkin-Elmer 16”x16” panel detectors with a 0.200 mm pixel pitch. That means that for each of the thousands of x-rays taken of your casting, the individual pixel size is 0.2 mm x 0.2 mm, so the panel alone resolves right at a 0.2 mm defect. Beyond this, two other factors work in our favor: magnification and sub-voxel resolution.

Magnification has the effect of decreasing this 0.2 mm value. The minimum magnification is around 1.6. With magnification alone, in the worst-case condition, the minimum detectable defect size would be 200 microns (the panel pixel size) / 1.6 (magnification) = 125 microns effective resolution.
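For readers who like to see the arithmetic spelled out, here is a minimal sketch of that worst-case calculation (the variable names are ours, purely for illustration):

```python
# Worst-case effective resolution at the detector, improved by geometric magnification.
pixel_pitch_um = 200.0     # panel pixel pitch (0.200 mm)
min_magnification = 1.6    # minimum (worst-case) geometric magnification

effective_resolution_um = pixel_pitch_um / min_magnification
print(f"Worst-case effective resolution: {effective_resolution_um:.0f} microns")
# -> Worst-case effective resolution: 125 microns
```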

Sub-voxel resolution (a voxel is a 3D pixel) is the ability of the software to take the created cubic voxel and determine where the actual surface lies within that voxel. The software manufacturer told us the software can do this at a 10:1 ratio; however, the industry standard is a 3:1 ratio (resolution to accuracy). So even within a worst-case effective voxel size of 125 microns, the actual surface of the defect can be determined to about 1/3 of this, or around 42 microns.
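Continuing the same back-of-the-envelope calculation, here is the sub-voxel step as a short sketch (the conservative 3:1 ratio is simply the industry figure quoted above):

```python
# Sub-voxel surface determination: the surface is located within a voxel to a
# fraction of the voxel size, using the conservative 3:1 industry ratio.
effective_voxel_um = 125.0   # worst-case effective voxel size from the magnification step
subvoxel_ratio = 3.0         # resolution-to-accuracy ratio (3:1)

surface_determination_um = effective_voxel_um / subvoxel_ratio
print(f"Surface determined to roughly {surface_determination_um:.0f} microns")
# -> Surface determined to roughly 42 microns
```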

The CT Scanning Advantage

Our machine is a Metrology Grade CT (MCT) system that goes one step beyond this. By controlling the positioning of the part, the temperature, and the source-to-detector distance to a very high degree, we can state that the accuracy will be no worse than 9 + L/50 microns (with a measurement uncertainty of 11 + 0.03L microns, per our ISO 17025 requirements). The effective accuracy can be better due to magnification, but this stated accuracy and measurement uncertainty are the worst case.
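As a rough illustration, here is that worst-case statement worked out for a few lengths, assuming L is the measured length in millimetres (the usual convention for this style of specification; we state it here as an assumption rather than quoting the article):

```python
# Worst-case accuracy and measurement uncertainty from the statements above,
# assuming L is the measured length in millimetres.
def worst_case_accuracy_um(length_mm: float) -> float:
    return 9.0 + length_mm / 50.0

def measurement_uncertainty_um(length_mm: float) -> float:
    return 11.0 + 0.03 * length_mm

for L in (25.0, 100.0, 250.0):
    print(f"L = {L:5.0f} mm -> accuracy {worst_case_accuracy_um(L):4.1f} um, "
          f"uncertainty {measurement_uncertainty_um(L):4.1f} um")
```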

The interesting thing about CT scanning is that resolution (and accuracy) improve as sample size decreases. The smaller the part, the better the resolution and accuracy we can provide. We have calculated accuracy right at the sub-micron level for some very small medical devices we have scanned!
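To see why, consider the geometry: voxel size is roughly the pixel pitch divided by the geometric magnification, and magnification is the source-to-detector distance over the source-to-object distance. A smaller part can sit closer to the source, so magnification goes up and voxel size comes down. The distances in this sketch are hypothetical, not our machine's actual geometry:

```python
# Voxel size vs. part position: magnification M = SDD / SOD, voxel ~ pixel_pitch / M.
# The distances below are hypothetical, chosen only to illustrate the trend.
pixel_pitch_um = 200.0
source_detector_mm = 1000.0                      # hypothetical source-to-detector distance (SDD)

for source_object_mm in (625.0, 100.0, 10.0):    # hypothetical source-to-object distances (SOD)
    magnification = source_detector_mm / source_object_mm
    voxel_um = pixel_pitch_um / magnification
    print(f"SOD {source_object_mm:6.1f} mm -> M = {magnification:6.1f}x, voxel ~ {voxel_um:6.2f} um")
# Smaller parts allow a shorter SOD, so the effective voxel shrinks accordingly.
```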

Challenges for Defect Detection

[Image: defect measurement]

The following images show the distribution of the defects that can be found. The defects are color-coded by size according to the diameter scale bar on the left. All defects smaller than 0.200 mm have been omitted. A transparency is applied to the part to allow viewing of the full distribution of defects. Defect size is measured as the diameter of the smallest sphere that can totally encompass the defect.
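For readers curious how a diameter like this can be computed from the scan data, here is an illustrative sketch using Ritter's approximate bounding-sphere heuristic over a hypothetical cloud of defect voxel centres. It is not the algorithm our analysis software actually uses; it is simply a stand-in for the "smallest enclosing sphere" idea:

```python
import numpy as np

def enclosing_sphere_diameter(points: np.ndarray) -> float:
    """Approximate diameter of the smallest sphere enclosing the given points
    (Ritter's bounding-sphere heuristic; slightly conservative)."""
    x = points[0]
    y = points[np.argmax(np.linalg.norm(points - x, axis=1))]
    z = points[np.argmax(np.linalg.norm(points - y, axis=1))]
    center = (y + z) / 2.0
    radius = np.linalg.norm(y - z) / 2.0
    for p in points:
        d = np.linalg.norm(p - center)
        if d > radius:
            radius = (radius + d) / 2.0               # grow just enough to include p
            center += (d - radius) / d * (p - center)
    return 2.0 * radius

# Hypothetical defect: a cloud of voxel centres, coordinates in millimetres.
rng = np.random.default_rng(0)
defect_points = rng.normal(scale=0.05, size=(200, 3))

diameter_mm = enclosing_sphere_diameter(defect_points)
status = "report" if diameter_mm >= 0.200 else "omit"  # same 0.200 mm cut-off as above
print(f"Defect diameter {diameter_mm:.3f} mm -> {status}")
```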

The complicating factors include the material and the total effective thickness. These factors can cause noise in the data and may impact the ability of the software to distinguish certain small voids, for example. The software corrects for the majority of this, but there is always a chance it will have an impact. Parts made of denser materials (nickel, lead, gold, etc.) or very thick parts are more likely to suffer from these effects.

These effects leave some ‘tell-tale’ signs in the data from what we call beam hardening, which occurs when lower-energy x-rays are absorbed in the material before they can fully penetrate the part. Some of this can be reduced by filtering the x-rays before they enter the part. Beam hardening has a general tendency toward false positives, but an experienced CT engineer will be able to differentiate them. The part in this image (right) was likely filtered, and the data has already been processed through software that has cleaned up the vast majority of this effect.
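As a rough illustration of the kind of correction involved, here is a simple linearization sketch: attenuation measured through a step wedge is mapped back onto an ideal linear response with a low-order polynomial, which is then applied to the projection values. The wedge readings and coefficients here are hypothetical, not taken from our software:

```python
import numpy as np

# Beam-hardening linearization sketch: fit measured attenuation (which flattens out
# with thickness as the beam hardens) back onto an ideal linear response.
thickness_mm = np.array([0.0, 5.0, 10.0, 20.0, 40.0])      # hypothetical step-wedge thicknesses
measured_p   = np.array([0.0, 0.45, 0.82, 1.40, 2.30])     # hypothetical measured -ln(I/I0) values
ideal_p = thickness_mm * (measured_p[1] / thickness_mm[1]) # linear response from the thinnest step

# Low-order polynomial mapping measured values to linearized values.
correction = np.poly1d(np.polyfit(measured_p, ideal_p, deg=3))

raw_projection = np.array([0.30, 0.90, 1.80, 2.20])        # hypothetical raw projection values
print("corrected:", np.round(correction(raw_projection), 3))
```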
