PET Bottle Neck Visual Inspection Technology: Giving Beverage Packaging a "Smart Eye"
PET bottles are widely used in the beverage, cosmetics, and pharmaceutical industries thanks to their light weight, high transparency, and excellent physical properties. The bottle neck, however, is the part that guarantees airtightness, so its quality directly affects the safety and shelf life of the contents. Traditional manual inspection is inefficient and error-prone, and cannot keep pace with modern high-speed production lines (up to 36,000 bottles per hour). Machine vision-based automated inspection has therefore become a core means of ensuring product quality. This article systematically analyzes the technical principles, method categories, application scenarios, and development trends of PET bottle neck visual inspection.
I. Technical Challenges: Why is Bottle Neck Inspection So Challenging?
PET bottle neck inspection faces multiple technical challenges, primarily stemming from the high-speed, high-precision requirements of the industrial environment:
Extremely High Precision Requirements: Bottle neck defects are diverse (notches, burrs, chipping, flash, black spots) and often minute, such as millimeter-scale chipping, yet inspection accuracy must exceed 99.9%.
Speed and Real-Time Pressure: Line speeds commonly reach around ten bottles per second, so inspection must complete within roughly 50 milliseconds per bottle. Any delay risks large numbers of defective products reaching the market.
Complex Interference Factors: Environmental noise such as bottle mouth reflections, liquid foam, label shadows, and mechanical vibrations can easily interfere with image acquisition, necessitating optimized optical design and algorithm anti-interference capabilities.
Defect Diversity: Defect shapes are irregular (external, internal, and through-wall defects), and the low contrast between a transparent bottle and its defects makes traditional threshold segmentation prone to missed detections.
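The timing pressure above can be made concrete with simple arithmetic. A minimal sketch, assuming the 36,000 bottles/hour figure cited in the introduction; the 50 ms inspection budget is from the list above, and the remaining numbers follow from them:

```python
# Back-of-the-envelope timing budget for a high-speed PET line.
BOTTLES_PER_HOUR = 36_000

bottles_per_second = BOTTLES_PER_HOUR / 3600       # 10 bottles per second
time_per_bottle_ms = 1000 / bottles_per_second     # 100 ms between bottles

# Inspection must finish well inside this window to leave time for
# image transfer and for the pneumatic rejector to actuate.
inspection_budget_ms = 50
slack_ms = time_per_bottle_ms - inspection_budget_ms

print(bottles_per_second, time_per_bottle_ms, slack_ms)  # 10.0 100.0 50.0
```

At ten bottles per second, a 50 ms inspection leaves only 50 ms of slack per bottle for everything else in the station, which is why acquisition, processing, and rejection are pipelined in practice.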
II. Classification of Detection Methods: From Traditional Image Processing to Deep Learning
Based on technological evolution, PET bottle mouth visual inspection methods can be divided into three categories:
Traditional Image Processing Methods: Based on threshold segmentation, region localization, and grayscale contrast, this method extracts the ROI (Region of Interest) at the bottle mouth and performs differential calculations with a defect-free template. For example:
Self-Template Method: Constructs a ring-shaped template on the bottle mouth end face and identifies defects by subtracting grayscale values, achieving a detection accuracy of 99.9% in less than 50 milliseconds.
Gray-Level Consistency Method: This method uses the RANSAC algorithm to fit the elliptical contour of the bottle mouth, then analyzes the gray-level uniformity of the region. Detection speed can reach 10 milliseconds per frame.
Advantages: Computationally simple, suitable for regular defects;
Disadvantages: Relies on manually set thresholds, poor adaptability to complex defects.
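The self-template idea above can be sketched in a few lines: the ring-shaped end face serves as its own reference, and pixels deviating from the ring's typical gray level are flagged. This is a minimal NumPy illustration, not a production implementation; the median-as-template choice and the thresholds are assumptions for the sketch:

```python
import numpy as np

def ring_mask(h, w, cx, cy, r_in, r_out):
    """Boolean mask selecting the annular bottle-mouth end face."""
    yy, xx = np.mgrid[0:h, 0:w]
    d = np.hypot(xx - cx, yy - cy)
    return (d >= r_in) & (d <= r_out)

def self_template_defects(img, cx, cy, r_in, r_out, thresh=40):
    """Flag ring pixels whose grayscale deviates from the ring's own
    median (the 'self-template'); returns the defect pixel count."""
    mask = ring_mask(*img.shape, cx, cy, r_in, r_out)
    ring = img[mask].astype(np.int32)
    template = np.median(ring)          # defect-free reference level
    return int(np.count_nonzero(np.abs(ring - template) > thresh))

# Synthetic example: a uniform bright ring with one dark 'notch'.
img = np.full((100, 100), 200, dtype=np.uint8)
m = ring_mask(100, 100, 50, 50, 30, 40)
img[~m] = 0
img[12:16, 48:52] = 60                  # simulated notch on the ring
print(self_template_defects(img, 50, 50, 30, 40) > 0)  # True
```

A real system would first locate the ring center (e.g., by ellipse fitting, as in the gray-level consistency method) rather than assume it.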
Machine Learning Classification Method: This method uses models such as Support Vector Machines (SVM) and neural networks, requiring a large number of samples to train the classifier. For example: By extracting defect features (such as texture and shape), SVM is used to distinguish defect types.
Advantages: Can identify diverse defects;
Disadvantages: Requires retraining when changing bottle types, lower real-time performance.
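The SVM classification workflow can be sketched as follows. The three hand-crafted features and the class definitions here are hypothetical stand-ins for the texture and shape features mentioned above, and the data is synthetic; the sketch only shows the train-then-classify pattern:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical features per bottle mouth:
# [mean ring grayscale, grayscale std, defect blob area]
# Class 0 = good, 1 = chipped, 2 = black spot (illustrative labels).
good = rng.normal([200, 5, 0],   [5, 1, 1], (50, 3))
chip = rng.normal([180, 25, 40], [5, 3, 5], (50, 3))
spot = rng.normal([190, 15, 10], [5, 2, 2], (50, 3))

X = np.vstack([good, chip, spot])
y = np.repeat([0, 1, 2], 50)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
pred = clf.predict([[181, 24, 38]])     # chip-like feature vector
print(int(pred[0]))                     # 1
```

The drawback noted above is visible in the structure: the classifier is tied to the feature distribution of one bottle type, so switching bottle types means collecting new samples and refitting.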
Deep Learning and Differential Models: Emerging methods combine deep networks with attention mechanisms to improve the detection rate of complex defects:
Differential Feature Model: Images of the bottle under inspection and a defect-free reference bottle are acquired, features are extracted with a dual encoder, and a differential feature map is computed to sharpen discrimination; a classifier then makes the final decision. This approach effectively suppresses overexposure interference and suits reflective scenes.
Advantages: Strong anti-interference, suitable for small defects;
Disadvantages: High computational resource requirements.
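The differential idea, encode both images with a shared extractor and difference the feature maps, can be illustrated without a neural network. Here a gradient-magnitude map is a toy stand-in for the learned dual encoder; everything about this sketch is a simplification of the method described above:

```python
import numpy as np

def encode(img):
    """Toy stand-in for the shared dual encoder: a gradient-magnitude
    feature map (a real system would use a learned network)."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def differential_score(test_img, ref_img):
    """Differential feature map between the bottle under test and a
    defect-free reference; large values mark candidate defects."""
    return np.abs(encode(test_img) - encode(ref_img)).max()

ref = np.full((64, 64), 180, dtype=np.uint8)
test = ref.copy()
test[30:34, 30:34] = 40                 # simulated black spot

print(differential_score(test, ref) > differential_score(ref, ref))  # True
```

Because uniform overexposure shifts both feature maps similarly, the difference suppresses it, which is the intuition behind the method's robustness in reflective scenes.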
The table below compares typical solutions and performance of three types of methods:
| Method Type | Representative Technology | Detection Accuracy | Detection Speed | Applicable Scenarios |
| --- | --- | --- | --- | --- |
| Traditional Image Processing | Self-template grayscale difference | 99.9% | <50 ms | Regular contour defects (breakage, gaps) |
| Machine Learning | SVM classification | 98-99.2% | 10-50 ms | Multi-class defect classification |
| Deep Learning | Differential feature model | >99.5% | Depends on hardware | Complex defects (black spots, flash) |
III. Application Scenarios: Covering the entire production line
The vision inspection system has been embedded in the entire PET bottle production chain. Key application nodes include:
Preform Inspection (Pre-blow molding)
A station before the blow molding machine uses six high-resolution CCD cameras to image the preform's mouth, shoulder, and bottom through a full 360°, detecting defects such as flash, gaps, and black spots. A camera mounted above the mouth is dedicated to checking sealing-surface defects, preventing small flaws from being magnified during blow molding.
Full Bottle Inspection (Post-Filling)
After filling, four sets of CCD cameras inspect the liquid level, the cap (broken tamper-evident ring, raised caps, crooked caps), and coding quality. A 120° surround layout with combined front lighting and backlighting compensates for foam interference and improves the accuracy of liquid level detection.
Label and Packing Inspection
After labeling, four cameras spaced 90° apart detect label misalignment and printing errors; after packing, an online checkweigher verifies weight and rejects packs with missing bottles.
IV. System Flow and Key Technologies
A complete vision inspection system includes the following core components:
Image Acquisition
Hardware Configuration: Basler GigE industrial cameras with LED ring or bar light sources capture close-range images of the bottle mouth. The lighting design must suppress reflections; bar lights, for example, reduce overexposure.
Optical Optimization: Overexposed images are corrected with an encoder-decoder network (autoencoder) to improve grayscale consistency.
Image Processing
ROI Location: The bottle mouth area is accurately extracted using Hough transform, RANSAC ellipse fitting, or symmetry axis positioning methods.
Feature Extraction: The ROI is processed by grayscale conversion, filtering, and binarization, and then defect features are enhanced through differential calculation or spatial attention mechanisms.
Defect Classification and Removal: Defects are identified based on feature map thresholds or classifier results (such as Softmax), triggering a pneumatic rejection device to remove defective products.
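The processing chain above (filter, differential calculation, binarization, then a reject/pass decision) can be sketched end to end. This is a minimal illustration, not the system described; the 3x3 mean filter and both thresholds are assumptions, and ROI location is taken as already done:

```python
import numpy as np

def inspect(gray, template, diff_thresh=50, area_thresh=20):
    """Sketch of the chain above: smooth, difference against a
    defect-free template, binarize, decide by defect area."""
    # 3x3 mean filter (denoising step)
    pad = np.pad(gray.astype(np.float64), 1, mode="edge")
    smooth = sum(pad[i:i + gray.shape[0], j:j + gray.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    # differential calculation + binarization
    defect = np.abs(smooth - template) > diff_thresh
    # decision: trigger the rejector if defect area exceeds threshold
    return "reject" if defect.sum() > area_thresh else "pass"

template = np.full((80, 80), 200.0)
good = np.full((80, 80), 200, dtype=np.uint8)
bad = good.copy()
bad[20:30, 20:30] = 50                  # simulated 10x10 dark defect

print(inspect(good, template), inspect(bad, template))  # pass reject
```

In a deployed system the "reject" branch would fire the pneumatic rejection device mentioned above, timed to the bottle's position on the conveyor.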
V. Development Trends and Challenges
The future of PET bottle neck visual inspection technology will evolve in the following directions:
Intelligent Upgrade: Deep learning models will be further optimized, such as using lightweight networks to achieve real-time edge detection, reducing reliance on cloud computing power.
Multimodal Fusion: Combining 3D vision and X-ray imaging to detect hidden defects such as internal cracks and bubbles.
Closed-Loop Quality Control: Detection data is fed back to the production line, adjusting blow molding and filling parameters in real time, achieving a leap from "detection" to "prevention."
Summary
PET bottle neck visual inspection technology replaces the human eye with machine vision, solving the quality control challenges of high-speed production. From traditional image processing to differential deep learning, detection accuracy and efficiency have steadily improved, making the technology a cornerstone of intelligent manufacturing in industries such as beverages and pharmaceuticals. As algorithms and hardware continue to evolve together, it will keep moving toward greater intelligence and fully closed-loop quality control.

