Manual visual inspection of printed circuit boards (PCBs) is slow, error-prone, and expensive. Here's how we deployed YOLOv8 on NVIDIA Jetson devices to achieve 99.2% defect detection accuracy at 1,200 units per hour, a 4x improvement over manual inspection.
The Challenge
Our client, a mid-size electronics manufacturer, was struggling with quality control. Their manual inspection process caught only about 85% of defects and was a major bottleneck on their production line. They needed a system that could detect solder bridges, missing components, misalignments, and surface scratches in real-time.
Why YOLOv8
We chose YOLOv8 for several reasons: its superior speed-accuracy tradeoff compared to two-stage detectors, excellent support for export to TensorRT (critical for edge deployment), and the ability to train custom models with relatively small datasets using transfer learning.
The YOLOv8n (nano) variant was particularly well suited to edge deployment: small enough to run on an NVIDIA Jetson Orin Nano with room to spare, yet accurate enough for our defect detection requirements.
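Before committing to the nano variant, it's worth confirming its footprint. A minimal sketch (assuming the ultralytics package is installed) that prints the model's layer count, parameter count, and GFLOPs:

```python
# Quick footprint check before committing to a model variant.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # COCO-pretrained nano variant, ~3.2M parameters
model.info()                # prints layers, parameters, and GFLOPs
```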
Results at a Glance
- Defect detection accuracy: 99.2%, up from roughly 85% with manual inspection
- Throughput: 1,200 units per hour, 4x the manual process
- Inference latency: under 15ms per image on the Jetson (down from 45ms before TensorRT optimization)
- Coverage: six defect categories, including solder bridges, missing components, misalignments, and surface scratches
Data Collection & Labeling
We collected 8,000 images of PCBs across six defect categories using high-resolution industrial cameras. We annotated them in Roboflow and used active learning to prioritize the most informative samples for labeling. Aggressive data augmentation (rotation, brightness variation, and synthetic defect overlays) expanded the training set to 25,000 images.
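Our production augmentation ran inside the training pipeline, and the synthetic defect overlay step is too involved to show here. As a minimal sketch of the geometric and photometric pieces, here is a bounding-box-aware pipeline using the Albumentations library; all parameters are illustrative, not our production values:

```python
# Sketch of the geometric/photometric augmentations using Albumentations.
# The synthetic defect overlay step is omitted; parameters are illustrative.
import albumentations as A
import cv2

augment = A.Compose(
    [
        A.Rotate(limit=15, p=0.7),  # small rotations
        A.RandomBrightnessContrast(
            brightness_limit=0.3, contrast_limit=0.2, p=0.8
        ),  # lighting variation
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_ids"]),
)

image = cv2.imread("pcb_0001.jpg")   # hypothetical filename
boxes = [[0.45, 0.52, 0.08, 0.06]]   # YOLO-format (cx, cy, w, h), normalized
out = augment(image=image, bboxes=boxes, class_ids=[2])  # hypothetical class
aug_image, aug_boxes = out["image"], out["bboxes"]
```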
Training Pipeline
We started with YOLOv8n pretrained on COCO and fine-tuned it on our PCB dataset for 300 epochs on an NVIDIA A100 GPU. Key training decisions included mosaic augmentation for the first 250 epochs (disabled for the final 50 so training could settle on unmodified images), a cosine annealing learning rate schedule starting at 0.01, and mixed-precision training for speed.
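With the ultralytics training API, those decisions map onto a handful of arguments. A minimal sketch, where `pcb.yaml` is a placeholder for our dataset config:

```python
# Fine-tuning sketch using the ultralytics API; hyperparameters mirror
# the choices described above, and "pcb.yaml" is a placeholder config.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")   # COCO-pretrained starting point
model.train(
    data="pcb.yaml",         # hypothetical dataset config
    epochs=300,
    close_mosaic=50,         # disable mosaic for the final 50 epochs
    cos_lr=True,             # cosine annealing learning rate schedule
    lr0=0.01,                # initial learning rate
    amp=True,                # mixed-precision training
    device=0,                # single A100
)
```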
Edge Deployment
Deploying to NVIDIA Jetson required careful optimization. We exported the model to TensorRT with FP16 precision, which cut inference time from 45ms to under 15ms per image. The entire system, from camera capture and preprocessing through inference and result reporting, runs in a Docker container for easy deployment and updates across the factory floor.
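The TensorRT export itself is a one-liner in the ultralytics API. A minimal sketch with hypothetical paths; note that the engine should be built on the Jetson it will serve, since TensorRT optimizes for the local GPU:

```python
# Export-and-run sketch using the ultralytics API; paths and the test
# image are placeholders. Build the engine on the target Jetson itself.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # hypothetical path
model.export(format="engine", half=True)           # TensorRT engine, FP16

trt_model = YOLO("runs/detect/train/weights/best.engine")
results = trt_model("pcb_test.jpg")                # hypothetical test image
for r in results:
    print(r.boxes.cls, r.boxes.conf)               # classes and confidences
```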
Lessons Learned
- Lighting consistency is everything. We spent more time on camera and lighting setup than on model training. Consistent illumination eliminated most false positives.
- Start with the smallest model. YOLOv8n was sufficient for our use case. Bigger models added latency without meaningful accuracy gains on our specific defect types.
- Active learning accelerates labeling. By prioritizing uncertain samples, we reached 99%+ accuracy with 60% less labeled data than a random sampling approach would have required (see the sketch after this list).
- Build a feedback loop. Every flagged defect goes through human review, and confirmed results feed back into the training pipeline for continuous improvement.
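For the curious, here is roughly what the uncertainty-based prioritization looks like: score each unlabeled image by the confidence of its least certain detection, then queue the lowest scorers for human labeling. Everything here (paths, checkpoint name, the batch of 200) is illustrative:

```python
# Sketch of uncertainty-based sample selection for the labeling queue.
from pathlib import Path
from ultralytics import YOLO

model = YOLO("yolov8n_pcb.pt")  # hypothetical current checkpoint
scores = []
for path in Path("unlabeled/").glob("*.jpg"):  # hypothetical folder
    result = model(str(path), verbose=False)[0]
    confs = result.boxes.conf.tolist()
    # Images with no detections or only low-confidence detections are
    # the most informative candidates for human labeling.
    scores.append((min(confs) if confs else 0.0, path))

scores.sort(key=lambda s: s[0])                # most uncertain first
to_label = [path for _, path in scores[:200]]  # next labeling batch
print("\n".join(str(p) for p in to_label))
```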