
VEHICLE SPEED MEASUREMENT USING STEREO CAMERA PAIR

Harshal Bankar1, Aditya Jagtap2, Prajwal Injekar Gurav3, Prof. R.R. Dhaygude4

1,2,3 Department of Computer Engineering, SVPM's College of Engineering, Malegaon BK, Maharashtra, 412115, India.

4 Prof., Department of Computer Engineering, SVPM's College of Engineering, Malegaon BK, Baramati, Maharashtra, India.

Abstract - Vehicle speed estimation is an essential component of traffic monitoring, law enforcement, and autonomous driving. Traditional speed measurement techniques, such as radar and LIDAR, are accurate but often expensive and intrusive. This paper presents an efficient stereo vision-based approach for vehicle speed estimation using synchronized camera pairs. Our system utilizes license plate detection, feature tracking, triangulation, and optimized frame selection to improve computational efficiency while maintaining high accuracy. A key innovation of our method is frame reduction through clustering algorithms (DBSCAN), which significantly reduces computational overhead without compromising accuracy. By applying Kalman filtering for robust tracking and RANSAC-based outlier removal, we enhance precision in vehicle trajectory reconstruction. The system processes high-frame-rate video (60 FPS) and estimates speed with a mean absolute error below 1 km/h and a standard deviation of less than 0.5 km/h. The method was validated using real-world datasets recorded via stereo mobile cameras and ground-truth speedometer readings from vehicles. Experimental results show that our technique achieves a 40% computational efficiency improvement while maintaining a high accuracy rate. The findings suggest that this approach is suitable for real-time traffic enforcement and intelligent transportation systems. Future work includes integrating deep learning for enhanced license plate detection, expanding datasets to diverse weather conditions, and deploying the system on edge computing platforms for real-time applications.

Key Words: Stereo Camera Systems, Traffic Monitoring, Triangulation Techniques, Feature Extraction, Tracking Algorithms, 3D Information, Frame Reduction, Accuracy Analysis.

1. INTRODUCTION

Accurate vehicle speed measurement is essential in fields such as traffic regulation, accident prevention, autonomous vehicle guidance, and law enforcement. Conventional speed detection technologies, including radar and LIDAR, have been widely used; however, they often come with high implementation costs, susceptibility to interference, and limitations in complex traffic scenarios. In contrast, stereo camera systems have emerged as an efficient and nonintrusive alternative, capable of capturing 3D depth information through synchronized camera setups, resulting in more precise speed estimations.

Despite their advantages, stereo camera-based systems require careful calibration, are affected by environmental conditions, and may encounter challenges such as occlusions in high-density traffic environments. Moreover, the efficiency and accuracy of these systems depend on the selection of appropriate feature extraction and tracking methodologies. This research focuses on refining frame processing techniques to strike a balance between computational efficiency and measurement precision in real-time applications.

Stereo reconstruction methods generally fall into three categories: local, semi-global, and global approaches. Our approach adopts a local and sparse strategy, prioritizing selected trajectory points instead of full scene reconstruction, thereby reducing computational overhead while maintaining measurement accuracy.

Key contributions of this paper include:

License plate detection and tracking in stereo images for vehicle localization.

Triangulation-based 3D position estimation along the vehicle's trajectory.

A refined motion model for accurate speed computation.

These components collectively contribute to an effective and reliable stereo vision-based speed measurement system that aligns with established metrological standards.

2. LITERATURE SURVEY

Several vision-based vehicle speed estimation techniques have been proposed in prior research. Traditional speed measurement techniques such as radar and LIDAR have been widely used for their accuracy, but they come with high costs and require intrusive installations. More recent approaches leverage computer vision techniques to estimate vehicle speed efficiently.

Fig -1: Left and Right Stereo Image

Jalalat et al. [1] employed a Viola-Jones cascade classifier alongside a Kalman filter to detect and track vehicles using a vertical stereo camera pair calibrated with a checkerboard pattern. Their approach used a discrete Fourier transform (DFT) for sub-pixel accuracy in speed estimation, but it showed a mean percentage error of 3.3% when compared to Fama Laser III reference data.

El Bouziady et al. [2] proposed a stereo-vision-based method using feature point extraction with SURF detectors in a horizontal stereo setup. Their approach resulted in a mean squared error of 1.67 km/h, improving upon previous techniques but still requiring better tracking efficiency.

Yang et al. [3] introduced a deep learning-based approach using a Single Shot Multibox Detector (SSD) optimized for license plate detection. Their method successfully localized vehicles in 3D while maintaining compliance with regulatory accuracy requirements.

Llorca et al. [4] presented a dual-camera vehicle speed estimation technique that leveraged Optical Character Recognition (OCR) combined with convolutional neural networks (CNNs) for improved accuracy. Their system demonstrated a mean absolute speed error of 1.44 km/h and an extreme error below 3 km/h.

Our research builds upon these studies by optimizing frame selection and employing clustering methods such as DBSCAN and K-Means to improve computational efficiency. Unlike previous methods, our approach achieves higher accuracy (~1 km/h mean error) with reduced processing time while maintaining robustness across various environmental conditions.

3. SYSTEM MODEL

Fig -2: Vehicle speed measurement system model

The system architecture for automobile speed estimation involves a matched pair of stereo cameras positioned to capture images of passing vehicles. The first component is Image Acquisition, where the cameras continuously capture frames. Next, the License Plate Detection and Tracking module uses the WaldBoost detector with LBP features to identify license plates, which are then tracked with a Kalman filter to maintain vehicle identity. Following the detection of a license plate, the Point Matching procedure is carried out, in which template matching and homography transformations are used to match uniformly sampled points between the license plate images from the left and right cameras. The vehicle's 3D positions are then estimated from the matched points via triangulation. The Speed Computation module analyses the trajectory of the triangulated points using a constant-acceleration model to estimate the average speed of the vehicle. Finally, the results are displayed in the Output Display component, which shows the estimated speed and trajectory data. The architecture is designed to ensure accurate and efficient speed measurements using a combination of detection, tracking, and triangulation techniques.
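To make the module order concrete, the following Python skeleton outlines how these components fit together. It is only an illustrative sketch: the helper functions (track_plates, match_points, triangulate, fit_motion_model) are hypothetical placeholders for the modules described above, not the authors' published implementation.

```python
# Illustrative pipeline skeleton only; each helper stands in for a module
# described in the system model and would be replaced by a real implementation.

def track_plates(frames):
    """License plate detection (WaldBoost/LBP) plus Kalman-filter tracking."""
    raise NotImplementedError

def match_points(left_tracks, right_tracks):
    """Template matching + homography to pair sampled plate points."""
    raise NotImplementedError

def triangulate(correspondences, calibration):
    """Triangulate matched point pairs into 3D trajectory points."""
    raise NotImplementedError

def fit_motion_model(trajectory_3d):
    """Constant-acceleration fit over the trajectory, returning speed in km/h."""
    raise NotImplementedError

def estimate_vehicle_speed(left_frames, right_frames, calibration):
    """End-to-end flow: detect/track -> match -> triangulate -> compute speed."""
    left_tracks = track_plates(left_frames)
    right_tracks = track_plates(right_frames)
    correspondences = match_points(left_tracks, right_tracks)
    trajectory_3d = triangulate(correspondences, calibration)
    return fit_motion_model(trajectory_3d)
```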


4. METHODOLOGY

Our vehicle speed measurement method involves the following key stages:

A. Data Acquisition

Two mobile phones with identical specifications were used to record synchronized video at 60 FPS. The phones were mounted in parallel on a stable structure to ensure proper stereo alignment. The dataset was recorded on real roads, and the vehicle's speedometer readings were manually logged as reference ground truth.

B. Preprocessing and Frame Synchronization

Frames were extracted from both left and right videos and converted to grayscale. Frame timestamps were aligned to synchronize corresponding images from both cameras. Background noise was reduced using image filtering techniques to enhance feature extraction.
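As a concrete illustration of this preprocessing step, the sketch below extracts grayscale, lightly denoised frames with per-frame timestamps using OpenCV. The Gaussian smoothing kernel and the index-based timestamps are assumptions made for the example, not parameters reported in the paper.

```python
import cv2

def preprocess_video(path):
    """Extract frames, convert to grayscale, and denoise (a minimal sketch).

    Returns a list of (timestamp_seconds, frame) pairs; the timestamp is derived
    from the frame index and the FPS reported by the video container.
    """
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 60.0  # fall back to the 60 FPS recording rate
    frames = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (3, 3), 0)  # mild denoising before feature extraction
        frames.append((idx / fps, gray))
        idx += 1
    cap.release()
    return frames

# Left and right sequences can then be aligned by pairing the closest timestamps.
```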

C. License Plate Detection and Tracking

The AdaBoost classifier with Local Binary Pattern (LBP) features was used for detecting license plates. License plates were tracked across frames using Kalman filtering to maintain continuous tracking despite temporary occlusions. License plate coordinates (TopX, TopY) were recorded for trajectory estimation.
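A minimal sketch of this stage is given below, assuming an LBP-trained cascade file (lbp_license_plate.xml, a hypothetical filename) and a constant-velocity Kalman filter over the plate's (TopX, TopY) corner; the noise covariances are illustrative values rather than the paper's tuned settings.

```python
import cv2
import numpy as np

# Hypothetical LBP-trained cascade for license plates; any boosted-LBP plate
# cascade file could be substituted here.
plate_cascade = cv2.CascadeClassifier("lbp_license_plate.xml")

def make_plate_tracker(x0, y0):
    """Constant-velocity Kalman filter over the plate's (TopX, TopY) corner."""
    kf = cv2.KalmanFilter(4, 2)  # state [x, y, vx, vy], measurement [x, y]
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # illustrative value
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # illustrative value
    kf.statePost = np.array([[x0], [y0], [0.0], [0.0]], dtype=np.float32)
    return kf

def track_plate(timed_gray_frames):
    """Detect the plate in each frame and smooth (TopX, TopY) with the Kalman filter."""
    kf, trajectory = None, []
    for t, gray in timed_gray_frames:
        plates = plate_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
        prediction = kf.predict() if kf is not None else None
        if len(plates) > 0:
            x, y, w, h = plates[0]                      # take the first detection
            if kf is None:                              # initialise on first detection
                kf = make_plate_tracker(float(x), float(y))
                trajectory.append((t, float(x), float(y)))
                continue
            est = kf.correct(np.array([[x], [y]], dtype=np.float32))
        elif prediction is not None:
            est = prediction                            # bridge short occlusions
        else:
            continue                                    # nothing detected yet
        trajectory.append((t, float(est[0, 0]), float(est[1, 0])))
    return trajectory
```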

D. Frame Reduction for Efficiency

Key frames are selected using the following criteria:

Coverage Range Optimization: Ensuring selected frames cover the full movement of the vehicle.

Avoiding Crowding: Filtering out redundant frames that are too close in time or position.

Movement Distance Calculation: Measuring shifts in TopX and TopY coordinates to select key frames.

Time Gap Enforcement: Keeping a minimum of 0.1 s between frames to optimize computational efficiency.

Clustering-Based Key Frame Selection: Applying DBSCAN clustering to select representative frames (see the sketch below).
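The sketch below illustrates one way the clustering-based selection could be realized with scikit-learn's DBSCAN, applied to the tracked (time, TopX, TopY) trajectory; the eps threshold and the choice of the earliest frame per cluster as its representative are assumptions for the example, not the paper's tuned settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def select_key_frames(trajectory, eps_px=15.0, min_time_gap=0.1):
    """Reduce a dense (time, TopX, TopY) trajectory to representative key frames.

    Frames whose plate positions fall into the same DBSCAN cluster are collapsed
    to a single representative, and the minimum 0.1 s time gap is enforced afterwards.
    """
    traj = np.asarray(trajectory, dtype=float)            # columns: t, TopX, TopY
    labels = DBSCAN(eps=eps_px, min_samples=1).fit_predict(traj[:, 1:3])

    # One representative (the earliest frame) per spatial cluster.
    reps = [traj[labels == lab][0] for lab in sorted(set(labels))]
    reps.sort(key=lambda row: row[0])

    # Enforce the minimum time gap between consecutive key frames.
    selected, last_t = [], -np.inf
    for row in reps:
        if row[0] - last_t >= min_time_gap:
            selected.append(tuple(row))
            last_t = row[0]
    return selected
```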

E. Point Matching and Triangulation

License plate features were matched across the left and right images using template matching. Corresponding left and right points were triangulated using the Linear Least Squares (Linear-LS) method. The system computed the 3D coordinates of the vehicle's motion to estimate speed.
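A simplified sketch of this stage is shown below: a single normalized cross-correlation template match per plate patch (the full method samples multiple points per plate), followed by OpenCV's linear triangulation, which is comparable to the Linear-LS formulation described above. The projection matrices P_left and P_right are assumed to come from a prior stereo calibration.

```python
import cv2
import numpy as np

def match_and_triangulate(left_img, right_img, plate_box, P_left, P_right):
    """Match a plate patch from the left image in the right image and triangulate it."""
    x, y, w, h = plate_box
    template = left_img[y:y + h, x:x + w]

    # Locate the corresponding patch in the right image (normalized cross-correlation).
    response = cv2.matchTemplate(right_img, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (rx, ry) = cv2.minMaxLoc(response)

    # Triangulate the matched top-left corners using OpenCV's linear method.
    pts_left = np.array([[x], [y]], dtype=np.float64)
    pts_right = np.array([[rx], [ry]], dtype=np.float64)
    point_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
    return (point_h[:3] / point_h[3]).ravel()   # 3D point in the calibration frame
```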

F. Speed Computation

The triangulated 3D points were projected onto a common ground plane using RANSAC regression to remove outliers.

A constant-acceleration motion model then estimated the initial position, velocity, and acceleration of the vehicle, from which the speed was derived.
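As an illustration of the speed computation, the sketch below fits a constant-acceleration model to the distance travelled along the triangulated trajectory and robustifies the fit with RANSAC. This folds the outlier rejection into the motion-model fit rather than performing an explicit ground-plane projection, so it is a simplification of the described procedure, and it assumes a metric stereo calibration.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

def estimate_speed_kmh(times, points_3d):
    """Fit d(t) = d0 + v0*t + 0.5*a*t^2 to the travelled distance and return mean speed."""
    t = np.asarray(times, dtype=float)
    pts = np.asarray(points_3d, dtype=float)

    # Cumulative travelled distance along the 3D trajectory (metres, assuming
    # the calibration is metric).
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(steps)])

    # Robust quadratic (constant-acceleration) fit; RANSAC ignores outlier triangulations.
    model = make_pipeline(PolynomialFeatures(degree=2), RANSACRegressor())
    model.fit(t.reshape(-1, 1), dist)

    # Mean speed = modelled displacement over the observation window.
    d_start, d_end = model.predict(np.array([[t[0]], [t[-1]]]))
    speed_ms = (d_end - d_start) / (t[-1] - t[0])
    return speed_ms * 3.6   # m/s -> km/h
```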

G. System Implementation

Implemented using Python, OpenCV, NumPy, and Scikit-learn.

Optimized for real-time execution, achieving a processing time of 0.3 seconds per stereo frame.

5. EVALUATION

The properties of our method are evaluated on a dataset recorded using prototype hardware. The evaluation focuses on the precision and accuracy of speed measurement, along with identifying scenarios where measurement may fail.

A. Hardware

Our dataset was recorded using two mobile phones with identical camera specifications, mounted in parallel on a stable support. Each phone recorded at a frame rate of 60 FPS to ensure high temporal resolution. The dataset was captured on real roads under different environmental conditions to enhance accuracy and robustness. To validate speed measurements, we recorded the actual vehicle speed from the speedometer inside the vehicle while simultaneously capturing external footage with the stereo camera setup.

B. Dataset

Multiple datasets were recorded at 60 FPS, covering various road conditions and vehicle speeds. The dataset includes recordings from different lighting and traffic conditions to evaluate the method's adaptability. The stereo mobile setup was placed to capture vehicles from a front-facing perspective, ensuring optimal visibility of license plates. The recorded vehicle speeds were manually verified using the speedometer readings to serve as reference ground truth data.

C. Timestamp Assignment Latency

Since mobile phones introduce slight variations in frame capture timing, timestamps were synchronized using frame metadata. The recorded videos were analyzed frame-by-frame to align the timestamps from both cameras accurately. A comparison of frame differences and movement distances ensured precise time alignment across all frames.
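One simple way to realize this alignment is to search for the integer frame offset that maximizes the correlation between the per-frame plate displacement magnitudes of the two cameras, as sketched below; the ±10 frame search window is an illustrative assumption, not a parameter from the paper.

```python
import numpy as np

def estimate_frame_offset(left_motion, right_motion, max_offset=10):
    """Estimate the integer frame offset between the two cameras (a rough sketch).

    left_motion / right_motion are per-frame plate displacement magnitudes
    (e.g. frame-to-frame shifts of TopX/TopY). The offset that maximizes their
    correlation is taken as the synchronization shift.
    """
    best_offset, best_score = 0, -np.inf
    for offset in range(-max_offset, max_offset + 1):
        if offset >= 0:
            a, b = left_motion[offset:], right_motion[:len(right_motion) - offset]
        else:
            a, b = left_motion[:offset], right_motion[-offset:]
        n = min(len(a), len(b))
        if n < 2:
            continue
        score = np.corrcoef(a[:n], b[:n])[0, 1]
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset
```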

D. Implementation

The method is divided into five processing steps: License Plate Detection & Matching, License Plate Tracking, Point Matching, Triangulation, and Speed Measurement. The system was implemented using Python and OpenCV, with additional enhancements in frame synchronization and feature matching. The processing pipeline was optimized to handle high-frame-rate data effectively.

E. Speed Measurement Evaluation

The system successfully measured the speed of all recorded vehicles using the stereo mobile camera dataset. The accuracy was cross-validated by comparing the estimated speeds with speedometer readings inside the vehicle. The mean error was observed to be below 1 km/h, with a standard deviation of less than 0.5 km/h. The absolute percentage error was under 1.5%, demonstrating the effectiveness of the proposed method.
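For reference, the reported statistics can be reproduced from paired estimates and ground-truth readings as in the following straightforward sketch (mean absolute error, error standard deviation, and mean absolute percentage error).

```python
import numpy as np

def speed_error_metrics(estimated_kmh, ground_truth_kmh):
    """Compute the evaluation statistics used above from paired speed readings."""
    est = np.asarray(estimated_kmh, dtype=float)
    gt = np.asarray(ground_truth_kmh, dtype=float)
    err = est - gt
    return {
        "mean_abs_error_kmh": float(np.mean(np.abs(err))),
        "std_dev_kmh": float(np.std(err)),
        "abs_percentage_error": float(np.mean(np.abs(err) / gt) * 100.0),
    }
```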

F. Comparison with Other Methods

The proposed method achieved superior accuracy compared to LIDAR-based reference measurements and existing stereo vision-based methods. The ability to validate speed measurements with speedometer ground truth data adds reliability to the approach.

G. Optimizing Frame Selection for Improved Performance

Coverage Range: Ensuring selected frames comprehensively cover the vehicle's movement.

Avoiding Crowding: Removing redundant frames that are too close in time or position.

Movement Distance Calculation: Measuring the shift in TopX and TopY coordinates of license plates between frames.

Minimum Distance Threshold: Retaining frames only where the license plate has moved a significant distance.

Time Gap Enforcement: Ensuring a minimum 0.1 s gap between frames to optimize computational efficiency.

Clustering-Based Key Frame Selection: Applying DBSCAN to group frames and select key representative ones.

H. Performance Metrics

Mean Error: Below 1 km/h

Standard Deviation: Less than 0.5 km/h

Absolute Percentage Error: Under 1.5%

Processing Time: Reduced by 35%

6. CONCLUSIONS

We developed an efficient vehicle speed measurement system using a stereo camera pair. The system detects and tracks license plates, reconstructs vehicle trajectories, and applies optimized frame selection and triangulation techniques for speed estimation. Our approach achieves:

High accuracy (~1 km/h mean error) with reduced computational cost.

Significant processing time reduction.

Improved efficiency through clustering-based keyframe selection.

Additionally, our system's ability to process high-frame-rate video (60 FPS) with reduced computational load makes it feasible for real-time traffic monitoring. By leveraging RANSAC-based outlier removal, Kalman filtering for robust tracking, and clustering techniques for optimized frame selection, we demonstrate a method that is not only computationally efficient but also scalable for larger datasets and real-world applications.

Future Work

1. Real-Time Optimization: Implement deep learning-based license plate detection to improve real-time efficiency.

2. Long-Term Calibration: Develop automated camera calibration correction to handle environmental variations.

3. Extended Dataset Collection: Expand testing conditions to include rain, snow, fog, and high-speed vehicles.

4. Integration with Smart Traffic Systems: Enhance the system by integrating it with smart traffic monitoring solutions for real-time enforcement.

5. Edge Computing Implementation: Deploy the system on embedded hardware to further reduce computational overhead and enable on-device processing.

6. Multi-Lane Detection: Extend the methodology to handle multiple lanes, improving applicability for highways and urban traffic.

7. Nighttime and Low-Visibility Conditions: Enhance detection algorithms for improved performance in low-light environments.

REFERENCES

[1] M. Jalalat, M. Nejati, and A. Majidi, "Vehicle detection and speed estimation using cascade classifier and sub-pixel stereo matching," in Proc. 2nd Int. Conf. Signal Process. Intell. Syst. (ICSPIS), Dec. 2016, pp. 1–5.

[2] A. El Bouziady, R. O. H. Thami, M. Ghogho, O. Bourja, and S. El Fkihi, "Vehicle speed estimation using extracted SURF features from stereo images," in Proc. Int. Conf. Intell. Syst. Comput. Vis. (ISCV), Apr. 2018, pp. 1–6.


[3] L. Yang, M. Li, X. Song, Z. Xiong, C. Hou, and B. Qu, "Vehicle speed measurement based on binocular stereovision system," IEEE Access, vol. 7, pp. 106628–106641, 2019.

[4] D. F. Llorca et al., "Two-camera based accurate vehicle speed measurement using average speed at a fixed point," in Proc. IEEE 19th Int. Conf. Intell. Transp. Syst. (ITSC), Nov. 2016, pp. 2533–2538.

[5] M. Famouri, Z. Azimifar, and A. Wong, "A novel motion plane-based approach to vehicle speed estimation," IEEE Trans. Intell. Transp. Syst., vol. 20, no. 4, pp. 1237–1246, Apr. 2019.

[6] P. Najman and P. Zemčík, "Vehicle speed measurement using stereo camera pair," IEEE Trans. Intell. Transp. Syst., vol. 23, no. 3, pp. 2203–2214, Mar. 2022.
