Accurate localization of EV charging stations is crucial for enabling autonomous vehicles to perform reliable charging operations, which forms the foundation of uninterrupted service. However, existing methods, such as template matching and deep learning, face significant challenges. Template matching often fails under perspective variations, while deep learning approaches struggle with real-time performance due to computational complexity. Additionally, the diversity in EV charging station designs and complex environmental conditions further complicate the localization process. To address these issues, we propose an enhanced ORB feature matching algorithm that incorporates deblurring techniques and color-invariant processing, ensuring scale invariance and improved robustness in dynamic scenarios.
Our approach begins with a preprocessing stage that tackles motion-induced blur and noise. We employ a multi-scale pyramid combined with fuzzy layer segmentation to handle non-uniform blur effectively. This method decomposes the image into blurred and non-blurred regions, estimating a non-binary mask and applying adaptive deconvolution. The deblurring objective function is defined as:
$$ \arg \min_{L_i} \|k_{fi} \otimes L_i – B_{fi}\|^2 + \lambda \| \Delta L_i \|_0 $$
and for the blur kernel:
$$ \arg \min_{k_{fi}} \|k_{fi} \otimes L_i – B_{fi}\| + \gamma \|k_{fi}\|^2 + f(k_{fi}) $$
where \( L_i \) represents the latent sharp image, \( k_{fi} \) is the blur kernel, \( B_{fi} \) denotes the blurred layer, and \( \lambda \), \( \gamma \) are regularization parameters. This process significantly enhances image clarity, facilitating better feature extraction for EV charging station identification.

Following deblurring, we introduce a color invariant model based on the Kubelka-Munk theory to handle illumination variations and enhance feature distinctiveness. The spectral radiance model is expressed as:
$$ E(\lambda, x) = i(x)[1 – \rho_f(x)]^2 R_{\infty}(\lambda, x) + i(x)\rho_f(x) $$
where \( E(\lambda, x) \) is the spectral reflectance, \( i(x) \) is the intensity, \( \rho_f(x) \) is the Fresnel coefficient, and \( R_{\infty}(\lambda, x) \) is the reflectance. The color invariant \( H \) is derived as:
$$ H = \frac{E_\lambda}{E_{\lambda\lambda}} = \frac{\partial E / \partial \lambda}{\partial^2 E / \partial \lambda^2} = \frac{\partial R_{\infty}(\lambda, x) / \partial \lambda}{\partial^2 R_{\infty}(\lambda, x) / \partial \lambda^2} $$
Transforming this into RGB space using a linear transformation:
$$ \begin{bmatrix} E \\ E_\lambda \\ E_{\lambda\lambda} \end{bmatrix} = \begin{bmatrix} 0.06 & 0.63 & 0.27 \\ 0.30 & 0.04 & -0.35 \\ 0.34 & -0.06 & 0.17 \end{bmatrix} \times \begin{bmatrix} R \\ G \\ B \end{bmatrix} $$
This model allows us to compute color invariants that are robust to lighting changes, critical for consistent EV charging station recognition across different environments.
To achieve scale invariance, we construct a scale space using integral images and box filters. The integral image \( I_{\sum}(x,y) \) is defined as:
$$ I_{\sum}(x,y) = \sum_{i=0}^{i<x} $$=""
This enables efficient computation of regional sums. The scale space is built by convolving the color invariant image with Gaussian filters:
$$ L(x,y,\sigma) = G(x,y,\sigma) * I(x,y) $$
where \( \sigma \) is the scale parameter. The Fast-Hessian matrix is then used for extremum detection:
$$ H_F = \begin{bmatrix} D_{xx}(x,y,\sigma) & D_{xy}(x,y,\sigma) \\ D_{yx}(x,y,\sigma) & D_{yy}(x,y,\sigma) \end{bmatrix} $$
Here, \( D_{xx}, D_{xy}, D_{yy} \) are approximations of second-order Gaussian derivatives obtained via box filters. The determinant of this matrix identifies feature points:
$$ \text{Det}(H) = D_{xx} \cdot D_{yy} – (\omega D_{xy})^2 $$
with \( \omega \approx 0.9 \) as a compensation factor. This approach ensures that our algorithm can handle scale variations commonly encountered when approaching an EV charging station.
For feature description, we use the rBRIEF descriptor, which provides rotation invariance. The binary test function is defined as:
$$ \tau[p;x,y] = \begin{cases} 1 & \text{if } p(x) > p(y) \\ 0 & \text{otherwise} \end{cases} $$
and the oriented descriptor is computed as:
$$ g_n(p, \theta) = f_n(p) \mid (x_i, y_i) \in S_\theta $$
where \( S_\theta \) is the rotated point set. Matching is performed using Hamming distance, followed by an accelerated RANSAC algorithm to remove outliers. The evaluation function for inlier selection is:
$$ F(i) = \sum_{j=1}^{c} \frac{R(i,j)}{1 + Y(i,j)} $$
with
$$ R(i,j) = \exp\left(-\frac{l(A_i, A_i) – l(B_j, B_j)}{Y(i,j)}\right) $$
and
$$ Y(i,j) = [l(A_i, A_i) + l(B_j, B_j)] / 2 $$
This ensures robust matching even in cluttered backgrounds around EV charging stations.
Finally, for pose estimation, we use the PnP algorithm with known 3D coordinates of feature points on the EV charging station. The mapping between template and test images is given by:
$$ \begin{bmatrix} x’ \\ y’ \\ 1 \end{bmatrix} = H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} $$
where \( H \) is the homography matrix. The 3D points are defined based on the physical dimensions of the EV charging station, allowing us to compute the relative pose between the camera and the station.
To validate our method, we conducted extensive experiments comparing it with traditional ORB and SIFT algorithms. The deblurring performance was evaluated using PSNR and SSIM metrics, as shown in the table below:
| Method | PSNR (dB) | SSIM |
|---|---|---|
| DeblurGAN-v2 | 32.52 | 0.92 |
| Our Method | 36.15 | 0.95 |
Our deblurring approach significantly outperforms existing methods, providing clearer images for feature extraction. Additionally, we tested the robustness under different blur types, as summarized below:
| Blur Type | Simulated Scenario | PSNR (dB) | SSIM |
|---|---|---|---|
| Gaussian Blur | Uniform blur, σ=3.0 | 36.21 | 0.94 |
| Motion Blur | Non-uniform blur, l=15.0 | 34.57 | 0.96 |
| Mixed Blur | Complex environment | 30.43 | 0.89 |
For feature matching, we compared the number of detected features, correct match rate, and time consumption across different scenarios. The results demonstrate that our algorithm consistently achieves higher match rates and better distribution of features, essential for reliable EV charging station localization.
| Scenario | Algorithm | Feature Count | Time (s) | Correct Rate (%) |
|---|---|---|---|---|
| Scale Change | ORB | 152 | 0.26 | 82.70 |
| Scale Change | SIFT | 247 | 0.45 | 96.40 |
| Scale Change | Our Method | 398 | 0.57 | 95.10 |
| Rotation + Scale | ORB | 223 | 0.39 | 85.30 |
| Rotation + Scale | SIFT | 280 | 0.53 | 93.70 |
| Rotation + Scale | Our Method | 386 | 0.48 | 90.80 |
| Viewpoint Change | ORB | 361 | 0.46 | 80.20 |
| Viewpoint Change | SIFT | 570 | 0.55 | 98.60 |
| Viewpoint Change | Our Method | 916 | 0.61 | 94.70 |
| Illumination Change | ORB | 282 | 0.24 | 84.80 |
| Illumination Change | SIFT | 494 | 0.49 | 92.50 |
| Illumination Change | Our Method | 844 | 0.40 | 93.70 |
In terms of localization accuracy for EV charging stations, we tested our algorithm at various positions relative to the camera. The table below shows the actual positions, computed positions, and errors in millimeters:
| Test | Actual Position (mm) | Computed Position (mm) | Error (mm) |
|---|---|---|---|
| 1 | (0, 50, 700) | (4.31, 49.4, 687.6) | (-4.31, 0.6, 12.4) |
| 2 | (-250, 50, 700) | (-228.5, 46.04, 685.84) | (-21.45, 3.96, 14.16) |
| 3 | (300, 50, 1000) | (277.95, 14.07, 974.77) | (22.05, 8.93, 25.23) |
| 4 | (0, 50, 400) | (-2.25, 37.31, 377.32) | (2.25, 12.69, 22.68) |
| 5 | (-350, 50, 1300) | (-335.8, 43.39, 1270.72) | (-14.2, 6.61, 29.28) |
| 6 | (250, 50, 400) | (370.71, 19.19, 378.65) | (29.29, 30.81, 21.65) |
The results indicate that our method maintains errors within 30 mm in most cases, demonstrating its suitability for precise EV charging station localization. The integration of color invariants and multi-scale features effectively addresses challenges such as texture simplicity and environmental variability, ensuring robust performance in real-world scenarios.
In conclusion, our proposed algorithm significantly enhances the ORB framework by incorporating deblurring, color invariants, and scale invariance. This leads to improved feature matching accuracy and reliability for EV charging station localization, enabling autonomous vehicles to perform efficient and accurate charging operations. Future work will focus on optimizing computational efficiency and extending the approach to handle more dynamic environments and diverse EV charging station designs.
