Research Article  |  Open Access  |  27 Jun 2024

Intensity enhanced for solid-state-LiDAR in simultaneous localization and mapping

Complex Eng Syst 2024;4:11.
10.20517/ces.2024.06 |  © The Author(s) 2024.

Abstract

Solid-state light detection and ranging (LiDAR) has the advantages of low cost, small size, and strong practicality. However, it faces challenges in simultaneous localization and mapping (SLAM) applications due to its small field of view and irregular scanning pattern. This paper proposes a solid-state LiDAR SLAM system that incorporates intensity information. To handle the irregular scanning characteristics of solid-state LiDAR, we introduce a data preprocessing framework and add intensity feature points to the front-end odometry. This improves the accuracy and robustness of positioning when geometric feature points are scarce, addressing the feature point degradation caused by the limited field of view. In the back-end optimization stage, we combine the geometric feature residuals with the intensity feature residuals through the proposed consistency difference function, so that the system maintains good performance even in challenging environments. Finally, we evaluated the proposed algorithm extensively on official datasets and on datasets collected from multiple platforms, and the results confirm the validity of our approach. Compared with the corresponding baseline methods, the absolute trajectory error and relative pose error are reduced by 54.5% and 5.3% in indoor scenes, and by 29.6% and 58.8% in outdoor scenes, respectively.

Keywords

Solid-state LiDAR, intensity, localization, optimization

1. INTRODUCTION

Autonomous robotics is a rapidly developing field with promising applications across many domains. Simultaneous localization and mapping (SLAM) is a fundamental capability in robotics. Visual SLAM [1] solutions rely heavily on external lighting conditions to capture images and estimate the robot's pose by extracting and matching visual features; as a result, vision-based SLAM systems often fail in low-light or no-light conditions. Light detection and ranging (LiDAR), by contrast, is more robust to environmental changes, including weather and lighting conditions. LiDAR-based SLAM [2] can generate accurate maps that can be used directly for robot navigation and obstacle avoidance, making it increasingly popular in the field.

In recent years, solid-state LiDAR has garnered increasing attention and development. Compared to mechanical LiDAR, it offers significant advantages in cost, resolution, and performance. However, it also faces several challenges, particularly in SLAM applications. Firstly, its scanning pattern is often irregular, which places higher demands on data processing: unlike the regular scanning of mechanical LiDAR, irregular scanning requires more complex algorithms for point cloud processing to achieve accurate localization and map construction [3]. Secondly, the field of view of most solid-state LiDARs is relatively small, which can lead to feature matching degradation; with a limited field of view, there may be too few feature correspondences, reducing the performance of the SLAM system. Furthermore, during the continuous movement of the LiDAR, intra-frame motion distorts the point cloud data. This distortion can degrade the accuracy of the SLAM system and therefore requires compensatory measures, such as motion distortion correction.

To solve these problems, we propose in this paper an inertial SLAM method for solid-state LiDAR that incorporates intensity information. The method enables accurate, robust, real-time positioning and mapping, and can also operate effectively in challenging degraded environments. In summary, our main contributions are as follows: (a) we propose a point cloud preprocessing framework specifically designed for the irregular scanning of solid-state LiDAR, which addresses the excessive noise caused by irregular scanning and the resulting difficulty of extracting geometric feature points; (b) we propose an intensity normalization method suited to solid-state LiDAR that facilitates the subsequent use of intensity information; and (c) we present an odometry and mapping algorithm that uses intensity information as an aid, so that with richer information, localization accuracy and robustness are improved in scenarios where geometric feature points are scarce.

The rest of this article is organized as follows. In Section 2, we discuss the relevant literature. In Section 3, the proposed solid-state LiDAR-inertial odometry and mapping algorithm is introduced: Section 3.1 describes the point cloud data preprocessing framework, Section 3.2 presents the feature extraction method, and Section 3.3 presents state estimation with intensity information. The experimental studies are discussed in Section 4. Finally, Section 5 concludes the article.

2. RELATED WORKS

Limited computing resources on mobile platforms make processing large amounts of raw LiDAR data time-consuming for SLAM systems. As a solution, several LiDAR-based SLAM frameworks that rely on feature extraction have been proposed. A pioneering work in this field is LiDAR odometry and mapping (LOAM) [4], which extracts geometric edge and surface features based on the smoothness of the raw LiDAR data and enables real-time odometry and mapping [5] through point-to-plane and point-to-line matching. To speed up the computation, lightweight and ground-optimized LOAM (LeGO-LOAM) [6] extracts ground features by segmenting and clustering geometric plane points, and adopts a two-stage optimization for pose estimation. Unlike approaches that select features by sampling uniformly in space, implicit moving least squares (IMLS)-SLAM [7] employs a sampling strategy that considers the contribution of each point to the observability of the robot's state, balancing time efficiency and accuracy. Pan et al. focus on extracting several classes of geometric features from the scan data and propose an efficient registration algorithm that optimizes the pose using point-to-point, point-to-plane, and point-to-line error metrics [8].

Using LiDAR alone leads to motion distortion and a low system output frequency; combining an inertial measurement unit (IMU) with LiDAR effectively addresses these problems and significantly improves robustness and accuracy. Because an IMU typically acquires data at a much higher rate than LiDAR, scholars have proposed various methods to handle the high-rate IMU measurements. Zhang et al. [4] and Shan et al. [6] employed the IMU to de-skew point clouds and to initialize the motion estimate for scan-matching LiDAR odometry. The most straightforward approach is to use the IMU for state propagation in extended-Kalman-filter-based methods [9,10]. In graph optimization, an effective IMU preintegration technique has been introduced that parameterizes rotation errors with Euler angles to avoid redundant integration of IMU measurements [11]. Shen et al. derived the covariance propagation using continuous-time error-state dynamics [12], and Forster et al. further enhanced the preintegration theory by incorporating posterior IMU bias corrections [13,14]. LINS, by Qin et al., is a pioneering tightly coupled LiDAR-inertial odometry (LIO) approach that estimates the 6-degree-of-freedom (6-DOF) ego-motion through iterated Kalman filtering [15]. LIO-SAM (smoothing and mapping) [16] introduces a tightly coupled LiDAR-inertial SLAM framework that exploits redundant information from multiple sensors to achieve more accurate and robust positioning and mapping.

The above methods are designed for rotating LiDAR. To address the challenges posed by solid-state LiDAR, such as a limited field of view and irregular scanning patterns, Loam-livox [17] provides a software package efficiently adapted to the Livox Mid-40 LiDAR. FAST-LIO [18] uses a tightly coupled iterated extended Kalman filter to fuse LiDAR and IMU data into an efficient and accurate LiDAR-inertial odometry. Its successor, FAST-LIO2 [19], eliminates feature extraction and directly registers raw points to the map, enabling robust and efficient LiDAR localization and mapping with the ikd-Tree data structure.

CamVox [20] exploits the non-repetitive scanning mode to automatically calibrate the Livox LiDAR against a camera using a dense point cloud. To improve accuracy and computational efficiency, LiLi-OM [21] proposes a keyframe-based sliding-window optimization method for LiDAR-inertial SLAM. For optimization-based problems, Zhang et al. propose an online method that mitigates degeneracy by analyzing the geometry of the problem constraints and explain the impact of environmental degradation on state estimation [22]. In a separate study, Zhao et al. analyzed the characteristics of the Livox Mid-40 LiDAR, evaluating the intensity distribution of the laser points across the field of view, the ranging accuracy, and the effect of target surface color on the reflected intensity [23]. This analysis motivates combining geometric and intensity information to provide additional constraints for state estimation. Using the intensity readings of an imaging LiDAR, Shan et al. obtain intensity images that aid robust real-time localization [24]. Finally, Li et al. introduce two weighting functions in the optimization process to balance the geometric and intensity factors, showing strong robustness in degraded environments [25].

In recent years, in addition to extracting geometric features, there has been exploration into semantic-based SLAM[26,27]. These approaches go beyond utilizing just geometric plane and edge features; they also extract semantic information to gain a more comprehensive understanding of the surrounding environment.

3. METHODS

The proposed SLAM framework for solid-state LiDAR is shown in Figure 1, where red boxes and text mark the stages of the pipeline containing the contributions of this paper. To cope with the challenge of irregular scanning and the need for intensity points in subsequent processing, a feature point extraction algorithm based on intensity and geometric information is proposed. The framework resolves motion distortion through clutter filtering and motion compensation. First, geometric feature points are extracted based on spatial distribution to deal with the irregular scanning, and a two-stage plane feature confirmation method is adopted to ensure extraction accuracy. To counter the loss of feature points and the degradation of positioning performance caused by the small field of view of solid-state LiDAR, intensity feature points are extracted from the intensity information. Feature point matching is then carried out with a matching algorithm that incorporates intensity information, mitigating the wrong plane point matches caused by the limited field of view. Finally, the intensity point residuals are combined with the original geometric point residuals into the residual equation, yielding accurate and robust solid-state LiDAR positioning.


Figure 1. System pipeline of solid-state-LiDAR-inertial SLAM with intensity information assistance.

3.1. LiDAR data preprocessing

Generally, in solid-state LiDAR systems, to improve the accuracy and robustness of the positioning, mapping, and loop closure detection modules, it is necessary to remove outliers such as points near the blind zone of the field of view, points with excessively high or low intensity values, points with near-horizontal incidence angles, occluded points, and ground points.

Since the point cloud data within a single LiDAR frame are not captured at the same instant, intra-frame motion distorts the point cloud, causes motion blur, and produces outliers. Fast elimination of these outliers improves the accuracy of positioning and mapping. In this paper, a piecewise linear method is proposed that divides the data of each frame into three segments in chronological order. The incoming frame is split into three consecutive subframes, each of which is matched independently with the map constructed so far. For each subframe, the pose of its last point is used to project all of its points onto the global map. This reduces the effective sampling time of each frame to one third of the original, speeds up processing, and effectively compensates for the motion distortion, reducing motion blur. The segmentation also exploits the multi-core architecture of the CPU: each subframe is matched against the global map in a dedicated thread, and once matching completes, the subframe is registered as part of the global map. To further accelerate the process, a new thread receives each newly registered subframe and builds a k-d tree so that the data of the next frame can be matched efficiently.
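To make the segmentation step concrete, the following minimal sketch splits one frame into three chronological subframes. The helper name, the array layout, and the availability of per-point timestamps are our illustrative assumptions, and the per-thread map matching and k-d tree construction are omitted.

```python
import numpy as np

def split_into_subframes(points, timestamps, n_segments=3):
    """Split one LiDAR frame into consecutive subframes by acquisition time.

    points: (N, 3) array of points; timestamps: (N,) per-point times.
    Returns a list of (subframe_points, t_end) pairs, where t_end is the
    time of the last point in the subframe; the pose at t_end is used to
    project the whole subframe onto the global map.
    """
    order = np.argsort(timestamps)              # chronological order
    chunks = np.array_split(order, n_segments)  # three consecutive segments
    return [(points[idx], float(timestamps[idx[-1]])) for idx in chunks]
```

In a full pipeline, each returned subframe would then be matched against the global map in its own thread and registered into the map, as described above.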

For solid-state LiDAR, owing to its unique scanning mode, the absence of beam (ring) structure, and its higher vertical resolution, points in the vertical direction can also be considered when calculating curvature. In this paper, a region-based feature point extraction algorithm is proposed to obtain the edge and surface features of the point cloud efficiently.

The $$ k $$th LiDAR input scan frame $$ \mathcal{P}_{k} $$ is mapped into two dimensions according to the horizontal and vertical resolutions. Given a point $$ P_{i}=\{x_{i}, y_{i}, z_{i}\}\in \mathcal{P}_{k} $$ in this frame, the horizontal angle $$ \alpha_{i} $$ and vertical angle $$ \theta_{i} $$ are formulated as:

$$ \begin{equation} \alpha_i=\arctan \left(\frac{y_i}{x_i}\right) \end{equation} $$

$$ \begin{equation} \theta_i=\arctan \left(\frac{z_i}{x_i}\right) \end{equation} $$

Based on horizontal and vertical detection areas, the entire point cloud frame is divided into $$ M \times N $$ rectangular areas. The horizontal region range is $$ [\alpha_{min}, \alpha_{max}] $$, and the vertical region range is $$ [\theta_{min}, \theta_{max}] $$. For a solid-state LiDAR, assuming its horizontal resolution is $$ \alpha_{r} $$ and vertical resolution is $$ \theta_{r} $$, the formula for calculating the parameters of the rectangular area is as follows:

$$ \begin{equation} M=\frac{\alpha_{\max }-\alpha_{\min }}{\alpha_r} \end{equation} $$

$$ \begin{equation} N=\frac{\theta_{\max }-\theta_{\min }}{\theta_r} \end{equation} $$
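A minimal sketch of this projection follows, under the assumption of a forward-facing sensor for which Equations (1)-(4) apply directly; the function and variable names are ours, not from the paper.

```python
import numpy as np

def project_to_grid(points, alpha_range, theta_range, alpha_res, theta_res):
    """Map 3-D points to (m, n) cells of the M x N rectangular grid.

    alpha_range, theta_range: (min, max) horizontal/vertical FoV in radians;
    alpha_res, theta_res: angular resolutions of the sensor.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    alpha = np.arctan2(y, x)   # horizontal angle, Eq. (1)
    theta = np.arctan2(z, x)   # vertical angle, Eq. (2)
    M = int(np.ceil((alpha_range[1] - alpha_range[0]) / alpha_res))  # Eq. (3)
    N = int(np.ceil((theta_range[1] - theta_range[0]) / theta_res))  # Eq. (4)
    m = np.clip(((alpha - alpha_range[0]) / alpha_res).astype(int), 0, M - 1)
    n = np.clip(((theta - theta_range[0]) / theta_res).astype(int), 0, N - 1)
    return m, n, (M, N)
```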

3.2. Robust feature extraction

The curvature is calculated after the point cloud has been mapped into the $$ M \times N $$ rectangular regions. The local smoothness is obtained from the range differences between the current point and the points in a square search area around it. Taking the current point $$ P_k^{(m, n)} $$ as an example, the calculation formula is as follows:

$$ \begin{equation} c_k^{(m, n)}=\frac{1}{\lambda} \cdot \sum\limits_{P_k^{(i, j)} \in S_k^{(m, n)}}\left(\left\|P_k^{(i, j)}\right\|-\left\|P_k^{(m, n)}\right\|\right) \end{equation} $$

where $$ \mathcal{S}_k^{(m, n)}=\left\{\mathbf{p}_k^{(i, j)} \mid i \in[m-L, m+L], j \in[n-L, n+L]\right\} $$, $$ L $$ is the half-width of the predefined square search region, and $$ \lambda $$ is the number of points in that region. $$ c_k^{(m, n)} $$ represents the local smoothness of the point: the larger the value, the steeper the local surface, and the point is treated as an edge point; conversely, a lower value indicates a plane point.
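A minimal sketch of Equation (5), assuming the frame has already been projected into an M x N range image with empty cells stored as NaN (the image layout is our assumption):

```python
import numpy as np

def local_smoothness(range_img, m, n, L=2):
    """Local smoothness c_k^(m,n) of Eq. (5): mean range difference between
    the centre cell and its (2L+1) x (2L+1) square neighbourhood."""
    patch = range_img[max(m - L, 0):m + L + 1, max(n - L, 0):n + L + 1]
    neigh = patch[~np.isnan(patch)]   # valid points in the region
    lam = neigh.size                  # lambda in Eq. (5)
    if lam == 0:
        return 0.0
    return float(np.sum(neigh - range_img[m, n]) / lam)
```

A large returned value marks an edge candidate; a small value marks a plane candidate.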

To ensure the accuracy of the selected plane and edge features, a two-stage geometric distribution verification method is used, which decides whether a candidate feature point meets the criteria based on the geometric distribution of the point cloud. For the current point, a certain number of adjacent points is selected, their covariance matrix is computed, and its eigenvalues are obtained. With the eigenvalues arranged in ascending order as $$ \lambda_1 $$, $$ \lambda_2 $$, and $$ \lambda_3 $$: if the point set satisfies the geometry of a plane, then $$ \lambda_1 / \lambda_2<\alpha $$; if it satisfies the geometric characteristics of an edge, then $$ \lambda_2 / \lambda_3<\beta $$. In this study, $$ \alpha $$ is set to 0.22 and $$ \beta $$ to 0.2 based on experimental testing. Intensity feature extraction relies on the intensity channel of the point cloud. The intensity measurement is modeled as:

$$ \begin{equation} I=\frac{P_r}{P_e}=\frac{D_r^2 \rho}{4 R^2} \eta_{\text{sys}} \eta_{\text{atm}} \cos \theta=\eta_{\text{all}} \frac{\rho \cos \theta}{R^2} \end{equation} $$

where $$ \eta_{\text{all}} $$ is a constant that lumps together the system and atmospheric factors. The measured intensity $$ I $$ therefore depends only on the surface reflectivity $$ \rho $$, the incidence angle $$ \theta $$, and the measurement distance $$ R $$, so the reflectivity can be recovered up to a constant as:

$$ \begin{equation} \rho \propto \frac{I R^2}{\cos \theta} \end{equation} $$
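The sketch below illustrates the two-stage eigenvalue verification and the range/angle compensation of Equation (7). The thresholds follow the values given above, while the function names and the numerical guards are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def classify_by_eigenvalues(neighbors, alpha=0.22, beta=0.2, eps=1e-12):
    """Second-stage geometric verification via covariance eigenvalues.

    neighbors: (K, 3) points around the candidate feature point.
    With ascending eigenvalues l1 <= l2 <= l3, the set is a plane if
    l1 / l2 < alpha and an edge if l2 / l3 < beta.
    """
    cov = np.cov(neighbors.T)
    l1, l2, l3 = np.linalg.eigvalsh(cov)  # returned in ascending order
    if l1 / (l2 + eps) < alpha:
        return "plane"
    if l2 / (l3 + eps) < beta:
        return "edge"
    return "none"

def normalize_intensity(I, R, cos_theta, eps=1e-6):
    """Range- and angle-compensated intensity, proportional to the surface
    reflectivity rho according to Eq. (7)."""
    return I * R**2 / np.maximum(cos_theta, eps)
```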

Before extracting the intensity features, the intensity values of the point cloud must be normalized so that a unified intensity measure is obtained and intensity differences can be compared directly. When a point is identified as a plane point by the two-stage spatial feature determination, its incidence angle is computed from the normal vector of the plane. From the plane point set, the point $$ P_a $$ farthest from the solid-state LiDAR origin is selected, followed by the point $$ P_b $$ farthest from $$ P_a $$. The third point $$ P_c $$ is then determined by the following geometric formula:

$$ \begin{equation} \mathbf{p}_c=\underset{\mathbf{p}_i \in \mathbb{P}}{\operatorname{argmax}}\left\|\left(\mathbf{p}_a-\mathbf{p}_b\right) \times\left(\mathbf{p}_a-\mathbf{p}_i\right)\right\|_2 \end{equation} $$

Planes can be represented by plane equations:

$$ \begin{equation} \Pi: n_x \cdot x+n_y \cdot y+n_z \cdot z+\bar{d}_i=0 \end{equation} $$

where $$ n_x $$, $$ n_y $$, $$ n_z $$, and $$ \bar{d}_i $$ are the parameters of the plane, and the calculation formula is as follows:

$$ \begin{equation} \overline{\mathbf{n}}_i=\left[n_x, n_y, n_z\right]^T=\frac{\left(\mathbf{p}_a-\mathbf{p}_b\right) \times\left(\mathbf{p}_a-\mathbf{p}_c\right)}{\left\|\left(\mathbf{p}_a-\mathbf{p}_b\right) \times\left(\mathbf{p}_a-\mathbf{p}_c\right)\right\|_2} \end{equation} $$

$$ \begin{equation} \bar{d}_i=-\left(n_x \cdot p_x+n_y \cdot p_y+n_z \cdot p_z\right) \end{equation} $$

$$ \begin{equation} \overline{\mathbf{p}}=\left[p_x, p_y, p_z\right]^T=\frac{1}{m} \sum\limits_{\mathbf{p}_i \in \mathbb{P}} \mathbf{p}_i \end{equation} $$

where $$ \overline{\mathbf{n}}_i $$ is the unit normal vector of the plane, and $$ \overline{\mathbf{p}} $$ is the center of the plane. The formula for calculating the incidence angle of the point is as follows:

$$ \begin{equation} \cos \theta=\frac{\mathbf{p}^T \cdot \mathbf{n}}{|\mathbf{p}|} \end{equation} $$
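A minimal sketch of Equations (8)-(13), recovering the plane parameters from the three extremal points and the incidence-angle cosine of every point in the set; the absolute value on the cosine is our own addition, to ignore the sign of the normal.

```python
import numpy as np

def plane_and_incidence(plane_pts):
    """Fit the plane of Eqs. (9)-(12) from three extremal points of a plane
    point set and return the incidence-angle cosines of Eq. (13)."""
    P = np.asarray(plane_pts, dtype=float)
    p_a = P[np.argmax(np.linalg.norm(P, axis=1))]        # farthest from origin
    p_b = P[np.argmax(np.linalg.norm(P - p_a, axis=1))]  # farthest from p_a
    cross = np.cross(p_a - p_b, p_a - P)                 # Eq. (8)
    p_c = P[np.argmax(np.linalg.norm(cross, axis=1))]
    n = np.cross(p_a - p_b, p_a - p_c)
    n /= np.linalg.norm(n)                               # unit normal, Eq. (10)
    centroid = P.mean(axis=0)                            # plane centre, Eq. (12)
    d = -float(n @ centroid)                             # offset, Eq. (11)
    cos_theta = np.abs(P @ n) / np.linalg.norm(P, axis=1)  # Eq. (13)
    return n, d, cos_theta
```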

To avoid redundant point selection based on both intensity and geometric criteria, only the normalized intensity information of plane points is considered when calculating the incidence angle. After intensity calibration, the intensity value reflects the reflectivity of the point and thus indicates its material properties. The point cloud after intensity processing is shown in Figure 2.


Figure 2. Intensity normalization and feature extraction. The upper left corner shows the distribution before intensity normalization; the upper right corner, the distribution after intensity normalization; the lower left corner, the feature extraction result; and the lower right corner, the real scene captured by a camera.

The previous subsection detailed the intensity preprocessing steps and the mapping of point cloud frames onto a two-dimensional grid. Building on the geometric surface smoothness, intensity smoothness is introduced as a measure of the local intensity distribution of the surface. Taking the current point $$ P_k^{(m, n)} $$ as an example, the intensity smoothness is calculated by:

$$ \begin{equation} c_{ik}^{(m, n)}=\frac{1}{\lambda} \cdot \sum\limits_{I_k^{(i, j)} \in S_k^{(m, n)}}\left(\left|I_k^{(i, j)}\right|-\left|I_k^{(m, n)}\right|\right) \end{equation} $$
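Equation (14) reuses the same neighbourhood statistic on the normalized intensity channel; a brief sketch under the same NaN-padded image assumption as before:

```python
import numpy as np

def intensity_smoothness(intensity_img, m, n, L=2):
    """Intensity smoothness c_ik^(m,n) of Eq. (14): mean difference between
    the centre cell and its square neighbourhood in the intensity image."""
    patch = intensity_img[max(m - L, 0):m + L + 1, max(n - L, 0):n + L + 1]
    neigh = patch[~np.isnan(patch)]
    if neigh.size == 0:
        return 0.0
    return float(np.sum(neigh - intensity_img[m, n]) / neigh.size)
```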

The diagram in Figure 2 illustrates the process of feature extraction. Blue denotes geometric plane feature points, yellow represents geometric edge feature points, and red indicates intensity edge feature points.

3.3. State estimation

For geometric feature points, the matching method follows Li et al. [21]. Unlike geometric information, however, intensity has a different physical dimension and cannot be fused with geometry on the raw data, so both must be brought to a common scale. Since the negative exponential function maps non-negative inputs into the interval (0, 1], a consistency difference function is designed that integrates the intensity information with the geometric information. Under the same geometric consistency, the closer the intensities of two points, the higher their correspondence. The consistency difference function for a match is:

$$ \begin{equation} w_i^o=e^{-\left(d+d^I\right)} \end{equation} $$

$$ \begin{equation} d=\left|\hat{\mathbf{p}}_k-\mathbf{p}^I\right| \end{equation} $$

$$ \begin{equation} d^I=\|I-\bar{I}\| \end{equation} $$

The overall consistency difference function $$ w_i^o $$ combines the geometric consistency term $$ d $$ and the intensity consistency term $$ d^I $$, and is used when computing intensity residuals in a manner analogous to the geometric residuals. Each plane point in the current frame $$ \mathbf{p}_k \in P_k $$ is mapped to the local coordinate system using:

$$ \begin{equation} \hat{\mathbf{p}}_k=\mathbf{T}_k \cdot \mathbf{p}_k \end{equation} $$

where $$ \hat{\mathbf{p}}_k $$ denotes the point mapped to the local map, and $$ \mathbf{T}_k $$ represents the pose change estimated from the previous frame. For intensity feature points, the point $$ \mathbf{p}^I $$ with the smallest consistency cost in the local intensity map is selected, and the intensity residual $$ f_I\left(\hat{\mathbf{p}}_k\right) $$ is computed using:

$$ \begin{equation} f_I\left(\hat{\mathbf{p}}_k\right)=\left|\hat{\mathbf{p}}_k-\mathbf{p}^I\right| \end{equation} $$

The final residual equations of the system include geometric edge point residuals $$ f_{\mathcal{E}}\left(\hat{\mathbf{p}}_i\right) $$, geometric planar point residuals $$ f_{\mathcal{S}}\left(\hat{\mathbf{p}}_i\right) $$, and intensity feature point residuals $$ f_I\left(\hat{\mathbf{p}}_i\right) $$:

$$ \begin{equation} \mathbf{T}^*=\underset{\mathbf{T}}{\arg \min } \sum\limits_{\mathbf{p}_i \in \mathcal{P}_{\mathcal{E}}}\left|f_{\mathcal{E}}\left(\hat{\mathbf{p}}_i\right)\right|+\sum\limits_{\mathbf{p}_i \in \mathcal{P}_{\mathcal{S}}}\left|f_{\mathcal{S}}\left(\hat{\mathbf{p}}_i\right)\right|+\sum\limits_{\mathbf{p}_i \in \mathcal{P}_I}\left|f_I\left(\hat{\mathbf{p}}_i\right)\right| \end{equation} $$
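To illustrate how the consistency difference function guides correspondence selection, the sketch below scores candidate matches with Equations (15)-(17) and returns the intensity residual of Equation (19). Candidate generation (the nearest-neighbour search in the local intensity map) is omitted, and all names are illustrative assumptions.

```python
import numpy as np

def consistency_weight(p_hat, p_cand, I, I_cand):
    """Consistency difference function of Eqs. (15)-(17): geometric distance
    and intensity difference fused by a negative exponential, so candidates
    agreeing in both geometry and intensity score close to 1."""
    d = np.linalg.norm(p_hat - p_cand)   # geometric term, Eq. (16)
    d_I = abs(I - I_cand)                # intensity term, Eq. (17)
    return float(np.exp(-(d + d_I)))     # Eq. (15)

def intensity_residual(p_hat, candidates, I, cand_intensities):
    """Select the candidate with the highest consistency weight (i.e. the
    lowest consistency cost) and return the residual of Eq. (19)."""
    weights = [consistency_weight(p_hat, c, I, Ic)
               for c, Ic in zip(candidates, cand_intensities)]
    best = candidates[int(np.argmax(weights))]
    return float(np.linalg.norm(p_hat - best))
```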

4. RESULTS

To validate the accuracy and robustness of the proposed solid-state LiDAR localization algorithm, this section conducts an experimental evaluation on both publicly available and self-collected datasets. The computational cost and evaluation metrics of the different algorithms were measured on a laptop equipped with an AMD six-core 5800H processor and an Nvidia 3050 graphics card.

To quantitatively evaluate the accuracy of the algorithm, two widely used SLAM evaluation metrics, absolute trajectory error (ATE) and relative pose error (RPE), are introduced. ATE provides an intuitive measure of overall positioning accuracy, while RPE compares relative pose increments rather than absolute poses, providing insight into local accuracy. Together, the two metrics effectively measure the accuracy and robustness of the algorithm.
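As a reference for how these two metrics are computed, the following simplified sketch evaluates the translational parts only and omits the SE(3) trajectory alignment that a full evaluation toolchain would perform.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Absolute trajectory error: RMSE of position differences between
    time-synchronized estimated and ground-truth trajectories."""
    err = np.linalg.norm(est_xyz - gt_xyz, axis=1)
    return float(np.sqrt(np.mean(err**2)))

def rpe_rmse(est_xyz, gt_xyz, delta=1):
    """Relative pose error (translational part): RMSE of the difference
    between estimated and ground-truth increments over a fixed step delta."""
    d_est = est_xyz[delta:] - est_xyz[:-delta]
    d_gt = gt_xyz[delta:] - gt_xyz[:-delta]
    err = np.linalg.norm(d_est - d_gt, axis=1)
    return float(np.sqrt(np.mean(err**2)))
```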

The selection of parameters is based on a thorough understanding of the algorithm and on experiments across various scenarios. The robustness of the algorithm is verified through systematic experiments in different environmental settings, and comparative experiments with existing algorithms demonstrate the advantages of our method. Following Zhao et al. [28], we test the proposed method in three scenarios.

We selected three scenarios to evaluate our approach. Scenario 1 captures the indoor environment of a warehouse using an automated guided vehicle (AGV) equipped with an Intel RealSense L515 solid-state LiDAR. Scenario 2 records an outdoor environment around a university campus library using an AGV carrying a Livox-Xsens suite. Scenario 3 is a degenerate scene recorded in an indoor hallway. To showcase the effectiveness of our method, we compare it with several established LiDAR SLAM algorithms: solid-state LiDAR SLAM (SSL-SLAM), LiLi-OM, and FAST-LIO.

In Scene 1, the ground truth is provided by a VICON system. The indoor dataset is the public dataset accompanying the SSL-SLAM project, acquired with an Intel RealSense L515 solid-state LiDAR. SSL-SLAM is the only open-source algorithm applicable to the L515; the other algorithms are not applicable, so we compare only against it. The localization results are evaluated against the ground truth and SSL-SLAM, and the experimental findings are presented in Figure 3. SSL-SLAM exhibits a significant deviation from the ground truth at 35 s, whereas the algorithm proposed in this study deviates only during the final turn. Towards the end of the experiment, the robot platform moves backward and then forward; for such short trajectory segments, SSL-SLAM fails to provide accurate localization, whereas our algorithm closely follows the ground truth. The error comparison is detailed in Table 1: the proposed algorithm demonstrates superior accuracy and robustness in indoor scenarios thanks to the introduced intensity information. Indoor scenes in particular contain less interference and more regular structure, so the accuracy of the extracted features improves markedly, reflecting the advantages of the proposed algorithm. The ATE and RPE are reduced by 54.5% and 5.3%, respectively.

Figure 3. Experimental results. The first column shows the localization and mapping performance in Scenes 1, 2, and 3, from top to bottom; the second column shows the trajectory plot in each Scene; the third column compares the three-axis positions in each Scene.

Table 1. Experimental error evaluation in Scene 1

Index | SSL-SLAM | Proposed
ATE | 0.33 m | 0.15 m
RPE | 0.019 m | 0.018 m

Scene 2 is outdoors. Because the outdoor dataset was collected with a Livox solid-state LiDAR, the SSL-SLAM algorithm is not applicable and is therefore excluded from this comparison. A Livox Avia LiDAR is used in synchronization with an Xsens MTi-G-700 IMU (six-axis, 150 Hz), and the ground truth is provided by real-time kinematic (RTK) positioning. As the result figure shows, compared with the advanced LiLi-OM and FAST-LIO algorithms, the proposed algorithm has smaller errors and higher positioning accuracy and robustness. The error comparison is given in Table 2. Outdoor scenes are much more complex than indoor ones; in the back-end optimization stage, the geometric and intensity feature residuals are combined to enhance the robustness of the system, and the experimental results show the importance of this step for outdoor scenes. The ATE and RPE are reduced by 29.6% and 58.8%, respectively.

Table 2. Experimental error evaluation in Scene 2

Index | LiLi-OM | FAST-LIO | Proposed
ATE | 49.44 m | 10.88 m | 7.66 m
RPE | 0.65 m | 0.34 m | 0.14 m

Scene 3 is a challenging scenario with an actual path length of 274 m and a duration of 417 s. The LiDAR is directed toward the ceiling, where there are hardly any geometric edge points. The other two algorithms produce output only at the beginning of the run; as the scene degenerates, their localization fails one after another, so no further data are available. Specifically, LiLi-OM and FAST-LIO fail at 65 s and 240 s, respectively, due to the lack of geometric feature points on the ceiling, whereas the proposed algorithm maintains localization over the full 417 s with an acceptable error. The error comparison is shown in Table 3. In this challenging environment, the algorithm keeps the error within an acceptable range where the other two algorithms fail. Because intensity information is added and the geometric and intensity feature residuals are fused in the residual term, the proposed method can still achieve good positioning in a degenerate environment with few features.

Table 3. Experimental error evaluation in Scene 3 (– indicates localization failure)

Index | LiLi-OM | FAST-LIO | Proposed
ATE | – | – | 0.23 m
RPE | – | – | 0.14 m

Regarding the complexity of the algorithm, we conducted a comprehensive analysis. All algorithms were tested on a laptop equipped with an AMD six-core 5800H processor and an Nvidia 3050 graphics card. We recorded the CPU usage of each method during execution and averaged it to evaluate the computational complexity, with the results summarized in Table 4. In Scene 1, only SSL-SLAM is applicable, and it occupies roughly the same proportion of resources as the proposed method. In Scene 2, LiLi-OM, FAST-LIO, and the proposed method all complete the localization task; the resource share of the proposed method is slightly higher because its algorithmic complexity is slightly higher and the scene contains many feature points that demand more computation. In Scene 3, LiLi-OM and FAST-LIO fail to complete the task, stopping after running only briefly at the beginning, so their resource shares are very small. The proposed method also uses fewer resources than in Scene 2 owing to the smaller number of feature points.

Table 4. The computational complexity (average CPU usage) of each method (– indicates the method was not applicable or failed)

Index | SSL-SLAM | LiLi-OM | FAST-LIO | Proposed
Scene 1 | 16% | – | – | 17%
Scene 2 | – | 20% | 19% | 23%
Scene 3 | – | 2.3% | 2.1% | 21%

Despite the introduction of new computational steps, the overall complexity of our algorithm does not increase significantly, thanks to careful optimization of the data processing workflow. In complex environments, the algorithm converges quickly, owing to an adaptive feature extraction mechanism that allows it to adapt to environmental changes and integrate new observations faster.

5. CONCLUSIONS

A localization and mapping algorithm for solid-state LiDAR is proposed. To address the irregular scanning mode of solid-state LiDAR, a geometric feature extraction strategy tailored to solid-state LiDAR is introduced. To mitigate the feature point loss and degeneration caused by the limited field of view, an intensity normalization method is designed, and an intensity-based feature point extraction and matching algorithm is proposed. During the pose update, the intensity point residuals are introduced and combined with the geometric point residuals to form a complete residual equation, yielding accurate and robust solid-state LiDAR positioning. Tests on open-source and self-collected datasets validate the effectiveness and reliability of the algorithm, which is more accurate and robust than state-of-the-art solid-state LiDAR localization algorithms. Compared with the corresponding baseline methods, the ATE and RPE are reduced by 54.5% and 5.3% in indoor scenes, and by 29.6% and 58.8% in outdoor scenes, respectively. The proposed method adapts well to indoor and outdoor scenes, but only to relatively static ones. In addition, because of the small field of view of solid-state LiDAR, the proposed algorithm achieves relatively good positioning, yet trailing artifacts can appear during map construction, leading to accumulated positioning error. These are areas for future improvement.

DECLARATIONS

Acknowledgments

A preliminary version of this work was presented at the 3rd ICAUS 2023 (Nanjing, China, September 2023, pp. 129-39), and all authors agree to submit and publish it in this new article.

Authors' contributions

Contributed to the study conception and design, performed material preparation, data collection, and analysis, and commented on previous versions of the manuscript: Tong X, Li J, Zhao C

The first draft of the manuscript was written: Tong X

All authors read and approved the final manuscript.

Availability of data and materials

Not applicable.

Financial support and sponsorship

This work was supported by NSFC No. 62073264.

Conflicts of interest

Not applicable.

Ethical approval and consent to participate

Not applicable.

Consent for publication

A preliminary version of this work was presented at the 3rd ICAUS 2023 (Nanjing, China, September 2023, pp. 129-39), and all authors agree to submit and publish it in this new article.

Copyright

© The Author(s) 2024.

REFERENCES

1. Taketomi T, Uchiyama H, Ikeda S. Visual SLAM algorithms: a survey from 2010 to 2016. IPSJ Trans Comput Vis Appl 2017;9:16.

2. Labbé M, Michaud F. RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation. J Field Robot 2018;36:416-46.

3. Jiao J, Zhu Y, Ye H, et al. Greedy-based feature selection for efficient LiDAR SLAM. In: 2021 IEEE International Conference on Robotics and Automation (ICRA); 2021, pp. 5222-8.

4. Zhang J, Singh S. LOAM: lidar odometry and mapping in real-time. In: Robotics: Science and Systems; 2014. Available from: https://api.semanticscholar.org/CorpusID:18612391 [Last accessed on 26 Jun 2024].

5. Low KL. Linear least-squares optimization for point-to-plane ICP surface registration. 2004. Available from: https://api.semanticscholar.org/CorpusID:122873316 [Last accessed on 26 Jun 2024].

6. Shan T, Englot B. LeGO-LOAM: lightweight and ground-optimized lidar odometry and mapping on variable terrain. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2018, pp. 4758-65.

7. Deschaud JE. IMLS-SLAM: scan-to-model matching based on 3D data. In: 2018 IEEE International Conference on Robotics and Automation (ICRA); 2018, pp. 2480-5.

8. Pan Y, Xiao P, He Y, Shao Z, Li Z. MULLS: versatile LiDAR SLAM via multi-metric linear least square. In: 2021 IEEE International Conference on Robotics and Automation (ICRA); 2021, pp. 11633-40.

9. Weiss S, Achtelik MW, Lynen S, Chli M, Siegwart R. Real-time onboard visual-inertial state estimation and self-calibration of MAVs in unknown environments. In: 2012 IEEE International Conference on Robotics and Automation; 2012, pp. 957-64.

10. Mourikis AI, Roumeliotis SI. A multi-state constraint Kalman filter for vision-aided inertial navigation. In: Proceedings 2007 IEEE International Conference on Robotics and Automation; 2007, pp. 3565-72.

11. Lupton T, Sukkarieh S. Visual-inertial-aided navigation for high-dynamic motion in built environments without initial conditions. IEEE Trans Robot 2012;28:61-76.

12. Shen S, Michael N, Kumar V. Tightly-coupled monocular visual-inertial fusion for autonomous flight of rotorcraft MAVs. In: 2015 IEEE International Conference on Robotics and Automation (ICRA); 2015, pp. 5303-10.

13. Forster C, Carlone L, Dellaert F, Scaramuzza D. On-manifold preintegration for real-time visual-inertial odometry. IEEE Trans Robot 2017;33:1-21.

14. Forster C, Carlone L, Dellaert F, Scaramuzza D. IMU preintegration on manifold for efficient visual-inertial maximum-a-posteriori estimation. In: Robotics: Science and Systems; 2015.

15. Qin C, Ye H, Pranata CE, Han J, Zhang S, Liu M. LINS: a lidar-inertial state estimator for robust and efficient navigation. In: 2020 IEEE International Conference on Robotics and Automation (ICRA); 2020, pp. 8899-906.

16. Shan T, Englot B, Meyers D, Wang W, Ratti C, Rus D. LIO-SAM: tightly-coupled lidar inertial odometry via smoothing and mapping. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2020, pp. 5135-42.

17. Lin J, Zhang F. Loam livox: a fast, robust, high-precision lidar odometry and mapping package for lidars of small FoV. In: 2020 IEEE International Conference on Robotics and Automation (ICRA); 2020, pp. 3126-31.

18. Xu W, Zhang F. FAST-LIO: a fast, robust lidar-inertial odometry package by tightly-coupled iterated Kalman filter. IEEE Robot Autom Lett 2021;6:3317-24.

19. Xu W, Cai Y, He D, Lin J, Zhang F. FAST-LIO2: fast direct lidar-inertial odometry. IEEE Trans Robot 2022;38:2053-73.

20. Zhu Y, Zheng C, Yuan C, Huang X, Hong X. CamVox: a low-cost and accurate lidar-assisted visual slam system. In: 2021 IEEE International Conference on Robotics and Automation (ICRA); 2021, pp. 5049-55.

21. Li K, Li M, Hanebeck UD. Towards high-performance solid-state-lidar-inertial odometry and mapping. IEEE Robot Autom Lett 2021;6:5167-74.

22. Zhang J, Kaess M, Singh S. On degeneracy of optimization-based state estimation problems. In: 2016 IEEE International Conference on Robotics and Automation (ICRA); 2016, pp. 809-16.

23. Zhao Y, Huang K, Lu H, Xiao J. Extrinsic calibration of a small FoV LiDAR and a camera. In: 2020 Chinese Automation Congress (CAC); 2020, pp. 3915-20.

24. Shan T, Englot B, Duarte F, Ratti C, Rus D. Robust place recognition using an imaging lidar. In: 2021 IEEE International Conference on Robotics and Automation (ICRA); 2021, pp. 5469-75.

25. Li H, Tian B, Shen H, Lu J. An intensity-augmented LiDAR-inertial SLAM for solid-state LiDARs in degenerated environments. IEEE Trans Instrum Meas 2022;71:1-10.

26. Chen X, Milioto A, Palazzolo E, Giguère P, Behley J, Stachniss C. SuMa++: efficient LiDAR-based semantic SLAM. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2019, pp. 4530-7.

27. Chen G, Wang B, Wang X, Deng H, Wang B, Zhang S. PSF-LO: parameterized semantic features based lidar odometry. In: 2021 IEEE International Conference on Robotics and Automation (ICRA); 2021, pp. 5056-62.

28. Zhao C, Li J, Chen A, Lyu Y, Hua L. Intensity augmented solid-state-LiDAR-inertial SLAM. In: Proceedings of the 3rd 2023 International Conference on Autonomous Unmanned Systems (3rd ICAUS 2023); pp. 129-39. Available from: https://link.springer.com/chapter/10.1007/978-981-97-1103-1_12 [Last accessed on 26 Jun 2024].


About This Article

© The Author(s) 2024. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
