Research Article  |  Open Access  |  12 May 2024

Towards environment perception for walking-aid robots: an improved staircase shape feature extraction method

Intell Robot 2024;4(2):179-95.
10.20517/ir.2024.11 |  © The Author(s) 2024.

Abstract

This paper introduces an innovative staircase shape feature extraction method for walking-aid robots to enhance environmental perception and navigation. We present a robust method for accurate feature extraction of staircases under various conditions, including restricted viewpoints and dynamic movement. Utilizing depth camera-mounted robots, we transform the three-dimensional (3D) environmental point cloud into two-dimensional (2D) representations, focusing on identifying both convex and concave corners. Our approach integrates the Random Sample Consensus algorithm with K-Nearest Neighbors (KNN)-augmented Iterative Closest Point (ICP) for efficient point cloud registration. The results show an improvement in trajectory accuracy, with errors within the centimeter range. This work overcomes the limitations of previous approaches and is of great significance for improving the navigation and safety of walking-aid robots, providing new possibilities for enhancing the autonomy and mobility of individuals with physical disabilities.

Keywords

Walking-aid robots, environment perception, staircase recognition, computer vision, feature extraction

1. INTRODUCTION

Understanding the environment is crucial for robots to traverse complex terrains, especially uneven terrains involving stairs[1-3]. Accurately perceiving and recognizing the shape features of staircases remains a significant challenge for walking-aid robots and significantly influences their autonomy and safety. This paper presents an innovative and robust staircase shape feature extraction method to enhance the environment perception abilities of walking-aid robots.

In recent years, the development of robotic systems has increasingly focused on assisting individuals in various daily activities[4-7], notably in mobility assistance[8-21]. Walking-aid robots have emerged as important tools, helping individuals with limited mobility to walk across various terrains. However, their seamless traversal of stairs still requires improvement. The ability of these robots to help people successfully navigate stairs depends on accurate detection and negotiation of staircases, emphasizing the importance of efficient environment perception.

Plenty of research has been conducted on staircase detection using images or point clouds. Earlier approaches, such as those by Cong et al.[22] and Murakami et al.[23], utilized image-based methods, including edge detection on RGB images. However, these methods have limitations in low-light conditions and lack depth information. Point clouds, often collected from Light Detection and Ranging (LiDAR) sensors and depth cameras, have been utilized to overcome these issues. The depth point cloud helps accurately estimate the geometry and location of staircases, which is crucial for navigation. Various techniques, including variations of the Random Sample Consensus (RANSAC) algorithm, have been used extensively for plane segmentation and staircase detection[24]. Recent advancements in stair detection involve a deep learning-based end-to-end method that treats stair line detection as a multi-task problem combining coarse-grained semantic segmentation and object detection; this strategy has shown high accuracy and speed, significantly outperforming previous methods[25]. Open3D[26], an open-source three-dimensional (3D) data processing library, has also been widely applied to process staircase point clouds[27].

However, implementing existing staircase recognition methods on walking-aid robots presents significant challenges. These challenges stem from the limited computational capacity of such robots and the dynamic nature of their movement alongside human users. This movement often results in unpredictable fluctuations in the captured images or point clouds. Additionally, the camera on these robots is positioned at a low altitude, leading to a restricted field of view and further complicating the recognition process. These factors collectively make it difficult to deploy staircase recognition technologies effectively in walking-aid robots. Therefore, related work on these robots has usually focused on recognizing simple geometric parameters of the environment structure. References[28] and[29] proposed using a depth camera to measure the size of obstacles and the distance between the exoskeleton and the obstacles to determine whether an obstacle is crossable; based on these assessments, the exoskeleton adapts by switching between predefined operational modes. Reference[30] proposed reducing the dimension of the depth point cloud of the environment to lower the computational burden and recognizing the shape parameters of the stairs (e.g., the height and width of the staircase) with the RANSAC algorithm.

A significant drawback of existing methods is the lack of global positioning and motion state information. The ability to adapt based on predefined operational modes does not compensate for the robot's inability to understand its position within a larger environmental context. This limitation makes it hard to execute predictive and adaptive control on the robot, which is essential for advanced navigation and safety strategies, and it reduces the robot's effectiveness in complex or dynamically changing environments.

In light of these limitations, our research seeks to address these gaps by offering a more comprehensive method for environmental perception. By focusing on advanced feature extraction techniques and point cloud registration algorithms, our method aims to enhance the robot's ability to perceive, understand, and adapt to its surroundings. This includes improving global positioning and motion state awareness, which are crucial for predictive and adaptive control. We have conducted preliminary work to construct an environmental map through point cloud registration algorithms and achieve real-time self-awareness of the robot's position in the environment by perceiving relative motion between frames[31]. Utilizing this information can guide the walking-aid robot to achieve more robust movement in complex environments. The above process often involves the following steps:

Data Acquisition: Use sensors such as depth cameras[1] and LiDAR[18] to obtain point cloud data from the environment.

Feature Extraction: Extract key features from the point cloud[31,32], such as surface features or corners, for matching in subsequent steps.

Point Cloud Registration Algorithm: Employ registration algorithms to align feature points collected at different times or positions[33,34], creating a unified environment map and estimating the relative motion between adjacent frames by comparing their features.

In specific scenarios, systematically extracting corner points or straight lines from point cloud for registration can reduce computational costs and mitigate the risk of erroneous matching due to shape similarities. The core of our study is an improved feature extraction method that enables these robots to better understand and interact with their surroundings, especially staircases. Our previous work concentrated on extracting convex corners from staircases in point cloud. Although the method was effective in many scenarios, we identified certain limitations, particularly in situations with restricted perspectives, leading to errors in point cloud alignment. Our current research addresses these issues, presenting an innovative approach that offers more robust and accurate feature extraction, even in challenging viewpoints.

This paper outlines our novel method, beginning with acquiring 3D point cloud data using depth cameras mounted on walking-aid robots. We delve into the specifics of transforming this data into a two-dimensional (2D) representation and the subsequent steps for feature extraction. Our approach is comprehensive, considering various camera perspectives and incorporating both convex and concave corners in the extraction process. We employ advanced algorithms such as RANSAC and K-Nearest Neighbors (KNN)-augmented Iterative Closest Point (ICP) to enhance the accuracy and efficiency of the approach.

The main contribution of this paper is that it introduces a robust method for extracting staircase shape features from point cloud. This method significantly improves upon previous techniques by accurately identifying featured corner points in staircases, even under restricted viewpoints and fast movement scenarios. This enhancement addresses the limitations of earlier methods and ensures more reliable and robust feature extraction. By integrating the RANSAC algorithm and the KNN-augmented ICP method, the paper presents an improved performance for point cloud registration. This advancement significantly enhances the efficiency and robustness of point cloud processing in walking-aid robots when traversing through stairs.

This work represents a step forward for assistive robots in complex terrains. By improving the perception capabilities of walking-aid robots, it aims to contribute to safer and more reliable walking assistance, thereby enhancing the independence and well-being of individuals with mobility challenges.

2. METHODS

In this section, we introduce our novel approach for extracting staircase shape features. In our earlier work[31], our feature extraction method focused solely on extracting convex corners as feature points from each staircase in point cloud. However, we encountered limitations in certain scenarios where feature points could not be reliably extracted due to restricted perspectives. These limitations led to cumulative errors when performing frame-to-frame point cloud alignment.

To address these challenges and enhance the performance of staircase feature extraction and subsequent point cloud registration, we present an innovative method. This method aims to provide more robust and accurate feature extraction from staircases, even with constrained viewpoints. In this way, it helps mitigate the issues associated with cumulative errors during point cloud alignment, improving overall performance.

The overview of this method is depicted in Figure 1. We acquire 3D point cloud data of the environment using a depth camera mounted on walking-aid robots. These robots include powered prostheses[1,35] and lower-limb exoskeletons[36]. To provide context, Figure 2 illustrates the integration of the depth camera into typical walking-aid robots.

Figure 1. An overview of the proposed staircase shape feature extraction method. 3D: Three-dimensional; IMU: inertial measurement unit.

Figure 2. An example of how the depth camera is integrated into typical walking-aid robots, e.g., lower-limb prostheses.

The raw 3D environmental point cloud, denoted as $$ ^{\text {Camera}}\boldsymbol {P}_{ \rm {3D}} $$, is first rotated by Equation 1 to align with the ground coordinate system:

$$ { }^{\text {Ground}} \boldsymbol {P}_{ \rm {3D}}={ }^{\text {Ground}} \boldsymbol {R}_{\text {Camera}} \cdot { }^{\text {Camera}} \boldsymbol {P_{ \rm {3D}}}, $$

where $$ { }^{\text {Ground}} \boldsymbol {P}_{ \rm {3D}} $$ and $$ { }^{\text {Camera}} \boldsymbol {P}_{ \rm {3D}} $$ are the point cloud in the ground coordinate system and camera coordinate system, respectively, and $$ { }^{\text {Ground}} \boldsymbol {R}_{\text {Camera}} $$ is the rotation matrix from the camera to the ground coordinate system, which can be calculated from the Euler angles measured by the inertial measurement unit (IMU) attached to the camera.

The rotated point cloud $$ ^{\text {Ground}} \boldsymbol {P}_{ \rm {3D}} $$ is then subjected to dimension reduction, as given in Equation 2, where $$ n $$ represents the total number of points in $$ ^\text{Ground}\boldsymbol {P}_{ \rm {3D}} $$ and $$ U $$ is the set of indexes of the points whose $$ y $$ coordinate (perpendicular to the human's sagittal plane) lies between -0.1 and 0.1 m. This subset extraction ensures that only points within a narrow segment along the human's walking direction are selected to represent the terrain shape. $$ ^\text{Ground}\boldsymbol {P}_{ \rm {2D}} $$ is the set of 2D points $$ (x_k, z_k) $$ whose index $$ k $$ is in $$ U $$. Dimension reduction is achieved by keeping only the $$ x $$ and $$ z $$ coordinates, which projects all the 3D points with indexes in $$ U $$ onto the human's sagittal plane. This preprocessing strategy, consisting of point cloud rotation, subset extraction, and dimension reduction, converts the 3D data into a 2D point cloud $$ ^{\text {Ground}} \boldsymbol {P}_{ \rm {2D}} $$ represented in the ground coordinate system and lowers the computational burden compared with processing the full 3D point cloud.

$$ \begin{equation} \begin{cases} ^\text{Ground}\boldsymbol{P}_{\rm{3D}} = \left\{ \left( x_i, y_i, z_i \right) \mid i = 1, \ldots, n \right\} \\ U = \left\{ i \mid -0.1\, \rm{m} < y_i < 0.1\, \rm{m} \right\} \\ ^\text{Ground}\boldsymbol{P}_{\rm{2D}} = \left\{ \left( x_k, z_k \right) \mid k \in U \right\} \end{cases} \end{equation} $$
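A minimal sketch of this preprocessing chain (rotation with the IMU-derived Euler angles, extraction of the narrow band around the sagittal plane, and projection onto the x-z plane) might look as follows; the function name, the Euler-angle convention, and the band width are illustrative assumptions, not a transcription of the released code.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def preprocess_point_cloud(points_camera, roll, pitch, yaw, band=0.1):
    """Rotate a camera-frame point cloud to the ground frame and reduce it to 2D.

    points_camera : (n, 3) array of (x, y, z) points in the camera frame.
    roll, pitch, yaw : Euler angles (rad) measured by the IMU attached to the camera.
    band : half-width (m) of the slice kept around the sagittal plane (|y| < band).
    """
    # Ground_R_Camera built from the IMU Euler angles (Equation 1).
    # The 'xyz' convention is assumed here and must match the IMU output.
    ground_R_camera = R.from_euler('xyz', [roll, pitch, yaw]).as_matrix()
    points_ground = points_camera @ ground_R_camera.T

    # Keep only points with -band < y < band (set U in Equation 2).
    mask = np.abs(points_ground[:, 1]) < band

    # Project the selected points onto the sagittal (x-z) plane -> 2D point cloud.
    return points_ground[mask][:, [0, 2]]
```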

The process of point cloud registration for all points within the 2D point cloud set demands significant computational power from the hardware and may lead to prolonged processing times. In response to these challenges, our earlier work[31] proposed to extract featured corner points from the 2D point cloud to lower computational burdens and streamline the registration process.

In light of the unique characteristics of staircases, we conduct a comprehensive analysis that includes all potential camera perspectives in this work. We systematically identify and select specific corner points on staircases, including both convex and concave corners, as the focal points of interest. This selection process is carried out to capture the unique geometric properties of staircases.

In particular, we choose seven distinct 2D point cloud shapes, each corresponding to a specific camera perspective when the camera is positioned at a relatively low altitude, typically around 0.5 to 1 m above the ground. These perspectives are carefully chosen to ensure a comprehensive coverage of possible viewpoints from which staircases may be observed by the walking-aid robots.

The identified corner points of interest, as illustrated in Figure 1, are described as follows:

Point A (depicted as the dark green dot): This represents the convex corner point at the lowest visible staircase within the camera's perspective.

Point B (indicated by the dark yellow dot): This corresponds to the concave corner point between two adjacent staircases.

Point C (represented by the purple dot): This designates the convex corner point at the uppermost visible staircase when two visible staircases are in the perspective.

It is worth noting that the numbers associated with these corner points indicate the potential number of horizontal lines, i.e., stairsteps, that can be extracted from the point cloud data. This comprehensive approach of corner point selection enables us to capture the diversity of staircases from varying camera perspectives, enhancing our ability to analyze and understand their geometric properties effectively.

To facilitate the extraction of corner points from staircases, we employ the RANSAC algorithm[30] to isolate stairsteps and then identify the corners. In detail, the stairsteps are identified by iteratively selecting a subset of the observed point cloud using the RANSAC algorithm, and the starting and ending points of the stairsteps are identified as the corner points.
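A simplified sketch of this step is given below: it repeatedly fits a horizontal line to the 2D point cloud in a RANSAC fashion, treats each sufficiently supported line as a stairstep, and takes the extremes of its inliers as candidate corner points. The thresholds and the horizontal-line hypothesis are illustrative choices, not the exact parameters used in the paper.

```python
import numpy as np

def extract_stairsteps(points_2d, n_iter=200, dist_th=0.01, min_inliers=40, max_steps=4):
    """RANSAC-style extraction of horizontal stairstep lines from a 2D (x, z) cloud.

    Returns a list of (start_point, end_point) tuples, one per detected stairstep;
    the start/end points serve as candidate convex/concave corner points.
    """
    remaining = points_2d.copy()
    steps = []
    rng = np.random.default_rng(0)
    while len(remaining) >= min_inliers and len(steps) < max_steps:
        best_inliers = None
        for _ in range(n_iter):
            # Hypothesis: a horizontal line z = z0 through one randomly sampled point.
            z0 = remaining[rng.integers(len(remaining)), 1]
            inliers = np.abs(remaining[:, 1] - z0) < dist_th
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        if best_inliers.sum() < min_inliers:
            break
        step = remaining[best_inliers]
        step = step[np.argsort(step[:, 0])]        # sort along x
        steps.append((step[0], step[-1]))          # endpoints = corner candidates
        remaining = remaining[~best_inliers]       # peel off this stairstep and repeat
    return steps
```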

As depicted in Figure 3, our workflow begins with utilizing the RANSAC algorithm to determine the number of stairsteps visible within the camera's perspective. This initial step provides a crucial parameter, allowing us to gain insights into the staircase structure and layout.

Figure 3. Classification of typical 2D point cloud of staircases. 2D: Two-dimensional.

Subsequently, we employ a set of if-else rules to classify the captured point cloud data into the aforementioned seven distinct shapes. These rules consider the specific geometric characteristics of staircases observed from different camera perspectives. By adhering to these classification rules, we accurately categorize the extracted point cloud data, ensuring that each shape is precisely identified.
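The exact seven-way rule set depends on the shapes in Figure 3, but its structure is straightforward. The sketch below only illustrates the kind of if-else logic involved, keyed on the number of detected stairsteps and on which corner points fall inside the field of view; the category names and conditions are placeholders rather than the rules used in the paper.

```python
def classify_staircase_shape(stair_num, has_point_a, has_point_b, has_point_c):
    """Placeholder if-else classification of a 2D staircase point cloud.

    stair_num   : number of stairsteps found by RANSAC.
    has_point_* : whether convex corner A, concave corner B, or the upper convex
                  corner C (Figure 1) is visible in the current perspective.
    """
    if stair_num == 0:
        return "no_staircase"
    if stair_num == 1:
        return "single_step_with_corner_A" if has_point_a else "single_step_partial"
    # Two or more visible stairsteps.
    if has_point_a and has_point_b and has_point_c:
        return "two_steps_full_view"
    if has_point_b and has_point_c:
        return "two_steps_upper_view"
    return "two_steps_partial_view"
```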

Applying the RANSAC algorithm, in conjunction with the classification rules, is an important part of our approach. It not only facilitates the precise identification of corner points but also contributes significantly to the overall accuracy of the feature extraction process. This level of precision is essential for subsequent analyses and applications, enhancing the robustness and reliability of our method.

Once the initial model is determined via RANSAC and the feature corner points are successfully extracted, we proceed with the KNN-augmented ICP[37] for point cloud registration and estimation of the motion of the depth camera integrated into the walking-aid robot. The integration of RANSAC and the KNN-augmented ICP enables a two-tiered feature extraction and registration approach. Initially, RANSAC provides a robust estimation of the staircase geometry, effectively handling outliers and incomplete data. Subsequently, KNN-ICP refines this estimation by ensuring that only the most relevant points are used for final alignment, enhancing the accuracy under dynamic conditions and restricted viewpoints.

This process of the KNN-augmented ICP comprises the following key steps:

KNN Point Selection: In KNN-ICP, we start by using the KNN algorithm to select the nearest neighbors for each feature point in point cloud $$ \boldsymbol {P}_{\text{feature, t}} $$ captured in timestep $$ t $$, i.e., to match each feature point $$ p_i $$ in feature point cloud $$ \boldsymbol {P}_{\text{feature, t}} $$ with the nearest K points in point cloud $$ \boldsymbol {P}_{\text{feature, t+1}} $$ captured in timestep $$ t+1 $$. Since we are only interested in the nearest neighbor for each feature point for ICP alignment, the value of K is set to 1. This can be expressed as:

$$ \text{KNN}(p_i) = \{q_{i}\}. $$

Here, $$ p_i $$ is the $$ i $$th point in point cloud $$ \boldsymbol {P}_{\text{feature, t}} $$, $$ \text{KNN}(p_i) $$ represents finding the nearest point in point cloud $$ \boldsymbol {P}_{\text{feature, t+1}} $$ matched with $$ p_i $$, and $$ q_{i} $$ indicates the nearest neighbor point.

ICP Transformation Optimization: Once the KNN matching is done, we proceed to optimize the transformation between the matched points using the ICP algorithm. The objective of ICP is to find a transformation matrix that maps the points from two sequential point clouds while minimizing the distance between them. ICP typically uses the least squares method to minimize the error, which can be expressed as:

$$ \min\limits_{T} \sum\limits_{i=1}^{N} || T(p_i) - q_{i} ||^2. $$

Here, $$ T $$ is the transformation matrix, $$ N $$ is the number of points in point cloud $$ \boldsymbol {P}_{\text{feature, t}} $$, $$ p_i $$ is the $$ i $$th point in point cloud $$ \boldsymbol {P}_{\text{feature, t}} $$, and $$ q_{i} $$ is the nearest point in point cloud $$ \boldsymbol {P}_{\text{feature, t+1}} $$ matched with $$ p_i $$. The iteration of ICP begins with an initial guess of the transformation matrix. At each iteration, for each point in the source point cloud $$ \boldsymbol {P}_{\text{feature, t}} $$, we find the closest point in the target point cloud $$ \boldsymbol {P}_{\text{feature, t+1}} $$. The transformation matrix $$ T $$ is updated using the least squares method and applied to the source point cloud. The iteration process continues until the transformations converge ($$ \Delta T \leq T_{ \rm {th}} $$, where $$ \Delta T $$ is the change in the displacement component of the transformation matrix $$ T $$ in each iteration and $$ T_{ \rm {th}} $$ is the threshold for convergence, taken as $$ 10^{-6} $$) or until the maximum number of iterations (taken as 20) is reached to obtain the final transformation.

Trajectory Estimation: Utilizing the derived transformations between point clouds captured at sequential time steps, we can effectively estimate the camera motion trajectory by aggregating the frame-to-frame displacements of the point clouds. In detail, the camera motion is the reverse of the motion of the point cloud it captures. Assuming the sequentially derived transformations between point clouds from timestep $$ 1 $$ to $$ I $$ are $$ (x_t, z_t), t\in\{1, 2, ..., I\} $$, the accumulated trajectory of the camera is then calculated by:

$$ {traj}_\text{camera} = \left\{ (-x_1, -z_1),\ (-x_1 - x_2, -z_1 - z_2),\ \ldots,\ \left( -\sum\limits_{t=1}^{I} x_t,\ -\sum\limits_{t=1}^{I} z_t \right) \right\}. $$

The trajectory of the robot can be further calculated from $$ {traj}_\text{camera} $$ and the forward kinematic model of the robot. This trajectory information proves invaluable for gaining insights into the dynamic movements of both the camera and the walking-aid robot as they navigate the environment over an extended period.
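A compact sketch of these steps is given below, assuming the 2D feature clouds are stored as NumPy arrays: the K=1 nearest-neighbor search uses a KD-tree, the per-iteration least-squares rigid fit uses the standard SVD-based (Kabsch) solution, and the camera trajectory is the negated cumulative sum of the estimated frame-to-frame displacements (Equation 5). The function names and the SVD-based solver are illustrative choices rather than a transcription of the released implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_icp(source, target, max_iter=20, t_th=1e-6):
    """Register two 2D feature clouds with K=1 KNN matching and iterative least squares."""
    tree = cKDTree(target)                       # KNN point selection (Equation 3), K = 1
    src = source.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(max_iter):
        _, idx = tree.query(src, k=1)            # nearest neighbor in the t+1 cloud
        matched = target[idx]
        # Closed-form least-squares rigid transform (Equation 4) via SVD (Kabsch).
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:            # guard against a reflection solution
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = tgt_c - R_step @ src_c
        src = src @ R_step.T + t_step            # apply the update to the source cloud
        R_total, t_total = R_step @ R_total, R_step @ t_total + t_step
        if np.linalg.norm(t_step) <= t_th:       # change in displacement below threshold
            break
    return R_total, t_total

def accumulate_camera_trajectory(displacements):
    """Accumulate per-frame point cloud displacements (x_t, z_t) into a camera trajectory."""
    # The camera motion is the reverse of the observed point cloud motion (Equation 5).
    return -np.cumsum(np.asarray(displacements), axis=0)
```

In use, `knn_icp` would be called on consecutive feature clouds and the translational parts of the returned transforms passed to `accumulate_camera_trajectory` to obtain the camera trajectory.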

Following these steps, we effectively extract staircase shape features and register the point clouds from different frames. The pseudocode of the proposed algorithm can be expressed as Algorithm 1. The code and some evaluation datasets have been released at https://github.com/Seas00n/MPV_2024.git.

Algorithm 1 Staircase feature extraction, point cloud registration, and camera motion trajectory estimation
 1: function Stair_feature_extraction($$ ^\text{Ground}\boldsymbol {P}_{ \rm {2D}} $$)
 2:           $$ stair\_num = \text{RANSAC}(^\text{Ground}\boldsymbol {P}_{ \rm {2D}}) $$        ▷ Identify the number of stairsteps
 3:           Classify the staircase shape according to Figure 3 and identify the feature points $$ \boldsymbol {P}_{ \rm {feature}} $$
 4:           return $$ \boldsymbol{P}_{\text {feature}} $$
 5: function KNN_ICP($$ source\_cloud, target\_cloud $$)
 6:           Initialize the transformation matrix $$ T $$ as an identity matrix
 7:           for $$ iter = 1 $$ to $$ MaxIterations $$ do
 8:                Find the nearest correspondences between $$ source\_cloud $$ and $$ target\_cloud $$
 9:                Update $$ T $$ using the least squares method according to Equation 4
10:               Apply $$ T $$ to $$ source\_cloud $$
11:               Calculate the change $$ \Delta T $$ in the displacement component of $$ T $$
12:               if $$ \Delta T \leq T_{ \rm {th}} $$ then
13:                       break
14:           return $$ T $$
15: $$ t = 1 $$
16: while true do
17:           $$ \boldsymbol {P}_{\text{feature, t}} $$ = Stair_feature_extraction($$ ^\text{Ground}\boldsymbol {P}_{ \rm {2D}, t} $$)
18:           $$ \boldsymbol {P}_{\text{feature, t+1}} $$ = Stair_feature_extraction($$ ^\text{Ground}\boldsymbol {P}_{ \rm {2D}, t+1} $$)
19:           $$ T $$ = KNN_ICP($$ \boldsymbol {P}_{\text{feature, t}}, \boldsymbol {P}_{\text{feature, t+1}} $$)
20:           Derive the transformation $$ (x_t, z_t) $$ and calculate $$ traj_\text{camera} $$ according to Equation 5
21:           $$ t = t+1 $$

3. PERFORMANCE EVALUATION

3.1. Comparisons with existing methods

In this section, we evaluate the presented staircase shape feature extraction method and compare it with our prior approach[31], which focuses solely on extracting convex corner points, and with a widely used state-of-the-art method, the Open3D built-in ICP algorithm. To assess the performance of our feature extraction method, we employ an indirect evaluation metric: the absolute trajectory error between the estimated camera motion trajectory derived from our method and the ground truth trajectory recorded by the motion capture system (calculated by Equation 6, where $$ P_{\mathrm{est}, t} $$ and $$ P_{\mathrm{gt}, t} $$ are the estimated position and the ground truth position at timestep $$ t $$, respectively, and $$ I $$ is the total number of timesteps).

$$ X=\sqrt{\frac{\sum\nolimits_{t=1}^I\left(P_{\mathrm{est}, t}-P_{\mathrm{gt}, t}\right)^2}{I}} $$
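Assuming the two trajectories have already been time-synchronized and expressed in the same coordinate frame, Equation 6 is simply a root-mean-square error over timesteps and can be computed as in the short sketch below (the function name is illustrative).

```python
import numpy as np

def absolute_trajectory_error(est, gt):
    """Root-mean-square absolute trajectory error (Equation 6).

    est, gt : (I, 2) arrays of estimated and ground-truth camera positions,
              already synchronized and expressed in the same frame.
    """
    diff = est - gt
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))
```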

As shown in Figure 4, a male subject was instructed to attach a Time-of-Flight (ToF) depth camera (pmd Camboard pico flexx2; the parameters of the camera are listed in Table 1) and an IMU (IM948, 150 Hz) above his knee. His task involved ascending stairs while wearing these devices for eight repeated trials. The width of the stairs is 28 cm, and the height of the stairs is 9 cm (the first step) and 12 cm (subsequent steps). Throughout the experiment, the sampling rate of the point cloud was set to 30 Hz. Data from the IMU and the camera were acquired in two threads, and their approximate synchronization was achieved by capturing and fusing the latest data from both threads. In addition, precise ground truth positional information for the camera was captured by the motion capture system (Raptor-12HS, Motion Analysis Corporation, USA) at a frequency of 120 Hz. Motion capture markers were also attached to the toe and heel of the subject to record the position of his foot, but this information was not utilized in this work. The ICP algorithm uses the extracted feature points to estimate the camera motion in the global coordinate system. The average time for feature extraction is $$ \sim $$6 ms, while the KNN-ICP algorithm takes an average of $$ \sim $$3 ms.

Figure 4. The experiment setup for evaluating the proposed staircase feature extraction method. IMU: Inertial measurement unit.

Table 1

Parameters of the depth camera

Camera parameters
Size: $$ 71.9 \mathrm{~mm} \times 19.2 \mathrm{~mm} \times 10.6 \mathrm{~mm} $$
Resolution: $$ 224 \times 172 $$ pixels
Frame rate: up to $$ 60 \mathrm{~fps} $$
Average power consumption: $$ 300 \mathrm{~mW} $$
Measurement range: $$ 0.1 \sim 4 \mathrm{~m} $$
Weight: $$ 13 \mathrm{~g} $$

The experiment results, as presented in Figure 5, indicate that the absolute trajectory error across all trials falls within the centimeter range. The outcomes reveal that the enhanced feature extraction method, as introduced in this paper, contributes to reducing absolute trajectory errors in seven out of eight trials. This implies that the camera's estimated motion trajectories, obtained through the proposed method, align more closely with the ground truth than trajectories derived solely from extracting convex corner points as features.

Figure 5. The experiment results for estimating the camera motion trajectories with the extracted staircase feature points. The blue skeleton represents the lower limb position of the human subject at the beginning of each trial. The red curves depict the ground truth motion trajectories of the camera recorded by the motion capture system. The blue curves show the estimated camera trajectory with the proposed method (denoted as VO New). The green dashed curves show the estimated camera trajectory with our earlier work extracting only the convex corner points[31] (denoted as VO Pre). $$ \mathbf{X}_\text{New} $$ and $$ \mathbf{X}_\text{Pre} $$ indicate the absolute trajectory errors for the two methods.

To demonstrate that our feature extraction system is more suitable for visually constrained prosthetic systems, we conducted comparative experiments with four repeated trials using the Open3D built-in ICP algorithm[26] to obtain the relative displacement. To ensure fairness, both our algorithm and Open3D's algorithm used the same 2D point cloud, which was replicated along columns to form a 3D point cloud for calling Open3D's built-in ICP method. On average, the processing time of the Open3D ICP algorithm on the replicated five-column point cloud is $$ \sim $$10 ms, longer than that of the simplified KNN-ICP method within the 2D plane ($$ \sim $$3 ms). The odometer trajectory estimations and absolute errors for the different algorithms are shown in Figure 6. The comparison results on the absolute trajectory error and the processing time demonstrate that the proposed method achieves better estimation accuracy and a faster processing speed owing to the dimension reduction process.
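The sketch below shows one way such a comparison might be set up: the 2D (x, z) points are replicated at several y offsets to produce a thin 3D cloud, which is then passed to Open3D's built-in point-to-point ICP. The column spacing, correspondence distance, and other parameters are assumptions for illustration, not the values used in the experiments.

```python
import numpy as np
import open3d as o3d

def open3d_icp_on_2d(source_2d, target_2d, n_columns=5, spacing=0.01, max_dist=0.05):
    """Replicate 2D (x, z) clouds into thin 3D clouds and align them with Open3D's ICP."""
    def to_3d(points_2d):
        # Stack n_columns copies of the 2D slice at different y offsets.
        columns = [np.column_stack([points_2d[:, 0],
                                    np.full(len(points_2d), i * spacing),
                                    points_2d[:, 1]])
                   for i in range(n_columns)]
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(np.vstack(columns))
        return pcd

    result = o3d.pipelines.registration.registration_icp(
        to_3d(source_2d), to_3d(target_2d), max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation       # 4x4 homogeneous transform between the two frames
```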

Figure 6. The comparison results with the Open3D's built-in ICP algorithm. The purple dashed curves represent the estimated camera trajectory with the Open3D's built-in ICP algorithm (denoted as VO Open3D). 3D: Three-dimensional; ICP: Iterative Closest Point.

3.2. Evaluation of the robustness of the proposed method

To evaluate the robustness of our approach across wider scenarios, we tested it on another staircase with stairs that are 27.5 cm wide and 14.5 cm high. The absolute trajectory error results in four repeated trials on this higher staircase are presented in Figure 7. The results show that the absolute trajectory errors on both kinds of stairs are of the same scale, demonstrating the robustness of the proposed method on various stair sizes.

Figure 7. The absolute trajectory error results on the higher staircase.

3.3. Evaluation on a robotic transfemoral prosthesis

The proposed approach has also been evaluated on an actual robotic transfemoral prosthesis to demonstrate real-world viability. As shown in Figure 8, the camera was mounted on the knee joint of the prosthesis, and an amputee subject was instructed to wear the prosthesis to climb stairs. The estimated camera trajectory was recorded and compared with the ground truth; the results are shown in Figure 8. The estimated camera motion trajectory obtained by the proposed method still aligns well with the ground truth, affirming its viability for application on actual robots. The error is slightly larger than that obtained with the healthy subject owing to the unavoidable mechanical vibration of the joints and of the adapter connecting the camera to the prosthesis.

Figure 8. The experiment setup and absolute trajectory error results on the actual robotic transfemoral prosthesis.

4. DISCUSSION

The findings presented in this study on improved staircase shape feature extraction hold significant implications for the field of walking-aid robots. The successful development of an algorithm capable of accurately perceiving and interpreting staircase shapes represents a notable advance toward enhancing the autonomy, safety, and adaptability of robotic systems navigating complex environments. It is also worthwhile to explore the practical application scenarios where our staircase shape feature extraction method significantly enhances the operation of walking-aid robots. By delving into these aspects, we aim to underscore the method's impact on robot navigation and safety in real-world environments.

4.1. Key findings of this work

The key findings of this work in staircase shape feature extraction for walking-aid robots are:

1. Robust Feature Extraction: The developed method successfully overcomes the limitations of existing approaches, ensuring reliable extraction of staircase features even in challenging scenarios, such as restricted viewpoints and rapid movements of the robot.

2. Improved Point Cloud Registration: The integration of RANSAC and KNN-augmented ICP algorithms significantly enhances the point cloud registration process, leading to more accurate and efficient handling of environmental data.

3. Enhanced Navigation Capabilities: The advancements in feature extraction and point cloud processing contribute to the improved navigation capabilities of walking-aid robots, particularly in complex environments with staircases.

Overall, these findings represent a substantial step forward in robotics, particularly in enhancing the environmental perception and navigational proficiency of walking-aid robots.

4.2. Potentials for real-time control of walking-aid robots

The proposed method is promising for integration into the real-time control of walking-aid robots. Specifically, when applied to a powered transfemoral prosthesis, it substantially improves the prosthesis's environmental awareness, particularly during stair climbing. By leveraging our proposed feature extraction method alongside forward kinematics modeling, the prosthesis gains a comprehensive understanding of its whole-body position within the environment. This is critical for navigating the complex geometries of staircases with greater precision and safety.

Moreover, the ability to estimate the prosthesis's translational velocity through our feature extraction and ICP method adds another layer of advantage to its control system. This velocity estimation, combined with the positional awareness provided by the feature extraction, allows for the deployment of Kalman filters to predict the prosthesis's pose with an accurate state transition equation. This prediction capability is essential for implementing model predictive control, enabling the prosthesis to move freely and efficiently on stairs, avoiding collisions with stair risers and ensuring a smooth locomotion experience for the user.
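As a purely illustrative example of this idea (not part of the presented system), a constant-velocity Kalman filter over a planar camera state could fuse the ICP-derived position estimates and provide the one-step-ahead pose prediction needed by a model predictive controller; the state layout, timestep, and noise levels below are assumptions for the sketch only.

```python
import numpy as np

def kalman_step(x, P, z, dt=1/30, q=1e-3, r=1e-4):
    """One predict/update cycle for a constant-velocity state x = [px, pz, vx, vz].

    z : (px, pz) position measurement from the feature-based ICP registration.
    """
    F = np.eye(4); F[0, 2] = F[1, 3] = dt            # constant-velocity state transition
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0    # only position is observed
    Q, R = q * np.eye(4), r * np.eye(2)
    # Predict.
    x, P = F @ x, F @ P @ F.T + Q
    # Update with the measured position.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```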

The efficiency of our method is underscored by the average time consumption of each iteration of feature extraction and ICP, recorded at approximately 6 and 3 ms, respectively, during our experiments. Given that the swing phase of a human's gait cycle lasts approximately 800 ms[10,38], the processing speed of our approach is well within the requirements for real-time model predictive control of walking-aid robots. This demonstrates the feasibility of integrating our method into the control systems of such walking-aid devices without introducing latency that could compromise operational efficiency or safety.

While this paper does not focus on real-time control, the implications of our findings for this application are profound. Integrating our proposed method with the control systems of walking-aid robots represents a promising direction for future research. In subsequent work, we plan to delve deeper into this integration, aiming to showcase the full potential of our method in enhancing the autonomy, adaptability, and safety of walking-aid robots in real-world scenarios.

4.3. Practical application scenarios

One notable application scenario for our method is in residential and public buildings where staircases vary widely in design and complexity. It enables walking-aid robots to accurately identify and navigate these staircases, adjusting to different angles, widths, and materials. For instance, in a multi-floor home, a robot could assist individuals by safely guiding them up and down stairs, adapting its real-time movements to avoid obstacles and optimize safety. This method can help enhance the autonomy of the walking-aid robot by providing it with a detailed understanding of its surroundings and the positions of its joints in the global coordinate system when moving on stairs; thus, the robot can (1) identify the start and end points of staircases; (2) calculate the safest path to avoid collisions using artificial potential field, etc., and keep balance using whole-body control, etc.; (3) dynamically adjust its joint movements based on its own motion state and the staircase's geometry detected through our feature extraction method; and (4) autonomously navigate between staircases using optimal control[39], etc. Such adaptability is crucial for ensuring the robot can operate independently, without constant human supervision, thereby improving the efficiency of assistance provided to patients.

The proposed method can also be applied in outdoor scenarios. Compared to depth cameras based on structured light (such as Intel RealSense and Microsoft Kinect V1) or stereo vision methods, the ToF camera used in this work is more resistant to external light interference[40] and can acquire the 3D environmental point cloud in front of the camera under outdoor lighting conditions.

4.4. Advantages

Combining the presented RANSAC and the KNN-augmented ICP is particularly effective in dynamic environments and restricted viewpoints for several reasons:

1. Robustness to Motion: The ability of RANSAC to handle outliers means that the motion of the robot does not affect the staircase shape classification.

2. Accuracy in Real-Time Environments: The efficiency of KNN in the ICP algorithm ensures that the point cloud registration only uses the most relevant points and remains accurate even when the environment changes, which is crucial for real-time locomotion assistance.

3. Adaptability to Different Viewpoints: By augmenting ICP with KNN, the alignment of the point cloud data is based on the most relevant and geometrically consistent points, which is critical in scenarios with limited field of view. This ensures that each data segment contributes to a comprehensive understanding of the staircase geometry.

4.5. Limitations

It is essential to acknowledge certain limitations. Despite significant advancements, the algorithm's performance might still be influenced by highly irregular or uncommon staircase designs that are not represented in the staircase point cloud classifications presented in Figure 3. Overcoming these limitations might necessitate further refinement and expansion of the environment point cloud dataset to encompass a wider array of staircase variations.

While improvements have been made, the method may still face challenges in extremely dynamic or unpredictable environments, where rapid changes in the robot's movement or environmental conditions could affect the accuracy of feature extraction and point cloud registration.

4.6. Future research directions

Future research directions for this work include:

1. Algorithm Optimization for Diverse Hardware: Developing more efficient algorithms that can be effectively implemented on a wider range of walking-aid robots with varying computational capabilities.

2. Enhanced Adaptability in Dynamic Environments: Focusing on improving the robustness of the feature extraction method to better handle highly dynamic environments and unpredictable scenarios.

3. Integration with Advanced Sensing Technologies: Exploring the integration of emerging sensing technologies to further refine environmental perception and feature extraction accuracy.

4. Real-World Testing and Validation: Conducting extensive real-world testing to validate and refine the proposed method under various environmental conditions and scenarios.

5. CONCLUSIONS

In conclusion, this paper presents significant advancements in the domain of staircase shape feature extraction for walking-aid robots. The proposed approach successfully addresses the limitations of previous methods, offering a more robust and accurate system for environmental perception in complex terrains, particularly staircases. The integration of algorithms, such as RANSAC and KNN-augmented ICP, has been demonstrated to substantially enhance the point cloud registration process, thereby improving the accuracy and efficiency of feature extraction. The results show that the absolute trajectory error across all trials falls within the centimeter range.

The findings demonstrate the potential of this method in enhancing the navigational capabilities of walking-aid robots, contributing to safer and more reliable mobility assistance. However, there are still limitations, such as the challenges in highly dynamic environments, which highlight areas for future improvement. Future research should focus on optimizing algorithms to enhance adaptability in dynamic settings and integrating cutting-edge sensing technologies. Additionally, real-world testing and validation are essential to further refine the method and ensure its practical applicability in various environmental conditions. Overall, this work is expected to provide new opportunities for improving the quality of life for individuals with mobility impairments.

DECLARATIONS

Authors' contributions

Wrote the manuscript: Chen X

Made substantial contributions to conception and design of the study and performed data analysis and interpretation: Chen X, Wang Y

Performed data acquisition and provided technical support: Chen C

Provided administrative and material support: Leng Y, Fu C

Availability of data and materials

The experiment data is available upon request. Please contact the corresponding author by email.

Financial support and sponsorship

This work was supported by the National Natural Science Foundation of China [Grant U1913205, 62103180 and 52175272], the Science, Technology and Innovation Commission of Shenzhen Municipality [KCXFZ20230731093401004, ZDSYS20200811143601004], and the Stable Support Plan Program of Shenzhen Natural Science Fund [Grant 20200925174640002].

Conflicts of interest

All authors declared that there are no conflicts of interest.

Ethical approval and consent to participate

Data collection was approved and performed under the supervision of the SUSTech Medical Ethics Committee (approval number: 20210009, date: 2021/3/2). The consent to participate was signed by the human subject.

Consent for publication

Not applicable.

Copyright

© The Author(s) 2024.

REFERENCES

1. Zhang K, Luo J, Xiao W, et al. A subvision system for enhancing the environmental adaptability of the powered transfemoral prosthesis. IEEE Trans Cybern 2020;51:3285-97.

2. Hood S, Gabert L, Lenzi T. Powered knee and ankle prosthesis with adaptive control enables climbing stairs with different stair heights, cadences, and gait patterns. IEEE Trans Robot 2022;38:1430-41.

3. Dong C, Yu Z, Chen X, Chen H, Huang Y, Huang Q. Adaptability control towards complex ground based on fuzzy logic for humanoid robots. IEEE Trans Fuzzy Syst 2022;30:1574-84.

4. Yan Q, Huang J, Tao C, Chen X, Xu W. Intelligent mobile walking-aids: perception, control and safety. Adv Robot 2020;34:2-18.

5. Wang C, Peng L, Hou ZG, Li J, Zhang T, Zhao J. Quantitative assessment of upper-limb motor function for post-stroke rehabilitation based on motor synergy analysis and multi-modality fusion. IEEE Trans Neural Syst Rehab Eng 2020;28:943-52.

6. Ma Z, Zhao J, Yu L, et al. A review of energy supply for biomachine hybrid robots. Cyborg Bionic Syst 2023;4:0053.

7. Zhou M, Yu Q, Huang K, et al. Towards robotic-assisted subretinal injection: a hybrid parallel–serial robot system design and preliminary evaluation. IEEE Trans Ind Electron 2020;67:6617-28.

8. Zhang K, Liu H, Fan Z, et al. Foot placement prediction for assistive walking by fusing sequential 3D gaze and environmental context. IEEE Robot Autom Lett 2021;6:2509-16.

9. Yang B, Huang J, Chen X, Xiong C, Hasegawa Y. Supernumerary robotic limbs: a review and future outlook. IEEE Trans Med Robot Bionics 2021;3:623-39.

10. Chen X, Zhang K, Liu H, Leng Y, Fu C. A probability distribution model-based approach for foot placement prediction in the early swing phase with a wearable IMU sensor. IEEE Trans Neural Syst Rehab Eng 2021;29:2595-604.

11. Chen X, Liu Z, Zhu J, Zhang K, Leng Y, Fu C. Comparison of machine learning regression algorithms for foot placement prediction. In: 2021 27th International Conference on Mechatronics and Machine Vision in Practice (M2VIP); 2021 Nov 26-28; Shanghai, China. IEEE; 2021. pp. 169–74.

12. Leng Y, Huang G, Ma L, et al. A lightweight, integrated and portable force-controlled ankle exoskeleton for daily walking assistance. In: 2021 27th International Conference on Mechatronics and Machine Vision in Practice (M2VIP); 2021 Nov 26-28; Shanghai, China. IEEE; 2021. pp. 42–7.

13. Leng Y, Lin X, Yang L, Zhang K, Chen X, Fu C. A model for estimating the leg mechanical work required to walk with an elastically suspended backpack. IEEE Trans Human Mach Syst 2022;52:1303-12.

14. Chen C, Zhang K, Leng Y, Chen X, Fu C. Unsupervised sim-to-real adaptation for environmental recognition in assistive walking. IEEE Trans Neural Syst Rehab Eng 2022;30:1350-60.

15. Ma T, Wang Y, Chen X, et al. A piecewise monotonic smooth phase variable for speed-adaptation control of powered knee-ankle prostheses. IEEE Robot Autom Lett 2022;7:8526-33.

16. Chen X, Chen C, Wang Y, et al. A piecewise monotonic gait phase estimation model for controlling a powered transfemoral prosthesis in various locomotion modes. IEEE Robot Autom Lett 2022;7:9549-56.

17. Chen C, Cao Y, Chen X, Wu D, Xiong C, Huang J. A fused deep fuzzy neural network controller and its application to pneumatic flexible joint. IEEE/ASME Trans Mech 2023;28:3214-25.

18. Chen N, Chen X, Chen C, Leng Y, Fu C. Research on the human-following method, fall gesture recognition, and protection method for the walking-aid cane robot. In: 2022 IEEE International Conference on Cyborg and Bionic Systems (CBS); 2023 Mar 24-26; Wuhan, China. IEEE; 2023. pp. 286–91.

19. Wakita K, Huang J, Di P, Sekiyama K, Fukuda T. Human-walking-intention-based motion control of an omnidirectional-type cane robot. IEEE/ASME Trans Mech 2013;18:285-96.

20. Di P, Hasegawa Y, Nakagawa S, et al. Fall detection and prevention control using walking-aid cane robot. IEEE/ASME Trans Mech 2016;21:625-37.

21. Wang E, Chen X, Li Y, Fu Z, Huang J. Lower-limb motion intent recognition based on sensor fusion and fuzzy multi-task learning. IEEE Trans Fuzzy Syst 2024;32:2903-14.

22. Cong Y, Li X, Liu J, Tang Y. A stairway detection algorithm based on vision for UGV stair climbing. In: 2008 IEEE International Conference on Networking, Sensing and Control; 2008 Apr 06-08; Sanya, China. IEEE; 2008. pp. 1806–11.

23. Murakami S, Shimakawa M, Kivota K, Kato T. Study on stairs detection using RGB-depth images. In: 2014 Joint 7th International Conference on Soft Computing and Intelligent Systems (SCIS) and 15th International Symposium on Advanced Intelligent Systems (ISIS); 2014 Dec 03-06; Kitakyushu, Japan. IEEE; 2014. pp. 1186–91.

24. Sriganesh P, Bagree N, Vundurthy B, Travers M. Fast staircase detection and estimation using 3D point clouds with multi-detection merging for heterogeneous robots. In: 2023 IEEE International Conference on Robotics and Automation (ICRA); 2023 May 29 - Jun 02; London, United Kingdom. IEEE; 2023. pp. 9253–59.

25. Wang C, Pei Z, Qiu S, Tang Z. Deep leaning-based ultra-fast stair detection. Sci Rep 2022;12:16124.

26. Zhou QY, Park J, Koltun V. Open3D: a modern library for 3D data processing. arXiv. [Preprint] Jan 30, 2018. [accessed on 2024 May 8]. Available from: https://doi.org/10.48550/arXiv.1801.09847.

27. Matsumura H, Premachandra C. Deep-learning-based stair detection using 3D point cloud data for preventing walking accidents of the visually impaired. IEEE Access 2022;10:56249-55.

28. Ramanathan M, Luo L, Er JK, et al. Visual environment perception for obstacle detection and crossing of lower-limb exoskeletons. In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2022 Oct 23-27; Kyoto, Japan. IEEE; 2022. pp. 12267–74.

29. Liu DX, Xu J, Chen C, Long X, Tao D, Wu X. Vision-assisted autonomous lower-limb exoskeleton robot. IEEE Trans Syst Man Cybern Syst 2021;51:3759-70.

30. Zhang K, Xiong C, Zhang W, et al. Environmental features recognition for lower limb prostheses toward predictive walking. IEEE Trans Neural Syst Rehab Eng 2019;27:465-76.

31. Chen C, Chen X, Yin S, et al. Enhancing prosthetic safety and environmental adaptability: a visual-inertial prosthesis motion estimation approach on uneven terrains. arXiv. [Preprint] Apr 29, 2024. [accessed on 2024 May 8]. Available from: https://doi.org/10.48550/arXiv.2404.18612.

32. Oßwald S, Gutmann JS, Hornung A, Bennewitz M. From 3D point clouds to climbing stairs: a comparison of plane segmentation approaches for humanoids. In: 2011 11th IEEE-RAS International Conference on Humanoid Robots; 2011 Oct 26-28; Bled, Slovenia. IEEE; 2011. pp. 93–8.

33. Pomerleau F, Liu M, Colas F, Siegwart R. Challenging data sets for point cloud registration algorithms. Int J Robot Res 2012;31:1705-11.

34. Maken FA, Ramos F, Ott L. Bayesian iterative closest point for mobile robot localization. Int J Robot Res 2022;41:851-74.

35. Hong Z, Bian S, Xiong P, Li Z. Vision-locomotion coordination control for a powered lower-limb prosthesis using fuzzy-based dynamic movement primitives. IEEE Trans Autom Sci Eng 2024;21:1188-200.

36. Qian Y, Wang Y, Chen C, et al. Predictive locomotion mode recognition and accurate gait phase estimation for hip exoskeleton on various terrains. IEEE Robot Autom Lett 2022;7:6439-46.

37. Patel P, Hare R, Tang Y, Patel N. 3D multi-angle point cloud stitching using iterative closest-point stitching and K-nearest-neighbors. In: 2022 International Conference on Cyber-Physical Social Intelligence (ICCSI); 2022 Nov 18-21; Nanjing, China. IEEE; 2022. pp. 625–30.

38. Chen X, Yu Z, Zhang W, Zheng Y, Huang Q, Ming A. Bioinspired control of walking with toe-off, heel-strike, and disturbance rejection for a biped robot. IEEE Trans Ind Electron 2017;64:7962-71.

39. Zhao J, Lv Y, Zeng Q, Wan L. Online policy learning-based output-feedback optimal control of continuous-time systems. IEEE Trans Circuits Syst Ⅱ Express Briefs 2024;71:652-6.

40. Wang Z. Review of real-time three-dimensional shape measurement techniques. Measurement 2020;156:107624.

