Original Article  |  Open Access  |  11 Jul 2024

Echocardiographic cardiac views classification using whale optimization and weighted support vector machine

Vessel Plus 2024;8:29.
10.20517/2574-1209.2023.140 |  © The Author(s) 2024.

Abstract

Aim: Two-dimensional (2D) echocardiography is a significant medical diagnostic tool for monitoring cardiovascular health and function. For computerized echocardiogram (echo) analysis, recognizing the acquired view is an essential first step. This paper primarily focuses on detecting the transducer's viewpoint in cardiac echo videos using spatiotemporal data, distinguishing between viewpoints by monitoring the heart's function and rate throughout the heartbeat cycle. Automatic view classification is a first step toward computer-aided diagnosis (CAD) of cardiac imaging examinations. Since clinical analysis frequently starts from the current view, automatic classification can enhance the detection of cardiovascular disease (CVD).

Methods: This research article uses a Machine Learning (ML) algorithm called the Integrated Metaheuristic Technique (IMT), which combines the Whale Optimization Algorithm with a Weighted Support Vector Machine (WOA-WSVM).

Results: The parameters in the classification are optimized with the assistance of WOA, and the echo is classified using WSVM. The WOA-WSVM classifies the images effectively and achieves an accuracy of 98.4%.

Conclusion: The numerical analysis states that the WOA-WSVM technique outperforms the existing state-of-the-art algorithms.

Keywords

Cardiac vascular disease, cardiac view, machine learning, classification, image processing, accuracy

INTRODUCTION

An echocardiogram (echo) is employed to identify cardiac-related disease from wall motion, structural abnormalities, and the cardiac region[1]. The benefits of ultrasound for blood flow studies add to the comprehensiveness of the examination. Ultrasound offers non-invasive, real-time imaging, enabling the assessment of dynamic blood flow (BF) patterns and velocities. Doppler ultrasound, a common technique, measures BF by detecting changes in sound wave frequency caused by moving blood cells. This enables clinicians to diagnose vascular conditions, including arterial stenosis, venous thrombosis, and arteriovenous malformations.

Additionally, ultrasound aids in monitoring BF dynamics during surgical procedures, contributing to improved patient outcomes and enhanced clinical decision making (CDM). Echo depicts the cardiac movements and structure, providing functional and anatomical information about the heart. The echo examination requires manual intervention and evaluation[2,3]. In medical imaging, the position of the transducer differs during an echo examination, capturing different anatomical heart sections. Medical image processing (MIP) studies offer various temporal and spatial characteristics[4]. Echo often relies on the manual identification of key regions, particularly the left ventricle, which is assessed by experts for accurate interpretation[5].

Techniques for automated echo interpretation are becoming more user-friendly with advancements in computer vision (CV) for MIP[6]. Recent studies have focused on developing methods to automatically distinguish cardiac echo patterns for disease classification, leveraging existing cardiac knowledge. However, achieving precise cardiac evaluations remains challenging due to variations in cardiac anatomy under different transducer positions[7]. Understanding the specific transducer position and angle is crucial for standardizing the transducer motion during wall motion examination. Doppler gates must be accurately positioned for the visualization of valves[8,9]. Therefore, understanding the transducer angle is a critical initial step in interpreting cardiac echo videos[10].

This research addresses the issue of determining the transducer orientation from spatiotemporal data in cardiac echo videos. The necessity for computer-aided diagnosis (CAD) has motivated researchers to identify distinct views[11]. To give statistical reports summarizing potential diagnoses, the research is developing a numerically guided decision support system (DSS) for cardiologists, drawing on consensus decisions from other doctors who have examined patients with similar symptoms[12,13]. The central concept involves identifying comparable patients using underlying multimodal data, thereby achieving statistical DSS. The research employs the machine learning (ML) technique, using patient echocardiograms for training. Given the variability in cardiac appearances from different angles, prior knowledge of the cardiac anatomy is necessary for filtering and model selection during this analysis.

Unlike previous methods, this research uses the heart's functioning, as portrayed in a given view, as an additional trait to distinguish between views[14]. An active shape model (ASM) is employed to depict shape and texture in an echo frame. After tracking the ASM through a heart cycle, the motion information is projected into the eigen-motion feature space of each viewpoint class for matching[15-20]. This research employs geometric and textural cues for localization rather than relying on delineating entire areas or their outlines to anchor view templates[21-25].

Population-based WOA can avoid local optima and obtain a globally optimal solution. Due to these benefits, WOA may be used without structural changes to solve different constrained or unconstrained optimization problems in practical applications[26-30]. Support vector machines (SVM) perform comparatively well when there is a large class margin. SVM is memory-efficient in high-dimensional spaces, and WOA is integrated with the WSVM to improve performance when the number of dimensions exceeds the number of samples. The cardiac views are effectively classified using this hybridized approach, whose superiority is weighed using performance analysis[31-35].

Investigations using transthoracic echo are often conducted following a protocol that uses several probe positions to provide uniform heart images[36-40]. The morphophysiological descriptions must be accurate since they are the foundation for evaluating heart function. Since clinical analysis frequently starts from the current view, automated classification helps streamline the workflow. Up to seven different cardiac views are predicted using classification models developed with convolutional neural networks (CNNs) and AlexNet[41-45].

The field of echocardiography is essential in cardiology; however, human interpretation has several limits. Deep learning (DL) is an emerging method for analyzing MIP, yet its application in image analysis remains limited due to the complexities of learning. CNNs annotate various aspects of echo images[46-50]. This strategy affects the classification performance because, although the training process is efficient, the best feature selection (FS) is not used. Optimizing the features with an optimization approach therefore improves classification performance.

The scale-invariant feature transform (SIFT) with a pyramid match kernel (PMK) is an approach for determining features from histograms of gradients of the medical image. This method has been determined to work well for medical data[51-55]. The ML boosting approach, which combines local-global features with multi-object feature identification, effectively achieves classification. The views are represented using the spatial layout of regions according to a template. The echo video's frames and end-diastolic frames are used to classify the views.

The back-propagation neural network (BPNN) with SVM classifies the medical images. Statistical and histogram approaches are used to collect the features, and the views of the images are classified using the obtained features. Texture and shape information is captured using the active shape model approach (ASMA). The collected data are tracked across many frames to extract motion information. At the classification stage, the sequence fit is minimized, and the data are concentrated by developing a minimum change in the eigenspaces[56-58]. ASMA defines the shape gained during the time-consuming training phase. According to the literature, cardiac view classification involves efficient FS to reduce the dimension and classification to obtain the outcome successfully. The whale optimization (WO) technique balances exploration and exploitation of the search space[59,60].

The work discussed here aimed to provide completely automated, reliable, real-time view detection techniques that use WOA-WSVM, making it easier for medical practitioners to develop these methods from a clinical perspective[61-65]. Additionally, the research investigated the possibility of using these techniques for automatic 2D view extraction and orientation guidance to locate the best views in 2D Ultrasound images.

The following are the contributions of this research in comparison to earlier research:

(a) Significantly more patient data than before have been annotated and used for training, and extensive patient-based cross-validation and testing have been performed to ensure fair results, whereas existing techniques use only one or two videos to retrieve frames.

(b) Consideration of up to six of the most common cardiac views: sub-costal view (SCV), short axis view (SAV), mid-esophageal view (MEV), long axis view (LAV), apical two chamber view (A2CV), and apical four chamber view (A4CV).

(c) Two general classification techniques and the proposed technique are compared in the classification analysis. The comparison is based on recent work in the field, and the proposed technique is practical and accurate.

The remainder of the paper is organized as follows: the overview of cardiac disease and the impact of view detection, motivation, contribution, and the analysis of literary works are detailed in Section "INTRODUCTION", the pre-processing and classification process is elucidated in Section "METHODS", the overall result and discussion are illustrated in Section "RESULTS", and the research is concluded with a future recommendation in Section "DISCUSSION".

METHODS

Cardiac view classification

The classification of echo images is clarified in this section. An efficient classification approach is used to evaluate the heart's functioning. This method removes discarded and noisy information using noise reduction techniques; redundant data are disregarded using the median filter (MF). The pre-processing and classification steps are detailed in this section. The process of the proposed technique is shown in Figure 1.


Figure 1. Overall Methodology of WOA-WSVM.

Pre-processing

Median filtering (MF) is a nonlinear spatial technique that removes image noise. It is an efficient filtering technique widely applied to remove salt-and-pepper noise in images. It reduces noise in smooth zones and is a type of smoothing method; unlike an averaging filter, it removes noise with minimal edge blurring. In MF, each pixel of an image is replaced with the median value of the neighboring pixels, including itself. The window size is defined as an odd number of entries (i.e., 3 × 3, 5 × 5, 7 × 7, 9 × 9) so that the median can be computed readily. The pixel values are sorted in ascending order, and the median value is identified. For example, for the sorted neighborhood (1, 3, 4, 5, 6, 8, 17, 21, 31), the median is 6, and the center pixel is replaced by 6. This process is repeated until all pixels have been processed. Algorithm 1 provides the pseudocode for MF, and a short Python illustration follows it.

Algorithm 1. Pseudocode for MF
Input: Echo images
Output: Pre-processed echo image
Procedure:
Step 1. Initialize the input set of all echo images.
Step 2. Read the pixels from all images.
Step 3. Select a 2-D window of size 3 × 3 around the processing pixel.
Step 4. Sort the pixel values within the selected window.
Step 5. Identify pixel values of 0 or 255 (salt-and-pepper noise) in the selected window.
Step 6. Check whether the processing pixel is noise-free.
Step 7. Eliminate noisy pixels by replacing their values with the window median.
Step 8. Remove residual noise using medfilt2.
Step 9. Process all image pixels by repeating Steps 2-8.
Step 10. Obtain the enhanced output image.
Step 11. End.
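
As a concrete illustration of this pre-processing step, the following Python sketch applies a 3 × 3 median filter to a single echo frame; it is a minimal stand-in for the MATLAB medfilt2 call named in Algorithm 1, and the input file name is hypothetical.

```python
import numpy as np
from scipy.ndimage import median_filter
from imageio.v3 import imread

# Load one echo frame as a grayscale array (file name is illustrative only).
frame = imread("echo_frame.png").astype(np.float32)
if frame.ndim == 3:                    # collapse an RGB frame to grayscale if needed
    frame = frame.mean(axis=2)

# 3 x 3 median filtering: each pixel becomes the median of its neighborhood,
# which suppresses salt-and-pepper noise with little edge blurring.
denoised = median_filter(frame, size=3)

# Worked example from the text: sorted neighborhood (1, 3, 4, 5, 6, 8, 17, 21, 31).
window = np.array([1, 3, 4, 5, 6, 8, 17, 21, 31])
print(np.median(window))               # -> 6.0, the value written to the center pixel
```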

Classification

The WOA technique imitates the bubble-net hunting method of humpback whales. The three components of the hunting strategy are prey encirclement, exploitation, and exploration. The target prey is assumed to be the current best solution among the candidates, and the other search agents update their positions toward this best search agent. Eqs. (1) and (2) represent the process quantitatively.

$$ \begin{equation} \begin{aligned} \vec{S}=\left|\vec{X} \cdot \overrightarrow{C^{*}}(t)-\vec{C}(t)\right| \end{aligned} \end{equation} $$

$$ \begin{equation} \begin{aligned} \vec{C}(t+1)=\overrightarrow{C^{*}}(t)-\vec{Z} \cdot \vec{S} \end{aligned} \end{equation} $$

where $$\vec{Z}$$ and $$\vec{X}$$ indicate the coefficient vectors, the iteration is specified by "t", $$\overrightarrow{C^{*}}$$ is the best solution obtained so far in the solution space, and $$\vec{C}$$ is the position vector. The best solution is updated whenever a better one is identified in an iteration. The vector values are determined by the following Eqs. (3) and (4):

$$ \begin{equation} \begin{aligned} \vec{Z}=2 \vec{z} \cdot \vec{r}-\vec{z} \end{aligned} \end{equation} $$

$$ \begin{equation} \begin{aligned} \vec{X}=2 \cdot \vec{r} \end{aligned} \end{equation} $$

The value of the $$\vec{z}$$ vector decreases linearly from 2 to 0 over the exploitation and exploration phases, and $$\vec{r}$$ is a random vector in [0, 1]. The bubble-net behavior produces a position update that combines a shrinking encircling mechanism with a spiral path, given in Eq. (5).

$$ \begin{equation} \begin{aligned} \vec{C}(t+1)=\vec{S} \cdot e^{b l} \cdot \cos (2 \pi l)+\overrightarrow{C^{*}}(t) \end{aligned} \end{equation} $$

where $$\vec{S}=\left|\overrightarrow{C^{*}}(t)-\vec{C}(t)\right|$$ indicates the distance between the whale and the best solution, b is a constant defining the shape of the logarithmic spiral, and l is a random number in the interval [-1, 1].

In contrast to the exploitation process, search agents are selected randomly, and the locations are updated during the exploration phase. Eqs. (6) and (7) permit the global search procedure.

$$ \begin{equation} \begin{aligned} \vec{S}=\left|\overrightarrow{C_{\text {rand }}}-\vec{C}\right| \end{aligned} \end{equation} $$

$$ \begin{equation} \begin{aligned} \vec{C}(t+1)=\overrightarrow{C_{\text {rand }}}-\vec{Z} \cdot \vec{S} \end{aligned} \end{equation} $$

where the random whale from the current population is indicated by $$\overrightarrow{C_{\text {rand }}}$$.
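
To make Eqs. (1)-(7) concrete, the sketch below implements one WOA position update in Python for a population of candidate solutions. Variable names mirror the notation above (Z, X, z, S, l, b); the spiral constant b = 1 and the 50/50 split between encircling and spiral updates are the usual WOA defaults rather than values reported in this paper.

```python
import numpy as np

def woa_step(positions, best, z, b=1.0, rng=None):
    """One WOA iteration over all whales: shrinking encirclement (Eqs. 1-2),
    spiral bubble-net update (Eq. 5), or random exploration (Eqs. 6-7).
    positions: (n_whales, dim) array; best: current best position C*;
    z: control parameter decreasing linearly from 2 to 0 over the run."""
    if rng is None:
        rng = np.random.default_rng()
    n, dim = positions.shape
    new_pos = np.empty_like(positions)
    for i in range(n):
        r = rng.random(dim)
        Z = 2 * z * r - z                      # Eq. (3)
        X = 2 * rng.random(dim)                # Eq. (4)
        if rng.random() < 0.5:
            if np.all(np.abs(Z) < 1):          # exploitation: encircle the best solution
                S = np.abs(X * best - positions[i])     # Eq. (1)
                new_pos[i] = best - Z * S               # Eq. (2)
            else:                              # exploration: move relative to a random whale
                rand_whale = positions[rng.integers(n)]
                S = np.abs(rand_whale - positions[i])   # Eq. (6)
                new_pos[i] = rand_whale - Z * S         # Eq. (7)
        else:                                  # bubble-net spiral around the best solution
            l = rng.uniform(-1, 1)
            S = np.abs(best - positions[i])
            new_pos[i] = S * np.exp(b * l) * np.cos(2 * np.pi * l) + best  # Eq. (5)
    return new_pos
```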

Weighted Support Vector Machine (WSVM), a supervised classification method, uses a kernel function to map non-separable data into a high-dimensional space in which it becomes separable[66,67]. The hyperplane in the WSVM defines the maximum-margin boundary between the classes, and the data points lying closest to the hyperplane are the support vectors. The RBF kernel function[68] is used and is stated in Eq. (8).

$$ \begin{equation} \begin{aligned} K F(i, j)=e^{-\frac{\|i-j\|^{2}}{2 \sigma^{2}}} \end{aligned} \end{equation} $$
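
A minimal numerical check of Eq. (8), assuming two example feature vectors i and j and a kernel width σ:

```python
import numpy as np

def rbf_kernel(i, j, sigma=1.0):
    """RBF kernel of Eq. (8): exp(-||i - j||^2 / (2 * sigma^2))."""
    return np.exp(-np.linalg.norm(i - j) ** 2 / (2 * sigma ** 2))

print(rbf_kernel(np.array([1.0, 2.0]), np.array([1.5, 1.0])))  # ~0.535
```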

The classification accuracy serves as the fitness function, and its value ranges over [0, 1]. The fitness value, given in Eq. (9), is then calculated by averaging the accuracy values collected over the folds.

$$ \begin{equation} \begin{aligned} f t(w, t)=\sum_{k=1}^{N} \frac{A c c y_{w, t, k}}{N} \end{aligned} \end{equation} $$

where ft(w, t) denotes the fitness of weight w at iteration "t", N is the number of folds, and Accy_{w,t,k} is the accuracy obtained on fold k.

Allowing a distinct penalty parameter for every class is one method to address imbalanced classes in MIP classification problems. So that the decision boundary gives the minority classes greater weight, the samples of each class are associated with different error costs determined by class weights. Each sample's weight value is given in Eq. (10).

$$ \begin{equation} \begin{aligned} S_{i}=C W_{k} \times S W_{i} \end{aligned} \end{equation} $$

where CW_k is the weight of the class to which sample "i" belongs and SW_i is the sample weight. The class weights of the different classes and of each sample reflect the significance of weight optimization in SVM. The weight of a class is determined by Eq. (11).

$$ \begin{equation} \begin{aligned} C W_{k}=\frac{\operatorname{Max}\left(N_{k}\right)}{N_{k}}, k=1,2 \ldots . \psi \end{aligned} \end{equation} $$

where N_k indicates the number of training samples in class k and ψ indicates the number of classes.

Information can be lost for the majority class; the class weight assignment for the majority class in Eq. (12) addresses this issue.

$$ \begin{equation} \begin{aligned} C W_{\text {major }}=\operatorname{mean}\left(C W_{k}\right) \end{aligned} \end{equation} $$

This research offers a method to calculate sample weights using unlabeled data. The selection of the sample weights is crucial. Training samples located in high-density regions of the feature space are significantly more important than those located in low-density regions, for two reasons: (a) high-density samples reflect the underlying sample distribution; and (b) the overall accuracy of the classification process is affected more strongly by results on samples in high-density regions of the feature space than in low-density regions.
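
The sketch below computes the class weights of Eq. (11), one reading of the majority-class adjustment of Eq. (12), and the combined per-sample weights of Eq. (10). The density-based sample weights SW_i are approximated with a k-nearest-neighbour density estimate; this estimator is an assumption, since the paper does not specify how density is measured.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def class_weights(y):
    """CW_k = max(N_k) / N_k for each class k (Eq. 11); the majority class
    is then assigned the mean class weight (Eq. 12)."""
    classes, counts = np.unique(y, return_counts=True)
    cw = counts.max() / counts
    weights = dict(zip(classes, cw))
    weights[classes[counts.argmax()]] = cw.mean()
    return weights

def density_sample_weights(X, k=5):
    """Assumed density proxy: inverse mean distance to the k nearest neighbours,
    scaled to (0, 1]; samples in dense regions receive larger SW_i."""
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    density = 1.0 / (dist[:, 1:].mean(axis=1) + 1e-12)
    return density / density.max()

def sample_weights(X, y):
    """S_i = CW_k x SW_i (Eq. 10)."""
    cw = class_weights(y)
    sw = density_sample_weights(X)
    return np.array([cw[label] for label in y]) * sw
```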

In the feature space, high-density samples therefore receive higher weights, and samples in low-density regions receive lower weights; the distribution of unlabeled samples determines each training sample's relevance. Optimized parameter retrieval using WOA and the WSVM classification procedure are the two main components of the proposed WOA-WSVM approach for ultrasound image classification. The FS step receives the pre-processed images, and dictionary learning is applied in the feature retrieval process. The relevant characteristics are grouped by the area of interest and used with the WOA-WSVM classifier to label the different echo viewpoints, allowing the classifier to form complex decision boundaries in the feature space.

Along with the training samples and learning rate, the WOA-WSVM also includes several other parameters. The training procedure produces a lexicon used in the testing process[69]. Algorithm 2 provides the pseudocode for the WOA-WSVM classification method, and a Python sketch of the loop follows it.

Algorithm 2. WOA-WSVM technique for classification
Input: The number of whales (search agents) N and the maximum number of iterations Max_itr
Output: The optimized whale position C* and best fitness value ft(C*)
Step 1. Initialization
Step 2. itert ← 1; initialize the positions of the whales in the population (FS and SVM parameters)
Step 3. Evaluate the fitness of every whale in the search agent population
Step 4. While (itert < Max_itr) Do
Step 5. For each whale x Do
Step 6. Update the position of the whale:
Step 7. $$\vec{S}=\left|\vec{X} \cdot \overrightarrow{C^{*}}(t)-\vec{C}(t)\right|$$
Step 8. $$\vec{C}(t+1)=\overrightarrow{C^{*}}(t)-\vec{Z} \cdot \vec{S}$$
Step 9. $$\vec{C}(t+1)=\overrightarrow{C_{\text {rand }}}-\vec{Z} \cdot \vec{S}$$
Step 10. End For
Step 11. Approximate the whale position (FS and SVM parameters) of every individual whale
Step 12. Estimate the fitness value of every whale in the search agent population:
Step 13. $$f t(w, t)=\sum_{k=1}^{N} \frac{A c c y_{w, t, k}}{N}$$
Step 14. If the new solution is the best found so far, Then update C*
Step 15. End If
Step 16. itert ← itert + 1
Step 17. End While
Step 18. End
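
Below is a hedged Python sketch of the overall loop in Algorithm 2, using scikit-learn's RBF support vector classifier with per-sample weights in place of the authors' MATLAB implementation. Each whale encodes the SVM penalty C and kernel width gamma, the fitness is the k-fold cross-validated accuracy of Eq. (9), and woa_step and sample_weights refer to the earlier sketches; the population size, iteration count, and parameter bounds are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(params, X, y, sw, folds=5):
    """Eq. (9): mean accuracy over the folds for one whale (C, gamma)."""
    C, gamma = params
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    return cross_val_score(clf, X, y, cv=folds,
                           fit_params={"sample_weight": sw}).mean()

def woa_wsvm(X, y, n_whales=10, max_itr=20, seed=0):
    rng = np.random.default_rng(seed)
    sw = sample_weights(X, y)                    # Eq. (10) weights from the earlier sketch
    lo, hi = np.array([1e-2, 1e-4]), np.array([1e3, 1e1])  # assumed (C, gamma) search bounds
    pos = rng.uniform(lo, hi, size=(n_whales, 2))
    fit = np.array([fitness(p, X, y, sw) for p in pos])
    best, best_fit = pos[fit.argmax()].copy(), fit.max()
    for t in range(max_itr):
        z = 2 - 2 * t / max_itr                  # z decreases linearly from 2 to 0
        pos = np.clip(woa_step(pos, best, z, rng=rng), lo, hi)
        fit = np.array([fitness(p, X, y, sw) for p in pos])
        if fit.max() > best_fit:                 # keep the best whale found so far
            best, best_fit = pos[fit.argmax()].copy(), fit.max()
    C, gamma = best
    return SVC(C=C, gamma=gamma, kernel="rbf").fit(X, y, sample_weight=sw)
```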

RESULTS

The research uses 600 cardiac ultrasound images, of which 35% are used for training and 65% for testing. The images are distributed equally across all six classes, 100 per class. The resolution of each image is 300 × 340 at 300 dpi. The experiments use a Windows 11 OS with 8 GB of RAM, MATLAB R2022a, and a 500 GB hard disk.

Validation technique

K-fold cross-validation divides the data into k equal sections. In each round, k-1 sections are used for training the ML classifier, while the remaining section is used to evaluate the classification's effectiveness. K-fold cross-validation is used to assess the performance metrics.
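
For instance, with k = 5 the split-and-evaluate cycle can be written as the following generic sketch (not tied to the echo data or to a particular classifier):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

def kfold_accuracy(model, X, y, k=5):
    """Train on k-1 folds, test on the held-out fold, and average the fold accuracies."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        model.fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))
    return np.mean(scores)
```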

Analysis of classification of views and dataset description

The frames of the echo videos are used to acquire the images. The pre-processing and dictionary learning processes are discussed with illustrations, and the different views of the ultrasound (US) images are discussed in this section.

The dataset comprises 113 echo video sequences, each captured at a resolution of 320 × 240 pixels and a frame rate of 25 Hz. The videos cover distinct viewpoints, including (a) SCV; (b) SAV; (c) MEV; (d) LAV; (e) A2CV; and (f) A4CV. The videos' electrocardiogram (ECG) waveform facilitated the extraction of heart cycles synchronized at the R-wave peak. Manual labeling was conducted to categorize each video sequence into one of the six specified views. The dataset includes a variable number of videos and frames for each viewpoint, totaling 2,470 frames across all videos[70,71].

Figure 2A shows the input image, and Figure 2B shows how the MF removes the redundant noise in the image. The pre-processing of echo images enhances the image quality, preparing the image for further processing. The ultrasound image is separated into several blocks, each consisting of a collection of pixels, as shown in Figure 2C. In the learning phase, the dictionary-based learning process is performed, with 35% of the dataset used for learning, and the learning process is transformed into a lexicon. The remaining images are treated as testing images once the training procedure is complete. The views of the tricuspid and mitral valves are correctly identified from the input images using the WSVM approach with whale optimization. Figure 3 depicts the outcome for the different cardiac views.


Figure 2. The input of Ultrasound image, (A) Input, (B) Pre-Processed, (C) Dictionary Learning.


Figure 3. Different views of echo images, (A) SCV, (B) SAV, (C) MEV, (D) LAV, (E) A2CV, (F) A4CV.

The ultrasound image of the heart is shown in Figure 3 from different angles. The tricuspid and mitral valve views are classified from the input ultrasound images, and the several perspectives make it easier to examine the various valves. Figure 3A-F depict the different views, namely the sub-costal view (SCV), short axis view (SAV), mid-esophageal view (MEV), long axis view (LAV), apical two chamber view (A2CV), and apical four chamber view (A4CV). These views are further classified with the WOA-WSVM, and the views help in the identification of cardiac-related issues.

Analysis of classification performance

The classification performance is compared using the novel WOA-WSVM approach and the existing CNN-based echo view classification and AlexNet techniques. Performance evaluation uses measures of accuracy, precision, and recall. Accuracy refers to the percentage of correctly classified instances across all classes. Precision measures the percentage of correctly classified instances among all instances classified as a particular class, while recall measures the percentage of correctly classified instances of a particular class among all instances belonging to that class. The performance is given in Eqs. (13)-(15).

$$ \begin{equation} \begin{aligned} Accuracy =\frac{\text { Number of correctly classified instances }}{\text { Total number of instances }} \times 100 \end{aligned} \end{equation} $$

$$ \begin{equation} \begin{aligned} Precision =\frac{T P}{T P+F P} \times 100 \end{aligned} \end{equation} $$

$$ \begin{equation} \begin{aligned} Recall =\frac{T P}{T P+F N} \times 100 \end{aligned} \end{equation} $$
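
The per-view precision and recall of Eqs. (14) and (15) and the overall accuracy of Eq. (13) can be computed from predicted labels as in the sketch below; the class labels follow the six views used in this paper.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

VIEWS = ["SCV", "SAV", "MEV", "LAV", "A2CV", "A4CV"]

def report(y_true, y_pred):
    """Eq. (13): overall accuracy; Eqs. (14)-(15): per-view precision and recall, in %."""
    prec = 100 * precision_score(y_true, y_pred, labels=VIEWS, average=None, zero_division=0)
    rec = 100 * recall_score(y_true, y_pred, labels=VIEWS, average=None, zero_division=0)
    for view, p, r in zip(VIEWS, prec, rec):
        print(f"{view}: precision {p:.1f}%, recall {r:.1f}%")
    print(f"Overall accuracy: {100 * accuracy_score(y_true, y_pred):.1f}%")
```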

A comparison of the performance of existing and proposed techniques is given in Table 1.

Table 1

Comparison of performance

View | AlexNet precision (%) | AlexNet recall (%) | CNN-based CVC precision (%) | CNN-based CVC recall (%) | WOA-WSVM precision (%) | WOA-WSVM recall (%)
SCV | 88.3 | 96.9 | 97.9 | 93.2 | 99.6 | 95.6
SAV | 94.2 | 92.4 | 96.7 | 97.2 | 98.9 | 98.6
MEV | 96.3 | 98.4 | 97.1 | 98.3 | 98.9 | 99.4
LAV | 92.3 | 96.1 | 96.9 | 95.8 | 98.6 | 96.9
A2CV | 94.8 | 96.3 | 96.6 | 96.8 | 97.1 | 97.9
A4CV | 97.8 | 96.1 | 97.8 | 97.7 | 98.7 | 98.8
Overall accuracy (%) | 96.3 | | 97.2 | | 98.4 |
Runtime, GPU (ms) | 20.1 | | 10.8 | | 3.5 |
Runtime, CPU (ms) | 18.3 | | 20.5 | | 7.6 |

Table 1 summarizes the experimental findings from cross-validation using the three different network topologies, with validations shown for each frame. The best score is shown in bold.

The classification accuracy is given in Figure 4, where the comparison is made between the proposed WOA-WSVM and the existing techniques, namely AlexNet and CNN-based CVC. The accuracy of WOA-WSVM is 2.1% and 1.2% higher than that of AlexNet and CNN-based CVC, respectively. The accuracy of WOA-WSVM thus outperforms the existing state-of-the-art techniques.


Figure 4. Comparison of accuracy.

The precision is given in Figure 5, where the comparison is attained by the proposed WOA-WSVM and existing techniques, namely AlexNet and CNN-based CVC. The precision value is compared for different echo views: SCV, SAV, MEV, LAV, A2CV, and A4CV. The precision value of WOA-WSVM for the views SCV, SAV, MEV, LAV, A2CV, and A4CV is 11.3%, 4.7%, 2.6%, 6.3%, 2.3%, and 0.9% higher than AlexNet. The precision value of WOA-WSVM for the views SCV, SAV, MEV, LAV, A2CV, and A4CV is 1.7%, 2.2%, 1.8%, 1.7%, 0.35%, and 0.9% higher than CNN-based CVC. The precision value of WOA-WSVM outperforms the existing state-of-the-art technique.


Figure 5. Comparison of precision.

The recall is given in Figure 6, where the comparison is attained by the proposed WOA-WSVM and existing techniques, namely AlexNet and CNN-based CVC. The recall value is compared for different echo views: SCV, SAV, MEV, LAV, A2CV, and A4CV. The recall value of WOA-WSVM for the views SCV, SAV, MEV, LAV, A2CV, and A4CV is 2.7%, 6.5%, 0.5%, 2.5%, 0.8%, and 2.6% higher than AlexNet. The recall value of WOA-WSVM for the views SCV, SAV, MEV, LAV, A2CV, and A4CV is 6.4%, 1.7%, 0.6%, 2.8%, 0.3%, and 1% higher than CNN-based CVC. The recall value of WOA-WSVM outperforms the existing state-of-the-art technique.


Figure 6. Comparison of recall.

Figure 7 compares the overall Graphics Processing Unit (GPU) and Central Processing Unit (CPU) times. The times are stated in milliseconds (ms), and the proposed WOA-WSVM approach outperforms the existing state-of-the-art techniques by achieving minimal time.


Figure 7. Comparison of GPU and CPU.

DISCUSSION

The proposed approach achieves higher recall and precision rates. The accuracy of the WOA-WSVM is 98.4%, which is higher than that of the other approaches. It combines whale optimization with an ML classifier to optimize the parameters in the training and testing phases, enhancing cardiac disease identification with minimal processing time. The WSVM enhances the classification of diverse perspectives concerning echo motion and anatomical behavior. The WOA-WSVM attains 3.5 ms (GPU) and 7.6 ms (CPU), which is minimal compared with the existing techniques, namely AlexNet[40] and CNN-based CVC[45]. The efficient identification of different views of the heart assists in identifying cardiac disease from different perspectives.

The comparison between AlexNet[40], CNN-based CVC[45], and WOA-WSVM for the various validation tasks highlights WOA-WSVM's superior performance. In terms of precision and recall, WOA-WSVM consistently outperforms the other methods across the SCV, SAV, MEV, LAV, and A2CV tasks, indicating higher accuracy in identifying positive instances and minimizing false positives. Furthermore, WOA-WSVM achieves the highest overall accuracy at 98.4%, compared to CNN-based CVC's 97.2%[45] and AlexNet's 96.3%[40]. WOA-WSVM also excels in runtime efficiency on both GPU and CPU, making it the most effective and efficient of the three techniques for the given tasks.

The research does detail cross-validation and testing, and it also analyzes more patient information in order to ensure that its results are accurate. Based on current field research, the study compares two main classifications with the recommended approach, which is both accurate and feasible. Additionally, the research uses the WOA-WSVM-ML algorithm to determine the transducer perspective from cardiac echo videos. It aims to provide automated, reliable, and real-time view detection techniques, making them more accessible for medical practitioners. The study also explores the use of these techniques for automatic 2D view extraction and direction control to locate the best views in 2D Ultrasound images.

The limitations of the study include dependency on correct transducer positioning, complexity in viewpoint determination, reliance on robust algorithms, and potential inaccuracies despite high classification accuracy (98.4%) with WOA-WSVM.

The approach can be extended using mathematical modeling to enhance the classification when the dataset is large. The current research considers the SCV, SAV, MEV, LAV, A2CV, and A4CV views; future studies will consider additional views (A2C, A3C, A4C, A5C, PLA, PSAB, PSAP, and PSAM) and relevant studies related to cardiac disease.

Conclusion

The classification of echocardiogram (echo) views using different state-of-the-art techniques was examined in this research, and results for conventional 2D echocardiography were obtained. The WOA-WSVM attained high accuracy for real-time inference with limited training parameters. While the initial demonstrations are promising, the results indicate that 2D data can be used to improve view guidance; real-time quality control and guidance from ultrasound images, using 2D volume slices for training, can further improve outcomes. The proposed approach achieves excellent recall and precision rates. The WOA-WSVM achieves 98.4% accuracy, optimizing parameters with whale optimization, enhancing cardiac disease identification, and reducing processing time to 3.5 ms (GPU) and 7.6 ms (CPU). The WOA is combined with an ML classifier to optimize the parameters in the training and testing phases, and the weighted SVM improves the classification of different viewpoints with respect to cardiac motion and anatomical behavior.

DECLARATIONS

Authors’ contributions

Software, validation: Canqui-Flores B, Melgarejo-Bolivar RP

Supervision, project administration: Tumi-Figueroa A

Software: Thirukumaran S

Review, statistical analysis: Devi GM

Conceptualization, formal analysis, data collection, methodology, software, validation, writing-original draft: Sengan S

Availability of data and materials

Not applicable.

Financial support and sponsorship

None.

Conflicts of interest

All authors declared that there are no conflicts of interest.

Ethical approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Copyright

© The Author(s) 2024.

REFERENCES

1. Galea N, Catapano F, Marchitelli L, et al. How to perform a cardio-thoracic magnetic resonance imaging in COVID-19: comprehensive assessment of heart, pulmonary arteries, and lung parenchyma. Eur Heart J Cardiovasc Imaging 2021;22:728-31.

2. Rischard J, Waldmann V, Moulin T. Assessment of heart rhythm disorders using the AliveCor heart monitor: beyond the detection of atrial fibrillation. JACC Clin Electrophysiol 2020;6:1313-5.

3. Pollard JD, Haq KT, Lutz KJ, et al. Electrocardiogram machine learning for detection of cardiovascular disease in African Americans: the jackson heart study. Eur Heart J Digit Health 2021;2:137-51.

4. Kaisti M, Panula T, Leppänen J, et al. Clinical assessment of a non-invasive wearable MEMS pressure sensor array for monitoring of arterial pulse waveform, heart rate and detection of atrial fibrillation. NPJ Digit Med 2019;2:39.

5. Hagendorff A, Knebel F, Helfen A, et al. Echocardiographic assessment of mitral regurgitation: discussion of practical and methodologic aspects of severity quantification to improve diagnostic conclusiveness. Clin Res Cardiol 2021;110:1704-33.

6. Sandino CM, Lai P, Vasanawala SS, Cheng JY. Accelerating cardiac cine MRI using a deep learning-based ESPIRiT reconstruction. Magn Reson Med 2021;85:152-67.

7. Kusunose K, Abe T, Haga A, et al. A deep learning approach for assessment of regional wall motion abnormality from echocardiographic images. JACC Cardiovasc Imaging 2020;13:374-81.

8. Litjens G, Ciompi F, Wolterink JM, et al. State-of-the-Art deep learning in cardiovascular image analysis. JACC Cardiovasc Imaging 2019;12:1549-65.

9. Chen TM, Huang CH, Shih ESC, Hu YF, Hwang MJ. Detection and classification of cardiac arrhythmias by a challenge-best deep learning neural network model. iScience 2020;23:100886.

10. Alotaibi FS. Implementation of machine learning model to predict heart failure disease. Int J Adv Comput Sci Appl 2019;10:261-8.

11. Feeny AK, Chung MK, Madabhushi A, et al. Artificial intelligence and machine learning in arrhythmias and cardiac electrophysiology. Circ Arrhythm Electrophysiol 2020;13:e007952.

12. Krittanawong C, Rogers AJ, Johnson KW, et al. Integration of novel monitoring devices with machine learning technology for scalable cardiovascular management. Nat Rev Cardiol 2021;18:75-91.

13. Zhuang Z, Liu G, Ding W, et al. Cardiac VFM visualization and analysis based on YOLO deep learning model and modified 2D continuity equation. Comput Med Imaging Graph 2020;82:101732.

14. Smistad E, Ostvik A, Salte IM, et al. Real-time automatic ejection fraction and foreshortening detection using deep learning. IEEE Trans Ultrason Ferroelectr Freq Control 2020;67:2595-604.

15. Arafati A, Hu P, Finn JP, et al. Artificial intelligence in pediatric and adult congenital cardiac MRI: an unmet clinical need. Cardiovasc Diagn Ther 2019;9:S310-25.

16. Saba SS, Sreelakshmi D, Sampath Kumar P, Sai Kumar K, Saba SR. Logistic regression machine learning algorithm on MRI brain image for fast and accurate diagnosis. Int J Sci Technol Res 2020;9:7076-81. Available from: http://www.ijstr.org/final-print/mar2020/Logistic-Regression-Machine-Learning-Algorithm-On-Mri-Brain-Image-For-Fast-And-Accurate-Diagnosis.pdf [Last accessed on 4 Jul 2024].

17. Sahu AK, Swain G. Reversible image steganography using dual-layer LSB matching. Sens Imaging 2020;21.

18. Banchhor C, Srinivasu N. Integrating cuckoo search-grey wolf optimization and correlative naive bayes classifier with map reduce model for big data classification. Data Knowl Eng 2020;127:101788.

19. Gorla US, Rao K, Kulandaivelu US, Alavala RR, Panda SP. Lead finding from selected flavonoids with antiviral (SARS-CoV-2) potentials against COVID-19: an in-silico evaluation. Comb Chem High Throughput Screen 2021;24:879-90.

20. Niranjan A, Venkata KS, P Deepa S, Venugopal KR. ERCRFS: ensemble of random committee and random forest using stackingC for phishing classification. Int J Emerg Trends Eng Res 2020;8:79-86.

21. Mubarakali A, Ashwin M, Mavaluru D, Kumar AD. Design an attribute based health record protection algorithm for healthcare services in cloud environment. Multimed Tools Appl 2020;79:3943-56.

22. Doppala BP, Midhunchakkravarthy, Bhattacharyya D. Premature detection of cardiomegaly using hybrid machine learning technique. J Adv Res Dyn Control Syst 2020;12:490-8. Available from: https://www.jardcs.org/abstract.php?id=4619 [Last accessed on 4 Jul 2024].

23. Saikumar K, Rajesh V. A novel implementation heart diagnosis system based on random forest machine learning technique. Int J Pharm Rese 2020;12:3904-16.

24. Elsheikh AH, Muthuramalingam T, Shanmugan S, et al. Fine-tuned artificial intelligence model using pigeon optimizer for prediction of residual stresses during turning of inconel 718. J Mater Res Technol 2021;15:3622-34.

25. Ramesh KKD, Kumar GK, Swapna K, Datta D, Rajest SS. A review of medical image segmentation algorithms. EAI Endors Trans Pervas Health Technol 2021;7:27.

26. Kavitha T, Mathai PP, Karthikeyan C, et al. Deep learning based capsule neural network model for breast cancer diagnosis using mammogram images. Interdiscip Sci 2022;14:113-29.

27. Pareek PK, Sridhar C, Kalidoss R, et al. IntOPMICM: intelligent medical image size reduction model. J Healthc Eng 2022;2022:5171016.

28. Kumar EK, Kishore P, Kiran Kumar MT, Kumar DA. 3D sign language recognition with joint distance and angular coded color topographical descriptor on a 2 - stream CNN. Neurocomputing 2020;372:40-54.

29. Saikumar K, Rajesh V, Babu BS. Heart disease detection based on feature fusion technique with augmented classification using deep learning technology. Treat Signal 2022;39:31-42.

30. Rao KS, Samyuktha W, Vardhan DV, et al. Design and sensitivity analysis of capacitive MEMS pressure sensor for blood pressure measurement. Microsyst Technol 2020;26:2371-9.

31. Ahammad SH, Rajesh V, Rahman MZU, Lay-ekuakille A. A hybrid CNN-based segmentation and boosting classifier for real time sensor spinal cord injury data. IEEE Sensors J 2020;20:10092-101.

32. SaiSowmya B, Radha V, Kiran I, PavanKumar T. An efficient way of detecting change in tsunami disaster using CNN. J Adv Res Dyn Control Syst 2020;12:1128-33. Available from: https://www.jardcs.org/abstract.php?id=4369 [Last accessed on 4 Jul 2024].

33. Sengan S, Sagar PV, Ramesh R, Khalaf OI, Dhanapal R. The optimization of reconfigured real-time datasets for improving classification performance of machine learning algorithms. Math Eng Sci Aerosp 2021;12:43-54. Available from: http://nonlinearstudies.com/index.php/mesa/article/view/2497 [Last accessed on 4 Jul 2024].

34. Rachapudi V, Lavanya Devi G. Improved convolutional neural network based histopathological image classification. Evol Intel 2021;14:1337-43.

35. Routray S, Malla PP, Sharma SK, Panda SK, Palai G. A new image denoising framework using bilateral filtering based non-subsampled shearlet transform. Optik 2020;216:164903.

36. Abi B, Acciarri R, Acero MA, et al. DUNE Collaboration. Neutrino interaction classification with a convolutional neural network in the DUNE far detector. Phys Rev D 2020;102:092003.

37. Reddy AVN, Krishna CP, Mallick PK. An image classification framework exploring the capabilities of extreme learning machines and artificial bee colony. Neural Comput Appl 2020;32:3079-99.

38. Mandhala VN, Bhattacharyya D, B V, Rao N T. Object detection using machine learning for visually impaired people. Int J Curr Res Rev 2020;12:157-67. Available from: https://ijcrr.com/uploads/3009_pdf.pdf [Last accessed on 4 Jul 2024].

39. Bhimavarapu U, Battineni G. Skin lesion analysis for melanoma detection using the novel deep learning model fuzzy GC-SCNN. Healthcare 2022;10:962.

40. Rani S, Ghai D, Kumar S, Kantipudi MP, Alharbi AH, Ullah MA. Efficient 3D AlexNet architecture for object recognition using syntactic patterns from medical images. Comput Intell Neurosci 2022;2022:7882924.

41. Sudha GS, Praveena M, Rani GS, Harish TNSK, Charisma A, Asish A. Classification and detection of diabetic retinopathy using deep learning. Int J Sci Technol Res 2020;9:3186-92. Available from: https://www.ijstr.org/final-print/apr2020/Classification-And-Detection-Of-Diabetic-Retinopathy-Using-Deep-Learning.pdf [Last accessed on 4 Jul 2024].

42. Srihari D, Kishore PVV, Kumar EK, et al. A four-stream ConvNet based on spatial and depth flow for human action classification using RGB-D data. Multimed Tools Appl 2020;79:11723-46.

43. Praveen SP, Murali Krishna TB, Anuradha CH, Mandalapu SR, Sarala P, Sindhura S. A robust framework for handling health care information based on machine learning and big data engineering techniques. Int J Healthc Manag 2022; doi: 10.1080/20479700.2022.2157071.

44. Nurmaini S, Rachmatullah MN, Sapitri AI, et al. Deep learning-based computer-aided fetal echocardiography: application to heart standard view segmentation for congenital heart defects detection. Sensors 2021;21:8007.

45. Østvik A, Smistad E, Aase SA, Haugen BO, Lovstakken L. Real-time standard view classification in transthoracic echocardiography using convolutional neural networks. Ultrasound Med Biol 2019;45:374-84.

46. Prabu S, Thiyaneswaran B, Sujatha M, Nalini C, Rajkumar S. Grid search for predicting coronary heart disease by tuning hyper-parameters. Comput Syst Sci Eng 2022;43:737-49.

47. Saikumar K, Rajesh V, Hasane Ahammad SK, Sai Krishna M, Sai Pranitha G, Ajay Kumar Reddy R. CAB for heart diagnosis with RFO artificial intelligence algorithm. Int J Res Pharm Sci 2020;11:1199-205. Available from: https://ijrps.com/home/article/view/762 [Last accessed on 4 Jul 2024].

48. Pande SD, Chetty MS. Linear bezier curve geometrical feature descriptor for image recognition. Recent Adv Comput Sci Commun 2020;13:930-41.

49. Hazarika BB, Gupta D. 1-norm random vector functional link networks for classification problems. Complex Intell Syst 2022;8:3505-21.

50. Doppala BP, Bhattacharyya D, Janarthanan M, Baik N. A reliable machine intelligence model for accurate identification of cardiovascular diseases using ensemble techniques. J Healthc Eng 2022;2022:2585235.

51. Priyanka Chandra C, Thirupathi Rao N, Debnath B, Tai-hoon K. Segmentation of natural images with K-means and hierarchical algorithm based on mixture of pearson distributions. J Sci Indust Res 2021;80:707-15.

52. Krishna PR, Rajarajeswari P. EapGAFS: microarray dataset for ensemble classification for diseases prediction. Int J Recent Innov Trends Comput Commun 2022;10:1-15.

53. Rani S, Lakhwani K, Kumar S. Three dimensional objects recognition & pattern recognition technique; related challenges: a review. Multimed Tools Appl 2022;81:17303-46.

54. Saikumar K, Rajesh V. A machine intelligence technique for predicting cardiovascular disease (CVD) using radiology dataset. Int J Syst Assur Eng Manag 2024;15:135-51.

55. Nyemeesha V, Ismail BM. Implementation of noise and hair removals from dermoscopy images using hybrid Gaussian filter. Netw Model Anal Health Inform Bioinfor 2021;10:49.

56. Tripathy R, Nayak RK, Das P, Mishra D, Patnaik S. Cellular cholesterol prediction of mammalian ATP-binding cassette (ABC) proteins based on fuzzy c-means with support vector machine algorithms. J Intell Fuzzy Syst 2020;39:1611-8.

57. Vijayalakshmi A, Ghali VS, Chandrasekhar Yadav GVP, Gopitilak V, Parvez M M. Machine learning based automatic defect detection in non-stationary thermal wave imaging. ARPN J Eng Appl Sci 2020;15:172-8. Available from: https://www.arpnjournals.org/jeas/research_papers/rp_2020/jeas_0120_8082.pdf [Last accessed on 4 Jul 2024].

58. Brahmane AV, Krishna CB. Rider chaotic biography optimization-driven deep stacked auto-encoder for big data classification using spark architecture: rider chaotic biography optimization. Int J Web Serv Res 2021;18:42-62.

59. Inthiyaz S, Ahammad SH, Sai Krishna A, Bhargavi V, Govardhan D, Rajesh V. YOLO (you only look once) making object detection work in medical imaging on convolution detection system. Int J Pharm Res 2020;12:312-26.

60. Swathi K, Kodukula S. XGBoost classifier with hyperband optimization for cancer prediction based on geneselection by using machine learning techniques. Revue Intell Artif 2022;36:665-70.

61. Gowroju S, Aarti, Kumar S. Review on secure traditional and machine learning algorithms for age prediction using IRIS image. Multimed Tools Appl 2022;81:35503-31.

62. Dakshina Murthy AS, Karthikeyan T, Omkar Lakshmi Jagan B. Clinical model machine learning for gait observation cardiovascular disease diagnosis. Int J Pharm Res 2024;16:3373-8. Available from: http://www.ijpronline.com/ViewArticleDetail.aspx?ID=18315 [Last accessed on 4 Jul 2024].

63. Katragadda T, Srinivas M, Prakash KB, Kumar TP. Heart disease diagnosis using ANN, RNN and CNN. Int J Adv Sci Technol 2020;29:2232-9. Available from: http://sersc.org/journals/index.php/IJAST/article/view/8427 [Last accessed on 4 Jul 2024].

64. Siva Kumar P, Anbazhaghan N, Razia S, Sivani M, Pravalika S, Harshini AS. Prediction of cardiovascular disease using classification techniques with high accuracy. J Adv Res Dyn Control Syst 2020;12:1134-9. Available from: https://www.jardcs.org/abstract.php?id=4370 [Last accessed on 4 Jul 2024].

65. Velliangiri S, Pandiaraj S, Joseph S IT, Muthubalaji S. Multiclass recognition of AD neurological diseases using a bag of deep reduced features coupled with gradient descent optimized twin support vector machine classifier for early diagnosis. Concurr Comput 2022;34:e7099.

66. Noi PT, Kappas M. Comparison of random forest, k-nearest neighbor, and support vector machine classifiers for land cover classification using sentinel-2 imagery. Sensors 2017;18:18.

67. Tzotsos A, Argialas D. Support vector machine classification for object-based image analysis. In: Blaschke T, Lang S, Hay GJ, editors. Object-based image analysis. Berlin Heidelberg: Springer; 2008. pp. 663-77.

68. Ghezelbash R, Maghsoudi A, Carranza EJM. Performance evaluation of RBF- and SVM-based machine learning algorithms for predictive mineral prospectivity modeling: integration of S-A multifractal model and mineralization controls. Earth Sci Inform 2019;12:277-93.

69. Zhou F, Huang S, Xing Y. Deep semantic dictionary learning for multi-label image classification. In: the thirty-fifth AAAI conference on artificial intelligence (AAAI-21). Available from: https://cdn.aaai.org/ojs/16472/16472-13-19966-1-2-20210518.pdf [Last accessed on 4 Jul 2024].

70. ECHO. 2022. Available from: https://www.kaggle.com/c/echo2022 [Last accessed on 4 Jul 2024].

71. EchoNet-dynamic. Available from: https://stanfordaimi.azurewebsites.net/datasets/834e1cd1-92f7-4268-9daa-d359198b310a [Last accessed on 4 Jul 2024].


About This Article

Special Issue

This article belongs to the Special Issue Artificial Intelligence in Cardiology: A New Era Has Come
© The Author(s) 2024. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
