
International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 12 Issue: 05 | May 2025 www.irjet.net p-ISSN: 2395-0072
Sandesh Dandge1, Pranav Indore2, Vinayak Vairat3, Mahesh Wagh4, R. B. Murumkar5
1Department of Information Technology, SCTR's PICT Dhankawadi, Pune
2Department of Information Technology, SCTR's PICT Dhankawadi, Pune
3Department of Information Technology, SCTR's PICT Dhankawadi, Pune
4Department of Information Technology, SCTR's PICT Dhankawadi, Pune
5Assistant Professor, Department of Information Technology, SCTR's PICT Dhankawadi, Pune
***
Abstract - This survey paper presents a comprehensive analysis of recent advancements in Indian Sign Language (ISL) detection systems, emphasizing the role of various machine learning techniques for both static and gesture-based sign recognition. It investigates the performance of multiple model architectures, including Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, Support Vector Machines (SVM), Inception models, Recurrent Neural Networks (RNN), Hidden Markov Models (HMM), and Random Forest classifiers, in accurately classifying sign gestures from video data. The study highlights preprocessing methodologies such as key point extraction and dataset preparation to enhance model training effectiveness. Additionally, the paper discusses the evaluation metrics employed to measure model performance and underscores the importance of selecting models tailored to the unique characteristics of ISL gestures. The findings highlight the necessity for improved recognition systems that are culturally relevant and capable of accommodating the distinctive features of Indian Sign Language, ultimately paving the way for more accessible communication solutions for the Deaf and Hard-of-Hearing (DHH) community in India.
Key Words: Indian Sign Language, Gesture Recognition, Machine Learning, Convolutional Neural Networks, Long Short-Term Memory, Random Forest, Hidden Markov Models
1. INTRODUCTION
Communication is a fundamental human right, yet individuals within the Deaf and Hard-of-Hearing (DHH) community often face significant barriers in accessing and utilizing technology tailored to their needs. Indian Sign Language (ISL) serves as the primary mode of communication for millions in India, yet existing technological solutions frequently overlook the nuances inherent in this language. Recent advancements in machine learning have opened new avenues for enhancing ISL recognition systems, facilitating more effective communication between the DHH community and broader society.
This survey paper aims to synthesize findings from eight recent research papers focused on developing and evaluating sign language detection systems. We explore methodologies employed in recognizing both static signs, which involve fixed hand shapes, and dynamic gestures that convey meaning through movement. By examining the strengths and limitations of various machine learning models, including CNNs, LSTMs, SVMs, Inception models, RNNs, HMMs, and Random Forest classifiers, we provide insights into the effectiveness of different approaches in accurately recognizing sign language gestures.
Moreover, this paper highlights essential preprocessing techniques for preparing datasets for model training and emphasizes the importance of evaluating model performance through various metrics. Through this comprehensive review, we aim to underscore the significance of developing culturally relevant and accessible technology that bridges communication gaps for the DHH community in India. Our research not only contributes to the existing body of knowledge in the field but also paves the way for future advancements prioritizing inclusivity and accessibility in technology for all individuals.
2. LITERATURE SURVEY
2.1 Indian Sign Language Recognition Using Random Forest Classifier [1]
This paper introduces a system to bridge the communication gap for speech-impaired individuals by recognizing Indian Sign Language (ISL) gestures. The system uses sensor-equipped gloves integrated with various sensors (flex sensors, IMU, touch sensors) to capture hand gestures. The machine learning algorithm, specifically a Random Forest classifier, processes this data to recognize gestures and convert them into speech through a mobile app.
• Models/Requirements:
Hardware: Arduino Nano microcontrollers, RF and Bluetooth modules, flex sensors, and IMU sensors. Machine Learning Model: Random Forest Classifier.

The Random Forest classifier was chosen because of its ability to handle missing data, quick training, and high accuracy in gesture classification.
• Workflow:
The system uses sensor gloves equipped with various sensors that capture hand and finger movements. Flex sensors detect the bending of fingers, while touch sensors detect whether a finger is fully bent. IMU sensors capture angular velocity and acceleration along the X, Y, and Z axes. The collected sensor data is sent to the Random Forest classifier, which processes and classifies the gestures. The classification results are sent via Bluetooth to a mobile app, where the recognized gesture is converted to speech (a minimal code sketch of this classification stage is given at the end of this subsection).
• Dataset Used and Accuracy Achieved:
The dataset consists of 10 basic gestures, with data collected by performing each gesture 50 times to create a training set. The model achieved an accuracy of 96.66%, proving its effectiveness in recognizing different gestures despite their similarities.
• Future Direction and Improvements:
Future work could focus on increasing the dataset to include more gestures, words, and phrases to improve the system's generalization. Additionally, improving the hardware design of the gloves by making them lighter and integrating printed circuit boards (PCBs) could improve usability. Two-way communication could be established to allow gesture-to-speech conversion for the impaired user and speech-to-text conversion for the hearing user.
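As a rough illustration of the classification stage described in the workflow above, the following Python sketch trains a scikit-learn Random Forest on gesture feature vectors. The sensor feature layout (5 flex + 5 touch + 6 IMU readings), the synthetic data, and the train/test split are assumptions for illustration; the paper does not specify the exact feature encoding.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

NUM_GESTURES = 10          # the paper reports 10 basic gestures
SAMPLES_PER_GESTURE = 50   # each gesture performed 50 times
NUM_FEATURES = 5 + 5 + 6   # assumed layout: 5 flex, 5 touch, 6 IMU (gyro + accel on X/Y/Z)

# Stand-in for the recorded glove data: rows are samples, columns are sensor readings.
rng = np.random.default_rng(0)
X = rng.normal(size=(NUM_GESTURES * SAMPLES_PER_GESTURE, NUM_FEATURES))
y = np.repeat(np.arange(NUM_GESTURES), SAMPLES_PER_GESTURE)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

In a deployment like the one described, the same predict call would run on live sensor frames before the label is forwarded over Bluetooth to the mobile app.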
2.2 Hybrid InceptionNet Based Enhanced Architecture for Isolated Sign Language Recognition [2]
This paper presents a hybrid deep learning architecture using InceptionV4 and a Vanilla CNN for recognizing isolated sign language gestures from video frames. The goal is to improve recognition accuracy by combining ensemble learning and deep learning techniques. The system is tested on the IISL-2020 dataset of isolated Indian Sign Language gestures.
• Models/Requirements:
Ensemble Model: Combining a Vanilla CNN and InceptionV4. InceptionV4 uses convolutional filters of different sizes (1x1, 3x3, 5x5) to capture multi-scale features, while the Vanilla CNN provides baseline efficiency. Ensemble learning is applied to improve overall prediction accuracy.
• Workflow:
Input frames of gestures are processed through both the Vanilla CNN and InceptionV4 models in parallel. The InceptionV4 model captures complex spatial patterns using multi-scale filters, while the Vanilla CNN focuses on extracting simpler features. The outputs of both models are combined using ensemble learning with a weighted average to improve accuracy. Softmax activation is applied to classify the gesture into one of the predefined sign classes (a minimal sketch of this weighted-average ensemble is given at the end of this subsection).
• Dataset Used and Accuracy Achieved:
The dataset used is the IISL-2020 dataset, consisting of 11 Indian sign gestures such as "Hello", "Good", "Morning", etc., with 15 videos per class. The proposed ensemble model achieved an accuracy of 98.46%, outperforming other state-of-the-art models for isolated sign language recognition.
• Future Direction and Improvements:
Model Size Optimization: The proposed model is computationally heavy; reducing model size and computational cost can make it suitable for real-time applications. Dataset Expansion: Adding more gestures, including dynamic gestures, could improve the system's real-world applicability. Data Augmentation: Augmentation techniques could help generalize the model to unseen signs or gestures.
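A minimal Keras sketch of the weighted-average ensemble idea is given below. Both branches are small placeholder CNNs standing in for the paper's Vanilla CNN and InceptionV4 (which is not bundled with Keras); the input resolution, branch architectures, and ensemble weights are assumptions for illustration.

from tensorflow.keras import Input, Model, layers

NUM_CLASSES = 11  # IISL-2020 contains 11 isolated gestures

def branch(name):
    """Small placeholder CNN ending in a softmax over the sign classes."""
    inp = Input(shape=(224, 224, 3))
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return Model(inp, out, name=name)

vanilla = branch("vanilla_cnn")          # stands in for the Vanilla CNN
inception = branch("inception_branch")   # stands in for InceptionV4

frames = Input(shape=(224, 224, 3))
p_vanilla, p_inception = vanilla(frames), inception(frames)
# Weighted average of the two probability vectors (weights are illustrative and sum to 1).
ensemble_probs = layers.Lambda(lambda p: 0.4 * p[0] + 0.6 * p[1])([p_vanilla, p_inception])
ensemble = Model(frames, ensemble_probs, name="weighted_ensemble")
ensemble.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
ensemble.summary()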
2.3 Comparative Analysis on Datasets for Sign Language Detection System [3]
This paper focuses on developing a Sign Language Detection System using Convolutional Neural Networks (CNN) to recognize gestures and convert them into text and image outputs. The research analyzes different input data formats, images and CSV files, to evaluate their impact on model efficiency and accuracy. The goal is to improve communication for people with hearing and speech impairments by providing an effective, real-time sign language recognition system.
• Models/Requirements:
CNN is the primary model used due to its strong image classification capabilities. The CNN architecture includes convolutional layers for feature extraction, pooling layers for dimensionality reduction, and fully connected layers for classification. A VGG16-based model is used for feature extraction because of its efficiency in object recognition. The RMSprop optimizer is applied for training on the image dataset, and the Adam optimizer for the CSV dataset.

• Workflow:
The system uses two types of datasets: a CSV dataset with alphabet-based sign language data from the Sign Language MNIST dataset and an image dataset with number-based sign language images. The CNN is trained separately on both datasets. For the image dataset, RGB segmentation and contour-based image segmentation are applied to improve feature extraction. The model has a sequential architecture with three convolutional layers followed by pooling and fully connected layers (a minimal sketch of this architecture is given at the end of this subsection). After training, the system predicts sign gestures from a live camera feed, displaying output as both text and images.
• Dataset Used and Accuracy Achieved:
The CSV dataset contains 27,455 training and 7,172 testing samples representing alphabets (except "J" and "Z", which require motion) in CSV format from grayscale 28x28 pixel images, achieving 93.8% accuracy. The image dataset includes 1,500 training and 300 testing JPEG images of sign language numbers (0-9) converted to RGB, with 87% accuracy. The CSV dataset outperformed the image dataset in accuracy and training time, achieving a validation accuracy of 98.77%, while the image dataset faced challenges due to class imbalance and lower resolution.
• Future Direction and Improvements:
Future work involves incorporating dynamic gesture recognition using models like LSTM to handle motion-based gestures such as "J" and "Z", integrating facial expression analysis to capture emotions for enhanced communication, expanding the dataset to include more gestures and expressions using data augmentation to improve generalization, and optimizing the model for faster real-time processing through pruning, quantization, or efficient architectures like MobileNetV2.
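The sequential architecture described in the workflow (three convolutional layers, pooling, and fully connected layers) can be sketched in Keras roughly as follows; the filter counts, dense layer width, and 24-class output (A-Z without the motion-based J and Z) are assumptions, since the paper does not list them explicitly.

from tensorflow.keras import layers, models

NUM_CLASSES = 24  # assumed: alphabets A-Z excluding the motion-based J and Z

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),  # grayscale 28x28 images reconstructed from the CSV rows
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
# The paper pairs Adam with the CSV dataset and RMSprop with the image dataset.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()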
2.4 American Sign Language Real Time Detection Using TensorFlow and Keras in Python [4]
This paper presents a system for real-time American Sign Language (ASL) detection using TensorFlow and Keras, aimed at enhancing communication for individuals with hearing impairments by detecting ASL gestures and converting them into textual and spoken language. The system employs convolutional neural networks (CNNs) for real-time recognition and integrates a text-to-speech (TTS) system for vocal output.
• Models/Requirements:
A CNN architecture built with Keras processes thresholded images of hand gestures to predict corresponding ASL signs. The system uses Pyttsx3, a text-to-speech library, to convert recognized text into speech. The model is implemented in Python, utilizing OpenCV for image processing and TensorFlow as the backend for deep learning.
• Workflow:
Real-time video input is captured frame-by-frame and preprocessed by thresholding to remove the background. Image augmentation techniques such as flipping and blurring are applied to increase data variability. The thresholded images are split into training and testing datasets, and the CNN model is trained to recognize different ASL signs. In real time, the system predicts ASL signs from input frames and converts predictions into speech using Pyttsx3 to improve accessibility (a minimal sketch of this real-time loop is given at the end of this subsection).
• Dataset Used and Accuracy Achieved:
The study uses a custom dataset capturing images of individuals performing various ASL signs. Data augmentation through flipping and blurring enhances training diversity. The model achieves an accuracy of 97%, demonstrating high effectiveness in real-time ASL gesture recognition.
• Future Direction and Improvements:
Future work includes integrating LSTM to enable dynamic gesture recognition by processing the temporal sequence of hand movements. Emotion and expression detection can be added through facial expression recognition using a sequential model combining LSTM for motion capture, a Gaussian Hidden Markov Model (GHMM) for modeling gesture and expression transitions, and Random Forest for final classification. This approach improves accuracy and context-awareness by combining gesture and facial emotion data. Expanding the dataset with more complex gestures and facial expressions will enhance generalization for dynamic, real-world scenarios.
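The real-time loop referenced in the workflow can be sketched as below: threshold the captured frame with OpenCV, classify it with the trained Keras model, and voice the result with Pyttsx3. The input resolution, the use of Otsu thresholding, and the model/labels objects (produced by the training stage) are assumptions for illustration.

import cv2
import numpy as np
import pyttsx3

IMG_SIZE = 64  # assumed input resolution for the CNN

def predict_and_speak(model, frame_bgr, labels, engine):
    """Threshold a BGR frame, classify it with the trained CNN, and speak the label."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding stands in for the paper's background-removal step.
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    resized = cv2.resize(thresh, (IMG_SIZE, IMG_SIZE)).astype("float32") / 255.0
    probs = model.predict(resized.reshape(1, IMG_SIZE, IMG_SIZE, 1), verbose=0)[0]
    label = labels[int(np.argmax(probs))]
    engine.say(label)
    engine.runAndWait()
    return label

# Usage sketch (model and labels come from the training stage described above):
# engine = pyttsx3.init()
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# if ok:
#     print(predict_and_speak(model, frame, labels, engine))
# cap.release()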
2.5 Pose Detection Using OpenCV and Media Pipe [5]
This paper presents a system for pose detection using OpenCV and Media Pipe, designed to assist users in maintaining correct posture during exercises by providing real-time feedback. The system identifies body landmarks, computes joint angles, and corrects posture to prevent injuries, with applications extending to healthcare for monitoring hand movements.
• Models/Requirements:
OpenCV is used for image processing and real-time webcam feed handling. Media Pipe provides pose estimation to detect body landmarks. A k-NN algorithm classifies posture based on joint coordinates.

• Workflow:
Live video input is captured via webcam while users perform exercises. Media Pipe analyzes video frames to detect key body joints and computes angles between them (a minimal sketch of landmark extraction and angle computation is given at the end of this subsection). The k-NN algorithm is trained to recognize correct postures from joint coordinates and provides real-time feedback to the user. The system displays the number of repetitions completed along with any posture corrections needed.
• Dataset Used and Accuracy Achieved:
Data was collected from 10 volunteers performing pull-ups, push-ups, and squats, with each exercise repeated 10 times. Accuracy achieved was 92% for pull-ups, 83% for push-ups, and 78% for squats, with lower accuracy in push-ups and squats attributed to greater variation in users' postures.
• Future Direction and Improvements:
Future enhancements include integrating LSTM for dynamic pose recognition to better handle temporal variations in exercises, incorporating emotion recognition via facial expressions for context-aware feedback using a sequential model (LSTM → GHMM → Random Forest), expanding exercise variety to include lunges and jumping jacks for versatility, enabling integration with fitness tracking devices for comprehensive monitoring, and improving accuracy by applying face super-resolution techniques to enhance pose estimation under low-light or low-resolution conditions.
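The landmark-and-angle step mentioned in the workflow can be sketched with MediaPipe Pose and OpenCV as follows; the left elbow is used as the example joint, and the resulting angle values would then feed the k-NN posture classifier. The choice of joint and the 2D (x, y) angle computation are illustrative assumptions.

import cv2
import numpy as np
import mediapipe as mp

mp_pose = mp.solutions.pose

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by points a-b-c, each given as (x, y)."""
    a, b, c = np.array(a), np.array(b), np.array(c)
    ba, bc = a - b, c - b
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))

def left_elbow_angle(frame_bgr, pose):
    """Run MediaPipe Pose on a BGR frame and return the left-elbow angle, if detected."""
    results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.pose_landmarks:
        return None
    lm = results.pose_landmarks.landmark
    point = lambda p: (lm[p].x, lm[p].y)
    return joint_angle(point(mp_pose.PoseLandmark.LEFT_SHOULDER),
                       point(mp_pose.PoseLandmark.LEFT_ELBOW),
                       point(mp_pose.PoseLandmark.LEFT_WRIST))

# Usage sketch: read one webcam frame and print the angle.
# with mp_pose.Pose() as pose:
#     cap = cv2.VideoCapture(0)
#     ok, frame = cap.read()
#     if ok:
#         print(left_elbow_angle(frame, pose))
#     cap.release()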
2.6 Sign Language Detection Using LSTM Deep Learning Model and Media Pipe Holistic Approach [6]
This paper proposes a system combining Media Pipe for hand tracking with an LSTM deep learning model to recognize sign language gestures, aiming to improve communication accessibility for deaf and hard-of-hearing individuals by accurately detecting signs representing the letters "A" to "Z".
• Models/Requirements:
Media Pipe is used for hand tracking and extracting 21 key palm coordinates. The LSTM model processes sequential data to capture long-term dependencies in hand movement. NumPy handles data representation and processing of extracted coordinates.
• Workflow:
Frames are captured via OpenCV from a video feed of a person signing. Media Pipe detects hand movements and extracts key points converted into x, y, z coordinates. These coordinates are transformed into a NumPy array and input into the LSTM model. The model predicts the corresponding sign, outputting an array of predicted values. The system's accuracy is evaluated against other methods using a sign language dataset (a minimal sketch of the keypoint-to-LSTM pipeline is given at the end of this subsection).
• Dataset Used and Accuracy Achieved:
Although specific dataset details are not provided, the approach achieves an overall accuracy of 99% in recognizing sign language gestures, demonstrating high performance and reliability.
• Future Direction and Improvements:
Future work includes optimizing the LSTM architecture and hyperparameters for better performance, evaluating the model on larger and more diverse datasets for generalization, integrating additional modalities like facial expressions or body posture to enhance accuracy, and deploying the system in real-world scenarios to assess usability and refine it based on user feedback.
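The keypoint-to-LSTM pipeline described above can be sketched as follows: the 21 hand landmarks per frame are flattened into 63 (x, y, z) values, and a fixed-length sequence of such vectors is fed to a small Keras LSTM. The sequence length, layer widths, and 26-class output are assumptions, as the paper does not state its exact hyperparameters.

import numpy as np
import mediapipe as mp
from tensorflow.keras import layers, models

mp_hands = mp.solutions.hands   # 'results' below come from mp_hands.Hands().process(rgb_frame)
SEQ_LEN, NUM_CLASSES = 30, 26   # assumed frames per sequence and letters A-Z

def hand_keypoints(results):
    """Flatten the first detected hand's 21 (x, y, z) landmarks into a (63,) array."""
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        return np.array([[p.x, p.y, p.z] for p in lm]).flatten()
    return np.zeros(21 * 3)   # no hand detected in this frame

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, 63)),        # one sequence of flattened keypoint vectors
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])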
2.7 Sign Language to Text Conversion Using RNN-LSTM [7]
This paper presents a system to bridge the communication gap between speech-impaired individuals and the general population by recognizing both static and dynamic Indian Sign Language (ISL) gestures and translating them into text using Media Pipe and LSTM networks, achieving 100% accuracy for 26 ISL motions.
• Models/Requirements:
Media Pipe is used for hand gesture recognition and extracting key coordinates. The RNN-LSTM model processes sequential data to learn long-term dependencies in sign gestures. OpenCV is utilized for capturing video and image processing.
• Workflow:
Sign language gestures are collected via OpenCV video capture. Media Pipe detects hand movements and extracts 21 key points. The x, y, and z coordinates are converted into a NumPy array and input into the LSTM model to predict signs. Performance is evaluated using a confusion matrix and statistical analysis (a minimal sketch of this evaluation step is given at the end of this subsection).
• Dataset Used and Accuracy Achieved:
Though dataset specifics are not detailed, the model achieves 100% accuracy in classifying 26 ISL gestures, effectively recognizing both static and dynamic signs.
• Future Direction and Improvements:
Future work may integrate GHMM and Random Forest with LSTM to improve classification and sequence handling. Expanding the dataset for better generalization, incorporating facial expression and emotion recognition for context-aware translation, and optimizing the system for real-time, user-friendly applications aimed at deaf and hard-of-hearing individuals are suggested.
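The confusion-matrix evaluation mentioned in the workflow can be sketched with scikit-learn as below; the label arrays here are synthetic stand-ins for the true and predicted classes of the 26 ISL gestures.

import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

NUM_CLASSES = 26
rng = np.random.default_rng(0)
y_true = rng.integers(0, NUM_CLASSES, size=260)   # stand-in for ground-truth test labels
y_pred = y_true.copy()                            # stand-in for LSTM predictions (perfect here)

cm = confusion_matrix(y_true, y_pred, labels=np.arange(NUM_CLASSES))
print(cm.shape)   # (26, 26): rows are true gestures, columns are predicted gestures
print(classification_report(y_true, y_pred, zero_division=0))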
2.8 Vision-Based Continuous Sign Language Spotting Using Gaussian Hidden Markov Model [8]
This paper tackles the challenges of continuous sign language recognition (SLR), particularly handling movement epenthesis (ME), the non-sign motions between meaningful signs, by using a vision-based system with a Gaussian Hidden Markov Model (HMM) to accurately spot signs in continuous sequences, achieving an 83% spotting rate.
• Models/Requirements:
A Gaussian Hidden Markov Model (HMM) is employed for decoding and recognizing sign sequences. H.264/AVC compression is used for efficient video feature extraction. Face and hand detection is performed with colour-based segmentation techniques to isolate hand movements by removing the signer's face.
• Workflow:
Sign sequences are recorded via an RGB camera. The signer's face is detected and removed to focus on hand movement. Skin colour segmentation isolates the hands. Spatial and temporal features are extracted and used to train the HMM. The Viterbi algorithm decodes the state sequence, allowing the system to spot signs by distinguishing start/end frames and separating signs from movement epenthesis (a minimal sketch of HMM fitting and Viterbi decoding is given at the end of this subsection).
• Dataset Used and Accuracy Achieved:
The American Sign Language Lexicon Video Dataset (ASLLVD) from Boston University, containing thousands of ASL signs recorded from various angles, is used. The system achieves an 83% spotting rate, outperforming methods without PCA (78% accuracy).
• Future Direction and Improvements:
Future research could integrate LSTM models for improved sequential data handling and accuracy. Incorporating contextual information such as facial expressions may enhance robustness. Extending the system to recognize full sign language sentences, and real-world deployment for feedback and iterative improvements, is also suggested to benefit deaf and hard-of-hearing users.
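A minimal sketch of the HMM stage, using the hmmlearn library, is shown below: a Gaussian HMM is fitted to per-frame features and the Viterbi algorithm recovers the most likely state sequence, which a spotting system would then segment into sign versus movement-epenthesis regions. The feature dimension, number of hidden states, and synthetic data are assumptions; the paper's own feature extraction (H.264/AVC and skin-colour segmentation) is not reproduced here.

import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
n_frames, n_features = 300, 8                  # assumed per-frame hand feature dimension
X = rng.normal(size=(n_frames, n_features))    # stand-in for extracted spatio-temporal features

hmm = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50, random_state=0)
hmm.fit(X)   # learns transition and Gaussian emission parameters

# Viterbi decoding: the recovered state sequence is what a spotting system
# inspects to mark sign start/end frames and skip movement epenthesis.
log_prob, states = hmm.decode(X, algorithm="viterbi")
print(states[:20])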
3. CONCLUSION
This survey paper synthesizes key findings from the analysis of eight research papers focused on Indian Sign Language (ISL) detection and recognition. The advancements in machine learning and computer vision have provided significant opportunities for improving communication within the deaf and hard-of-hearing (DHH) community. The use of various models, including Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, demonstrates the potential of these technologies in effectively recognizing both static and dynamic signs.
The comparative analysis of different models revealed that CNNs are well suited for static sign recognition, achieving high accuracy, while LSTMs excel in recognizing gesture-based signs due to their ability to process temporal data. This highlights the importance of selecting appropriate models based on the characteristics of the sign language gestures being analyzed. Furthermore, the integration of preprocessing techniques and feature extraction methodologies, such as Media Pipe and OpenCV, has enhanced the accuracy and reliability of the systems developed.
Overall, this research underscores the critical need for technology that accommodates the unique requirements of the DHH community in India. By leveraging machine learning, computer vision, and user-centric design, these systems can significantly improve accessibility and communication for individuals who rely on sign language.
4. FUTURE SCOPE
Despite the promising advancements outlined in this survey, several areas require further research and development to enhance the effectiveness of sign language recognition systems:
• Augmented and Virtual Reality (AR/VR) Applications:
Future studies should explore the potential of AR and VR technologies to create immersive learning environments for ISL. This could facilitate enhanced training and practice opportunities for users, making the acquisition of sign language more interactive and engaging.
• Algorithm Improvement:
Research could focus on developing advanced machine learning algorithms tailored specifically for ISL. This includes optimizing current models to improve accuracy and robustness, particularly in recognizing less common signs and accommodating regional variations within the language.
• Diverse Data Collection:
Expanding the datasets used for training models is crucial. Future research should prioritize the collection of a comprehensive and diverse range of sign language gestures, ensuring that the systems are trained on a representative sample that includes various dialects and regional differences.

• User-Centric Design:
Investigating the social and cultural factors affecting the DHH community's adoption of technology is essential. Understanding user perceptions, preferences, and barriers can inform the development of more accessible and relevant systems.
• Real-Time Recognition Systems:
Developing real-time sign language recognition systems that can seamlessly integrate into everyday applications, such as communication apps, is vital. This would allow for practical use in various contexts, enhancing the utility of sign language technology.
• Two-Way Communication:
Further research should focus on creating systems that support bi-directional communication, enabling both speech-to-sign and sign-to-speech translation. This is crucial for facilitating comprehensive interactions between the DHH community and the hearing population.
• Cross-Language Recognition:
Investigating the feasibility of adapting sign language recognition systems globally can enhance the accessibility of communication for the DHH community worldwide. By pursuing these avenues, future research can build upon the findings of this survey and contribute to the ongoing development of innovative and effective solutions for sign language recognition, ultimately improving the quality of life for individuals in the DHH community.
REFERENCES
[1] A. S, A. Potluri, S. M. George, G. R and A. S, "Indian Sign Language Recognition Using Random Forest Classifier," 2021 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, 2021, pp. 1-6, doi: 10.1109/CONECCT52877.2021.9622672.
[2] D. R. Kothadiya, C. M. Bhatt, H. Kharwa and F. Albu, "Hybrid InceptionNet Based Enhanced Architecture for Isolated Sign Language Recognition," in IEEE Access, vol. 12, pp. 90889-90899, 2024, doi: 10.1109/ACCESS.2024.3420776.
[3] J. Nikitha, S. Keerthana, Balakrishnan, Sahithya and S. Sathya, "Comparative Analysis on Datasets for Sign Language Detection System," 2022, pp. 1652-1657, doi: 10.1109/ICSCDS53736.2022.9761026.
[4] A. M, H. S. Sree, Jayashre, K. Muthamizhvalavan, N. Gummaraju and P. S, "American Sign Language Real Time Detection Using TensorFlow and Keras in Python," 2024 3rd International Conference for Innovation in Technology (INOCON), Bangalore, India, 2024, pp. 1-6, doi: 10.1109/INOCON60754.2024.10511469.
[5] D. Rai, Anjali, A. Kumar and A. Baghel, "Pose Detection Using OpenCV and Media Pipe," 2024 International Conference on Integrated Circuits, Communication, and Computing Systems (ICIC3S), Una, India, 2024, pp. 1-6, doi: 10.1109/ICIC3S61846.2024.10603040.
[6] M. Deshpande et al., "Sign Language Detection using LSTM Deep Learning Model and Media Pipe Holistic Approach," 2023 International Conference on Artificial Intelligence and Smart Communication (AISC), Greater Noida, India, 2023, pp. 1072-1075, doi: 10.1109/AISC56616.2023.10085375.
[7] A. Seviappan, K. Ganesan, A. Anbumozhi, A. S. Reddy, B. V. Krishna and D. S. Reddy, "Sign Language to Text Conversion using RNN-LSTM," 2023 International Conference on Data Science, Agents & Artificial Intelligence (ICDSAAI), Chennai, India, 2023, pp. 1-6, doi: 10.1109/ICDSAAI59313.2023.10452617.
[8] A. K. Talukdar and M. K. Bhuyan, "Vision-Based Continuous Sign Language Spotting Using Gaussian Hidden Markov Model," in IEEE Sensors Letters, vol. 6, no. 7, pp. 1-4, July 2022, Art no. 6002304, doi: 10.1109/LSENS.2022.3185181.