Recognition of Urdu sign language: a systematic review of the machine learning classification.

Hira Zahid, Munaf Rashid, Samreen Hussain, Fahad Azim, Sidra Abid Syed, Afshan Saad
Author Information
  1. Hira Zahid: Faculty of Engineering Science Technology and Management, Department of Biomedical Engineering and Department of Electrical Engineering, Ziauddin University, Karachi, Pakistan.
  2. Munaf Rashid: Faculty of Engineering Science Technology and Management, Department of Electrical Engineering and Department of Software Engineering, Ziauddin University, Karachi, Pakistan.
  3. Samreen Hussain: HESSA Project, US AID Program, Karachi, Pakistan.
  4. Fahad Azim: Faculty of Engineering Science Technology and Management, Electrical Engineering Department, Ziauddin University, Karachi, Pakistan.
  5. Sidra Abid Syed: Faculty of Engineering Science Technology and Management, Department of Biomedical Engineering, Ziauddin University, Karachi, Pakistan.
  6. Afshan Saad: Computer Science Department, Muhammad Ali Jinnah University, Karachi, Pakistan.

Abstract

Background and Objective: Humans communicate with one another using language systems such as written words or body language (movements), hand motions, head gestures, facial expressions, lip motion, and many more. Comprehending sign language is just as crucial as learning a natural language. Sign language is the primary mode of communication for people who are deaf or have a speech impairment. Without a translator, people with auditory difficulties struggle to communicate with other individuals. Studies on automatic sign language recognition using machine learning techniques have recently shown exceptional success and made significant progress. The primary objective of this research is to conduct a literature review of all the work completed to date on the recognition of Urdu Sign Language using machine learning classifiers.
Materials and methods: All the studies were extracted from databases, i.e., PubMed, IEEE, Science Direct, and Google Scholar, using a structured set of keywords. Each study went through proper screening criteria, i.e., exclusion and inclusion criteria. PRISMA guidelines were followed and implemented adequately throughout this literature review.
Results: This literature review comprised 20 research articles that fulfilled the eligibility requirements. Only those articles that met the eligibility requirements were chosen for additional full-text screening: peer-reviewed research articles and studies published in credible journals and conference proceedings up to July 2021. After further screening, only studies based on Urdu Sign Language were included. The results of this screening are divided into two parts: (1) a summary of all the datasets available on Urdu Sign Language; (2) a summary of all the machine learning techniques for recognizing Urdu Sign Language.
Conclusion: Our research found that there is only one publicly available sign-based USL dataset with pictures, versus many publicly available character-, number-, or sentence-based datasets. It was also concluded that, apart from SVM and neural networks, no single classifier is used more than once. Additionally, no researcher opted for an unsupervised machine learning classifier for detection. To the best of our knowledge, this is the first literature review conducted on machine learning approaches applied to Urdu sign language.
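The reviewed studies train supervised classifiers (e.g., SVMs and neural networks) on feature vectors extracted from gesture images. Purely as an illustration of this supervised-classification setup, and not the method of any reviewed study, the sketch below classifies toy two-dimensional feature vectors with a nearest-centroid rule; all labels and data are hypothetical stand-ins.

```python
import math

# Hypothetical feature vectors standing in for descriptors extracted from
# USL gesture images; real studies use far richer features.
train = {
    "alif": [[0.1, 0.2], [0.15, 0.25]],
    "bay":  [[0.8, 0.9], [0.85, 0.95]],
}

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, train):
    """Assign the label whose class centroid is nearest in Euclidean distance."""
    cents = {label: centroid(vecs) for label, vecs in train.items()}
    return min(cents, key=lambda lbl: math.dist(sample, cents[lbl]))

print(classify([0.82, 0.91], train))  # nearest to the "bay" centroid
```

A supervised classifier of this kind requires labeled training data for every sign, which is why the availability of annotated USL datasets is central to the review's conclusions.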

