DEEP FACIAL DIAGNOSIS: DEEP TRANSFER LEARNING FROM FACE RECOGNITION TO FACIAL DIAGNOSIS


International Research Journal of Engineering and Technology (IRJET) e-ISSN:2395-0056

Volume: 09 Issue: 08 | Aug 2022 www.irjet.net p-ISSN:2395-0072


1 Student, Department of Computer Applications, Madanapalle Institute of Technology and Science, India
2 Sr. Asst. Professor, Department of Computer Applications, Madanapalle Institute of Technology and Science, India

Abstract - The relationship between face and disease has been discussed for hundreds of years, giving rise to the practice of facial diagnosis. The goal here is to explore the possibility of identifying diseases from uncontrolled 2D face images by means of deep learning. In this paper, we propose using deep transfer learning from face recognition to perform computer-aided facial diagnosis on various diseases. In the study, we perform computer-aided facial diagnosis on a single disease (beta-thalassemia) and on multiple diseases (beta-thalassemia, hyperthyroidism, Down syndrome, and leprosy) with a relatively small dataset. The overall top-1 accuracy achieved by deep transfer learning from face recognition exceeds 90%, which outperforms both the traditional machine learning methods and the clinicians in the study. In practice, collecting disease-specific face images is complex, expensive, and time-consuming, and raises ethical constraints due to personal data protection. Consequently, the datasets in facial-diagnosis research are private and usually small compared with those of other machine learning applications. The success of deep transfer learning in facial diagnosis with a small dataset could offer a low-cost and noninvasive way of disease screening and detection.

Key Words: Facial diagnosis, deep transfer learning (DTL), face recognition, beta-thalassemia, hyperthyroidism, Down syndrome, leprosy.

1. INTRODUCTION


"Qi and blood in the twelve Channels and three hundred and sixty-five Collaterals all flow to the face and infuse into the Kongqiao (the seven orifices on the face)," according to the Huangdi Neijing, the foundational text of Chinese medicine, written hundreds of years ago.

This suggests that abnormal changes in the internal organs may be visible in the facial features of the corresponding locations.

A qualified doctor can use facial features in China's "facial diagnosis" process to recognise a patient's common lesions and surrounding lesions.


Similar ideas were also understood in ancient India and Greece. The term "facial diagnosis" now refers to the practice of diagnosing diseases exclusively from the patient's face. The disadvantage of facial diagnosis is that it takes long experience to become highly precise. Due to a lack of clinical resources, it is still difficult for people to get healthcare in many rural and underdeveloped areas, which frequently causes treatment delays. Limitations also persist, such as high costs, lengthy hospital wait times, and the doctor-patient conflicts that cause disputes in the medical industry. Using computer-aided facial diagnosis, we can carry out noninvasive disease screening and identification rapidly and successfully. Therefore, facial diagnosis has a lot of potential if it can be shown to be reliable with little error. We can use artificial intelligence to quantitatively analyse the connection between face and disease. Deep learning technology has recently advanced the state of the art in a number of fields, especially in computer vision. Deep learning, which is inspired by the structure of human brains, uses a multi-layer architecture to perform nonlinear processing and abstraction for feature learning. It has achieved the best overall performance in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) since 2012. As the challenge developed, a number of well-known deep neural network models emerged, including AlexNet, VGGNet, ResNet,

© 2022, IRJET | Impact Factor value: 7.529 | ISO 9001:2008 Certified Journal | Page1586


Inception-ResNet, and SENet. The ILSVRC results have demonstrated that the learning capacity of deep learning methods can capture the intrinsic statistics of the data more accurately than hand-crafted features. Deep learning is one of the most recent developments in artificial intelligence.

2. LITERATURE SURVEY

[1] Y. Gurovich, Y. Hanani, O. Bar, G. Nadav, N. Fleischer, D. Gelbman, L. Basel-Salmon, P. M. Krawitz, S. B. Kamphausen, M. Zenker, et al.

Syndromic genetic disorders affect roughly 8% of the population overall. Many syndromes have recognisable facial characteristics, which can be highly informative to clinical geneticists. According to recent studies, facial analysis technology was as effective at identifying syndromes as trained physicians. However, those technologies could only identify a small subset of disease phenotypes, which limited their application in clinical settings where a large number of diagnoses must be considered. Here, we present DeepGestalt, a system for evaluating face photographs that uses computer vision and deep learning techniques to quantify similarities to numerous disorders. In three preliminary trials aimed at differentiating patients with a target syndrome from patients with other syndromes, and in one aimed at identifying distinct genetic subtypes in Noonan syndrome, DeepGestalt outperformed physicians. In the last test, which simulated a real-world clinical placement challenge, DeepGestalt achieved 91% top-10 accuracy in identifying the correct diagnosis on 502 distinct images. The model was trained on a dataset of more than 17,000 images covering more than 200 syndromes, curated through a community-driven phenotyping platform. DeepGestalt thus adds considerable value to phenotypic evaluations in clinical genetics, genetic testing, research, and precision medicine.

[2] K. Suzuki, L. He, Y. Wang, Z. Shi, H. Hao, M. Zhao, Y. Feng, and Y.

Using a Computer Aided Detection (CAD) system to identify pulmonary nodules in thoracic Computed Tomography (CT) is of outstanding significance. However, achieving a low false-positive (FP) rate is still a very difficult task due to the variations in nodules' appearance and size. In this report, we propose a deep transfer learning method based on Convolutional Neural Networks (CNN) for FP reduction in CT slice-based pulmonary nodule diagnosis. We employed a support vector machine (SVM) for nodule classification and one of the contemporary CNN models, VGG-16, as a feature extractor to obtain nodule features. First, we copied all the layers from an ImageNet VGG-16 pretrained model to our target networks. Then, we adjusted the final fully connected layers to adapt the CNN model trained on computer vision tasks to the pulmonary nodule classification task. The initial CNN weights were then refined through back-propagation using the training data, namely the pulmonary nodule patch images and accompanying labels, in order to better capture the modalities contained within the pulmonary nodule image dataset. Finally, an SVM classifier was trained using features extracted from the fine-tuned CNN, and the trained SVM's output was used for the final classification. Experimental results show that the proposed approach's overall sensitivity was 87.2% with 0.39 FPs per scan, which is better than the 85.4% with 4 FPs per scan attained by a different state-of-the-art approach.

[3] X. Fang, J. Cui, L. Fei, K. Yan, Y. Chen, and Y. Xu

Linear discriminant analysis (LDA), a widely used supervised feature extraction method, has been extended to various variants. However, the following issues exist with conventional LDA: 1) LDA is sensitive to noise; 2) LDA is sensitive to the choice of the number of projection directions; and 3) LDA cannot guarantee genuine interpretability of the obtained discriminant projection for features. The challenges mentioned above are addressed in this study via a novel feature extraction method called robust sparse linear discriminant analysis (RSLDA). Specifically, by incorporating the l2,1 norm, RSLDA adaptively chooses the most discriminative features for discriminant analysis. RSLDA can perform better than other discriminant methods because an orthogonal matrix and a sparse matrix are simultaneously introduced to ensure that the extracted features preserve the main energy of the original data and enhance robustness to noise. Extensive tests on six databases show that the suggested strategy performs competitively when compared with other state-of-the-art feature extraction techniques. The suggested approach is also robust against noisy data.

[4] J. Hu, L. Shen, and G. Sun, Squeeze-and-excitation networks.

Convolutional neural networks are built on the convolution operation, which extracts informative features by fusing spatial and channel-wise information within local receptive fields. The benefit of enhancing spatial encoding has been demonstrated in various recent approaches to increase the representational power of a network. In this work, we focus on the channel relationship and propose a novel architectural unit, which we call the "Squeeze-and-Excitation" (SE) block, that explicitly models the interdependencies among channels to adaptively recalibrate channel-wise feature responses. By arranging the


blocks in a stack, we can construct SENet architectures that generalise extremely well across challenging datasets. Importantly, we find that SE blocks bring significant performance improvements for existing state-of-the-art deep architectures at slight additional computational cost. SENets formed the foundation of our ILSVRC 2017 classification submission, which won first place and significantly reduced the top-5 error to 2.251%, a roughly 25% relative improvement over the winning entry of 2016. The code and models for SENet are available at https://github.com/hujie-frank/SENet
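The squeeze, excitation, and scale stages described above are compact enough to sketch directly. Below is a minimal NumPy illustration (not the authors' implementation): the weights are random stand-ins and the reduction ratio is fixed at 2, purely to show the flow of the computation.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation on a feature map x of shape (C, H, W).

    w1: (C // r, C) reduction weights; w2: (C, C // r) expansion weights.
    """
    # Squeeze: global average pooling collapses each channel to a scalar
    z = x.mean(axis=(1, 2))                      # (C,)
    # Excitation: bottleneck MLP with ReLU, then sigmoid gating in (0, 1)
    s = np.maximum(w1 @ z, 0.0)                  # (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))          # (C,)
    # Scale: reweight each channel by its learned gate
    return x * s[:, None, None]

# Toy check: 4 channels, reduction r = 2, random stand-in weights
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4)) * 0.1
w2 = rng.standard_normal((4, 2)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)  # (4, 8, 8): same shape, channels rescaled
```

Because each gate lies in (0, 1), the block can only attenuate channels relative to the input, which is what "recalibration" amounts to here.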

[5] M. Long, H. Zhu, J. Wang, and M. I. Jordan

Deep networks have been successfully used to learn transferable features for adapting models from a source domain to a different target domain. In this paper, the joint distributions of multiple domain-specific layers across domains are aligned using a joint maximum mean discrepancy (JMMD) criterion to learn a transfer network, yielding joint adaptation networks (JAN). The distributions of the source and target domains are made more distinguishable by using an adversarial training approach to maximise JMMD. Learning can be accomplished via stochastic gradient descent, with the gradients computed by linear-time back-propagation. Experiments show that this model produces state-of-the-art results on standard datasets.
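The JMMD criterion builds on the maximum mean discrepancy (MMD) between feature distributions. As a simplified illustration (single-kernel MMD between two samples, not the joint, multi-layer JMMD of the JAN paper), the discrepancy can be estimated as follows:

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between
    samples X (n, d) and Y (m, d) under an RBF kernel.

    JMMD extends this idea to joint distributions over several network
    layers; this single-kernel version is the basic building block.
    """
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(1)
same  = mmd2_rbf(rng.standard_normal((100, 2)), rng.standard_normal((100, 2)))
shift = mmd2_rbf(rng.standard_normal((100, 2)), rng.standard_normal((100, 2)) + 2.0)
print(same, shift)  # the discrepancy grows when the distributions differ
```

Minimising such a discrepancy between source-domain and target-domain features is what drives the adaptation in transfer networks of this kind.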

3. METHOD IMPLEMENTATION

We discuss the techniques used in the procedure in this part. To achieve better overall performance in disease detection, we sometimes need a pre-processing step that removes interference factors and produces frontalized face images of a fixed size for the CNN input, so that facial diagnosis performance can be enhanced. After pre-processing the input, we apply deep transfer learning techniques. Using the sixty-eight facial landmarks we collected, we perform face alignment with an affine transformation, which comprises a series of operations such as translation, rotation, and scaling. The frontalized face image is then cropped and resized to fit the CNN used.
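The alignment step can be sketched in a few lines. The paper uses 68 landmarks; the version below is a common simplification that computes a similarity transform (rotation, uniform scale, translation) from just two eye-center points, so it illustrates the idea rather than the paper's exact procedure.

```python
import numpy as np

def similarity_align(src_eyes, dst_eyes):
    """2x3 similarity matrix (rotation + uniform scale + translation)
    mapping detected eye centers onto canonical template positions.

    src_eyes, dst_eyes: arrays of shape (2, 2), left and right eye (x, y).
    """
    sv = src_eyes[1] - src_eyes[0]
    dv = dst_eyes[1] - dst_eyes[0]
    scale = np.linalg.norm(dv) / np.linalg.norm(sv)
    angle = np.arctan2(dv[1], dv[0]) - np.arctan2(sv[1], sv[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    t = dst_eyes[0] - R @ src_eyes[0]
    return np.hstack([R, t[:, None]])            # 2x3 affine matrix

# Eyes tilted 45 degrees are mapped onto a horizontal canonical pair
src = np.array([[30.0, 30.0], [60.0, 60.0]])
dst = np.array([[30.0, 40.0], [70.0, 40.0]])
M = similarity_align(src, dst)
warped = M[:, :2] @ src.T + M[:, 2:]             # apply M to source points
print(warped.T)                                  # matches dst
```

In practice the same 2x3 matrix would be passed to an image-warping routine so the whole face, not just the landmark points, is frontalized.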

4. IMPLEMENTATION OF ALGORITHM

Convolutional Neural Network

Step 1: Convolution Operation

The convolution operation serves as the first building block of our approach. We then focus on feature detectors, which essentially serve as the filters of the neural network. Having covered the parameters of such filters, the detection of features, and the way the detected features are mapped out, we also discuss feature maps.

Step 1(b): ReLU Layer

The Rectified Linear Unit, or ReLU, is the second part of this step. We discuss ReLU layers and examine how non-linearity functions in the context of convolutional neural networks.
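Steps 1 and 1(b) can be sketched in a few lines of NumPy. The example below applies a hand-chosen edge filter (in a trained CNN the filter weights are learned, not hand-set) followed by ReLU:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Single-channel 'valid' cross-correlation, which is what most
    CNN libraries call convolution: slide the kernel and take dot products."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)   # element-wise non-linearity

# A horizontal-difference filter responds where intensity jumps left-to-right
img = np.zeros((5, 5))
img[:, 2:] = 1.0                # dark left half, bright right half
k = np.array([[-1.0, 1.0]])     # 1x2 edge-detector kernel
fmap = relu(conv2d_valid(img, k))
print(fmap)                     # a single column of 1s marks the edge
```

The resulting feature map is non-zero only at the edge location, which is exactly the "feature detector" behaviour the step describes.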

Step 2: Pooling Layer

We discuss pooling in this section and examine exactly how it typically operates. Max pooling is the central concept here, but we also cover a variety of strategies, including mean (or sum) pooling. This section concludes with a visual example that clarifies the whole idea.
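Non-overlapping max pooling, the central case here, can be sketched as follows: keep only the strongest activation in each window, shrinking the feature map.

```python
import numpy as np

def max_pool2d(x, size=2):
    """Non-overlapping max pooling: keep the maximum activation in each
    size x size window, shrinking height and width by a factor of size."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]          # drop any ragged edge
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

x = np.array([[1., 2., 5., 0.],
              [3., 4., 1., 2.],
              [0., 1., 2., 1.],
              [6., 0., 1., 3.]])
print(max_pool2d(x))   # [[4. 5.] [6. 3.]]
```

Mean pooling would simply replace `.max(axis=(1, 3))` with `.mean(axis=(1, 3))`; the window logic is identical.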

Step 3: Flattening

This is a brief explanation of the flattening process and how, when using convolutional neural networks, we move from pooled feature maps to a flattened layer.

Step 4: Full Connection

Everything covered in the previous steps is combined in this section. Knowing this gives a more complete picture of how convolutional neural networks operate and how the "neurons" that are ultimately formed classify images.
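Steps 3 and 4 reduce to a flatten, a matrix multiply, and a softmax. A toy sketch with hand-set weights (purely illustrative; a real network learns these weights during training):

```python
import numpy as np

def dense_softmax(fmap, W, b):
    """Flatten a pooled feature map, apply a fully connected layer,
    then softmax to obtain class probabilities."""
    v = fmap.ravel()                      # flattening: (H, W) -> (H*W,)
    logits = W @ v + b                    # full connection
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

fmap = np.array([[4., 5.], [6., 3.]])     # e.g. a 2x2 pooled feature map
W = np.array([[0.1, 0.0, 0.0, 0.1],       # 2 classes x 4 flattened inputs
              [0.0, 0.1, 0.1, 0.0]])
b = np.zeros(2)
p = dense_softmax(fmap, W, b)
print(p, p.sum())                          # class probabilities summing to 1
```

Each output "neuron" is just one row of `W`: a weighted vote over every flattened feature, normalised by the softmax into a probability.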

5. RESULTS AND DISCUSSIONS

In this section, we conduct experiments on facial diagnosis tasks using deep transfer learning methods, namely fine-tuning (abbreviated as DTL1) and employing a CNN as a feature extractor (abbreviated as DTL2). For comparison, deep learning models for object recognition and face recognition are chosen. Additionally, we compare the outcomes with traditional machine learning techniques using the hand-crafted feature known as the Dense Scale Invariant Feature Transform (DSIFT) [28]. DSIFT applies the Scale Invariant Feature Transform (SIFT) on a dense grid of locations in the image at a fixed scale and orientation, and is frequently used in object recognition. The classifier for Bag of Features (BOF) models with DSIFT descriptors uses the SVM algorithm for its good performance in few-shot learning. This paper designs two cases of facial diagnosis. One is the binary classification task of beta-thalassemia detection. The other is the detection of four diseases (beta-thalassemia, hyperthyroidism, Down syndrome, and leprosy) plus healthy controls. This is a multi-class classification objective and is more difficult.


A. SINGLE DISEASE DETECTION (BETA-THALASSEMIA): A BINARY CLASSIFICATION TASK

Practically, we typically want to perform disease detection or screening for a single particular disease. In this instance, we use only 140 images from the collection, including 70 images of people with beta-thalassemia-specific faces and another 70 images of healthy controls. Forty of each kind are used for training and thirty of each kind for testing. It is a binary classification task. We conclude from an evaluation of all selected machine learning strategies (see Table 3) that the best overall top-1 accuracies are achieved by applying deep transfer learning to the VGG-Face model (VGG-16 pretrained on the VGG-Face dataset). According to Fig. 4, DTL2, with the CNN used as a feature extractor, achieves a greater accuracy of 95.0% than DTL1, fine-tuning, on this task. In Fig. 4, the columns of the confusion matrix display the predicted classes, while the rows display the actual classes. Of the thirty test images of each kind, a few false positives and false negatives are misclassified by DTL1, resulting in an accuracy of 93.3%. For DTL2, all thirty images with beta-thalassemia-specific faces are correctly labelled as true positives. On the other hand, 3 of the 30 healthy-control images are false positives, classified as having a beta-thalassemia-specific face although actually belonging to the healthy controls. The receiver operating characteristic (ROC) curves of the VGG-Face model through DTL1 and DTL2 are depicted in Fig. 5. The performance of DTL1 is represented by the blue dotted line, while that of DTL2 is represented by the pink solid line. The estimated Areas Under the ROC Curves (AUC) are 0.969 and 0.978, respectively.
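The quantities behind Figs. 4 and 5 can be computed from raw predictions. Below is a self-contained sketch of a binary confusion matrix and a rank-based AUC estimate; the scores are made-up illustrative data, not the paper's.

```python
import numpy as np

def confusion(y_true, y_pred):
    """2x2 confusion matrix: rows are actual classes, columns predicted."""
    m = np.zeros((2, 2), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

def roc_auc(y_true, scores):
    """AUC via the rank (Mann-Whitney) formulation: the probability that
    a random positive scores higher than a random negative."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y  = np.array([1, 1, 1, 0, 0, 0])                # true labels
s  = np.array([0.9, 0.8, 0.3, 0.6, 0.2, 0.1])    # classifier scores
yp = (s >= 0.5).astype(int)                      # threshold at 0.5
print(confusion(y, yp))                          # [[2 1] [1 2]]
print(roc_auc(y, s))                             # 0.888...
```

Sweeping the 0.5 threshold over all score values traces out the ROC curve; the AUC summarises it in one number, as in Fig. 5.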

Pretrained deep learning models such as AlexNet, VGG-16, and ResNet are used for comparison. Additionally, traditional machine learning methods are evaluated that extract DSIFT features from the face image and predict using a linear or nonlinear SVM classifier [29]. To evaluate the effectiveness of the models, five indicators are used: accuracy, precision, sensitivity, specificity, and F1-score, which is a weighted average of the precision and sensitivity. The FLOPs indicator is used to evaluate the time complexity of the models. Table 3 lists the results of both the conventional machine learning techniques and the fine-tuned deep learning models pretrained on the ImageNet and VGG-Face datasets for this task. We learn from the outcomes that the performance of fine-tuned (DTL1) deep learning models pretrained on ImageNet is close to that of conventional machine learning techniques. However, the performance of the deep learning models fine-tuned (DTL1) after pretraining on VGG-Face is typically better than that of models pretrained on ImageNet, which is reasonable, since the source domain of VGG-Face is closer to the disease-specific face dataset than ImageNet. Table 4 displays the outcomes of using the CNN as a feature extractor for the pretrained deep learning models (DTL2). Applying DTL2, the CNN as a feature extractor, typically performs better than DTL1 and traditional machine learning methods. With this method, deep learning models pretrained on VGG-Face do not seem to perform consistently better than models pretrained on ImageNet. This will be studied in the following experiment in a similar manner.
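The five indicators follow directly from the confusion-matrix counts. For instance, plugging in the DTL2 counts from Section A (30 true positives, 3 false positives, 27 true negatives, 0 false negatives) reproduces the reported 95.0% accuracy:

```python
def metrics(tp, fp, fn, tn):
    """The five indicators used in Tables 3 and 4, from confusion counts."""
    acc  = (tp + tn) / (tp + fp + fn + tn)
    prec = tp / (tp + fp)
    sens = tp / (tp + fn)            # sensitivity = recall = true positive rate
    spec = tn / (tn + fp)            # specificity = true negative rate
    f1   = 2 * prec * sens / (prec + sens)
    return acc, prec, sens, spec, f1

# DTL2 counts from Section A: 30 TP, 3 FP, 0 FN, 27 TN out of 60 test images
result = metrics(tp=30, fp=3, fn=0, tn=27)
print(result)   # accuracy 0.95, sensitivity 1.0, specificity 0.9
```

FLOPs, by contrast, is a property of the model architecture rather than of the predictions, so it is counted from the network's layer shapes rather than computed here.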

6. CONCLUSION

In this paper, we proposed deep transfer learning from face recognition to perform computer-aided facial diagnosis on a single disease (beta-thalassemia) and on multiple diseases (beta-thalassemia, hyperthyroidism, Down syndrome, and leprosy) with a relatively small dataset. The overall top-1 accuracy achieved by deep transfer learning from face recognition exceeded 90%, outperforming both the traditional machine learning methods and the clinicians in the study. Since collecting disease-specific face images is complex, expensive, and time-consuming, the success of deep transfer learning with small datasets could offer a low-cost and noninvasive way of disease screening and detection.

ACKNOWLEDGEMENT

We receive theoretical and technical assistance from the Visual Information Security (VIS) Team. The authors are grateful to the entire VIS team for their contributions. The authors also acknowledge the valuable ideas provided by Professors Urbano José Carreira Nunes, Helder de Jesus Araújo, and Rui Alexandre Matos Araújo.


REFERENCES

[1] P. U. Unschuld, Huang Di Nei Jing Su Wen: Nature, Knowledge, Imagery in an Ancient Chinese Medical Text: With an Appendix: The Doctrine of the Five Periods and Six Qi in the Huang Di Nei Jing Su Wen. Univ. of California Press, 2003.

[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.

[3] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9.

[4] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.

[5] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.

[6] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, "Inception-v4, Inception-ResNet and the impact of residual connections on learning," in Thirty-First AAAI Conference on Artificial Intelligence, 2017.

[7] F. Schroff, D. Kalenichenko, and J. Philbin, "FaceNet: A unified embedding for face recognition and clustering," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 815–823; Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "DeepFace: Closing the gap to human-level performance in face verification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1701–1708.

[8] "Disease-specific faces." [Online]; J. Liu, Y. Deng, T. Bai, Z. Wei, and C. Huang, "Targeting ultimate accuracy: Face recognition via deep embedding," arXiv preprint arXiv:1506.07310, 2015.

[9] J. Fanapanel, T. Gerang, and P. Proof, "The face: physiognomic expressiveness and human identity," Annals of Anatomy - Anatomischer Anzeiger, vol. 188, no. 3, pp. 261–266, 2006.

[10] B. Zhang, X. Wang, F. Karray, Z. Yang, and D. Zhang, "Computerized facial diagnosis using both color and texture features," Information Sciences, vol. 221, pp. 49–59, 2013.

[11] E. S. A. Alhaj, F. N. Hattab, and M. A. Al-Omari, "Cephalometric measurements and facial deformities in subjects with β-thalassemia major," The European Journal of Orthodontics, vol. 24, no. 1, pp. 9–19, 2002.

[12] P. N. Taylor, D. Albrecht, A. Scholz, G. Gutierrez-Buey, J. H. Lazarus, C. M. Dayan, and O. E. Okosieme, "Global epidemiology of hyperthyroidism and hypothyroidism," Nature Reviews Endocrinology, vol. 14, no. 5, p. 301, 2018.

[13] Q. Zhao, K. Rosenbaum, R. Sze, D. Zand, M. Summar, and M. G. Linguraru, "Down syndrome detection from facial photographs using machine learning techniques," in Medical Imaging 2013: Computer-Aided Diagnosis, vol. 8670. International Society for Optics and Photonics, 2013, p. 867003.

[14] E. Turnoff, B. Richard, O. Acadians, B. Khatri, E. Knoll, and S. Lucas, "Leprosy affects facial nerves in a scattered distribution from the main trunk to all peripheral branches and neurolysis improves muscle function of the face," The American Journal of Tropical Medicine and Hygiene, vol. 68, no. 1, pp. 81–88, 2003.

[15] Q. Zhao, K. Okada, K. Rosenbaum, D. J. Zand, R. Sze, M. Summar, and M. G. Linguraru, "Hierarchical constrained local model using ICA and its application to Down syndrome detection," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2013, pp. 222–229.

[16] Q. Zhao, K. Okada, K. Rosenbaum, L. Kehoe, D. J. Zand, R. Sze, M. Summar, and M. G. Linguraru, "Digital facial dysmorphology for genetic screening: Hierarchical constrained local model using ICA," Medical Image Analysis, vol. 18, no. 5, pp. 699–710, 2014.

[17] H. J. Schneider, R. P. Kosilek, M. Günther, J. Roemmler, G. K. Stalla, C. Sievers, M. Reincke, J. Schopohl, and R. P. Würtz, "A novel approach to the detection of acromegaly: accuracy of diagnosis by automatic face classification," The Journal of Clinical Endocrinology & Metabolism, vol. 96, no. 7, pp. 2074–2080, 2011.

[18] X. Kong, S. Gong, L. Su, N. Howard, and Y. Kong, "Automatic detection of acromegaly from facial photographs using machine learning methods," EBioMedicine, vol. 27, pp. 94–102, 2018.


[19] T. Shu, B. Zhang, and Y. Y. Tang, "An extensive analysis of various texture feature extractors to detect diabetes mellitus using facial specific regions," Computers in Biology and Medicine, vol. 83, pp. 69–83, 2017.

[20] Gayathri V, Chanda Mona M, and Banu Chitra S, "A survey of data mining techniques on medical diagnosis and research," International Journal of Data Engineering, vol. 6, no. 6, pp. 301–310, 2014.

[21] V. Srinivasan, "Feature selection algorithm using fuzzy rough sets for predicting cervical cancer risks," International Journal of Modern Applied Science, vol. 4, no. 9, 2010.
