Performance Comparison Analysis for Medical Images Using Deep Learning Approaches


International Research Journal of Engineering and Technology (IRJET)    e-ISSN: 2395-0056


Neha Adawadkar, Roshani Chavan, Chaitra Patwardhan, Advait Thakur, Prof. Dr. Mrs. Suhasini Itkar

Neha Adawadkar, Dept. of Computer Engineering, PES Modern College of Engineering, Maharashtra, India
Roshani Chavan, Dept. of Computer Engineering, PES Modern College of Engineering, Maharashtra, India
Chaitra Patwardhan, Dept. of Computer Engineering, PES Modern College of Engineering, Maharashtra, India
Advait Thakur, Dept. of Computer Engineering, PES Modern College of Engineering, Maharashtra, India
Prof. Dr. Mrs. Suhasini Itkar, Dept. of Computer Engineering, PES Modern College of Engineering, Maharashtra, India

Abstract - The recent pandemic has placed a great deal of stress on our healthcare industry, and the increase in population has made it clear that the workload placed upon healthcare specialists needs to be reduced. Deep learning models such as neural networks are efficient at finding hidden patterns, which assists experts in the specified field. Medical images such as chest X-rays are of utmost importance for the diagnosis of diseases, for instance, brain tumors, dementia, pneumonia, COVID-19, cardiovascular conditions, and many more. Combining these medical images with deep learning techniques has paved the path for numerous applications, reducing the burden placed on the health sector. In this paper, we present a brief comparison and study of existing technologies for analyzing chest X-rays using deep learning.

Key Words: Chest X-Ray (CXR), Convolutional Neural Network (CNN), Fully Convolutional Network (FCN), Deep Cascade of Convolutional Neural Networks (DCCNN), Lookup-based Convolutional Neural Network (LCNN), Deep Convolutional Neural Network (DCNN)

1. INTRODUCTION

Image segmentation is the method of dividing an image into multiple pixels or regions, called segments, so that the image can be processed accurately. Diagnosis with the aid of imaging can help in better localization and staging of a disease, and medical images often follow a traditional image-segmentation lifecycle for their analysis. [1] highlights the value of imaging in the diagnosis of a disease. The analysis of these images includes pre-processing, followed by feature extraction and classification, and these steps are accomplished using various deep learning methodologies. The deep learning techniques from the surveyed research have displayed remarkable results for the diagnosis of medical images. In the pre-processing stage, the most commonly used technique is normalization, which helps in reducing the dimensionality of the image and makes it fit for segmentation. Classification of images can be done using a Support Vector Machine (SVM) [3]. Other than that, data augmentation [4] is used to increase the robustness of deep learning. Along with these techniques, an up-sampling method [4] is implemented which helps the network layers propagate context information to higher-resolution layers.
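The normalization step mentioned above can be illustrated with a minimal min-max rescaling sketch. The surveyed papers do not specify their exact normalization, so the function name and formulation here are illustrative assumptions:

```python
import numpy as np

def min_max_normalize(image: np.ndarray) -> np.ndarray:
    """Rescale pixel intensities into [0, 1], a common pre-processing step for CXR images."""
    lo, hi = float(image.min()), float(image.max())
    if hi == lo:  # constant image: avoid division by zero
        return np.zeros_like(image, dtype=np.float64)
    return (image.astype(np.float64) - lo) / (hi - lo)
```

Rescaling into a fixed range keeps gradient magnitudes comparable across images, which is one reason normalization helps make images "fit for segmentation".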

Feature extraction is a technique used to reduce a large input data set into relevant features. The images required for segmentation have a large number of variables, and substantial computing resources are required to process them. Feature extraction therefore helps to obtain the best features from these large data sets by selecting and combining variables into features, effectively decreasing the amount of data. The final step is classification, where we obtain the desired output: the pre-processed and selected features are provided to the model for training, which ultimately gives us the appropriate result. The classification task is a supervised learning approach in the machine learning domain. Models such as SVM in [3], U-Net in [4], ResNet18, ResNet50 and Xception in [8], L-CNN and C-CNN in [9], and the Fully Convolutional Network (FCN) in [15] are used for this step in the examined research.
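The dimensionality reduction that feature extraction performs can be sketched with simple block-mean pooling over an image. This is a toy stand-in, not any of the learned extractors used in the surveyed papers; the function name and block size are illustrative:

```python
import numpy as np

def block_mean_features(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Summarize an image as block-wise mean intensities, yielding a compact feature vector."""
    h, w = image.shape
    h2, w2 = h - h % block, w - w % block  # crop so dimensions divide evenly
    tiles = image[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    return tiles.mean(axis=(1, 3)).ravel()
```

A 224x224 image collapses to a 28x28 = 784-element vector with block size 8, showing how feature extraction "effectively decreases the amount of data" handed to the classifier.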

2. LITERATURE SURVEY

Over the past few years, machine learning frameworks have immensely helped in analyzing complex medical images. Tremendous advancements in medical image segmentation and its applications have motivated us to study such research techniques. The aim of [1] is to concisely present the issue of coronavirus and highlight the importance of medical imaging in its diagnosis. An Inception-inspired novel deep CNN architecture, in which depthwise separable convolutions replace the Inception modules, is discussed in [2]. Here, an architecture named Xception, which is similar to Inception V3, is developed and uses the model parameters more efficiently. Emphasizing automatic tuberculosis detection, [3] proposes a binary classifier using SVM which classifies a CXR as either normal or abnormal, achieving an accuracy of 82.1% and an area under the ROC curve of 88.5%. In [4] a U-Net network is trained

Volume: 09 Issue: 07 | July 2022 | www.irjet.net | p-ISSN: 2395-0072 | © 2022, IRJET | Impact Factor value: 7.529 | ISO 9001:2008 Certified Journal


with the use of data augmentation, applying elastic deformations for effective use of the annotated samples. The segmentation results for the ISBI cell tracking challenge show an average IOU ("intersection over union") of 92% for the "PhC-U373" data set and 77.5% for the "DIC-HeLa" data set. In [5], a framework for dynamic sequences of 2-D cardiac Magnetic Resonance Images (MRI) reconstructed from under-sampled data using a DCCNN is proposed. With the help of this model, images of good quality can feasibly be reconstructed through a network having interleaved data-consistency stages, and most of the anatomical structures can be reconstructed more precisely from previously learned data. In [6] the proposed system repurposes networks which were trained on image classification. Semantic segmentation is done using 'atrous convolution' with up-sampled filters for feature extraction. Two things can be explicitly controlled: the resolution at which feature responses are computed, and the effective field of view of the filters, which can be enlarged to integrate wider context without increasing the number of parameters or the amount of computation. In [7] the model shows that convolutional networks trained end-to-end exceed the existing semantic segmentation results. The purpose of this research is to develop an FCN which takes input of arbitrary size and creates a correspondingly sized output with efficient inference. This model adapts contemporary classification networks into FCNs and then transfers their learned parameters by fine-tuning into the segmentation block. In [8] COVID-19 and thorax diseases are diagnosed from chest X-ray images by a system using residual networks 18 and 50 and the Xception model. The ResNet18 model gives an accuracy of 94%, ResNet50 gives an accuracy of 93%, and the Xception model has an accuracy of 95%. In FUIQA [9] an L-CNN and a C-CNN are used together for analysis, wherein the L-CNN gives an overall accuracy of 92% and the C-CNN gives an overall accuracy of around 97%.
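The IOU figures quoted for [4] above can be computed directly from two binary segmentation masks; a minimal sketch:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union (Jaccard index) of two boolean segmentation masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter) / float(union) if union else 1.0
```

IOU penalizes both false positives (which inflate the union) and false negatives (which shrink the intersection), which is why it is the standard score in cell-tracking benchmarks.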
For image quality assessment, [10] uses a CNN model to reconstruct all types of images, i.e., assessing the quality of an image and also reconstructing it in a better way. The model involves a convolutional layer, maximum and minimum pooling followed by two fully connected layers, and an output node. An optimization process spanning the network structure, feature learning, and regression is integrated, which leads to a more efficient model for judging image quality. In the model for image segmentation [11], U-Net is used for the segmentation of biomedical images. In [12], for cardiovascular magnetic resonance (CMR) images, an automated analysis method based on an FCN is presented. The network is trained and tested on an extensive dataset from the UK Biobank. Several technical metrics, such as the mean contour distance, Dice metric, and Hausdorff distance, as well as medical measures, including left ventricle (LV) end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV mass (LVM), and right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV), have been used to evaluate the system. The system gives an accuracy of 0.93. In [13] a convolutional recurrent neural network (CRNN) model is used for image reconstruction: high-quality CMR images are reconstructed by the CRNN from highly under-sampled k-space data by exploiting the dependencies of the temporal sequences as well as the iterative nature of traditional optimization algorithms. This model gives a minimum reconstruction time of 3 s. In [14] the authors have proposed the use of VGG16 for breast cancer detection, giving an overall accuracy of 94.77%. For brain tumor analysis, [15] uses deep neural networks such as FCN, ResNet, U-Net, Encoder/Decoder, and CNN, giving an overall accuracy of 75% to 80%. For quality assessment of echocardiograms, in [16] a regression model is enhanced to assess the standard and quality of echo images. A stochastic gradient descent algorithm is used to minimize the loss function of the system. The model was designed with a convolution stage (convolutional and pooling layers) and a fully connected stage. In the paper for image quality assessment [17], basic operations are done to eliminate known distortions from images. Distorted and reference signals are scaled and aligned, then metrics of digital values are converted into pixel intensity using nonlinear point transformations. Images are decomposed into channels that are selective for spatial and temporal orientation. Discrete Cosine Transform or separable wavelet transform methods are used for quality assessment. Errors between decomposed references in each channel are normalized so that the presence of one image component will decrease the visibility of another image component proximate in orientation, spatial or temporal location, and frequency.
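The Dice metric used to evaluate the segmentations in [12] is closely related to IOU; a minimal sketch for binary masks:

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A and B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    total = int(pred.sum()) + int(target.sum())
    return 2.0 * float(inter) / total if total else 1.0
```

Dice weights the overlap twice relative to the mask sizes, so for the same prediction it is always at least as large as the IOU score.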

Fig 1: The general steps for evaluating images using deep learning techniques

3. RELATED WORKS

Considering the literature review, in this section we will focus upon the deep learning models SVM [3], U-Net [4], ResNet18, ResNet50 and Xception [8], L-CNN and C-CNN [9], and CNN [14]. In the upcoming five subsections, the models will be studied in detail and will then be compared according to their accuracy, as discussed in the last section.

3.1 Abnormal CXR with tuberculosis detection using SVM

The research presents an automated system for the detection of tuberculosis. In [3] the performance of the system is measured on two datasets: the MC dataset from the Department of Health and Human Services of Montgomery County (MC), Maryland, and the Shenzhen dataset from Shenzhen No. 3 Hospital in Shenzhen, Guangdong province, China. The system uses a graph-cut-based method for lung segmentation and selects training masks according to the horizontal intensity projections of the histogram-equalized images, using the Bhattacharyya coefficient to measure similarity between the input and training CXR images. Further, the subtle structures in a CXR are picked up by features computed through several feature-computation techniques: object-detection-inspired features are taken from the MC dataset, and CBIR-based image features are taken from the Shenzhen dataset, whose 594-dimensional feature vectors are more than three times the size of the MC dataset's feature vectors. The computed set of edge, shape, and texture features is then fed as input to a binary classifier. Thus, the SVM classifier system can classify a given input image as normal or abnormal. For the MC dataset, the classification accuracy is 78.3% and the area under the ROC curve (AUC) is 86.9%; for the Shenzhen dataset, the classification accuracy is 82.5% and the AUC is 88%.
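The Bhattacharyya coefficient used here to match an input CXR against training images can be computed from two intensity histograms; a minimal sketch (the histogram values below are toy examples, not from the paper):

```python
import numpy as np

def bhattacharyya(h1: np.ndarray, h2: np.ndarray) -> float:
    """Bhattacharyya coefficient between two histograms: 1 = identical, 0 = disjoint."""
    p = h1 / h1.sum()  # normalize each histogram to a probability distribution
    q = h2 / h2.sum()
    return float(np.sqrt(p * q).sum())
```

Because the coefficient rises toward 1 as the distributions overlap, the system can rank training masks by how closely their intensity profiles resemble the input CXR.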

Model   Dataset     Accuracy (%)   AUC (%)
SVM     MC          78.3           86.9
SVM     Shenzhen    82.5           88.0

Table 1. Abnormal CXR with tuberculosis detection

3.2 U-Net: Convolutional Network for Image Segmentation

The proposed architecture contains a contracting path to capture context and a symmetric expanding path for precise localization. This [4] model uses a weighted loss, with large weights in the loss function allocated to the separating labels between touching cells, giving accurate segmentation. Data augmentation is used to obtain the desired robustness and invariance when only a handful of training samples are available. The important modification in the architecture is in the up-sampling path, where a large number of feature channels are present, which helps the network propagate context information to higher-resolution layers. As a result, the expansive path and contracting path are nearly symmetric, producing a u-shaped architecture. The proposed model does not contain any fully connected layers and uses only the valid part of each convolution, which allows seamless segmentation of arbitrarily large images with the help of an overlap-tile strategy. In this research, the model is evaluated using two datasets. The first data set, "PhC-U373", contains glioblastoma-astrocytoma U373 cells on a polyacrylamide substrate recorded by phase-contrast microscopy. Here, the model achieves an average IOU (intersection over union) of 92%, which is significantly better than the prior best algorithm (a sliding-window convolutional network) with an IOU of 83%. The second data set, "DIC-HeLa", consists of HeLa cells on flat glass recorded by differential interference contrast (DIC) microscopy. Here the model achieved an average IOU of 77.5%, which is significant compared with the prior best algorithm at 46%.
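The up-sampling-with-skip-connection step described above can be sketched with plain arrays. Nearest-neighbour up-sampling stands in for U-Net's learned up-convolutions, and the shapes are illustrative, not the paper's:

```python
import numpy as np

def upsample2x(x: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x up-sampling of a (channels, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def skip_concat(decoder: np.ndarray, encoder: np.ndarray) -> np.ndarray:
    """Up-sample the decoder map and concatenate the matching encoder map along channels."""
    up = upsample2x(decoder)
    assert up.shape[1:] == encoder.shape[1:], "spatial sizes must match after up-sampling"
    return np.concatenate([up, encoder], axis=0)
```

The concatenation is what lets high-resolution detail from the contracting path reach the expanding path, producing the "nearly symmetric" u-shape.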

Model   Dataset     IOU
U-Net   PhC-U373    92%
U-Net   DIC-HeLa    77.5%

Table 2. U-Net based image segmentation

3.3 Image Quality Assessment for fetal ultrasound with Deep Convolutional Neural Networks

This paper [9] proposes the use of an L-CNN and a C-CNN for classification in the image quality assessment of fetal ultrasounds. The performance of the system is evaluated on data taken from Shenzhen Maternal and Child Healthcare Hospital between 6 September 2012 and November 2013; the fetal ages ranged from 16 to 40 weeks. The proposed paper uses two deep convolutional neural networks, namely the L-CNN and the C-CNN, with an accuracy of 96%. The L-CNN's purpose is to find the region of interest (ROI) of the fetal abdomen in the image. According to the ROI discovered by the L-CNN, the C-CNN assesses the quality of an image by determining how well it depicts the essential structures of the stomach bubble and umbilical vein. Thereafter, to increase the performance of the L-CNN, the input of the neural network is augmented with local-phase features alongside the original data collected. The experimental evaluations of the scheme are carried out in three parts. In the first part, the L-CNN and C-CNN implementation issues are investigated. In the second part, the performance of the C-CNN model before and after training is illustrated quantitatively with visualization of the model. To illustrate the performance of the C-CNN and L-CNN, the results of region-of-interest (ROI) identification and of the assessment of SB (stomach bubble) and UV (umbilical vein) structures from the FUIQA scheme are compared. Assessment metrics such as accuracy, sensitivity, and specificity are selected for quantitative comparison. In this method, the outcomes according to three doctors, E1, E2, and E3, are considered.
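The assessment metrics named above all follow from the binary confusion-matrix counts; a minimal sketch (the counts in the usage example are hypothetical, not the paper's):

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int):
    """Accuracy, sensitivity (recall), and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity
```

Reporting sensitivity and specificity alongside accuracy matters here because a quality-assessment system that simply accepted every image could still score a high raw accuracy on an imbalanced set.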

Doctors   ROI Accuracy   SB Accuracy   UV Accuracy
E1        0.93           0.99          0.98
E2        0.91           0.99          0.97
E3        0.92           0.98          0.98

Table 3. Fetal ultrasound image quality assessment


3.4 Image Segment-based Classification for Chest X-Ray

The system [8] uses U-Net with an Xception encoder and a Feature Pyramid Network with Xception and ResNet50 encoders for segmentation of images, and ResNet18, ResNet50, and Xception for classification, with accuracies of 94%, 93%, and 95% respectively. This system is based on the Deep Learning and AI Summer/Winter School (DLAI) Hackathon Phase 3 Multiclass COVID-19 Chest X-ray Challenge dataset, which was collected from various sources. The dataset comprises three classes: "COVID-19", "Thorax Diseases", and "Clear". In order to suit the CNN model, the original RGB X-ray images were converted into monochrome images with floating-point pixels. The dataset was randomly split into training and test sets with an 80-20 split. Augmentation also increased the size of the training set, made the model more robust, and aided better comprehension of uncommon CXR images. A random selection of one or more of the following operations was applied to each original image: random cropping of the CXR with concentration on the lung area, horizontal and vertical flipping, 90-degree rotation, and histogram equalization. To segment the lung region, a deep learning model for semantic segmentation was used. A CNN model with ResNet18, ResNet50, and Xception was used: ResNet is a residual-learning model that uses skip connections to resolve the vanishing-gradient problem that comes with increasing the depth of CNN layers. The overall accuracy was enhanced by Xception, a CNN model built upon depthwise separable convolution layers, in which the cross-channel mappings and spatial correlations in the feature maps of the CNN can be entirely decoupled; these layers were linearly assembled with ResNet connections.
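The random augmentation operations listed above can be sketched for a single image. Histogram equalization and lung-focused cropping are omitted for brevity, and the seed and probabilities are illustrative assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed so runs are reproducible

def augment(image: np.ndarray) -> np.ndarray:
    """Apply a random subset of: horizontal flip, vertical flip, 90-degree rotation."""
    if rng.random() < 0.5:
        image = image[:, ::-1]   # horizontal flip
    if rng.random() < 0.5:
        image = image[::-1, :]   # vertical flip
    if rng.random() < 0.5:
        image = np.rot90(image)  # 90-degree rotation
    return image
```

Each operation only rearranges pixels, so labels stay valid while the training set effectively grows, which is what makes the model more robust to uncommon CXR presentations.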

3.5 Breast Cancer Medical Image Classification using CNN

In this system [14], a CNN model is used primarily. The model pre-processes the images with the specified parameters of the medical images required to fit the network. Images are in 224x224-pixel PNG format with rescaled boundaries. For normalizing the images, reducing the complexity of the topology function, and improving the optimization of the model's learning, a new level of orthogonality between the layers is introduced. The GPU performs augmentation operations such as zooming and translation at effectively zero computational cost, which also helps eliminate overfitting. For the given 15-layer CNN model, the input is VGG16-conventional specified images with dimension 224x224x3, i.e., with RGB color channels. A batch size of 8 is used due to GPU memory requirements, and the number of epochs is set to 100. The preferred optimizer used in this work is stochastic gradient descent (SGD) because of its faster and more frequent updates. The activation function used is ReLU, as it has good computational efficiency and works well in overfitting cases. The class mode used is binary, i.e., malignant and benign. The learning rate is 0.1 and the momentum is set to 0.9, with strides fixed at 1 pixel. The convolutional layer has 64 filters of dimension 3x3, outputting 224x224x64. The fully connected layer has a 2-class output and a dropout probability of 50% to deal with overfitting in the model.
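The SGD-with-momentum update behind the optimizer settings quoted above (learning rate 0.1, momentum 0.9) can be written in a few lines. This is the classical formulation, not the authors' exact code:

```python
import numpy as np

def sgd_momentum_step(w: np.ndarray, v: np.ndarray, grad: np.ndarray,
                      lr: float = 0.1, momentum: float = 0.9):
    """One parameter update: v <- momentum*v - lr*grad; w <- w + v."""
    v = momentum * v - lr * grad
    return w + v, v
```

The velocity term accumulates past gradients, so successive updates in a consistent direction grow larger; this is why momentum speeds convergence on the long flat valleys typical of deep-network loss surfaces.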

Model   Training Accuracy (%)   Val. Accuracy (%)   Training Loss (%)   Val. Loss (%)
VGG16   94.77                   96.33               13.94               10.90
CNN     98.02                   98.50               5.84                4.81

Table 5. Breast cancer medical image classification

4. PERFORMANCE COMPARISON

For biomedical image segmentation, the deep learning approach surpasses traditional methods in accuracy. In Section 3.5, the CNN model outperforms the other models in terms of performance: the combination of CNN classification and image segmentation produces a fine-grained detection model. The results show that when a designed model is trained for multiple iterations, the accuracy obtained is the best among all. The implementation of a new level of orthogonality between the layers is introduced to improve learning, and the GPU is used for image augmentation at zero computational cost. In Sections 3.1 and 3.2 we observe that the accuracy of a model varies as the dataset differs. In Section 3.1, SVM was applied to two datasets (MC and Shenzhen); from Table 1 it is seen that the accuracy for the Shenzhen dataset is higher than that for MC. Similarly in Section 3.2, U-Net worked on two different datasets; from Table 2 we note that the accuracy for the PhC-U373 dataset is greater than that for DIC-HeLa. There are multiple reasons for the change in accuracy on different datasets even when the same technique is applied. In Section 3.4, the Xception encoder, a CNN model using depthwise separable convolutional layers with cross-channel mapping, allows the feature maps of the CNN to be entirely decoupled and stacked linearly with residual connections, causing an increase in accuracy. In Section 3.3, an L-CNN and a C-CNN are used for classification of fetal ultrasound image quality: according to the ROI discovered by the L-CNN, the C-CNN evaluates the quality of an image by determining how well it depicts the key structures in the image data. In the experiments conducted using the different deep learning models, Xception, L-CNN + C-CNN, ResNet18, and ResNet50 perform well after CNN. Fine-tuning while avoiding losses gives higher accuracy when used on a larger dataset.

Method     Accuracy   Precision   Recall   F1
ResNet18   94         90          89       89
ResNet50   93         87          90       88
Xception   95         94          90       92
Average    94         90.3        89.7     89.7

Table 4.a. Chest X-Ray classification with cross-entropy loss

Method     Accuracy   Precision   Recall   F1
ResNet18   94         91          91       91
ResNet50   93         87          90       89
Xception   94         91          93       92
Average    93.7       89.7        91.3     90.7

Table 4.b. Chest X-Ray classification with focal loss

5. EXPERIMENTAL RESULTS

Fig 2: Models according to increasing accuracy

From this study, we have deduced that the 15-layer CNN model most effectively classifies and analyzes the medical images using relevant CNN classification techniques. The Xception model, along with the CNN model, gives the second-best accuracy. Further, we propose the implementation of CNN models with different activation functions and a focus on removing noise for better segmentation of the images, hence improving the performance.

6. CONCLUSIONS

Medical image analysis is mainly used for the detection and diagnosis of diseases, which helps in giving proper and accurate treatment to patients. In this paper we have taken an overview of various deep learning models, focusing on the classification task. This study has concluded that a union of traditional analysis approaches with deep learning frameworks results in an effective diagnosis of brain tumors, fetal ultrasound, COVID-19, thorax diseases, etc. Among the models U-Net, SVM, residual networks, CNN, and Xception, the highest accuracy is achieved by the CNN model at 98.02%. The study illustrates that the drawbacks of a single model can be overcome by a combination of deep learning models. In future work, this study can assist in enhancing such systems using advanced deep learning frameworks.

REFERENCES

[1] Wenjing Yang, Arlene Sirajuddin, Xiaochun Zhang, Guanshu Liu, Zhongzhao Teng, Shihua Zhao, and Minjie Lu. 2020. The role of imaging in 2019 novel coronavirus pneumonia (COVID-19). European Radiology 30, 9 (April 2020), 4874-4882. https://doi.org/10.1007/s00330-020-06827-4

[2] F. Chollet. 2017. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). IEEE, Honolulu, HI, 1800-1807. https://doi.org/10.1109/CVPR.2017.195

[3] S. Jaeger, A. Karargyris, S. Candemir, L. Folio, J. Siegelman, F. Callaghan, Z. Xue, K. Palaniappan, R. K. Singh, S. Antani, G. Thoma, Y. Wang, P. Lu, and C. J. McDonald. 2014. Automatic Tuberculosis Screening Using Chest Radiographs. IEEE Transactions on Medical Imaging 33, 2 (Feb 2014), 233-245.

[4] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015). Springer, Munich, Germany, 234-241. https://doi.org/10.1007/978-3-319-24574-4_28

[5] J. Schlemper, J. Caballero, J. V. Hajnal, A. N. Price, and D. Rueckert, "A deep cascade of convolutional neural networks for dynamic MR image reconstruction," IEEE Transactions on Medical Imaging, vol. 37, no. 2, pp. 491-503, 2017.

[6] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. 2018. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence 40, 4 (April 2018), 834-848.

[7] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proc. CVPR, 2015, pp. 3431-3440.

[8] Phongsathorn Kittiworapanya and Kitsuchart Pasupa, "An Image Segment-based Classification for Chest X-Ray Image," ACM, November 19, 2020. https://doi.org/10.1145/3429210.3429227

[9] L. Wu, J.-Z. Cheng, S. Li, B. Lei, T. Wang, and D. Ni, "FUIQA: Fetal ultrasound image quality assessment with deep convolutional networks," IEEE Transactions on Cybernetics, vol. 47, no. 5, pp. 1336-1349, 2017.

[10] L. Kang, P. Ye, Y. Li, and D. Doermann, "Convolutional neural networks for no-reference image quality assessment," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1733-1740.

[11] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in MICCAI. Springer, 2015, pp. 234-241.

[12] W. Bai, M. Sinclair, G. Tarroni, O. Oktay, M. Rajchl, G. Vaillant, A. M. Lee, N. Aung, E. Lukaschuk, and M. M. Sanghvi, "Automated cardiovascular magnetic resonance image analysis with fully convolutional networks," Journal of Cardiovascular Magnetic Resonance, vol. 20, no. 1, p. 65, 2018.

[13] C. Qin, J. Schlemper, J. Caballero, A. N. Price, J. V. Hajnal, and D. Rueckert, "Convolutional recurrent neural networks for dynamic MR image reconstruction," IEEE Transactions on Medical Imaging, vol. 38, no. 1, pp. 280-290, 2018.

[14] Yongbin Yu, Ekong Favour, and Pinaki Mazumder, "Convolutional Neural Network Design for Breast Cancer Medical Image Classification," 2020 IEEE 20th International Conference on Communication Technology.

[15] Mahnoor Ali, Syed Omer Gilani, Asim Waris, Kashan Zafar, and Mohsin Jamil, "Brain Tumour Image Segmentation Using Deep Networks," IEEE Access, vol. 8, 2020.

[16] A. H. Abdi, C. Luong, T. Tsang, G. Allan, S. Nouranian, J. Jue, D. Hawley, S. Fleming, K. Gin, J. Swift, et al., "Automatic quality assessment of apical four-chamber echocardiograms using deep convolutional neural networks," in Proc. SPIE, vol. 10133, 2017, pp. 101330S-1.

[17] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, Apr. 2004.
