International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Neha Adawadkar, Roshani Chavan, Chaitra Patwardhan, Advait Thakur, Prof. Dr. Mrs. Suhasini Itkar
Neha Adawadkar, Dept. of Computer Engineering, PES Modern College of Engineering, Maharashtra, India
Roshani Chavan, Dept. of Computer Engineering, PES Modern College of Engineering, Maharashtra, India
Chaitra Patwardhan, Dept. of Computer Engineering, PES Modern College of Engineering, Maharashtra, India
Advait Thakur, Dept. of Computer Engineering, PES Modern College of Engineering, Maharashtra, India
Prof. Dr. Mrs. Suhasini Itkar, Dept. of Computer Engineering, PES Modern College of Engineering, Maharashtra, India
***
Abstract - The recent pandemic has placed considerable stress on the healthcare industry, and population growth has made it clear that the workload placed on healthcare specialists needs to be reduced. Deep learning models such as neural networks are efficient at finding hidden patterns, which assists experts in the field. Medical images such as chest X-rays are of utmost importance for the diagnosis of diseases, for instance brain tumors, dementia, pneumonia, COVID-19, and cardiovascular conditions. Combining these medical images with deep learning techniques has enabled numerous applications that reduce the burden on the health sector. In this paper, we present a brief comparison and study of existing technologies for analyzing chest X-rays using deep learning.
Key Words: Chest X-Ray (CXR), Convolutional Neural Network (CNN), Fully Convolutional Network (FCN), Deep Cascade of Convolutional Neural Networks (DCCNN), Lookup-based Convolutional Neural Network (LCNN), Deep Convolutional Neural Network (DCNN)
Image segmentation is the method of dividing an image into multiple pixel regions, called segments, so that it can be processed accurately. Diagnosis with the aid of imaging can help in better localization and staging of a disease. Medical images typically follow a traditional image-segmentation life cycle for analysis; [1] highlights the value of imaging in the diagnosis of a disease. The analysis pipeline consists of image pre-processing, followed by feature extraction and classification, and these steps are accomplished using various deep learning methodologies. The deep learning techniques from the surveyed research have displayed remarkable results for the diagnosis of medical images. In the pre-processing stage, the most commonly used technique is normalization, which reduces the dimensionality of the image and makes it fit for segmentation. Classification of images can be done using a Support Vector Machine (SVM) [3]. In addition, data augmentation [4] is used to increase the robustness of deep learning, and an up-sampling method [4] helps the network layers propagate context information to higher-resolution layers.
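The augmentation step described above can be sketched with a few array operations. The following is a minimal illustration, not code from any of the surveyed papers, of how simple flips and rotations multiply a small training set:

```python
import numpy as np

def augment(image):
    """Return simple augmented variants of a 2-D image array:
    the original, horizontal flip, vertical flip, and a 90-degree rotation."""
    return [
        image,             # original
        np.fliplr(image),  # horizontal flip
        np.flipud(image),  # vertical flip
        np.rot90(image),   # 90-degree rotation
    ]

# A toy 2x2 "image" stands in for a chest X-ray.
x = np.array([[1, 2],
              [3, 4]])
variants = augment(x)
print(len(variants))  # 4 training samples obtained from 1 original
```

Real pipelines typically sample such transforms randomly per epoch rather than enumerating them, but the effect on data-set size and robustness is the same.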
Feature extraction is a technique used to reduce a large input data set to its relevant features. The images required for segmentation contain a large number of variables, and processing them demands substantial computing resources. Feature extraction therefore helps to obtain the best features from large data sets by selecting and combining variables into features, effectively decreasing the amount of data. The final step is classification, where the desired output is obtained: the pre-processed images and selected features are provided to the model for training, which ultimately yields the appropriate result. Classification is a supervised learning task in the machine learning domain. Models such as SVM in [3], U-Net in [4], ResNet18, ResNet50 and Xception in [8], LCNN and DCNN in [9], and the Fully Convolutional Network (FCN) in [15] are used for this step in the examined research.
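As a toy illustration of the feature-extraction idea described above (a crude stand-in, not a method from the surveyed papers), one can flatten each image and keep only the most informative pixels, for example those with the highest variance across the data set:

```python
import numpy as np

def top_variance_features(images, k):
    """Flatten each image and keep the k pixel positions with the highest
    variance across the data set - a crude feature selection that shrinks
    the input fed to a downstream classifier such as an SVM."""
    flat = images.reshape(len(images), -1)   # (n_samples, n_pixels)
    idx = np.argsort(flat.var(axis=0))[-k:]  # k most variable pixel positions
    return flat[:, idx]

rng = np.random.default_rng(0)
imgs = rng.random((10, 8, 8))                # 10 toy 8x8 "X-rays"
feats = top_variance_features(imgs, k=5)
print(feats.shape)  # (10, 5): 64 variables reduced to 5 features
```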
Over the past few years, machine learning frameworks have immensely helped in analyzing complex medical images. Tremendous advances in medical image segmentation and its applications have motivated us to study these research techniques. The aim of [1] is to concisely present the issue of coronavirus and highlight the importance of medical imaging in its diagnosis. An Inception-inspired novel deep CNN architecture, in which depth-wise separable convolutions replace the Inception modules, is discussed in [2]. There, an architecture named Xception, similar to Inception V3, is developed that uses the model parameters more efficiently. Emphasizing automatic tuberculosis detection, [3] proposes a binary SVM classifier that labels a CXR as either normal or abnormal, achieving an accuracy of 82.1% and an area under the ROC curve of 88.5%. In [4], a U-Net network is trained
Volume: 09 Issue: 07 | July 2022 | www.irjet.net | p-ISSN: 2395-0072 | © 2022, IRJET | Impact Factor value: 7.529 | ISO 9001:2008 Certified Journal
with the use of data augmentation, applying elastic deformations to make effective use of the annotated samples. The segmentation results for the ISBI cell tracking challenge report an average IOU ("intersection over union") of 92% for the "PhC-U373" data set and 77.5% for the "DIC-HeLa" data set. In [5], a framework for dynamic sequences of 2-D cardiac Magnetic Resonance Images (MRI) reconstructed from under-sampled data using a DCCNN is proposed. With this model, images of good quality can feasibly be reconstructed through a network with interleaved data-consistency stages, and most anatomical structures are reconstructed precisely from previously learned data. In [6], the proposed system repurposes networks trained on image classification. Semantic segmentation is done using "atrous convolution" with up-sampled filters for feature extraction. Two things can be explicitly controlled: the resolution at which feature responses are computed, and the effective field of view of the filters, which can be enlarged to integrate wider context without increasing the number of parameters or the amount of computation. In [7], the model shows that convolutional networks trained end-to-end exceed the existing semantic segmentation methods. The purpose of that research is to develop an FCN that takes input of arbitrary size and produces a correspondingly sized output with efficient inference. The model adapts contemporary classification networks into FCNs and transfers their learned parameters by fine-tuning into the segmentation block. In [8], COVID-19 and thorax diseases are diagnosed from chest X-ray images using residual networks 18 and 50 and the Xception model. The ResNet18 model gives an accuracy of 94%, ResNet50 gives 93%, and the Xception model gives 95%. In FUIQA [9], an LCNN and a DCNN are used together for analysis, where the LCNN gives an overall accuracy of 92% and the DCNN around 97%.
For image quality assessment, [10] uses a CNN model to assess the quality of images of all types and to reconstruct them in a better way. The model comprises a convolutional layer, maximum and minimum pooling, two fully connected layers, and an output node. An optimization process integrating the network structure, feature learning, and regression leads to a more efficient model for judging image quality. In the segmentation model of [11], U-Net is used for the segmentation of biomedical images. In [12], an automated analysis method for cardiovascular magnetic resonance (CMR) images is based on an FCN. The network is trained and tested on an extensive dataset from the UK Biobank. Several technical metrics, such as the mean contour distance, Dice metric, and Hausdorff distance, as well as medical measures, including left ventricle (LV) end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV mass (LVM), and right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV), are used to evaluate the system. The system gives an accuracy of 0.93. In [13], a convolutional recurrent neural network (CRNN) model is used for dynamic MR image reconstruction. High-quality
CMR images are reconstructed by the CRNN from highly under-sampled k-space data, exploiting the dependencies of the temporal sequences as well as the iterative nature of traditional optimization algorithms; the model achieves a reconstruction time as low as 3 s. In [14], the authors propose the use of VGG16 for breast cancer detection, giving an overall accuracy of 94.77%. For brain tumor analysis, [15] uses deep neural networks such as FCN, ResNet, U-Net, encoder/decoder models, and CNN, giving an overall accuracy of 75% to 80%. For quality assessment of echocardiograms, in [16] a regression model is enhanced to assess the standard and quality of echo images. A stochastic gradient descent algorithm is used to minimize the loss function of the system. The model consists of a convolution stage (convolutional and pooling layers) and a fully connected stage. In the image quality assessment paper [17], basic operations are first applied to eliminate known distortions from images. Distorted and reference signals are scaled and aligned, and digital metric values are converted into pixel intensities using nonlinear point transformations. Images are decomposed into channels that are selective for spatial and temporal orientation; Discrete Cosine Transform or separable wavelet transform methods are used for quality assessment. Errors between the decomposed references in each channel are normalized so that the presence of one image component decreases the visibility of another component proximate in orientation, spatial or temporal location, and frequency.
Fig 1: The general steps for evaluating images using deep learning techniques
Considering the literature review in this section, we will focus on the deep learning models SVM [3], U-Net [4], ResNet18, ResNet50 and Xception [8], LCNN and CCNN [9], and CNN [14]. In the upcoming five sections, the models are studied in detail and then compared according to their accuracy in the last section.
The research presents an automated system for the detection of tuberculosis. In [3], the performance of the system is measured on two datasets: the MC dataset from the Department of Health and Human Services of Montgomery County (MC), Maryland, and the Shenzhen dataset, from Shenzhen No. 3 Hospital in Shenzhen,
Guangdong province, China. The system uses a graph-cut-based method for lung segmentation and selects training masks according to the horizontal intensity projections of the histogram-equalized images, using the Bhattacharyya coefficient to measure similarity between the input and training CXR images. Further, the subtle structures in a CXR are captured by several feature computation techniques: Object Detection Inspired Features are taken from the MC dataset, and CBIR-based Image Features are taken from the Shenzhen dataset, whose 594-dimensional feature vectors are more than three times the size of the MC dataset's feature vectors. The computed set of edge, shape, and texture features is then fed as input to a binary classifier. Thus, the SVM classifier can classify a given input image as normal or abnormal. For the MC dataset, the classification accuracy is 78.3% and the area under the ROC curve (AUC) is 86.9%; for the Shenzhen dataset, the classification accuracy is 82.5% and the AUC is 88%.
Table 1. Tuberculosis screening using SVM [3]

Model | Dataset  | Accuracy (%) | AUC (%)
SVM   | MC       | 78.3         | 86.9
SVM   | Shenzhen | 82.5         | 88.0
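The Bhattacharyya coefficient that [3] uses to compare intensity projections follows directly from its definition; the sketch below is an illustrative implementation, not the authors' code:

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Similarity between two histograms p and q, each normalized to sum
    to 1; ranges from 0 (disjoint support) to 1 (identical)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

# Identical projections score ~1.0; non-overlapping ones score 0.0.
print(bhattacharyya_coefficient([1, 2, 3], [1, 2, 3]))  # ~1.0
print(bhattacharyya_coefficient([1, 0], [0, 1]))        # 0.0
```

A mask whose projection profile scores highest against the input CXR would be the natural choice under the selection scheme described above.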
The proposed architecture contains a contracting path to capture context and a symmetric expanding path for precise localization. This [4] model uses a weighted loss, with large weights in the loss function allocated to the separating labels between touching cells, giving accurate segmentation. Data augmentation is used to obtain the desired robustness and invariance when only a handful of training samples are available. The important modification in the architecture is the up-sampling stage, where a large number of feature channels allow the network to propagate sensitive information to the high-resolution layers. As a result, the expansive path and contracting path are nearly symmetric, producing a u-shaped architecture. The proposed model does not contain any fully connected layers and uses only the valid part of each convolution, which allows smooth segmentation of arbitrarily large images with the help of an overlap-tile strategy. In this research, the model is evaluated on two datasets. The first data set, "PhC-U373", contains glioblastoma-astrocytoma U373 cells on a polyacrylamide substrate recorded by phase-contrast microscopy. Here, the model achieves an average IOU (intersection over union) of 92%, significantly better than the prior best algorithm (a sliding-window convolutional network) with an IOU of 83%. The second data set, "DIC-HeLa", contains HeLa cells on flat glass recorded by differential interference contrast (DIC) microscopy. Here, the model achieved an average IOU of 77.5%, a significant improvement over the prior best algorithm's 46%.
Table 2. Biomedical image segmentation using U-Net [4]

Model | Dataset  | IOU
U-Net | PhC-U373 | 92%
U-Net | DIC-HeLa | 77.5%
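The IOU score reported in Table 2 is straightforward to compute from binary masks; the following is a minimal sketch of the metric, not the challenge's evaluation code:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

a = np.array([[1, 1], [0, 0]])  # toy predicted mask
b = np.array([[1, 0], [0, 0]])  # toy ground-truth mask
print(iou(a, b))  # 0.5: intersection is 1 pixel, union is 2 pixels
```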
This paper [9] proposes the use of an LCNN and a CCNN for the classification task in image quality assessment for fetal ultrasounds. The performance of the system is evaluated on data taken from Shenzhen Maternal and Child Healthcare Hospital from 6 September 2012 to November 2013; the fetal ages ranged between 16 and 40 weeks. The proposed paper uses two deep convolutional neural networks, an L-CNN and a C-CNN, with an accuracy of 96%. The L-CNN's purpose is to find the region of interest (ROI) of the fetal abdomen in the image. Based on the ROI discovered by the L-CNN, the C-CNN assesses the quality of an image by determining how well it depicts the essential structures of the stomach bubble and umbilical vein. Thereafter, to increase the performance of the L-CNN, the input of the neural network is augmented by combining local-phase features with the original collected data. The experimental evaluation of the scheme is carried out in three parts. In the first part, the L-CNN and C-CNN implementation issues are investigated. In the second part, the performance of the C-CNN before and after training is illustrated quantitatively with visualizations of the model. To illustrate the performance of the C-CNN and L-CNN, the results of ROI identification and the assessment of SB (stomach bubble) and UV (umbilical vein) structures from the FUIQA scheme are compared. Assessment metrics such as accuracy, sensitivity, and specificity are selected for quantitative comparison. In this method, the outcomes according to three doctors, E1, E2, and E3, are considered.
Table 3. Fetal ultrasound image quality assessment
The system [8] uses U-Net with an Xception encoder and Feature Pyramid Networks with Xception and ResNet50 encoders for segmentation, and Xception, ResNet50, and ResNet18 for classification, with accuracies of 95%, 93%, and 94% respectively. This system is based on the Deep Learning and AI Summer/Winter School (DLAI) Hackathon Phase 3 Multiclass COVID-19 Chest X-ray Challenge dataset, which was collected from various sources. The dataset comprises three classes: "COVID-19", "Thorax Diseases", and "Clear". To suit the CNN model, the original RGB X-ray images were converted into monochrome images with floating-point pixels. The dataset was randomly split into training and test sets with an 80-20 split. Augmentation increased the size of the training set, made the model more robust, and aided in a better understanding of uncommon CXR images. A random selection of one or more of the following operations was applied to each original image: random cropping of the CXR concentrating on the lung area, horizontal and vertical flipping, 90-degree rotation, and histogram equalization. To segment the lung region, a deep learning model for semantic segmentation was used. A CNN model with ResNet18, ResNet50, and Xception was used for classification. ResNet is a residual learning model that uses skip connections to resolve the vanishing-gradient problem as the depth of the CNN layers increases. The overall accuracy was enhanced by Xception, a CNN model built upon depth-wise separable convolution layers: the cross-channel mappings and spatial correlations in the feature maps of the CNN can be entirely decoupled, and the layers were linearly assembled with ResNet connections.
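The parameter saving behind Xception's depth-wise separable convolutions, mentioned above, can be verified with simple arithmetic; the counts below ignore biases and are an illustration, not figures from [8]:

```python
def standard_conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution layer (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depth-wise separable version: one k x k filter per input channel
    (spatial correlations) plus a 1x1 point-wise convolution
    (cross-channel mapping)."""
    return k * k * c_in + c_in * c_out

# A typical 3x3 layer with 256 input and 256 output channels:
print(standard_conv_params(3, 256, 256))   # 589824
print(separable_conv_params(3, 256, 256))  # 67840 - roughly 8.7x fewer
```

Decoupling the spatial and cross-channel steps is exactly what lets Xception use its parameter budget more efficiently than a stack of standard convolutions.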
In this system [14], a CNN model is used primarily. The model pre-processes the images with the parameters required to fit the medical images into the network: images are in 224x224-pixel PNG format with rescaled boundaries. To normalize the images, reduce the complexity of the topology function, and improve the optimization of model learning, a new level of orthogonality between the layers is introduced. The GPU performs over-fitting elimination, zooming, and translation at zero computation cost. For the given 15-layer CNN model, the input is the VGG16 conventional specified image of dimension 224x224x3 with RGB color channels. A batch size of 8 is used due to GPU requirements, and the number of epochs is set to 100. The preferred optimizer is stochastic gradient descent (SGD) because of its faster and more frequent updates. The activation function used is ReLU, owing to its good computational efficiency and its ability to cope with over-fitting. The class mode is binary, i.e., malignant and benign. The learning rate is 0.1, the momentum is set to 0.9, and the stride is fixed at 1 pixel. The convolutional layer has 64 filters of dimension 3x3, outputting 224x224x64. The fully connected layer has a 2-class output and a dropout probability of up to 50% to handle over-fitting in the model.
Model | Training Accuracy (%) | Val. Accuracy (%) | Training Loss (%) | Val. Loss (%)
VGG16 | 94.77                 | 96.33             | 13.94             | 10.90
CNN   | 98.02                 | 98.50             | 5.84              | 4.81

Table 5. Breast cancer medical image classification
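The SGD-with-momentum configuration reported above (learning rate 0.1, momentum 0.9) follows the standard update rule; this numpy sketch shows the update itself and is not the authors' training code:

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    """One SGD-with-momentum update: the velocity accumulates a decayed
    history of gradients, smoothing the descent direction."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

w = np.array([1.0, -2.0])             # toy weight vector
v = np.zeros_like(w)
grad = np.array([0.5, -0.5])          # toy gradient of the loss at w
w, v = sgd_momentum_step(w, grad, v)  # first step: plain scaled gradient
print(w)  # [ 0.95 -1.95]
```

With zero initial velocity the first step reduces to plain SGD; on subsequent steps the momentum term reinforces consistent gradient directions, which is the "faster and more frequent updates" property cited above.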
For biomedical image segmentation, the deep learning approach surpasses traditional methods in accuracy. In section E, the CNN model outperforms the other models in terms of performance. The combination of CNN classification and image segmentation produces a fine-grained detection model. The results show that, by training a well-designed model for multiple iterations, the best accuracy among all is obtained. A new level of orthogonality between the layers is introduced to increase learning, and a zero-computational-cost GPU pipeline is used for image augmentation. In sections A and B we observe that the accuracy of a model varies as the dataset differs. In section A, SVM is applied to two datasets (MC and Shenzhen); Table 1 shows that the accuracy on the Shenzhen dataset is higher than on MC. Similarly, in section B, U-Net is applied to two different datasets; from Table 2 we see that the accuracy for the PhC-U373 dataset is greater than for DIC-HeLa. There are multiple reasons for the change in accuracy on different datasets irrespective
of the same technique being applied. In section D, the Xception encoder, a CNN model using depth-wise separable convolutional layers with cross-channel mapping, allows the feature maps of the CNN to be entirely decoupled and stacked linearly with residual connections, causing an increase in accuracy. In section C, an LCNN and a CCNN are used for the classification of fetal ultrasound image quality: based on the ROI discovered by the L-CNN, the C-CNN evaluates the quality of an image by determining how well it depicts the key structures of the image data. In the experiments conducted with the different deep learning models, Xception, LCNN+CCNN, ResNet18, and ResNet50 perform well after CNN. Fine-tuning while avoiding losses gives higher accuracy when applied to a larger dataset.
From this study, we deduce that the 15-layer CNN model most effectively classifies and analyzes medical images using the relevant CNN classification techniques. The Xception model, along with the CNN model, gives the second-best accuracy. Further, we propose implementing CNN models with different activation functions and focusing on removing noise for better segmentation of the images, thereby improving performance.
Medical image analysis is mainly used for the detection and diagnosis of diseases, which helps in providing proper and accurate treatment to patients. In this paper we have presented an overview of various deep learning models, focusing on the classification task. This study concludes that a union of traditional analysis approaches and deep learning frameworks results in an effective diagnosis of brain tumors, fetal ultrasound, COVID-19, thorax diseases, etc. Among the models U-Net, SVM, residual networks, CNN, and Xception, the highest accuracy, 98.02%, is achieved by the CNN model. The study illustrates that the drawbacks of a single model are overcome by combining deep learning models. In future work, this study can assist in enhancing such systems using advanced deep learning frameworks.
[1] Wenjing Yang, Arlene Sirajuddin, Xiaochun Zhang, Guanshu Liu, Zhongzhao Teng, Shihua Zhao, and Minjie Lu, "The role of imaging in 2019 novel coronavirus pneumonia (COVID-19)," European Radiology, vol. 30, no. 9, pp. 4874-4882, April 2020. https://doi.org/10.1007/s00330-020-06827-4

[2] F. Chollet, "Xception: Deep Learning with Depthwise Separable Convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, HI, 2017, pp. 1800-1807. https://doi.org/10.1109/CVPR.2017.195

[3] S. Jaeger, A. Karargyris, S. Candemir, L. Folio, J. Siegelman, F. Callaghan, Z. Xue, K. Palaniappan, R. K. Singh, S. Antani, G. Thoma, Y. Wang, P. Lu, and C. J. McDonald, "Automatic Tuberculosis Screening Using Chest Radiographs," IEEE Transactions on Medical Imaging, vol. 33, no. 2, pp. 233-245, Feb. 2014.

[4] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), Springer, Munich, Germany, 2015, pp. 234-241. https://doi.org/10.1007/978-3-319-24574-4_28

[5] J. Schlemper, J. Caballero, J. V. Hajnal, A. N. Price, and D. Rueckert, "A deep cascade of convolutional neural networks for dynamic MR image reconstruction," IEEE Transactions on Medical Imaging, vol. 37, no. 2, pp. 491-503, 2017.

[6] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille, "DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834-848, April 2018.

[7] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proc. CVPR, 2015, pp. 3431-3440.
[8] Phongsathorn Kittiworapanya and Kitsuchart Pasupa, "An Image Segment-based Classification for Chest X-Ray Image," ACM, November 19, 2020. https://doi.org/10.1145/3429210.3429227
[9] L. Wu, J.-Z. Cheng, S. Li, B. Lei, T. Wang, and D. Ni, "FUIQA: Fetal ultrasound image quality assessment with deep convolutional networks," IEEE Transactions on Cybernetics, vol. 47, no. 5, pp. 1336-1349, 2017.

[10] L. Kang, P. Ye, Y. Li, and D. Doermann, "Convolutional neural networks for no-reference image quality assessment," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1733-1740.

[11] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in MICCAI, Springer, 2015, pp. 234-241.

[12] W. Bai, M. Sinclair, G. Tarroni, O. Oktay, M. Rajchl, G. Vaillant, A. M. Lee, N. Aung, E. Lukaschuk, M. M. Sanghvi, et al., "Automated cardiovascular magnetic resonance image analysis with fully convolutional networks," Journal of Cardiovascular Magnetic Resonance, vol. 20, no. 1, p. 65, 2018.

[13] C. Qin, J. Schlemper, J. Caballero, A. N. Price, J. V. Hajnal, and D. Rueckert, "Convolutional recurrent neural networks for dynamic MR image reconstruction," IEEE Transactions on Medical Imaging, vol. 38, no. 1, pp. 280-290, 2018.

[14] Yongbin Yu, Ekong Favour, and Pinaki Mazumder, "Convolutional Neural Network Design for Breast Cancer Medical Image Classification," in 2020 IEEE 20th International Conference on Communication Technology.

[15] Mahnoor Ali, Syed Omer Gilani, Asim Waris, Kashan Zafar, and Mohsin Jamil, "Brain Tumour Image Segmentation Using Deep Networks," IEEE Access, vol. 8, 2020.

[16] A. H. Abdi, C. Luong, T. Tsang, G. Allan, S. Nouranian, J. Jue, D. Hawley, S. Fleming, K. Gin, J. Swift, et al., "Automatic quality assessment of apical four-chamber echocardiograms using deep convolutional neural networks," in Proc. SPIE, vol. 10133, 2017, pp. 101330S-1.

[17] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, Apr. 2004.