

International Research Journal of Engineering and Technology (IRJET) e-ISSN:2395-0056

Volume: 12 Issue: 07 | Jul 2025 www.irjet.net p-ISSN:2395-0072

Detection of Depressed and Non-Depressed Texts from Social Media Using a Transfer-Based Architecture of BERT

*Research Scholar Department of CSE, Takshshila Institute of Engineering & Technology, Jabalpur, M.P.

**Prof., Department of CSE, Takshshila Institute of Engineering & Technology, Jabalpur, M.P.

Abstract- The core objective is to develop a reliable and accurate system capable of identifying depressed and non-depressed users through natural language processing (NLP) techniques. To this end, we experimented with a range of sequence modelling approaches, including Simple RNN, unidirectional and bidirectional LSTM, and Gated Recurrent Units (GRU) with their bidirectional counterparts. Furthermore, we integrated a transfer learning approach using BERT (Bidirectional Encoder Representations from Transformers) to analyse complex sentence structures and contextual relationships in user posts. All models were trained and evaluated on a labelled dataset consisting of social media text entries annotated for depression. The evaluation metric of choice was recall, given the critical importance of minimizing false negatives in mental health detection. The BERT model achieved the highest recall score of 98.10%, followed closely by the Bidirectional LSTM and Bidirectional GRU models, both scoring 97.76%. These results demonstrate the superiority of bidirectional architectures in capturing contextual semantics and underline the importance of transfer-based models like BERT in improving mental health classification performance. Our study confirms that bidirectionality and contextual embeddings significantly boost detection capabilities in NLP-based mental health applications. This work contributes to building intelligent systems for early diagnosis of depression, supporting mental health professionals with enhanced screening tools in online environments.

Keywords: Depression Detection, Social Media Analysis, Natural Language Processing (NLP), Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Bidirectional Models, BERT.

I. INTRODUCTION

A brain tumor is an uncontrolled growth of brain tissue. It produces pressure inside the skull and interferes with the brain's natural functioning. Brain tumors come in two different types: benign (non-cancerous) and malignant (cancerous). Among them, malignant tumours grow quickly in the brain, damage normal tissue, and may spread to other parts of the body [1], [2], [3]. Brain tumors are graded into four different categories:

Grade I: These tumors develop slowly and do not spread quickly. They are associated with a better prognosis and can often be removed almost entirely by surgery. One such tumor is a pilocytic astrocytoma.

Grade II: These tumors grow slowly over time, although they may migrate to surrounding tissues and advance to higher grades.

Grade III: These tumors grow more quickly than grade II malignancies and spread to adjoining tissues.

Grade IV: The most dangerous tumors, which are likely to spread, fall into this category. They may even use blood vessels to speed up their growth.

Xu et al. [4] developed an inertial microfluidic sorting device with a 3D-stacked multistage design intended for circulating tumor cells (CTCs) with efficient downstream analysis. The initial stage uses a trapezoidal cross-section for maximum separation, followed by symmetrical square serpentine channels for further enrichment. This novel design yields over 80% efficiency and over 90% purity. The approach allows for quick and integrated CTC analysis.

Recent advancements in vision models, such as Vision Language Models (VLLMs) and other emerging technologies, have the potential to significantly enhance the performance of brain tumor detection methods. VLLMs, for instance, combine vision and language processing, allowing for a more nuanced understanding of both visual data and associated medical narratives. Incorporating these models could improve our system's ability to interpret complex MRI scans in the context of clinical documentation, patient history, or radiology reports. Furthermore, other emerging technologies, such as self-supervised learning and attention mechanisms, are showing promise in enhancing the ability of models to learn from limited data, which would be particularly beneficial for improving performance on rare tumor types.


Detecting brain tumors at an early stage can help prevent tumor growth and the dissemination of cancerous cells to other brain regions. Traditionally, radiologists analyse brain tumors by employing medical imaging techniques to spot abnormalities. Although MRI, computed tomography (CT), and positron emission tomography (PET) scans are all preferred choices among medical practitioners, its non-invasive nature and exceptional depiction of soft tissue and metabolite characteristics make MRI the most sought-after imaging technique for tumor detection. MRI plays a fundamental role in characterizing the type of tumor, which, in turn, guides physicians in devising treatment plans and monitoring the progression of the disease. Structural MRI images, specifically T1, contrast-enhanced T1, and T2-weighted images, are instrumental in delineating both the healthy anatomical structures and the extent of tumor involvement [5]. Consequently, MRI has attracted significant attention from researchers and scientists. Recently, there has been a notable rise in interest in MRI-based medical image analysis for brain tumor studies, driven by an increasing need for efficient and unbiased analysis of large amounts of medical data. Previously, 2D segmentation models were created specifically for radiologists. However, achieving high accuracy in tumor boundary segmentation remains a challenge. Identifying, segmenting, and classifying brain tumors is vital but often time-consuming. Conventional methods are laborious and susceptible to human error.

Various tumor segmentation and classification techniques have been categorized into registration-based, atlas-based, supervised, unsupervised, and hybrid algorithms. Atlas-based algorithms are applied in Pipitone et al.'s work [6], utilizing registration techniques outlined by Bauer et al. [7]. For unlabelled data, unsupervised techniques are employed to capture inherent characteristics, as discussed in Chander et al.'s work [8], utilizing various clustering methods [9], [10], Gaussian modelling, or other approaches. Supervised techniques are utilized for labelled data, such as decision forests, where multiple decision trees are combined to make predictions.

High-dimensional and effective algorithms such as support vector machines (SVM) are employed for classification tasks. Additionally, extremely randomized trees and self-organizing maps are used for exploratory data analysis and visualization. All of the mentioned methods leverage features extracted through feature engineering, encompassing the extraction and selection of relevant features. The advent of computerized algorithms has significantly eased the segmentation and classification of brain tumors. Deep learning, in particular, has remarkably impacted tumor detection, segmentation, and classification. In recent years, deep learning has surpassed traditional machine learning methods in various applications. Specifically, in classifying brain tumor grades, several researchers have employed pre-trained Convolutional Neural Network (CNN) architectures and fine-tuned them to achieve superior performance. Additionally, some experts have advocated for the use of modern CNN architectures trained from the ground up. While the majority of CNNs used for this purpose have been 2D-based, there have also been instances where researchers applied CNN architectures in a 3D context.

In medical imaging, computerized systems primarily perform two critical tasks: classification and segmentation. Malignant and benign tumors exhibit distinct shape characteristics. For instance, malignant tumors often have irregular and spiculated shapes, while benign tumors tend to have smooth, oval, or round boundaries. Radiologists rely on these shape features for clinical diagnosis, helping them categorize tumors as either benign or malignant. These properties play a vital role in both tumor classification and segmentation. Consequently, the idea of performing these dual tasks via a single neural network, sharing relevant features between the two, holds significant promise and merits further exploration. This work introduces a unique approach to disease detection focused on simultaneously addressing the challenges of brain tumor detection and classification.

1.2 Brain Tumor

A tumor is an uncontrolled proliferation of cancer cells in any region of the body. Tumors can vary widely in type, characteristics, and treatment approaches. Currently, brain tumors are categorized into several distinct types [11]. Before delving into brain tumor segmentation methods, it is essential to introduce the MRI pre-processing operations, as they directly influence the quality of the segmentation results. Additionally, there are numerous challenges in engineering solutions for brain tumor treatment, as highlighted by Lyon et al. [12].

Malignant brain tumors are among the most devastating forms of cancer, marked by dismal survival rates that have remained largely unchanged over the past six decades. This stagnation is primarily attributed to the scarcity of available therapies capable of navigating anatomical barriers without causing harm to delicate neuronal tissue. The recent progress in cancer immunotherapies offers a promising frontier for potentially treating these otherwise inoperable brain tumors. However, despite the promising outcomes observed in other types of cancer, limited headway has been made in the case of brain tumors. The human brain constitutes a vital organ with exceptional anatomical, physiological, and immunological complexities that impede the achievement of successful treatments. These unparalleled anatomical and physiological restrictions give rise to inherent technical challenges when designing therapeutic approaches that must efficiently function within the brain and its tumor immune microenvironment. To date, engineering strategies for implementing immunotherapies in the brain, beyond merely adapting existing extra-neural immunotherapies, are lacking. Nevertheless, there exist ample opportunities for innovation. Approaching therapeutic strategies from an engineering standpoint may allow us to harness a wider array of technologies to advance tailored treatment approaches that can effectively address the specific biological constraints associated with brain cancer treatment, ultimately facilitating improved brain cancer immunotherapies.

Pre-processing of raw MRI images is a fundamental step in achieving accurate brain tumor segmentation. These pre-processing operations encompass tasks such as de-noising, skull-stripping, and intensity normalization, and they directly influence the quality of brain tumor segmentation results [13]. Annotating medical images requires significant time and expertise, making the identification of a large number of images particularly challenging. Several efforts have been undertaken to address the challenges mentioned. Transfer learning models have proven to be a valuable choice, especially in scenarios with limited labelled data for training [14]. The advancement in reducing human error and lowering mortality rates relies on the development of accurate and automated classification methods. To this end, automated tumor identification systems based on deep learning have emerged, aimed at diminishing the time and effort spent by radiologists on brain tumor analysis using MRI. Recent studies indicate that automated image analysis, complemented by clinical picture assessment, holds promise as a solution to save time and deliver dependable results. Furthermore, these automated assessments can play a pivotal role in enhancing the therapeutic management of brain tumors by alleviating the burden on doctors who would otherwise manually describe tumor growth. In essence, computer algorithms have the potential to offer robust and quantitative estimations of tumor characteristics [15], [16].

Brain tumors are a noteworthy medical issue, positioned as the tenth most common cause of mortality in the United States. It is estimated that approximately 700,000 individuals are afflicted by brain tumors, with 80 percent categorized as benign and 20 percent as malignant [17]. The challenges in this domain arise from substantial variations in brain tumor attributes, encompassing size, shape, and intensity, even within the same tumor category, along with resemblances to manifestations of other diseases. Misclassifying a brain tumor can lead to severe consequences, diminishing a patient's chances of survival. As a result, there has been a growing interest in the development of automated image processing technologies to address the limitations of manual diagnosis [18], [19]. Many researchers have delved into various algorithms for the identification and categorization of brain tumors, with a strong emphasis on achieving high performance and minimizing errors.

1.5 Motivation

Brain tumors are among the most life-threatening and aggressive forms of cancer, affecting thousands of individuals worldwide each year. Early and accurate diagnosis is critical for effective treatment planning and improving patient survival rates. Traditional diagnostic methods, such as manual MRI image analysis by radiologists, are time-consuming, prone to human error, and require a high level of expertise.

With the rapid advancement of artificial intelligence and deep learning, especially convolutional neural networks (CNNs), there is an unprecedented opportunity to assist medical professionals in diagnosing brain tumors with high accuracy and speed. Among various deep learning models, architectures like DenseNet have shown remarkable performance due to their efficient feature reuse and ability to capture complex patterns in medical imaging data.

This research is driven by the motivation to contribute to the healthcare sector by developing an automated, reliable, and cost-effective brain tumor detection system. By leveraging pre-trained CNN models and real-world MRI datasets, the goal is to reduce diagnostic time, assist radiologists in clinical decisions, and ultimately help save lives.

Furthermore, the increasing accessibility of high-performance computing and open-source medical datasets makes it feasible and impactful for researchers to innovate in this critical area. The motivation stems not only from a technical standpoint but also from a humanitarian desire to improve medical outcomes through intelligent systems.

II. LITERATURE SURVEY

This work [52] proposes an enhanced model for classifying meningioma, glioma, and pituitary gland tumors, thereby improving precision in brain tumor detection. Trained on a dataset of 5712 images, the model achieves exceptional accuracy (99%) on both the training and validation datasets, with a focus on precision. Leveraging techniques such as data augmentation, transfer learning with ResNet50, and regularization ensures stability and generalizability. The work [53] encompasses three distinct classes: meningiomas, gliomas, and pituitary tumors. A mask alignment technique is used for precise segmentation of brain tumor regions [54].


This research work [55] addresses the problem of identifying and categorizing brain tumors in difficult cases using deep learning (DL) methods. A Dilated Convolution and YOLOv8 Feature Extraction Network (DC-YOLOv8 FEN) is proposed to improve tumor detection accuracy. Based on the experimental results, the proposed method achieved a precision of 95.2%, a recall of 90.5%, an accuracy of 99.5%, and an F1-score of 96.5%.

The paper in [56] uses different traditional and hybrid ML models to classify brain tumor images without any human intervention. In addition, 16 different transfer learning models were analysed to identify the best transfer learning model for classifying brain tumors based on neural networks. Finally, using different state-of-the-art technologies, a stacked classifier was proposed which outperforms all the other developed models.

The work in [57] proposed a deep convolutional neural network (CNN) in which an EfficientNet-B0 base model is fine-tuned with additional layers to efficiently classify and detect brain tumor images. Image enhancement techniques are used, applying various filters to improve the quality of the images, and data augmentation methods are applied to increase the data samples for better training. EfficientNet-B0 outperforms other CNN models by achieving the highest classification accuracy of 98.87% in terms of classification and detection.

The work in [58] proposed a two-module method to improve the accuracy of brain tumor detection. The first module, named the Image Enhancement Technique, utilizes machine learning and imaging strategies including adaptive Wiener filtering and neural networks. The second phase uses Support Vector Machines to perform tumor segmentation and classification. Applied to various types of brain tumors, including meningiomas and pituitary tumors, the model exhibited significant improvements in contrast and classification efficiency. It achieved an average sensitivity and specificity of 0.991, an accuracy of 0.989, and a Dice score (DSC) of 0.981.

The authors in [59] present a novel brain tumor detection framework, the Patch Base Vision Transformer (PBVit). The work in [60] proposes a novel framework that integrates fuzzy logic-based segmentation with deep learning (DL) techniques to enhance brain tumor detection and classification in magnetic resonance imaging (MRI) scans.

In the study [7], the substantial influence of artificial intelligence, particularly deep learning, on the advancement of medical image processing and analysis is investigated, covering tasks such as illness detection, categorization, and anatomical structure segmentation. The overview begins with foundational principles and then moves on to cutting-edge models such as convolutional neural networks, recurrent neural networks, and generative adversarial networks (GANs). Several medical imaging application fields are comprehensively addressed, including neurology, pulmonary imaging, cardiac imaging, breast imaging, digital pathology, and retinal analysis.

In research by Generexu [8], a method is introduced to accurately detect and classify intracranial hemorrhage (ICH) subtypes in CT scans. This approach addresses the complexities posed by the intricate brain anatomy and the diverse appearances of haemorrhages. The proposed method combines a 2-D convolutional neural network with a multi-head attention mechanism.

Results from experiments conducted on three benchmark datasets, namely RSNA, CQ500, and PhysioNet, showcase the system's high performance and generalizability. Incorporating the multi-head attention mechanism notably reduces the weighted multi-label binary cross-entropy with logit loss score on the RSNA dataset. The CQ500 dataset exhibits competitive results, achieving accuracy, sensitivity, specificity, and precision of 0.959, 0.974, 0.958, and 0.977, respectively. Impressive AUC scores of 0.9869, 0.9797, and 0.9778 are attained on the RSNA, CQ500, and PhysioNet datasets, respectively. These outcomes indicate the potential deployment of the proposed model as an intelligent assistive tool for radiologists to efficiently diagnose ICH. Nevertheless, it is crucial to acknowledge challenges associated with real-world clinical scenarios, such as variations in CT scanner parameters and patient demographics.

S. Chen et al. worked on RNA adenosine modifications; however, the expression of these genes and their prognostic significance in osteosarcoma (OS) remain unknown [9]. One of the primary goals of genome research is to anticipate phenotypes and identify key gene biomarkers. However, three major issues arise when analysing genomics data to predict phenotypes and choose gene markers [10].

In this article [11], the authors describe a machine learning approach for detecting tumor tissue origins using gene length-normalized somatic mutation sequencing data. This data was obtained from the International Cancer Genome Consortium's (ICGC) Data Portal (Release 28), which included 4909 samples from 13 different cancers. The model achieved a high average accuracy by combining sequencing data with sophisticated machine learning algorithms.

The research [12] focuses on the pressing global health issue of brain tumors, emphasizing the crucial need for prompt and accurate diagnosis through MRI. It recognizes the shortcomings and time-consuming nature of radiologists' manual assessments, and the consequent need for the development of computer-aided classification models. However, existing models often lack explainability, leading to skepticism among physicians. The study introduces an innovative classification and localization model, utilizing pre-trained models, and reports experimental results for both classification accuracy and visualization. This approach showcases the potential to reduce diagnostic uncertainty and improve the validation of brain tumor classification.

Ref. | Algorithm Used | Augmentation Techniques
[52] | ResNet50 | Range of transformations, including rotation, shifting, shearing, zooming, and horizontal flipping.
[53] | YOLOv7 | Converting .mat files to .png files. Polygonal segmentation coordinates are extracted.
[54] | 3D-CNN | Pre-processing techniques including filtration, intensity correction, and skull stripping are used to maintain the original visual characteristics.
[55] | YOLOv8 | First, the vision transformer block maximizes the feature map, and the Self Shuffle Attention (SSA) improves the feature extraction network (FEN).
[56] | CNN, SVM, RF, DT, NB, and KNN | Black portions of each of the images were removed. Contours are identified in the top, bottom, left, and right directions based on the presence of the black regions.
[57] | EfficientNet-B0 | Albumentations is used to enlarge the dataset by creating a new set of images via various transformation methods such as random rotation (90°, 180°, 270°), horizontal and vertical flips, and transposition.

III. PROPOSED WORK

The architecture of the proposed model is shown in the figure below:

Figure 3.1: Proposed work steps.

Proposed Algorithm

1. Setup and Initialization

• Load the BERT Base Uncased model and tokenizer using the Hugging Face library (or an equivalent library).

• The tokenizer converts raw text into a format that the model can understand (i.e., tokenizing words, handling punctuation, etc.).

• The model is pre-trained on vast text corpora, so it already has generalized knowledge of language.
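As a minimal sketch of this setup step, assuming the Hugging Face transformers package and the standard bert-base-uncased checkpoint with two output labels (depressed / non-depressed), the model and tokenizer can be loaded as follows:

```python
from transformers import BertTokenizerFast, BertForSequenceClassification

# Pre-trained checkpoint name on the Hugging Face hub (assumption: "BERT Base
# Uncased" in this work refers to this standard checkpoint).
MODEL_NAME = "bert-base-uncased"

# The tokenizer maps raw text to token IDs; the sequence-classification wrapper
# adds a small classification head (2 outputs) on top of the pre-trained encoder.
tokenizer = BertTokenizerFast.from_pretrained(MODEL_NAME)
model = BertForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
```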

2. Pre-process Input Data

For each input sentence or sequence, tokenize the text using the BERT tokenizer.


Tokenization includes: Converting words to tokens.

Adding special tokens like [CLS] (classification) at the beginning and [SEP] (separator) between sentences or at the end.

Padding or truncating to ensure that the input is of the correct size (based on BERT's input length limit).

The tokenized input is transformed into token IDs that are passed to the model.

• Convert Text to Model Input:

Convert the tokenized text into input tensors (e.g., token IDs, attention masks).

Attention masks are used to indicate which tokens are real words and which are padding (in cases of padded inputs).
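A short sketch of this pre-processing step, reusing the tokenizer loaded above; the example posts and the maximum length of 128 tokens are illustrative assumptions:

```python
# Hypothetical posts; in practice these come from the labelled dataset.
texts = [
    "I can't sleep and nothing feels worth doing anymore.",
    "Had a great run with friends this morning!",
]

# The tokenizer inserts [CLS]/[SEP], converts words to token IDs, and pads or
# truncates every sequence to a fixed length (BERT accepts at most 512 tokens).
encodings = tokenizer(
    texts,
    padding="max_length",
    truncation=True,
    max_length=128,
    return_tensors="pt",  # PyTorch tensors: input_ids and attention_mask
)

print(encodings["input_ids"].shape)       # (2, 128) token IDs
print(encodings["attention_mask"].shape)  # (2, 128): 1 = real token, 0 = padding
```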

3. Fine-Tuning the Model

Collect and preprocess a labeled dataset for your specific task (e.g., sentiment labels for classification, entity labels for NER).

Initialize the pre-trained model for your specific task (e.g., BERT for sequence classification, BERT for token classification for NER).

Train the model on your task-specific dataset by adjusting the model's parameters to minimize the task-specific loss function (e.g., cross-entropy loss for classification).

Fine-tuning typically involves a smaller learning rate and may require fewer epochs than training a model from scratch.
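One possible realization of this fine-tuning step, sketched with the Hugging Face Trainer API; the hyper-parameters and the train_dataset/val_dataset objects (tokenized examples with 0/1 depression labels) are assumptions, not values reported here:

```python
from transformers import Trainer, TrainingArguments

# Assumed hyper-parameters: fine-tuning usually uses a small learning rate
# (e.g., 2e-5) and only a few epochs compared with training from scratch.
training_args = TrainingArguments(
    output_dir="./bert-depression",
    num_train_epochs=3,
    learning_rate=2e-5,
    per_device_train_batch_size=16,
)

# train_dataset / val_dataset are assumed to yield dicts containing input_ids,
# attention_mask, and labels (0 = non-depressed, 1 = depressed).
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
)

trainer.train()  # minimizes the cross-entropy loss on the task-specific labels
```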

4. Model Inference (Prediction)

• Feed Input Data to the Fine-Tuned Model:

For each new input sentence, tokenize it and convert it to the appropriate input format.

Pass the tokenized input through the model.

• Generate Predictions:

If the task is text classification (e.g., sentiment analysis), output the class with the highest probability (based on logits). If the task is NER, output the predicted labels for each token in the sentence.

If the task is question answering, output the start and end positions of the answer span in the provided context.
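A minimal inference sketch for the text-classification case; the input post and the 0/1 label mapping are hypothetical:

```python
import torch

model.eval()
new_post = "I feel empty and tired of everything."  # hypothetical new input

inputs = tokenizer(new_post, truncation=True, max_length=128, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2): one raw score per class

predicted_class = int(torch.argmax(logits, dim=-1))
labels = {0: "non-depressed", 1: "depressed"}  # assumed label mapping
print(labels[predicted_class])
```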

5. Post-Processing the Output

• Interpret Predictions:

For text classification: Convert the model's raw output (logits) into probabilities (using softmax for multi-class classification or sigmoid for binary classification).

For NER: Map the predicted token labels back to their corresponding words.

For question answering: Convert the predicted span into an actual text answer from the context.

• Evaluate the Model (Optional):

If you have a validation or test set, evaluate the model's performance using metrics such as accuracy, precision, recall, F1-score, etc.
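For the binary depressed/non-depressed setting, this post-processing step can be sketched as turning logits into probabilities and scoring predictions against held-out labels; the label list below is a placeholder, and recall is highlighted because it is the evaluation metric of choice in this work:

```python
import torch
from sklearn.metrics import recall_score

probs = torch.softmax(logits, dim=-1)         # class probabilities from raw logits
preds = torch.argmax(probs, dim=-1).tolist()  # predicted class per example

# Hypothetical gold labels for the evaluated examples (collected over a test set).
y_true = [1]
print(recall_score(y_true, preds))  # fraction of depressed cases correctly recovered
```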

IV. RESULTS

A classification report is a comprehensive summary of key evaluation metrics for a classification model's performance. Generated using the classification_report function from the scikit-learn library, it provides insights into how well your model predicts each class by presenting the following metrics:

Precision: Indicates the accuracy of positive predictions for each class. It is calculated as the ratio of true positive predictions to the total number of positive predictions (true positives + false positives). A higher precision means fewer false positives.

Recall (Sensitivity): Measures the ability of the model to capture all relevant instances of each class. It is the ratio of true positive predictions to the total actual instances of that class (true positives + false negatives). High recall indicates fewer false negatives.

F1-Score: The harmonic mean of precision and recall, providing a single metric that balances both concerns. It is especially useful when you need to balance the importance of precision and recall.

Support: Represents the number of actual occurrences of each class in the dataset. This helps in understanding the distribution of classes and assessing the reliability of the calculated metrics.
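In symbols, with TP, FP, and FN denoting true positives, false positives, and false negatives for a given class, these per-class metrics are:

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```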

These metrics are calculated for each class individually, allowing you to assess the model's performance across different categories. Additionally, the report includes averages such as:


Macro Average: Calculates the arithmetic mean of precision, recall, and F1-score across all classes, treating each class equally regardless of its support.

Weighted Average: Computes the mean of the metrics, weighted by the support (number of true instances) of each class. This accounts for class imbalance by giving more influence to classes with more instances.
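A brief sketch of producing such a report with scikit-learn; the label vectors are illustrative placeholders rather than the actual predictions reported here:

```python
from sklearn.metrics import classification_report

# Hypothetical gold labels and predictions for a small evaluation set.
y_true = [0, 1, 1, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 0, 0, 1, 0, 1]

# Per-class precision, recall, F1-score and support, plus macro/weighted averages.
print(classification_report(y_true, y_pred, target_names=["non-depressed", "depressed"]))
```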

4.1 BERT Base Uncased

Figure 4.1: Training Accuracy of BERT.

Figure 4.2: Testing Accuracy of BERT Model.

Figure 4.3: Confusion Matrix of BERT Model.

Table 4.1: Accuracy Comparison.

Figure 4.4: Accuracy Comparisons.

Table 4.2: Precision Comparison.


Figure 4.5: Precision Comparisons.

Table 4.3: Recall Comparison.

Figure 4.6: Recall Comparisons.

V. CONCLUSION

This work focuses on developing accurate models for early depression detection using NLP techniques. We conducted a comparative study of various RNN and transfer-based model variants on social media text data, finding that they all performed similarly in accuracy. However, the BERT model showed slight advantages in handling large datasets and complex phrases. Notably, the Bidirectional LSTM and GRU models slightly outperformed their unidirectional counterparts, indicating the value of bidirectional context for this task. Overall, this work contributes to more reliable and accurate depression detection models built with NLP techniques, with practical applications for mental health support.

VI. REFERENCES

1. World Health Org. (2022). Mental Disorders. Accessed: Jun. 6, 2022. [Online]. Available: https://www.who.int/newsroom/factsheets/detail/mental-disorders

2. Z. N. Vasha, B. Sharma, I. J. Esha, J. Al Nahian, and J. A. Polin, "Depression detection in social media comments data using machine learning algorithms," Bull. Electr. Eng. Informat., vol. 12, no. 2, pp. 987–996, Apr. 2023. [Online]. Available: https://beei.org/index.php/EEI/article/view/4182/3162

3. H. Kour and M. K. Gupta, ‘‘An hybrid deep learning approach for depression prediction from user tweets using feature-rich CNN and bi-directional LSTM,’’ Multimedia Tools Appl., vol. 81, no. 17, pp. 23649–23685, Jul. 2022, doi: 10.1007/s11042-022-12648-y.

4. S. R. Narayanan, S. Babu, and A. Thandayantavida, "Detection of depression from social media using deep learning," J. Positive School Psychol., vol. 6, no. 4, pp. 4909–4915, 2022.

5. C. Lin, P. Hu, H. Su, S. Li, J. Mei, J. Zhou, and H. Leung, "SenseMood: Depression detection on social media," in Proc. Int. Conf. Multimedia Retr., New York, NY, USA, Jun. 2020, p. 407, doi: 10.1145/3372278.3391932.

6. C. Y. Chiu, H. Y. Lane, J. L. Koh, and A. L. P. Chen, "Multimodal depression detection on Instagram considering time interval of posts," J. Intell. Inf. Syst., vol. 56, no. 1, pp. 25–47, Feb. 2021, doi: 10.1007/s10844-020-00599-5.

7. A. Zafar, D. Aftab, R. Qureshi, Y. Wang, and H. Yan, "Multi-explainable TemporalNet: An interpretable multimodal approach using temporal convolutional network for user-level depression detection," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), Jun. 2024, pp. 2258–2265.

8. Dean, B. (2021). How many people use social media in 2021? (65+ statistics). https://backlinko.com/social-media-users. (Accessed on 29 June 2021).

9. StatCounter (2021). Social media stats worldwide | StatCounter Global Stats. https://gs.statcounter.com/social-media-stats. (Accessed on 29 June 2021).

10. Martinsen, E. W. (2008). Physical activity in the prevention and treatment of anxiety and depression. Nordic Journal of Psychiatry, 62, 25–29.

11. Berryman, C., Ferguson, C. J., & Negy, C. (2018). Social media use and mental health among young adults. Psychiatric Quarterly, 89, 307–314.

12. Mumu, T. F., Munni, I. J., & Das, A. K. (2021). Depressed people detection from Bangla social media status using LSTM and CNN approach. Journal of Engineering Advancements, 2, 41–47.

13. Billah, M., & Hassan, E. (2019). Depression detection from Bangla Facebook status using ML approach. International Journal of Computer Applications, 975, 8887.

14. Uddin, A. H., Bapery, D., & Arif, A. S. M. (2019a). Depression analysis from social media data in Bangla language using long short-term memory (LSTM) recurrent neural network technique. In 2019 International Conference on Computer, Communication, Chemical, Materials and Electronic Engineering (pp. 1–4). IEEE.

15. Peng, Z., Hu, Q., & Dang, J. (2019). Multi-kernel SVM based depression recognition using social media data. International Journal of Machine Learning and Cybernetics, 10, 43–57.

16. M. A. Abbas, K. Munir, A. Raza, N. A. Samee, M. M. Jamjoom, and Z. Ullah, "Novel Transformer Based Contextualized Embedding and Probabilistic Features for Depression Detection from Social Media," in IEEE Access, vol. 12, pp. 54087–54100, 2024, doi: 10.1109/ACCESS.2024.3387695.

17. M. Rizwan, M. F. Mushtaq, U. Akram, A. Mehmood, I. Ashraf, and B. Sahelices, "Depression Classification from Tweets Using Small Deep Transfer Learning Language Models," in IEEE Access, vol. 10, pp. 129176–129189, 2022, doi: 10.1109/ACCESS.2022.3223049.

18. T. Angskun, S. Tipprasert, A. Thippongtorn, and J. Angskun, "D₂X: Depression Detection System through X Using Hybrid Machine Learning," in IEEE Access, vol. 12, pp. 172820–172831, 2024, doi: 10.1109/ACCESS.2024.3502241.

19. Joel Philip Thekkekara, Sira Yongchareon, Veronica Liesaputra, "An attention-based CNN-BiLSTM model for depression detection on social media text," Expert Systems with Applications, Volume 249, Part C, 2024, 123834, ISSN 0957-4174, https://doi.org/10.1016/j.eswa.2024.123834.

20. Sayani Ghosal, Amita Jain, "Depression and Suicide Risk Detection on Social Media using fastText Embedding and XGBoost Classifier," Procedia Computer Science, Volume 218, 2023, Pages 1631–1639, ISSN 1877-0509, https://doi.org/10.1016/j.procs.2023.01.141.

