International Research Journal of Engineering and Technology (IRJET) | e-ISSN: 2395-0056 | p-ISSN: 2395-0072 | Volume: 11 Issue: 05 | May 2024 | www.irjet.net
SoulSound: Enhancing Musical Therapy through Facial Expression Recognition with Machine Learning

Padmavati K1, Amruth Kumar D K2, Ganavi S M3, Keerthi K C4, Sourabha Y N5

1Assistant Professor, Dept. of Computer Science & Engineering, Mandya, Karnataka, India
2,3,4,5Student, Dept. of Computer Science & Engineering, GMIT, Mandya, Karnataka, India
Abstract - Musical therapy is a type of treatment that uses music to help people improve their physical, emotional, cognitive, and social functioning. It is a non-invasive treatment that has been shown to have beneficial effects on mental health. In recent years, facial recognition technology has grown in prominence as a tool for analyzing human emotions and behaviour, and we investigate its potential in musical therapy. In this project, we propose an approach to musical therapy that uses facial recognition technology to create personalized playlists for individuals based on their emotional state. We use a convolutional neural network (CNN) to analyze facial expressions and classify them into emotional states such as happiness, sadness, or anger. Based on the detected emotional state, we recommend music that is likely to have a positive effect on the individual's emotional well-being. A voice assistant is also integrated into the system to provide additional support and guidance to users. The system is designed to be accessible to people of all ages and abilities, including those with disabilities. Our approach has the potential to improve the effectiveness of musical therapy by providing personalized recommendations based on the individual's emotional state. Moreover, our project provides a convenient and accessible platform for individuals to receive musical therapy without the need for in-person visits to a therapist.

Key Words: Musical Therapy using Facial Expression, Emotion-Based Music Detection, Therapeutic Music Intervention, Music-Driven Facial Analysis, Mood Enhancement.

1. INTRODUCTION

Human beings have a natural ability to look at a face and infer the emotional state behind it. This ability, if replicated by an electronic device - a desktop computer, a robot, or a mobile device - would have useful applications in the real world. Music is regarded as a good medium of emotional communication, and emotional expression is regarded as an important therapeutic concept: in both popular and technical literature it is linked to mental well-being, and the inhibition of feelings appears to play a role in various illnesses, including physical ones.

In traditional music therapy, the therapist first had to analyze the patient's mood manually, using various observational techniques that involved no computer machinery or intelligent systems. After determining the patient's mood, the therapist picked songs that could calm the patient's mood and emotional state. This process was laborious and time-consuming, and therapists often struggled to arrive at a suitable song list. Moreover, even when the user's mood is identified, a conventional playlist simply selects songs that reflect the user's current feelings and does not attempt to improve their mood in any way. Therefore, if the user is sad, they are presented with a list of sad songs that could worsen their mood and deepen their sadness. The system proposed in this paper identifies the user's emotions from their facial expressions and then offers the user a playlist of songs chosen so that the user may feel better.

2. LITERATURE SURVEY

1. "Smart Music Player Integrating Facial Emotion Recognition and Music Mood Recommendation"
Authors: Shlok Gilda, Husain Zafar, Chintan Soni, Kshitija Waghurdekar.
Methodology: The methodology combines deep learning for facial emotion recognition with the extraction of acoustic features for music classification. These two processes are integrated to map the user's mood to recommended songs, enhancing the personalized music listening experience. The study trains artificial neural networks on datasets of facial expressions and labeled songs for accurate emotion identification and music mood classification, and evaluates the system's performance on a substantial dataset of images and songs to achieve high accuracy. Planned enhancements include testing the system on all seven basic emotions, incorporating songs from diverse languages and regions, and integrating user preferences through collaborative filtering to further refine the recommendation system.

2. "Smart Space with Music Selection Feature Based on Face and Speech Emotion and Expression Recognition"
Authors: Jose Martin Z. Maningo, Argel A. Bandala, Ryan Rhay P. Vicerra, Elmer P. Dadios, Karla Andrea L. Bedoya.
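The mood-improving recommendation step described above - choosing music intended to lift or calm the detected mood rather than mirror it - can be sketched in Python as follows. This is a minimal illustration only: the emotion label is assumed to come from a CNN facial-expression classifier, and the emotion-to-genre mappings are hypothetical placeholders, not the playlist logic actually used by the authors.

```python
# Sketch of the mood-improving recommendation step, assuming the
# emotion label has already been produced by a CNN facial-expression
# classifier. The genre mappings below are illustrative assumptions.

# Rather than mirroring the detected emotion (e.g. sad songs for a
# sad user), each emotion maps to music intended to improve the mood.
MOOD_LIFTING_PLAYLISTS = {
    "sad":     ["uplifting pop", "soft acoustic", "feel-good classics"],
    "angry":   ["calming ambient", "slow instrumentals"],
    "fearful": ["soothing classical", "nature soundscapes"],
    "neutral": ["light jazz", "easy listening"],
    "happy":   ["upbeat dance", "energetic pop"],  # reinforce, don't dampen
}

def recommend_playlist(emotion: str) -> list[str]:
    """Return a mood-improving playlist for the detected emotion.

    Falls back to neutral listening if the label is unrecognized.
    """
    return MOOD_LIFTING_PLAYLISTS.get(
        emotion.lower(), MOOD_LIFTING_PLAYLISTS["neutral"]
    )

# A sad user is offered uplifting music, not sad songs that could
# worsen their mood.
print(recommend_playlist("sad"))
```

The key design choice this sketch captures is the inversion at the heart of the proposed system: the emotion classifier's output selects a target mood for the listener, not a matching one.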
© 2024, IRJET | Impact Factor value: 8.226 | ISO 9001:2008 Certified Journal | Page 2047