
International Research Journal of Engineering and Technology (IRJET) | e-ISSN: 2395-0056 | p-ISSN: 2395-0072

Volume: 11 Issue: 05 | May 2024 | www.irjet.net

MUSIC RECOMMENDATION SYSTEM USING FACIAL EXPRESSION

Divanshi Tiwari¹, Harish Kumar², Shiva Garg³
¹B.Tech (CSE) IV Year, H.R. Institute of Technology, Ghaziabad, India
²,³Professor, CSE, H.R. Institute of Technology, Ghaziabad, India


Abstract - In human-computer interaction, individualized user experiences are growing in importance. This study develops a real-time, emotion-aware music recommender system that uses facial expression detection to improve user interaction and entertainment. A webcam-based program captures the user's facial expressions in real time. The system locates and isolates facial regions in the video feed using a Haar Cascade face detector, and the DeepFace library applies deep learning techniques to reliably identify and classify the user's dominant emotional state from those expressions. Based on the detected emotion, such as happiness, sadness, anger, or surprise, the system dynamically recommends and plays matching music. This mood-matched music selection may improve the user's listening experience and mood. Because the system adapts to real-time emotional feedback, it is more personalized and engaging than music recommendation systems that rely solely on explicit user inputs or predefined playlists. This research demonstrates the promise of combining computer vision and emotion analysis for human-computer interaction and entertainment, creating responsive and adaptive digital environments through the integration of emotion recognition and multimedia content delivery. Future work may widen the range of detectable emotions, improve algorithmic accuracy, and add sensors to further improve system responsiveness and user experience.

Key Words: Emotion detection, Music recommendation, Computer vision, DeepFace, Haar Cascade, Pywhatkit, Music.
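As a rough illustration of the detection step the abstract describes, the sketch below shows how a single video frame could be passed to DeepFace for emotion classification. The helper names `pick_dominant` and `detect_emotion` are ours, not from the paper, and the DeepFace call assumes a recent library version in which `analyze` returns a list of result dictionaries; the heavy import is deferred so the pure helper runs without a camera or the library installed.

```python
def pick_dominant(emotion_scores):
    """Return the emotion label with the highest confidence score,
    e.g. from a DeepFace-style per-emotion score dictionary."""
    return max(emotion_scores, key=emotion_scores.get)


def detect_emotion(frame):
    """Classify the dominant emotion in one video frame (a NumPy image array).

    Requires the `deepface` package (pip install deepface); imported
    lazily so pick_dominant() stays usable without it.
    """
    from deepface import DeepFace  # deep-learning emotion classifier
    # enforce_detection=False avoids raising when no face is found;
    # DeepFace runs its own face detection internally.
    result = DeepFace.analyze(frame, actions=["emotion"],
                              enforce_detection=False)
    return result[0]["dominant_emotion"]


if __name__ == "__main__":
    # Illustrative DeepFace-style score dictionary (values are made up).
    scores = {"happy": 0.72, "sad": 0.05, "angry": 0.03,
              "surprise": 0.12, "neutral": 0.08}
    print(pick_dominant(scores))  # -> happy
```

In a full pipeline, frames would come from a `cv2.VideoCapture(0)` loop, with a Haar Cascade pre-filtering for face regions before analysis, as the paper describes.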

1. INTRODUCTION

Music recommendation systems have gained significant importance in the digital age, offering users personalized music choices based on factors such as listening history, preferences, and contextual data. Traditional systems often rely on explicit user inputs or past behavior to suggest music, which might not always align with the user's current emotional state. This gap presents an opportunity to enhance recommendation systems by incorporating real-time emotion detection.

Aligning music recommendations with user emotions is a complex challenge because emotions are subjective and accurately detecting and interpreting facial expressions in real time is technically intricate. Existing systems lack the capability to adjust dynamically to the user's emotional state, often resulting in a less engaging experience.

The goal of this research is to develop a system that recommends music based on the user's facial expressions. By leveraging computer vision and emotion analysis technologies, the system aims to provide a personalized and emotionally attuned music recommendation experience. This paper discusses the development and implementation of a real-time, expression-based emotion detection system for music recommendation. It covers the methodologies used for emotion detection, the system architecture, the integration of music recommendation, and the evaluation of the system's effectiveness.
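One minimal way to realize the emotion-to-music step outlined above is a lookup table from detected emotion labels to search queries, played back via `pywhatkit` (named in the paper's keywords). The mapping below and the helper names are our own illustration; the specific genre queries are placeholders, not taken from the paper.

```python
# Illustrative emotion-to-query mapping; the genre choices are
# placeholders for whatever curation the system designer prefers.
EMOTION_TO_QUERY = {
    "happy": "upbeat pop songs",
    "sad": "soothing acoustic songs",
    "angry": "calming instrumental music",
    "surprise": "energetic dance music",
    "neutral": "lo-fi background music",
}


def song_query(emotion, default="popular music"):
    """Map a detected emotion label to a music search query,
    falling back to a default for unrecognized labels."""
    return EMOTION_TO_QUERY.get(emotion, default)


def play_for_emotion(emotion):
    """Open a matching track on YouTube via pywhatkit
    (pip install pywhatkit); imported lazily since it opens a browser."""
    import pywhatkit
    pywhatkit.playonyt(song_query(emotion))


if __name__ == "__main__":
    print(song_query("sad"))      # -> soothing acoustic songs
    print(song_query("unknown"))  # -> popular music
```

Keeping the mapping in a plain dictionary makes the recommendation policy easy to swap out, for example for a learned model, without touching the detection loop.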

2. LITERATURE REVIEW The integration of emotional intelligence into human-computer interaction has seen significant advancements. Various studies have contributed to the understanding and implementation of facial expression recognition and emotion-based systems.

2.1 Emotion Detection from Facial Expressions

Ekman and Friesen (1978) laid the foundation for the study of facial expressions as indicators of emotion, creating the Facial Action Coding System (FACS) to categorize physical expressions corresponding to emotions [1]. Building on this, Viola and Jones (2001) developed the Haar Cascade classifier, a significant advancement in real-time object detection used for face recognition [2]. Zeng et al. (2009) conducted a comprehensive survey on affect recognition methods, emphasizing the importance of integrating audio, visual, and spontaneous expressions for accurate emotion detection [3]. Fasel and Luettin (2003) reviewed automatic facial expression analysis, highlighting the challenges and potential of various recognition techniques [4].

Mollahosseini, Hasani, and Mahoor (2017) introduced AffectNet, a large database for facial expression, valence, and arousal computing, demonstrating the utility of extensive datasets in improving emotion recognition accuracy [5]. Sariyanidi, Gunes, and Cavallaro (2015) discussed the advancements in automatic facial affect analysis, covering the registration, representation, and recognition of facial expressions [6].

© 2024, IRJET | Impact Factor value: 8.226 | ISO 9001:2008 Certified Journal | Page 2145

