We propose a method for identifying the hand gestures, or mudras, of Bharatanatyam. This requires a pre-processing
stage in which skin-based segmentation is performed to obtain the boundary of the hand. Feature extraction involves
obtaining the chain code of the entire hand contour, followed by normalization. It also involves computing
the Euclidean distance from the centroid to the outermost boundary of the hand over 360 degrees.
training images are used to train four recognition models: Naïve Bayes, KNN, Logistic Regression, and multiclass SVM. The
method achieves accuracies of 88.47%, 87.06%, 89.83%, and 92.3% with Naïve Bayes, KNN, multiclass SVM, and Logistic Regression,
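Training the four classifiers on such feature vectors could look like the following minimal sketch, assuming scikit-learn and synthetic data in place of the paper's dataset; the model hyperparameters shown are illustrative, not the paper's.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for extracted feature vectors: two well-separated
# classes of 360-dimensional radial signatures (hypothetical data).
X = np.vstack([rng.normal(0.0, 1.0, (50, 360)),
               rng.normal(3.0, 1.0, (50, 360))])
y = np.array([0] * 50 + [1] * 50)

models = {
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(n_neighbors=3),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "multiclass_svm": SVC(kernel="rbf", decision_function_shape="ovr"),
}
for name, model in models.items():
    model.fit(X, y)  # each model learns from the same feature matrix
```

In practice the data would be split into training and test sets, and the reported accuracies measured on held-out mudra images.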
respectively. The proposed framework can be extended to identify other mudras, to incorporate facial expression recognition and
posture recognition, and to integrate these modules. Employing enhanced noise-reduction techniques can also improve system efficiency.