Show simple item record

dc.contributor.author  Satybaldina, Dina
dc.contributor.author  Kalymova, Gulzia
dc.date.accessioned  2024-12-09T06:58:37Z
dc.date.available  2024-12-09T06:58:37Z
dc.date.issued  2021
dc.identifier.issn  2502-4752
dc.identifier.other  DOI: 10.11591/ijeecs.v21.i1.pp398-405
dc.identifier.uri  http://rep.enu.kz/handle/enu/19970
dc.description.abstract  Hand gesture recognition has become a popular topic in deep learning; it offers many applications for bridging the human-computer barrier and has a positive impact on daily life. The primary idea of our project is to acquire static gestures from a depth camera and to process the input images to train a deep convolutional neural network pre-trained on the ImageNet dataset. The proposed system consists of a gesture capture device (Intel® RealSense™ depth camera D435), pre-processing and image segmentation algorithms, a feature extraction algorithm, and object classification. For pre-processing and image segmentation, computer vision methods from the OpenCV and Intel RealSense libraries are used. The subsystem for feature extraction and gesture classification is based on a modified VGG16, implemented with the TensorFlow and Keras deep learning frameworks. Performance of the static gesture recognition system is evaluated using machine learning metrics. Experimental results show that the proposed model, trained on a database of 2000 images, provides high recognition accuracy at both the training and testing stages.
dc.language.iso  en
dc.publisher  Indonesian Journal of Electrical Engineering and Computer Science
dc.relation.ispartofseries  Vol. 21, No. 1;
dc.subject  Computer vision
dc.subject  Convolutional neural network
dc.subject  Deep learning
dc.subject  Gesture recognition
dc.subject  Machine learning
dc.subject  VGG-16
dc.title  Deep learning based static hand gesture recognition
dc.type  Article
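The abstract describes transfer learning from an ImageNet-pre-trained VGG16, modified with a new classification head and trained with TensorFlow/Keras. A minimal sketch of such a model follows; the class count, input resolution, and head layer sizes are assumptions for illustration, not values taken from the paper.

```python
# Sketch of a modified-VGG16 gesture classifier as the abstract describes.
# Assumptions (not from the paper): 10 gesture classes, 224x224 RGB input,
# a 256-unit dense head. The paper uses ImageNet weights
# (weights="imagenet"); weights=None here keeps the sketch offline.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # assumed number of static gestures

# Pre-trained convolutional backbone, with the original classifier removed
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze backbone; train only the new head

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),  # regularize: the dataset is only ~2000 images
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the backbone and training only the dense head is the usual choice when the dataset (here, 2000 images) is far smaller than ImageNet.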


Files in this item


This item appears in the following collection(s)
