DSpace Repository

Recent Advances in Synthesis and Interaction of Speech, Text, and Vision


dc.contributor.author Orynbay, Laura
dc.contributor.author Razakhova, Bibigul
dc.contributor.author Peer, Peter
dc.contributor.author Meden, Blaž
dc.contributor.author Emersic, Ziga
dc.date.accessioned 2024-12-12T05:57:53Z
dc.date.available 2024-12-12T05:57:53Z
dc.date.issued 2024
dc.identifier.citation Orynbay, L.; Razakhova, B.; Peer, P.; Meden, B.; Emeršič, Ž. Recent Advances in Synthesis and Interaction of Speech, Text, and Vision. Electronics 2024, 13, 1726. https://doi.org/10.3390/electronics13091726 ru
dc.identifier.issn 1754-1786
dc.identifier.other doi.org/10.3390/electronics13091726
dc.identifier.uri http://rep.enu.kz/handle/enu/20145
dc.description.abstract In recent years, there has been increasing interest in the conversion of images into audio descriptions. This is a field that lies at the intersection of Computer Vision (CV) and Natural Language Processing (NLP), and it involves various tasks, including creating textual descriptions of images and converting them directly into auditory representations. Another aspect of this field is the synthesis of natural speech from text. This has significant potential to improve accessibility, user experience, and the applications of Artificial Intelligence (AI). In this article, we reviewed a wide range of image-to-audio conversion techniques. Various aspects of image captioning, speech synthesis, and direct image-to-speech conversion have been explored, from fundamental encoder–decoder architectures to more advanced methods such as transformers and adversarial learning. Although the focus of this review is on synthesizing audio descriptions from visual data, the reverse task of creating visual content from natural language descriptions is also covered. This study provides a comprehensive overview of the techniques and methodologies used in these fields and highlights the strengths and weaknesses of each approach. The study emphasizes the importance of various datasets, such as MS COCO, LibriTTS, and VizWiz Captions, which play a critical role in training models, evaluating them, promoting inclusivity, and solving real-world problems. The implications for the future suggest the potential of generating more natural and contextualized audio descriptions, whereas direct image-to-speech tasks provide opportunities for intuitive auditory representations of visual content. ru
dc.language.iso en ru
dc.publisher Electronics ru
dc.relation.ispartofseries 13, 1726;
dc.subject text-free image ru
dc.subject audio description ru
dc.subject image captioning ru
dc.subject text-to-speech ru
dc.subject image-to-speech ru
dc.subject text-to-image ru
dc.subject synthesis ru
dc.subject data generation ru
dc.subject Computer Vision ru
dc.subject Natural Language Processing ru
dc.subject Artificial Intelligence ru
dc.title Recent Advances in Synthesis and Interaction of Speech, Text, and Vision ru
dc.type Article ru

