Repository of L.N. Gumilyov Eurasian National University

Recent Advances in Synthesis and Interaction of Speech, Text, and Vision

Authors
Orynbay, Laura
Razakhova, Bibigul
Peer, Peter
Meden, Blaž
Emeršič, Žiga
Date
2024
Journal
Electronics
ISSN
2079-9292
Citation
Orynbay, L.; Razakhova, B.; Peer, P.; Meden, B.; Emeršič, Ž. Recent Advances in Synthesis and Interaction of Speech, Text, and Vision. Electronics 2024, 13, 1726. https://doi.org/10.3390/electronics13091726
Abstract
In recent years, there has been increasing interest in the conversion of images into audio descriptions. This is a field that lies at the intersection of Computer Vision (CV) and Natural Language Processing (NLP), and it involves various tasks, including creating textual descriptions of images and converting them directly into auditory representations. Another aspect of this field is the synthesis of natural speech from text. This has significant potential to improve accessibility, user experience, and the applications of Artificial Intelligence (AI). In this article, we reviewed a wide range of image-to-audio conversion techniques. Various aspects of image captioning, speech synthesis, and direct image-to-speech conversion have been explored, from fundamental encoder–decoder architectures to more advanced methods such as transformers and adversarial learning. Although the focus of this review is on synthesizing audio descriptions from visual data, the reverse task of creating visual content from natural language descriptions is also covered. This study provides a comprehensive overview of the techniques and methodologies used in these fields and highlights the strengths and weaknesses of each approach. The study emphasizes the importance of various datasets, such as MS COCO, LibriTTS, and VizWiz Captions, which play a critical role in training models, evaluating them, promoting inclusivity, and solving real-world problems. The implications for the future suggest the potential of generating more natural and contextualized audio descriptions, whereas direct image-to-speech tasks provide opportunities for intuitive auditory representations of visual content.
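As a reading aid only: the encoder–decoder pattern the abstract mentions for image captioning can be sketched as two stages — an encoder that compresses an image into a feature vector, and a decoder that emits caption tokens one at a time, conditioned on those features and the tokens produced so far. Everything in the sketch below (the averaging "encoder", the modular scoring rule, the tiny vocabulary) is a deliberately toy stand-in to show the data flow, not any model from the article.

```python
# Toy sketch of an encoder–decoder captioning pipeline (illustrative only).
# Real systems use CNN/ViT encoders and transformer decoders with learned
# attention; here both stages are stubbed to show the structure.

def encode_image(pixels):
    """'Encoder': collapse an image (list of pixel rows) into a feature
    vector. Stand-in: per-row averages instead of learned features."""
    return [sum(row) / len(row) for row in pixels]

def decode_caption(features, vocab, max_len=5):
    """'Decoder': greedy decoding — emit one token per step, conditioning
    on the features and on how many tokens exist so far."""
    caption = ["<start>"]
    for step in range(max_len):
        # Stand-in scoring rule; a real model would score the whole vocab
        # with a learned network and take the argmax.
        score = features[step % len(features)] + len(caption)
        word = vocab[int(score) % len(vocab)]
        if word == "<end>":
            break
        caption.append(word)
    return caption[1:]  # drop the <start> token

image = [[3, 5], [2, 6]]                       # a 2x2 "image"
vocab = ["a", "dog", "on", "grass", "<end>"]
features = encode_image(image)                 # -> [4.0, 4.0]
print(decode_caption(features, vocab))         # -> ['a', 'dog', 'on', 'grass']
```

The split mirrors the review's framing: the encoder is interchangeable (CNN, ViT) and the decoder is interchangeable (RNN, transformer), as long as they agree on the feature interface between them.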
URI
http://repository.enu.kz/handle/enu/30626
Open
RECENT~1.PDF (385.5Kb)
Collections
  • Computer Science [445]

L.N. Gumilyov Eurasian National University | Scientific Library | Contacts