Abstract:
In recent years, there has been increasing interest in converting images into audio
descriptions. This field lies at the intersection of Computer Vision (CV) and Natural Language
Processing (NLP) and involves various tasks, including generating textual descriptions of images and
converting images directly into auditory representations; a further aspect is the synthesis
of natural speech from text. These capabilities have significant potential to improve accessibility, user experience,
and applications of Artificial Intelligence (AI). In this article, we review a wide range of image-to-audio
conversion techniques, exploring image captioning, speech synthesis, and direct
image-to-speech conversion, from fundamental encoder–decoder architectures
to more advanced methods such as transformers and adversarial learning. Although the focus of
this review is on synthesizing audio descriptions from visual data, the reverse task of creating visual
content from natural language descriptions is also covered. This study provides a comprehensive
overview of the techniques and methodologies used in these fields and highlights the strengths and
weaknesses of each approach. The study emphasizes the importance of various datasets, such as MS
COCO, LibriTTS, and VizWiz Captions, which play a critical role in training and evaluating models,
promoting inclusivity, and addressing real-world problems. Looking ahead, these methods hold
the potential to generate more natural and contextualized audio descriptions, while direct
image-to-speech approaches offer opportunities for intuitive auditory representations of visual content.