<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.2" xml:lang="en"><front><journal-meta><journal-id journal-id-type="publisher-id">Discrete and Continuous Models and Applied Computational Science</journal-id><journal-title-group><journal-title xml:lang="en">Discrete and Continuous Models and Applied Computational Science</journal-title><trans-title-group xml:lang="ru"><trans-title>Discrete and Continuous Models and Applied Computational Science</trans-title></trans-title-group></journal-title-group><issn publication-format="print">2658-4670</issn><issn publication-format="electronic">2658-7149</issn><publisher><publisher-name xml:lang="en">Peoples' Friendship University of Russia named after Patrice Lumumba (RUDN University)</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">46740</article-id><article-id pub-id-type="doi">10.22363/2658-4670-2025-33-3-309-326</article-id><article-id pub-id-type="edn">HEVOIT</article-id><article-categories><subj-group subj-group-type="toc-heading" xml:lang="en"><subject>Letters to the Editor</subject></subj-group><subj-group subj-group-type="toc-heading" xml:lang="ru"><subject>Письма в редакцию</subject></subj-group><subj-group subj-group-type="article-type"><subject>Research Article</subject></subj-group></article-categories><title-group><article-title xml:lang="en">Research of hieroglyphic signs using audiovisual digital analysis methods</article-title><trans-title-group xml:lang="ru"><trans-title>Исследование иероглифов с помощью методов аудиовизуального цифрового анализа</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-2931-8330</contrib-id><name-alternatives><name 
xml:lang="en"><surname>Egorova</surname><given-names>Maia A.</given-names></name><name xml:lang="ru"><surname>Егорова</surname><given-names>М. А.</given-names></name></name-alternatives><bio xml:lang="en"><p>Candidate of Political Sciences, Associate Professor at the Department of Foreign Languages of the Faculty of Humanities and Social Sciences</p></bio><email>Меу1@list.ru</email><xref ref-type="aff" rid="aff1"/></contrib><contrib contrib-type="author"><contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-1999-3810</contrib-id><name-alternatives><name xml:lang="en"><surname>Egorov</surname><given-names>Alexander A.</given-names></name><name xml:lang="ru"><surname>Егоров</surname><given-names>А. А.</given-names></name></name-alternatives><bio xml:lang="en"><p>Doctor of Physical and Mathematical Sciences, Consulting Professor</p></bio><email>alexandr_egorov@mail.ru</email><xref ref-type="aff" rid="aff1"/></contrib></contrib-group><aff-alternatives id="aff1"><aff><institution xml:lang="en">RUDN University</institution></aff><aff><institution xml:lang="ru">Российский университет дружбы народов</institution></aff></aff-alternatives><pub-date date-type="pub" iso-8601-date="2025-10-15" publication-format="electronic"><day>15</day><month>10</month><year>2025</year></pub-date><volume>33</volume><issue>3</issue><issue-title xml:lang="en">VOL 33, NO3 (2025)</issue-title><issue-title xml:lang="ru">ТОМ 33, №3 (2025)</issue-title><fpage>309</fpage><lpage>326</lpage><history><date date-type="received" iso-8601-date="2025-10-28"><day>28</day><month>10</month><year>2025</year></date></history><permissions><copyright-statement xml:lang="en">Copyright © 2025, Egorova M.A., Egorov A.A.</copyright-statement><copyright-statement xml:lang="ru">Copyright © 2025, Егорова М.А., Егоров 
А.А.</copyright-holder><ali:free_to_read xmlns:ali="http://www.niso.org/schemas/ali/1.0/"/><license><ali:license_ref xmlns:ali="http://www.niso.org/schemas/ali/1.0/">https://creativecommons.org/licenses/by-nc/4.0</ali:license_ref></license></permissions><self-uri xlink:href="https://journals.rudn.ru/miph/article/view/46740">https://journals.rudn.ru/miph/article/view/46740</self-uri><abstract xml:lang="en"><p>A study of ancient written texts and signs showed that the hieroglyphs and the structure of the archaic sentence have much in common with modern Chinese. In the context of the history and evolution of the Chinese language, its characteristic tonality and melody are emphasized. The main focus of the work is the study of the sound properties of hieroglyphs (keys / Chinese radicals) found both in ancient inscriptions and in modern text messages. The article applies modern digital methods of sound analysis with simultaneous visualization. To characterize the sound of hieroglyphs (in accordance with the Pinyin phonetic transcription adopted in China), two (FI, FII), three (FI, FII, FIII) or four (FS, FI, FII, FIII) formants are used, which create a characteristic F-pattern. Our proposed four-formant model for typical hieroglyphs, termed the basic “F-model”, is new and original. Digital audio signal processing programs were used to visualize the formants. The data obtained were compared with the corresponding spectrograms for Mandarin (standard) Chinese, and their mutual correspondence was established. When analyzing F-patterns, an original model was used that made it possible to characterize spectrograms in the frequency and time domains. A formalized description of the basic components of the basic “F-model” of hieroglyph pronunciation is given. 
In conclusion, several areas are noted in which the use of various audiovisual research methods is promising: advanced innovative technologies (artificial intelligence and virtual reality); television and theatrical video production; evaluation of the quality of audiovisual content; and the educational process. The present study has shown that the described research methods can be useful in analyzing similar ancient hieroglyphs.</p></abstract><trans-abstract xml:lang="ru"><p>Проведённое исследование античных письменных текстов и знаков показало, что иероглифы и строение архаического предложения имеют много общего с современным китайским языком. В контексте истории и эволюции китайского языка подчёркнуты его характерные тональность и мелодичность. Основное внимание в работе уделено исследованию звуковых свойств иероглифов (ключей), встречающихся одновременно в древнейших надписях, а также в современных текстовых сообщениях. В статье использованы современные цифровые методы анализа звуков с одновременной их визуализацией. Для характеристики звучания иероглифов (в соответствии с принятой в Китае фонетической транскрипцией Пиньинь) использованы две (FI, FII), три (FI, FII, FIII) или четыре форманты (FS, FI, FII, FIII), которые создают характерную F-картину. Предложенная нами модель четырёх формант для типовых иероглифов (ключей) названа базовой «F-моделью», она является новой и оригинальной. Для визуализации формант применены программы цифровой обработки звуковых сигналов. Полученные данные сравнивались с соответствующими спектрограммами для мандаринского (стандартного) диалекта китайского языка. Установлено их соответствие друг другу. При анализе F-картин использовалась оригинальная модель, которая позволила охарактеризовать спектрограммы в частотной и временной областях. Дано формализованное описание основных компонентов базовой «F-модели» произношения иероглифов. 
В заключение отмечено несколько областей, в которых перспективно использование различных методов аудиовизуального исследования: передовые инновационные технологии (искусственный интеллект и виртуальная реальность); телевидение, театральное видеопроизводство; определение качества аудиовизуального контента; образовательный процесс. Проведённое исследование показало, что описанные перспективные методы исследования могут быть полезны при анализе подобных античных иероглифов.</p></trans-abstract><kwd-group xml:lang="en"><kwd>speech analyzing</kwd><kwd>Chinese hieroglyphs</kwd><kwd>spectrogram</kwd><kwd>formants</kwd><kwd>subharmonics</kwd><kwd>basic “F-model”</kwd><kwd>linguistics</kwd><kwd>data processing</kwd><kwd>computer modeling</kwd></kwd-group><kwd-group xml:lang="ru"><kwd>анализ речи</kwd><kwd>китайские иероглифы</kwd><kwd>спектрограмма</kwd><kwd>форманты</kwd><kwd>субгармоники</kwd><kwd>базовая «F-модель»</kwd><kwd>лингвистика</kwd><kwd>обработка данных</kwd><kwd>компьютерное моделирование</kwd></kwd-group><funding-group><award-group><funding-source><institution-wrap><institution xml:lang="en">The publication was prepared with the partial support of the RUDN “Strategic Academic Leadership Program”.</institution></institution-wrap><institution-wrap><institution xml:lang="ru">The publication was prepared with the partial support of the RUDN “Strategic Academic Leadership Program”.</institution></institution-wrap></funding-source></award-group></funding-group></article-meta><fn-group/></front><body></body><back><ref-list><ref id="B1"><label>1.</label><mixed-citation>Rocchesso, D. Introduction to sound processing (Phasar Srl, Firenze, 2003).</mixed-citation></ref><ref id="B2"><label>2.</label><mixed-citation>Bondarko, L. V., Verbitskaya, L. A. &amp; Gordina, M. V. Fundamentals of general phonetics 4th (Academy, St. Petersburg, 2004).</mixed-citation></ref><ref id="B3"><label>3.</label><mixed-citation>Yakhontov, S. Y. 
Ancient Chinese language (Nauka, Moscow, 1965).</mixed-citation></ref><ref id="B4"><label>4.</label><mixed-citation>Vasiliev, L. S. Ancient China: in 3 volumes (Oriental Literature, Moscow, 1995; 2000; 2006).</mixed-citation></ref><ref id="B5"><label>5.</label><mixed-citation>Atlas of the languages of the world. The origin and development of languages around the world (Lik press, Moscow, 1998).</mixed-citation></ref><ref id="B6"><label>6.</label><mixed-citation>The peopling of East Asia: putting together archaeology, linguistics and genetics (eds Blench, R., Sagart, L. &amp; Sanchez-Mazas, A.) (Routledge Curzon, London, 2005).</mixed-citation></ref><ref id="B7"><label>7.</label><mixed-citation>Kryukov, M. V. &amp; Kh., S.-I. Ancient Chinese (Vostochnaya kniga, Moscow, 2020).</mixed-citation></ref><ref id="B8"><label>8.</label><mixed-citation>Egorova, M. A., Egorov, A. A. &amp; Solovieva, T. V. Modeling the distribution and modification of writing in proto-Chinese language communities. ADML 54, 92-104 (2020).</mixed-citation></ref><ref id="B9"><label>9.</label><mixed-citation>Egorova, M. A., Egorov, A. A. &amp; Solovieva, T. M. Features of archaic writing of ancient Chinese in comparison with modern: historical context. Voprosy Istorii, 189-207 (2021).</mixed-citation></ref><ref id="B10"><label>10.</label><mixed-citation>Zinder, L. R. General phonetics 2nd (Higher school, Moscow, 1979).</mixed-citation></ref><ref id="B11"><label>11.</label><mixed-citation>Lee, W.-S. An articulatory and acoustical analysis of the syllable-initial sibilants and approximant in Beijing Mandarin in Proceedings of the 14th International Congress of Phonetic Sciences (San Francisco, 1999), 413-416.</mixed-citation></ref><ref id="B12"><label>12.</label><mixed-citation>Kodzasov, S. V. &amp; Krivnova, O. F. 
General phonetics (RGGU, Moscow, 2001).</mixed-citation></ref><ref id="B13"><label>13.</label><mixed-citation>Musical encyclopedia (Soviet Encyclopedia, Moscow, 1978).</mixed-citation></ref><ref id="B14"><label>14.</label><mixed-citation>Shironosov, V. G. Resonance in physics, chemistry and biology (Publishing House “Udmurt University”, Izhevsk, 2000).</mixed-citation></ref><ref id="B15"><label>15.</label><mixed-citation>Chion, M. Audio-Vision. Sound on screen (Columbia University Press, NY, 1994).</mixed-citation></ref><ref id="B16"><label>16.</label><mixed-citation>Egorova, M. A., Egorov, A. A., Orlova, T. G. &amp; Trifonova, E. D. Methods of research of hieroglyphs on the oldest artifacts - introduction to problem: history, archeology, linguistics. Voprosy Istorii, 20-39 (2022).</mixed-citation></ref><ref id="B17"><label>17.</label><mixed-citation>Keightley, D. N. Sources of Shang history: the oracle-bone inscriptions of Bronze Age China (Berkeley, London, 1985).</mixed-citation></ref><ref id="B18"><label>18.</label><mixed-citation>Hieroglyph 家 “house, family” https://en.wiktionary.org/wiki/.</mixed-citation></ref><ref id="B19"><label>19.</label><mixed-citation>Hieroglyph 立 “stand” https://en.wiktionary.org/wiki/.</mixed-citation></ref><ref id="B20"><label>20.</label><mixed-citation>Hieroglyph 交 “exchange, transfer, give” https://en.wiktionary.org/wiki/.</mixed-citation></ref><ref id="B21"><label>21.</label><mixed-citation>Pinson, M. H., Ingram, W. &amp; Webster, A. Audiovisual quality components. IEEE Signal processing magazine, 60-67 (2011).</mixed-citation></ref><ref id="B22"><label>22.</label><mixed-citation>Urazova, S. L., Gromova, E. B., Kuzmenkova, К. Е. &amp; Mitkovskaya, Y. P. Audiovisual media in the universities of Russia: Typology and analysis of the content. RUDN Journal of Studies in Literature and Journalism 27, 808-822 (2022).</mixed-citation></ref><ref id="B23"><label>23.</label><mixed-citation>Carlson, R. Models of speech synthesis. Proc. Natl. 
Acad. Sci. USA 92, 9932-9937 (1995).</mixed-citation></ref><ref id="B24"><label>24.</label><mixed-citation>Arai, T. How physical models of the human vocal tract contribute to the field of speech communication. Acoust. Sci. &amp; Tech. 41, 90-93 (2020).</mixed-citation></ref><ref id="B25"><label>25.</label><mixed-citation>Story, B. H. &amp; Bunton, K. A model of speech production based on the acoustic relativity of the vocal tract. J. Acoust. Soc. Am. 146, 2522-2528 (2019).</mixed-citation></ref><ref id="B26"><label>26.</label><mixed-citation>Teixeira, A. J. S., Martinez, R. &amp; Silva, L. N. Simulation of human speech production applied to the study and synthesis of European Portuguese. EURASIP Journal on Applied Signal Processing 9, 1435-1448 (2005).</mixed-citation></ref><ref id="B27"><label>27.</label><mixed-citation>Kinahan, S. P., Liss, J. M. &amp; Berisha, V. TorchDIVA: An extensible computational model of speech production built on an open-source machine learning library. PLOS ONE. doi:10.1371/journal.pone.0281306 (2023).</mixed-citation></ref><ref id="B28"><label>28.</label><mixed-citation>Maurerlehner, P., Schoder, S. &amp; Freidhager, C. Efficient numerical simulation of the human voice. Elektrotechnik &amp; Informationstechnik 138/3, 219-228 (2021).</mixed-citation></ref></ref-list></back></article>
