Factors Influencing the Formation of Augmented Reality Systems


Abstract

This study identifies and analyzes the factors influencing the formation, functioning, usability, and efficiency of augmented reality (AR) systems. The first group of factors is determined by the technical characteristics of the system and the information infrastructure. The quality of the sensors determines the degree of detail and reliability of the initial data; devices with low accuracy can cause delays, drift, or jitter, which negatively affect the stability of virtual objects. Accurate positioning and tracking depend on GPS signals, visual-inertial odometry, or marker-based systems, which can be affected by multipath interference, signal blockage, or sensor noise, resulting in a mismatch between the physical and virtual worlds. Network bandwidth limitations affect real-time data streaming, cloud rendering, and multi-user synchronization, and unreliable connections result in frame drops or delays. The second group of factors refers to environmental conditions. Fluctuations in lighting can cause noise, decrease contrast, and disrupt object detection algorithms, which requires the use of reliable computer vision techniques. This study proposes solutions to these problems that improve the quality of augmented reality content and are key to the creation and development of AR systems.

Full Text

Introduction

In the rapidly advancing field of augmented reality (AR), the seamless integration of virtual elements into the physical world relies on more than just sophisticated software; it is also governed by a range of different factors. These factors affect every stage of AR model formation, from data capture and processing to rendering and user interaction, ultimately determining the system's accuracy, responsiveness, and overall user experience [1; 2].

Sensor quality and calibration are the backbone of reliable data acquisition. High-resolution cameras, wide-dynamic-range sensors, and precise inertial measurement units (IMUs) deliver detailed visual and motion data, whereas low-fidelity devices can introduce latency, drift, and jitter. Regular calibration and sensor fusion techniques are essential for minimizing these errors and maintaining stable virtual overlays [3; 4]. Positioning and tracking accuracy depend on technologies such as GPS, visual-inertial odometry, and marker-based systems, all of which are susceptible to environmental interference, such as multipath signal reflections, occlusions, and sensor noise. Ensuring precise alignment between the real and virtual worlds requires continuous refinement of Simultaneous Localization and Mapping (SLAM) algorithms and redundancy in tracking methods [5; 6].

Network performance is another critical factor, especially for cloud-assisted AR and multi-user experiences. Bandwidth limitations and latency can cause frame drops, synchronization issues, and delayed rendering, thereby undermining immersion. Leveraging edge computing and efficient data compression strategies helps alleviate these network constraints and enables low-latency, high-fidelity AR interactions [7]. Furthermore, the complexity of physical environments, which are characterized by reflective surfaces, dynamic objects, and clutter, poses significant challenges for scene understanding and object recognition. Advanced environment-aware algorithms that adapt to changing conditions are necessary to maintain accurate and stable virtual content placement [8]. This study explores the key factors shaping the formation of AR systems, including technological advancements and environmental conditions [9].

1. Methods

1.1. Technological Advancements

Technological advancements are perhaps the most critical factor influencing AR (Figure 1). The evolution of hardware and software has dramatically enhanced AR capabilities. High-performance mobile devices, such as smartphones and tablets, have made AR accessible to a wider audience. The integration of advanced sensors (e.g., accelerometers, gyroscopes, and depth sensors) allows for more accurate tracking and interaction with the physical environment.

Figure 1. Technological advancements. Source: F.K. Ceesay.

Moreover, dedicated AR hardware, such as smart glasses and head-mounted displays (HMDs), has further enriched user experiences. Devices such as Microsoft HoloLens and Magic Leap One enable hands-free interaction, making AR applications more practical in fields such as healthcare, education, and manufacturing. On the software side, innovations in computer vision and machine learning have improved object recognition and tracking, which are essential for creating realistic AR experiences. Development platforms such as ARKit (Apple) and ARCore (Google) have simplified the creation of AR applications, fostering innovation and lowering barriers for developers [10; 11].
User adoption of AR applications also affects user attention and behavior [12].

1.1.1. Hardware Improvements

Hardware improvements are key to advancing AR model formation, as they directly affect data quality, processing speed, user comfort, and overall system robustness. Major areas of enhancement include [13]:

Sensor Upgrades. Higher-resolution cameras and global-shutter imagers reduce motion blur and capture finer scene details, thereby improving feature detection. Wide dynamic range (WDR) and high dynamic range (HDR) sensors handle extreme lighting contrasts (sunlit exteriors vs. dim interiors) more reliably. Integrated depth sensors (time-of-flight, structured light, or LiDAR) provide real-time 3D geometry, boosting SLAM accuracy in feature-poor or textureless environments. Miniaturized, low-noise IMUs (accelerometers, gyroscopes, and magnetometers) with on-chip temperature compensation reduce drift and jitter in motion tracking.

Display and Optics. Next-generation waveguide optics (diffractive or holographic) and pancake lenses enable slimmer and lighter AR glasses with wider fields of view (FOV) and improved brightness uniformity. Micro-LED and OLED microdisplays offer higher pixel density, better contrast ratios, and lower latency than conventional LCDs, leading to sharper and more stable virtual overlays [14]. Adaptive focus and varifocal modules reduce eye strain by dynamically adjusting the focal distances to match the virtual content depth [15].

Compute and Power. Dedicated vision and AI accelerators (NPUs, VPUs) on edge devices accelerate neural network inference for object recognition, semantic segmentation, and visual-inertial odometry, reducing the reliance on cloud processing. Heterogeneous multicore systems-on-a-chip (SoCs) combine CPUs, GPUs, DSPs, and NPUs to balance real-time rendering, physics simulation, and computer vision tasks efficiently. Advanced power-management integrated circuits (ICs) and high-capacity, fast-charging batteries extend the operation time without compromising temperature control. Improved thermal designs (vapor chambers and graphite shields) maintain performance under sustained loads.

Connectivity. Integrated 5G/6G modems with multi-gigabit throughput and ultra-low latency support cloud-assisted AR, large-scale multiplayer synchronization, and the remote streaming of complex 3D assets. Wi-Fi 6/6E and Bluetooth LE Audio modules enable robust, low-latency local networking for device clusters, peripherals, and audio/video offload.

Form Factor and Ergonomics. The use of lightweight composite materials (carbon fiber and magnesium alloys) and balanced weight distribution reduces user fatigue, allowing longer sessions. Modular hardware architectures allow developers to swap sensors, batteries, or compute modules to suit specific use cases, from industrial inspections to medical training [16]. Improved user interfaces (eye tracking, hand tracking, and voice control) replace mechanical controls, streamlining interactions and reducing accessory bulk [17].

Advancements across these hardware domains, including the integration of high-fidelity sensors, advanced display technologies, dedicated accelerators, next-generation connectivity, and ergonomic design, enable AR systems to achieve more accurate environment mapping, faster and more reliable rendering, and a substantially improved user experience.
1.1.2. Software Development

In modern software development for AR, a developer must integrate multiple subsystems, including real-time rendering, sensor processing, networking, and user interaction, into a cohesive, high-performance application [18]. The key considerations and best practices are as follows:

- Modular System Architecture. Layers such as rendering, sensor fusion, user interfaces (UI), and networking are isolated into distinct modules or services. Modern AR systems are configured using a plugin-based design, in which components (e.g., tracking engines) can be swapped out without rewriting the core logic.
- Choice of Engines and SDKs. Cross-platform frameworks: Unity and Unreal Engine offer extensive AR toolkits (AR Foundation, ARCore, and ARKit) for mobile and headset targets. Native SDKs: when performance is critical, applications can tap directly into ARKit (iOS) or ARCore (Android) APIs, bypassing engine overhead.
- Sensor Fusion and Real-Time Data Handling. Time synchronization: linear interpolation or hardware triggers can be used to align IMU, camera, and depth sensor timestamps. Filtering: extended Kalman filters (EKF) or complementary filters are employed to merge noisy streams and minimize drift (see the sketch after this list).
- Advanced Computer Vision Pipelines. Feature extraction: combining classical algorithms (ORB, FAST) with lightweight neural networks (MobileNet-based) provides robust detection under varied conditions. Semantic understanding: on-device inference to recognize objects and surfaces enables context-aware interactions.
- Graphics and Rendering Optimizations. Level-of-detail (LOD): dynamically adjusting mesh complexity based on distance and screen size maintains 60-90 FPS on mobile GPUs. GPU instancing and batching: grouping identical objects and materials reduces draw calls, and compute shaders can offload physics or lighting tasks.
- Networking and Distributed Experiences. Low-latency protocols: UDP-based transports (e.g., WebRTC and QUIC) with forward error correction can be used for real-time multi-user synchronization. State reconciliation: client-side prediction and server-authoritative state snapshots help to hide interpolation delays.
- Continuous Integration / Continuous Deployment (CI/CD). Automated testing: device farms or emulators should run unit, integration, and performance tests for every commit. Build automation: scripted builds for iOS, Android, and XR headsets, asset packaging, and acceptance tests should be completed before release.
- User Experience (UX), Accessibility, and Safety. User comfort: monitoring and reducing latency (motion-to-photon < 20 ms) and frame drops helps to prevent motion sickness [19]. Natural inputs: fallback controls (touch, gamepad) should be provided when gesture or eye tracking fails, and interfaces should meet accessibility guidelines.
- Security, Privacy, and Compliance. Data encryption: sensor streams and user metadata should be protected in transit (TLS) and at rest (AES). Consent and anonymity: opt-in mechanisms and anonymized spatial maps should adhere to the GDPR and CCPA.
- Debugging and Profiling. Real-time dashboards: telemetry (frame time, memory usage, and sensor latency) should be exposed in-app or via remote logging. Offline analysis: sensor streams and render logs should be recorded for post-mortem analysis when tracking issues arise in the field.
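To make the filtering step concrete, the following minimal sketch (in Python) fuses a gyroscope rate with an accelerometer-derived angle using a complementary filter, as mentioned in the Sensor Fusion item above. The function names, the blending coefficient alpha, and the sampling interval are illustrative assumptions, not part of any specific SDK.

import math

def accel_pitch(ax, az):
    # Pitch angle recovered from the gravity components measured
    # by the accelerometer (radians).
    return math.atan2(ax, az)

def fuse_pitch(pitch_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    # Integrate the gyroscope for short-term accuracy, then blend in the
    # accelerometer angle so that accumulated drift is slowly corrected.
    gyro_angle = pitch_prev + gyro_rate * dt
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

# Hypothetical readings: (ax, az, gyro_rate) triples sampled at 100 Hz.
samples = [(0.10, 9.80, 0.02), (0.12, 9.79, 0.01)]
pitch = 0.0
for ax, az, rate in samples:
    pitch = fuse_pitch(pitch, rate, accel_pitch(ax, az), dt=0.01)

The same blending idea extends to roll and yaw; production systems typically replace it with an EKF when full 6-DoF pose estimation is required.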
By following these principles and leveraging the right combination of engines, SDKs, and toolchains, software developers can deliver robust and high-fidelity AR applications that perform well across devices and use cases.

1.1.3. User Adoption

User adoption of augmented reality depends on perceived value, ease of use, trust, and support. The key factors and strategies include [20]:

- Perceived Usefulness and Relevance. AR experiences tailored to real user needs can be applied in training, maintenance, retail visualization, and navigation. Return on investment (ROI) can be demonstrated through case studies and pilot programs before full deployment.
- Ease of Use and Onboarding. Intuitive user interfaces with familiar gestures and minimal steps improve user efficiency. Step-by-step tutorials, contextual tooltips, and guided tours on first use ensure successful operation.
- Trust and Reliability. Stable tracking, fast load times, and accurate overlays build users' confidence. Users should be clearly informed about data usage and privacy policies and be given opt-in controls for location and camera access.
- Learning Curve and Training. Micro-learning modules and in-app help allow new users to delve into the subject. Gamification (e.g., badges, leaderboards, and achievement tracking) motivates users to explore.
- Accessibility and Inclusivity. Hardware and software should support multiple input methods (touch, voice, gestures) and adapt to users' different physical abilities. Localized languages and cultural assets improve communication.
- Social Proof and Network Effects. Sharing AR content on social media and collaboration features for teams encourage the further adoption of AR technologies. Early adopters and influencers can generate word-of-mouth referrals to popularize AR.
- Incentives and Business Models. Freemium or trial models can lower entry barriers. Discounts, loyalty rewards, or exclusive virtual content sustain users' interest and attention.
- Technical Support and Community. Maintaining active support channels, including forums, chatbots, and help desks, facilitates user engagement and promotes the adoption of AR technologies. Developer and user communities, where tips, custom assets, and extensions can be shared, help users exchange information.
- Performance Monitoring and Iteration. Monitoring metrics such as session duration, feature utilization, task completion rates, and drop-off points provides a foundation for evaluating the effectiveness of AR systems. A/B testing can be used to refine onboarding flows, UI layouts, and content relevance.
- Long-Term Engagement. AR content should be updated regularly according to user feedback and emerging needs. Seasonal or thematic experiences can be organized to re-engage dormant users.

By addressing factors such as efficiency, productivity, usability, and continuous improvement, researchers can promote higher adoption rates and sustained use of AR solutions.

1.2. Environmental Conditions

Environmental conditions play a critical role in the performance of AR systems (Figure 2). In particular, fluctuations in illumination can significantly degrade the quality of the input data by introducing sensor noise, reducing image contrast, and impairing the robustness of object detection and recognition algorithms.
Variations in lighting may arise from natural sources (e.g., sunlight intensity changes and shadows) or artificial sources (e.g., indoor lighting flicker and reflections), and these factors directly affect the accuracy of feature extraction and tracking. To mitigate these effects, AR systems employ reliable computer vision techniques, such as adaptive histogram equalization, denoising filters, and photometric normalization. Furthermore, multi-sensor fusion strategies that combine visual data with inertial or depth sensors are increasingly adopted to ensure robust performance under varying environmental conditions. These approaches enable AR systems to maintain stability, accuracy, and usability in real-world scenarios.

Figure 2. Environmental conditions. Source: F.K. Ceesay.

2. Formation of Augmented Reality Systems

The algorithm for AR system formation is based on the integration of multiple computational and perceptual processes that ensure robust, context-aware augmentation of the physical environment. The formation process can be divided into four major stages:
- processing environmental conditions, including improving the images with denoising and contrast-enhancement methods;
- creating a model of the surrounding space;
- generating digital content;
- placing digital content in the modelled space.

2.1. Processing Environmental Conditions

The processing environmental conditions algorithm is designed to enhance and normalize visual data captured from vehicle-mounted cameras or other environmental sensors operating under varying lighting and weather conditions (Figures 3 and 4). Its primary objective is to improve image quality, visibility, and reliability for downstream applications such as AR spatial modelling, autonomous navigation, and environmental visualization.

Figure 3. Algorithm: processing environmental conditions. Source: F.K. Ceesay.

Figure 4. Result of the algorithm: processing environmental conditions. Source: F.K. Ceesay.

This algorithm performs a sequence of image-processing operations to remove unwanted noise, improve contrast, balance illumination, and preserve essential visual features, such as edges and textures. By fusing enhanced color and edge information, the algorithm produces a high-quality image output suitable for accurate spatial interpretation and object recognition in dynamic environments.
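A minimal Python sketch of this preprocessing stage, assuming OpenCV, non-local-means denoising, and CLAHE applied to the luminance channel (the technique named in the Discussion section), is shown below. The parameter values are illustrative rather than the settings used in the study.

import cv2

def preprocess_frame(bgr):
    # 1. Suppress sensor noise while preserving edges and textures.
    denoised = cv2.fastNlMeansDenoisingColored(bgr, None, 10, 10, 7, 21)
    # 2. Equalize contrast on the luminance channel only, so that the
    #    color balance is unaffected by the enhancement.
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

Applying CLAHE per tile rather than globally prevents bright regions (e.g., sky) from washing out the contrast of dim interiors within the same frame.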
2.2. Creating a Model of the Surrounding Space

This algorithm simulates the process of creating a 3D spatial model of an environment from a single image (as if captured by a vehicle-mounted camera or drone) (Figures 5 and 6). It extracts features, generates depth, and visualizes the resulting 3D point cloud.

Figure 5. Algorithm: creating a model of the surrounding space. Source: F.K. Ceesay.

Figure 6. Result of the algorithm: creating a model of the surrounding space. Source: F.K. Ceesay.
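The depth-to-point-cloud step can be illustrated with the standard pinhole back-projection, sketched below in Python with NumPy. The function and its intrinsic parameters (fx, fy, cx, cy, obtained from camera calibration) are generic assumptions for illustration; the study's own reconstruction code is not reproduced here.

import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    # Back-project a per-pixel depth map (meters) into camera-frame
    # 3D points using the pinhole camera model.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth

Each retained point can then be inserted into the spatial model and rendered as a point cloud such as the one shown in Figure 6.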
2.3. Generating Digital Content

The algorithm is a simulation that generates and visualizes digital content (such as AR objects) placed in a 3D spatial environment (Figures 7 and 8). It uses environmental data (weather, light, and temperature) to decide what kind of digital content to place and where. Finally, everything is rendered in a 3D plot.

Figure 7. Algorithm: generating digital content. Source: F.K. Ceesay.

Figure 8. Result of the algorithm: generating digital content. Source: F.K. Ceesay.

2.4. Placing Digital Content in the Modelled Space

The algorithm generates a 3D terrain, detects flat areas suitable for placing digital content, selects content based on environmental conditions (such as weather), assigns the content to those flat regions, and visualizes everything in a 3D plot (Figures 9 and 10).

Figure 9. Algorithm: placing digital content in the modelled 3D space. Source: F.K. Ceesay.

Figure 10. Result of the algorithm: placing digital content in the modelled 3D space. Source: F.K. Ceesay.
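The interplay of Sections 2.3 and 2.4 (selecting content from environmental cues and anchoring it to flat terrain) can be sketched as follows in Python. The slope threshold, the rule table, and the function names are illustrative assumptions, not the authors' implementation.

import numpy as np

def find_flat_regions(heightmap, max_slope=0.05):
    # Cells whose local gradient magnitude stays below the threshold
    # are flat enough to host digital content.
    gy, gx = np.gradient(heightmap)
    return np.hypot(gx, gy) < max_slope  # boolean anchor mask

def pick_content(weather):
    # Toy rule base mapping an environmental cue to an overlay type.
    return {"rain": "shelter_marker", "clear": "info_panel"}.get(weather, "generic_label")

terrain = np.random.default_rng(0).random((64, 64))  # synthetic heightmap
mask = find_flat_regions(terrain)
rows, cols = np.nonzero(mask)   # candidate anchor cells
label = pick_content("clear")   # content chosen for the current conditions

In a full pipeline, the anchor cells and the chosen content type feed the 3D renderer that produces visualizations like those in Figures 8 and 10.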
3. Results

The analysis shows that the formation of augmented reality systems is not driven by a single determinant but emerges from the interaction of eight classes of factors.

First, hardware constraints (processor performance, display type, battery capacity, and sensor fidelity) establish the physical and computational limits of the system. Second, tracking and registration accuracy, governed by the tracking strategy, environmental cues, and latency, directly affects the stability and utility of the augmentations. Third, software architecture and platform choices, including framework selection, operating system support, and cloud/on-device distribution, shape the system's extensibility and real-time behavior. Fourth, human-computer interaction factors, such as the field of view, interaction modality, ergonomics, perceptual comfort, and cognitive load, determine how augmentation can be safely and effectively consumed by users. Fifth, the content and experience design requirements (realism level, scene understanding depth, dynamicity, and multi-user support) influence both the data pipelines and runtime complexity. Sixth, environmental conditions, particularly lighting, texture richness, and outdoor variability, mediate the reliability of perception and alignment. Seventh, business and ecosystem forces, including the target industry, integration cost, and market maturity, constrain viable design trade-offs. Finally, considerations related to privacy, security, and ethics, particularly continuous sensing and bystander exposure, establish normative and regulatory constraints on the development of AR systems. Collectively, these factors shape the technical architecture, define the boundaries of usability, and influence the feasibility of AR deployment.

4. Discussion

The program's performance and efficiency confirm the effectiveness of the proposed method for processing environmental data in combination with the fused data from object-mounted sensors. The results of the developed AR system confirm the feasibility of using a combined pipeline of image preprocessing, environment modeling, digital content generation, and spatial placement for effective augmented reality applications. Several aspects merit further discussion.

1. Robustness under environmental conditions. The system demonstrated resilience to fluctuations in illumination and noise through preprocessing methods such as denoising and Contrast Limited Adaptive Histogram Equalization (CLAHE). This confirms earlier findings that reliable preprocessing is essential for object detection and AR overlay stability.

2. Accuracy of environment modeling. The integration of vision-based reconstruction with multi-sensor fusion yielded a more stable and accurate digital model of the surrounding space. This supports the claims in the literature that combining camera vision with IMU or depth data enhances spatial consistency.

3. Real-time performance. Experimental results show that the pipeline can operate in real time, although the computational demands increase with higher-resolution inputs and more complex overlays. This balance between speed and fidelity is a common challenge in AR development.

4. User-centered functionality. The program's ergonomic and accessibility features make it adaptable to various user groups. However, large-scale usability studies are required to fully validate these claims.

5. Limitations. Despite its robustness, the system faces challenges in highly dynamic environments, where rapid motion and occlusions reduce feature-matching accuracy. Additionally, sensor calibration remains critical for maintaining spatial alignment.

6. Future directions. Future work should focus on optimizing computational efficiency using GPU acceleration, expanding multi-sensor integration (e.g., LiDAR), and improving the semantic understanding of the environment for context-aware AR overlays.

Conclusion

The formation of AR systems is shaped by a multifaceted array of factors that collectively influence their development, adoption, and effectiveness. Technological advancements serve as primary drivers, enabling more sophisticated hardware and software solutions that enhance the user experience and broaden the applicability of AR across various sectors. Understanding these factors is essential for stakeholders in the AR ecosystem, including scientists, developers, and programmers. By recognizing and addressing these influences, they can create more effective, user-centered AR solutions that not only meet market demands but also foster broader acceptance and integration of augmented reality into everyday life. As AR continues to evolve, the interplay of these various factors will remain critical in shaping its future trajectory and potential impact on society.

This study has presented the development and evaluation of an algorithm for the formation of AR systems. The proposed pipeline, consisting of environmental preprocessing, modeling of the surrounding space, digital content generation, and spatial placement, demonstrated its effectiveness in creating a stable and functional AR environment.

Key findings:
- Robustness to environmental conditions: preprocessing techniques, such as denoising and CLAHE, significantly improved visual clarity, ensuring reliable performance under fluctuating lighting and noise.
- Improved environmental modeling: the integration of computer vision techniques with multi-sensor fusion enabled the creation of accurate and stable digital representations of the physical environment.
- Real-time capability: the system successfully achieved real-time performance, making it suitable for interactive applications.
- Usability and ergonomics: the program was designed using user-centered principles to enhance ease of use, accessibility, and safety.

Future work. Despite these achievements, some limitations remain, including challenges in highly dynamic scenes and the need for precise sensor calibration. Future studies should explore GPU acceleration, semantic environment understanding, and broader usability testing to further enhance the robustness and adaptability of AR systems.

In conclusion, the developed AR system confirms the feasibility of combining advanced computer vision with sensor fusion to achieve practical, robust, and user-friendly augmented reality applications.

About the authors

Larisa V. Kruglova

RUDN University

Author for correspondence.
Email: kruglova-lv@pfur.ru
ORCID iD: 0000-0002-8824-1241
SPIN-code: 2920-9463

PhD in Technical Sciences, Associate Professor of the Department of Mechanics and Control Processes, Academy of Engineering

6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation

Fafa K. Ceesay

RUDN University

Email: 1042225144@rudn.ru
ORCID iD: 0000-0001-6762-9231

PhD student of the Department of Mechanics and Control Processes, Academy of Engineering

6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation

Rokhaya Samb

RUDN University

Email: 1042225241@rudn.ru
ORCID iD: 0009-0000-6787-3517

PhD student of the Department of Mechanics and Control Processes, Academy of Engineering

6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation

References

  1. Azuma RT. A survey of augmented reality. Presence: Teleoperators and Virtual Environments. 1997;6(4):355-385. https://doi.org/10.1162/pres.1997.6.4.355
  2. Milgram P, Kishino F. A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems. 1994;E77-D(12):1321-1329.
  3. Azuma R, Baillot Y, Behringer R, Feiner S, Julier S, MacIntyre B. Recent advances in augmented reality. IEEE Computer Graphics and Applications. 2001;21(6):34-47. https://doi.org/10.1109/38.963459
  4. Welch G, Bishop G. An introduction to the Kalman filter. Department of Computer Science, University of North Carolina at Chapel Hill. 2006;TR 95-041:1-16.
  5. Mur-Artal R, Tardós JD. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Transactions on Robotics. 2017;33(5):1255-1262. https://doi.org/10.1109/TRO.2017.2705103
  6. Davison AJ, Reid ID, Molton ND, Stasse O. MonoSLAM: Real-Time Single Camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2007;29(6):1052-1067. https://doi.org/10.1109/TPAMI.2007.1049
  7. Schmalstieg D, Höllerer T. Augmented Reality: Principles & Practice. Addison-Wesley Professional. 2016. https://books.google.co.in/books?id=Y4r-ngEACAAJ
  8. Billinghurst M, Clark A, Lee G. A survey of augmented reality. Foundations and Trends in Human - Computer Interaction. 2015;8(2-3):73-272. https://doi.org/10.1561/1100000049
  9. Ong SK, Nee AYC. Virtual and Augmented Reality Applications in Manufacturing. Springer London. 2013. https://books.google.ru/books?id=ETnTBwAAQBAJ
  10. Permozer I, Orehovački T. Utilizing Apple’s ARKit 2.0 for Augmented Reality Application Development. 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO); 2019 May 20-24. Opatija, Croatia. IEEE; 2019. p. 1629-1634. https://doi.org/10.23919/MIPRO.2019.8756928
  11. Nowacki P, Woda M. Capabilities of ARCore and ARKit Platforms for AR/VR Applications. In: Zamojski W, Mazurkiewicz J, Sugier J, Walkowiak T, Kacprzyk J, editors. Engineering in Dependability of Computer Systems and Networks. Proceedings of the Fourteenth International Conference on Dependability of Computer Systems DepCoS-RELCOMEX; 2020 Jul 1-5. Cham: Springer; 2020. p. 358-370. https://doi.org/10.1007/978-3-030-19501-4_36
  12. Zhou T. Examining User Adoption of Mobile Augmented Reality Applications. International Journal of E-Adoption. 2018;10(2):37-49. https://doi.org/10.4018/IJEA.2018070103
  13. Zhou F, Duh HB-L, Billinghurst M. Trends in augmented reality tracking, interaction and display: A review of ten years of ISMAR. 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality. 2008 Sep 15-18; Cambridge. IEEE; 2008. p. 193-202. https://doi.org/10.1109/ISMAR.2008.4637362
  14. Stauffert JP, Niebling F, Latoschik ME. Latency and Cybersickness: Impact, Causes, and Measures. A Review. Frontiers in Virtual Reality. 2020. https://doi.org/10.3389/FRVIR.2020.582204
  15. Livingston MA, Rosenblum LJ, Brown DG, Schmidt GS, Julier SJ, Baillot Y, et al. Military applications of augmented reality. Handbook of Augmented Reality. Naval Research Laboratory. New York; Springer; 2011. p. 671-706. https://doi.org/10.1007/978-1-4614-0064-6_31
  16. Caudell TP, Mizell DW. Augmented reality: An application of heads-up display technology to manual manufacturing processes. Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences; 1992 Jan 07-10; Kauai, HI, USA. IEEE; 1992. p. 659-669. https://doi.org/10.1109/HICSS.1992.183317
  17. Kiyokawa K. Head-Mounted Display Technologies for Augmented Reality. Fundamentals of Wearable Computers and Augmented Reality. 2015;59-84. https://doi.org/10.1201/b18703
  18. Billinghurst M, Kato H, Poupyrev I. The Magic Book: A transitional AR interface. Computers & Graphics. 2001;25(5):745-753. https://doi.org/10.1016/S0097-8493(01)00117-0
  19. Stratulat A, Roussarie V, Vercher J-L, Bourdin C. Improving the realism in motion-based driving simulators by adapting tilt-translation technique to human perception. 2011 IEEE Virtual Reality Conference; 2011 Mar 19-23; Singapore. IEEE; 2011. p. 47-50. https://doi.org/10.1109/VR.2011.5759435
  20. Willemsen P, Colton MB, Creem-Regehr SH, Thompson WB. The effects of head-mounted display mechanics on distance judgments in virtual environments. Proceedings of the 1st Symposium on Applied Perception in Graphics and Visualization (APGV '04); 2004 Aug 7-8; Los Angeles, California, USA. New York, NY, United States; 2004. p. 35-38. https://doi.org/10.1145/1012551.1012558


Copyright (c) 2026 Kruglova L.V., Ceesay F.K., Samb R.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.