Public Speaking Training Using Virtual Reality (VR) 2: Real-Time Feedback at the Right Time
In a VR environment, various kinds of information, such as the user's actions, gaze, and voice, can be measured continuously, allowing the system to automatically detect inappropriate behavior and immediately inform the user through real-time feedback. However, feedback delivered during cognitively demanding activities such as presentations has been found to interfere with training. In this study, we therefore modeled the user's current “tolerance” to feedback, estimated it from head movement, body movement, eye gaze, voice, heart rate, and perspiration, and developed a method that delivers feedback at a timing desirable for the user (patent pending). In considering what makes feedback timing desirable in training, two factors are important: the feedback should not interfere with the training itself, and it should be effective in improving subsequent behavior. Our model takes both factors into account. We believe this technique can be applied not only to public speaking but also to other kinds of training in which real-time feedback is effective, such as surgical training.
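As a rough illustration only (not the patented method described above), the timing estimation can be framed as a classification problem over short windows of sensor features. In the following Python sketch the feature set, window labels, classifier, and threshold are all hypothetical choices.

```python
# Minimal sketch (assumption): estimating moment-to-moment feedback "tolerance"
# as a binary classification problem over short windows of sensor features.
# The features, labels, classifier, and threshold are illustrative, not the
# patented method described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# One row per short window: [head motion, body motion, gaze dispersion,
# voice activity, heart rate, skin conductance] (all z-scored, synthetic here).
X = rng.normal(size=(500, 6))
# Label: 1 if feedback shown in this window did NOT disrupt the presenter
# (i.e., the presenter was "tolerant"), 0 otherwise. Synthetic labels here.
y = (X[:, 0] - 0.5 * X[:, 4] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

# At run time, feedback would be delivered only when the estimated
# tolerance probability exceeds a threshold.
window = rng.normal(size=(1, 6))
print("deliver feedback now:", clf.predict_proba(window)[0, 1] > 0.7)
print("held-out accuracy:", clf.score(X_te, y_te))
```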
Related achievements:
- Yuichiro Fujimoto, Zhou Hangyu, Taishi Sawabe, Masayuki Kanbara, Hirokazu Kato, “Stop Bad Real-time Feedback!: Estimation of the Timing of Feedback that Negatively Impacts Presenters for Presentation Training in Virtual Reality,” The 22nd IEEE International Symposium on Mixed and Augmented Reality (ISMAR2023), IEEE, Poster, Australia, Sydney, 16 Oct. 2023, DOI: 10.1109/ISMAR-Adjunct60411.2023.00087.
AR Support System to Reduce Anxiety during Conversation
Communication between people is the foundation of social life, and face-to-face interaction is its most basic form. However, many people, including those with ASD, schizophrenia, and other characteristics, as well as people with typical development, feel uncomfortable with or fearful of face-to-face interaction. One cause of this problem is excessive fear and anxiety about the gaze and facial expressions of the interlocutor. The main conventional medical approach has been to alleviate the disorder by promoting skill acquisition and cognitive change through training, but it can take many years for symptoms to improve, and the person continues to suffer in the meantime. In this study, as an approach with more immediate effects, we have investigated support based on augmented reality (AR) and head-mounted displays (HMDs) that are assumed to be worn in daily life (as of 2024, wearing an HMD daily is not yet common, but we assume that, with further technological development, HMDs will be accepted by society as an information device that can replace smartphones). In our previous research, we obtained promising results: superimposing a 3D avatar rendered in a less oppressive, cartoon-like style that hides the interlocutor's real facial expression and body can significantly reduce anxiety in people who tend to be concerned about how others evaluate them.
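As a highly simplified 2D illustration of the idea (the actual system uses an HMD and a full 3D avatar), the following sketch hides a detected face behind a flat cartoon-style overlay. The webcam input, OpenCV Haar cascade, and drawn "avatar" are stand-ins, not our implementation.

```python
# Minimal 2D sketch (assumption): hiding the interlocutor's face behind a
# simple cartoon-style overlay. The real system renders a 3D avatar on an HMD;
# the webcam, Haar cascade, and drawn face below are placeholders.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # webcam as a stand-in for the HMD camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        cx, cy, r = x + w // 2, y + h // 2, max(w, h) // 2
        # Cover the real face with a flat, low-detail "avatar" face.
        cv2.circle(frame, (cx, cy), r, (80, 200, 255), -1)
        cv2.circle(frame, (cx - r // 3, cy - r // 4), r // 8, (0, 0, 0), -1)
        cv2.circle(frame, (cx + r // 3, cy - r // 4), r // 8, (0, 0, 0), -1)
    cv2.imshow("masked view", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```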
Related achievements:
Juri Yoneyama, Yuichiro Fujimoto, Kosuke Okazaki, Taishi Sawabe, Masayuki Kanbara, Hirokazu Kato, “Augmented Reality Visual Effects for Mitigating Anxiety of In-person Communication for Individuals with Social Anxiety Disorder”, The 15th Asia-Pacific Workshop on Mixed and Augmented Reality (APMAR2023), Taiwan, Taipei, 19 Aug. 2023.
Projection Mapping That Works in Bright Places
Most existing projection mapping systems are used in dark places, such as outdoors at night or indoors with the lights off. One reason is simply that the projected light looks better in a dark environment, but there is also a technical problem: it is difficult to geometrically register (calibrate) the projector in a bright environment, because this requires prior measurement with a camera. We reasoned that solving the latter problem would enable projection in bright environments and expand the range of applications of projection mapping. In this study, we construct a projector-camera system that uses an event camera, which outputs only changes in light, instead of an ordinary camera. We focused on the event camera's wide dynamic range and its high contrast sensitivity in high-luminance regions. We also proposed a structured-light pattern, suited to event cameras, in which different locations blink at different frequencies. The proposed projector-event-camera system can stably perform calibration and 3D shape measurement in bright environments where ordinary projector-camera systems cannot. As a second step, we are now working on high-speed shape measurement (~1000 fps) of moving objects.
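The following toy simulation illustrates only the basic decoding idea behind frequency-coded blinking patterns, not our event-based pipeline: each projector column blinks at its own frequency, and the dominant frequency observed at a camera pixel identifies the corresponding column. The sampling rate, frequencies, and column count are illustrative.

```python
# Minimal simulation sketch (assumption): per-column blink frequencies decoded
# by spectral analysis. Event streams are approximated here by a temporally
# sampled intensity signal; all numbers are illustrative.
import numpy as np

fps, duration = 1000, 0.5                  # sampling rate and capture time
t = np.arange(0, duration, 1.0 / fps)
n_cols = 16
freqs = np.linspace(40, 220, n_cols)       # one blink frequency per column

# Simulated signal observed at one camera pixel lit by projector column 9.
true_col = 9
signal = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * freqs[true_col] * t))
signal += np.random.default_rng(0).normal(scale=0.1, size=t.size)

# Decode: pick the candidate frequency with the strongest spectral response.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
fft_freqs = np.fft.rfftfreq(t.size, 1.0 / fps)
responses = [spectrum[np.argmin(np.abs(fft_freqs - f))] for f in freqs]
print("decoded column:", int(np.argmax(responses)), "(true:", true_col, ")")
```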
Related achievements:
Yuichiro Fujimoto, Taishi Sawabe, Masayuki Kanbara, Hirokazu Kato, “Structured Light of Flickering Patterns Having Different Frequencies for a Projector-Event-Camera System,” The IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), Mar. 2022.
Public Speaking Training Using Virtual Reality (VR) 1: Effects of Perspective Change on Training
As part of a research project broadly exploring the possibilities of interpersonal-skills training using virtual reality (VR), we focused on presentation (public speaking) training. Such training aims both at internal control, such as reducing fear during presentations, and at improving the skills needed for a successful presentation. A common way to reflect on one's own performance is to review a video recording of the presentation, but this approach has two problems: (1) many people do not want to watch videos of themselves presenting, and (2) cognitive biases make it difficult to evaluate oneself quantitatively. To solve these problems, we propose a reflection method that applies a change of viewpoint using VR. Specifically, the system records body movements, eye direction, and voice during the presentation and reconstructs the presentation with a 3D avatar in a VR space. The system intentionally omits elements that have little relevance to the quality of the presentation and that tend to cause strong aversion to reviewing oneself (e.g., one's own facial appearance). After the presentation, participants wear a head-mounted display (HMD) and observe the reconstructed presentation from the audience's viewpoint, which is expected to enable objective reflection from a third-person perspective.
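As a minimal sketch of the record-and-replay idea (the real system drives a 3D avatar in a VR engine, which is not reproduced here), the following code logs time-stamped presenter data and replays it from an audience-side camera position. The fields and values are illustrative.

```python
# Minimal sketch (assumption): log time-stamped presenter data during the talk
# and replay it later from a different (audience-side) camera pose. Fields,
# units, and the fixed audience camera are illustrative choices.
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class PresenterFrame:
    t: float                 # seconds since the start of the presentation
    head_pos: Vec3           # head position in room coordinates (m)
    head_rot: Vec3           # head orientation as Euler angles (deg)
    gaze_dir: Vec3           # unit gaze direction
    audio_level: float       # voice loudness, used to animate the avatar

@dataclass
class PresentationLog:
    frames: List[PresenterFrame] = field(default_factory=list)

    def record(self, frame: PresenterFrame) -> None:
        self.frames.append(frame)

    def replay_from_audience(self, camera_pos: Vec3) -> None:
        # In the real system this drives an avatar in VR; here we only report
        # what the audience-side camera would be looking at.
        for f in self.frames:
            print(f"t={f.t:5.2f}s camera@{camera_pos} -> avatar head at {f.head_pos}")

log = PresentationLog()
log.record(PresenterFrame(0.0, (0.0, 1.7, 0.0), (0.0, 10.0, 0.0), (0.0, 0.0, 1.0), 0.4))
log.record(PresenterFrame(0.1, (0.0, 1.7, 0.1), (0.0, 12.0, 0.0), (0.0, 0.0, 1.0), 0.6))
log.replay_from_audience(camera_pos=(0.0, 1.2, 4.0))
```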
Related achievements:
Fumitaka Ueda, Yuichiro Fujimoto, Taishi Sawabe, Masayuki Kanbara, Hirokazu Kato, “The Influence of Perspective on Training Effects in Virtual Reality Public Speaking Training,” 31st IEEE Conference on Virtual Reality and 3D User Interfaces, Poster, IEEE, USA, FL, Orlando, 16 Mar. 2024.
Hangyu Zhou, Yuichiro Fujimoto, Masayuki Kanbara and Hirokazu Kato, “Virtual Reality as a Reflection Technique for Public Speaking Training”, Applied Sciences, 11(9), April 2021, DOI: 10.3390/app11093988.
Guideline and Tool for Designing an Assembly Task Support System Using Augmented Reality
Augmented reality (AR) systems support complex tasks such as assembly by overlaying task-related content onto the real world. In recent years, the effort required to design and develop AR assembly-support systems has decreased thanks to the availability of high-performance head-mounted displays and integrated development environments. Nevertheless, problems still arise when companies try to build an effective AR task-support system, particularly because it is difficult to select appropriate techniques and information-presentation methods, and because requirements vary with each use case. In this study, we formulated a corresponding guideline and developed a selection-aid tool that filters candidate methods based on a categorization of subtasks and on the degrees of freedom of the available tracking. We envision making the guideline and tool accessible as an online web page, assisting designers and developers of AR assembly task-support systems worldwide. Our guideline is available here.
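As a rough illustration of how such a selection-aid tool can filter candidates, the following sketch defines a small catalogue of visualization methods and filters it by subtask category and available tracking. The catalogue entries, categories, and tracking requirements are hypothetical, not the published guideline.

```python
# Minimal sketch (assumption): the selection-aid tool as a filter over a small
# catalogue of AR visualization methods. All entries below are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class VisMethod:
    name: str
    subtask_types: List[str]     # e.g. "pick", "place", "fasten", "inspect"
    min_tracking_dof: int        # tracking DOF required (0, 3, or 6)

CATALOGUE = [
    VisMethod("2D text instruction panel", ["pick", "place", "fasten", "inspect"], 0),
    VisMethod("arrow pointing at target part", ["pick", "place"], 6),
    VisMethod("ghost overlay of assembled part", ["place", "fasten"], 6),
    VisMethod("screen-fixed video clip", ["fasten", "inspect"], 0),
]

def suggest(subtask: str, available_dof: int) -> List[str]:
    """Return visualization methods usable for a subtask with the given tracking."""
    return [m.name for m in CATALOGUE
            if subtask in m.subtask_types and m.min_tracking_dof <= available_dof]

print(suggest("place", available_dof=6))   # full 6-DoF object tracking available
print(suggest("place", available_dof=0))   # no object tracking available
```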
Related achievements:
Soshiro Ueda, Keishi Tainaka, Yiming Shen, Shuntaro Ueda, Konstantin Kulik, Yuichiro Fujimoto, Taishi Sawabe, Masayuki Kanbara, Hirokazu Kato, “General Software Platform for Designing and Developing of Augmented Reality Task Support Systems,” SIGGRAPH Asia 2023 XR, Demo, ACM, Australia, Sydney, 12 Dec. 2023, DOI: 10.1145/3610549.3614599
Keishi Tainaka, Yuichiro Fujimoto, Taishi Sawabe, Masayuki Kanbara, and Hirokazu Kato, “Selection framework of visualization methods in designing AR industrial task-support systems,” Computers in Industry, Elsevier, Vol.145, Feb. 2023, DOI: 10.1016/j.compind.2022.103828.
Keishi Tainaka, Yuichiro Fujimoto, Masayuki Kanbara, Hirokazu Kato, Atsunori Moteki, Kensuke Kuraki, Kazuki Osamura, Toshiyuki Yoshitake, and Toshiyuki Fukuoka, “Guideline and Tool for Designing an Assembly Task Support System Using Augmented Reality”, In Proceedings of IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Nov. 2020.
Projection-Mapping for Enhancing the Perceived Deliciousness of Food
The perceived deliciousness of a food item is closely related to its appearance. Image processing has been widely used to make food images more appealing, for example when capturing and posting images on social networking sites. This study proposes a methodology and a system that enhance the degree of deliciousness a person subjectively perceives from the appearance of a real food item by changing that appearance in the real environment. First, an online questionnaire survey using various food images was conducted to analyze the appearance factors that make food look delicious. Based on this knowledge, a prototype system that projects a computer-generated image onto the food item was constructed to enhance its subjective deliciousness at a pixel level. Finally, a user study was conducted in which the subjective degree of deliciousness based on food appearance was compared under various appearance-modification conditions. The results show that appropriate chroma and partial-color modifications substantially increase perceived deliciousness, implying that the proposed system can be used to make food look more delicious.
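As a minimal illustration of the chroma-modification condition, applied here to a photograph rather than projected onto real food, the following sketch boosts saturation in HSV space. The scaling factor and file path are illustrative.

```python
# Minimal sketch (assumption): approximate "chroma modification" by boosting
# saturation of a food photo in HSV space. The real system projects the
# modified appearance back onto the physical food item.
import cv2
import numpy as np

img = cv2.imread("food.jpg")                      # any food photo (illustrative path)
if img is None:
    raise SystemExit("put a food photo at food.jpg first")

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
hsv[..., 1] = np.clip(hsv[..., 1] * 1.4, 0, 255)  # raise chroma/saturation by 40%
enhanced = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

cv2.imwrite("food_enhanced.jpg", enhanced)
```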
Related achievements:
Yuichiro Fujimoto, “Projection Mapping for Enhancing the Perceived Deliciousness of Food,” IEEE Access, Vol.6, No.1, pp.59975-59985, Dec. 2018.
Human Detection in Office Environments
Automatic analysis of work types and communication behavior in offices is expected to improve office efficiency. The position of each person at each point in time is one of the most basic pieces of information for this purpose, and continuous automatic detection is desirable. In this study, we propose a new method for continuous human detection based on depth data. Depth-based detection, however, suffers from missing data and from the difficulty of distinguishing people from other objects. To address these problems, we propose (1) a method that estimates the cause of missing data and actively exploits it, and (2) a method that identifies a person by combining multiple human-shaped features. Furthermore, to cover an entire office environment, we developed a system in which multiple Kinects mounted on the ceiling operate cooperatively, and their measurements are re-integrated as height-from-floor information. We combined these components, applied them to several hundred hours of real office data, and obtained practical accuracy.
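As a simplified sketch of the height-from-floor representation (the methods for handling missing data and identifying people described above are not reproduced here), the following code converts a ceiling-mounted depth image into heights above the floor and counts head-height blobs. The sensor height, height band, and blob-size threshold are illustrative.

```python
# Minimal sketch (assumption): convert a ceiling-mounted depth image into a
# height-from-floor map and count person-sized, head-height blobs. Values are
# illustrative; this is not the method described above.
import numpy as np
from scipy import ndimage

SENSOR_HEIGHT_M = 2.7          # ceiling-mounted sensor height above the floor

def detect_people(depth_m: np.ndarray) -> int:
    """Count person-sized blobs whose height above the floor looks like a head."""
    height_map = SENSOR_HEIGHT_M - depth_m      # height of each pixel above the floor
    height_map[depth_m <= 0] = 0.0              # crude handling of missing depth
    head_mask = (height_map > 1.2) & (height_map < 2.0)
    labels, n = ndimage.label(head_mask)
    sizes = ndimage.sum(head_mask, labels, range(1, n + 1))
    return int(np.sum(np.asarray(sizes) > 200))  # keep only person-sized blobs

# Synthetic frame: flat floor with one person-shaped region 1.6 m tall.
depth = np.full((240, 320), SENSOR_HEIGHT_M)
depth[100:140, 150:190] = SENSOR_HEIGHT_M - 1.6
print("people detected:", detect_people(depth))
```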
Geometrically-Correct Projection-Based Texture Mapping onto a Deformable Object
Projection-based augmented reality commonly employs a rigid substrate as the projection surface and does not support scenarios in which the substrate is reshaped. This investigation presents a projection-based AR system that supports deformable substrates that can be bent, twisted, or folded. We demonstrate a new invisible marker embedded in a deformable substrate and an algorithm that identifies the deformation so that geometrically correct textures can be projected onto the deformable object. Geometrically correct projection-based texture mapping is achieved by measuring the 3D shape of the surface through detection of the retro-reflective marker on it. To achieve accurate texture mapping, we propose a marker pattern that can be recognized even when only partially visible and registered to the object's surface. This work addresses a fundamental vision-recognition challenge: allowing the underlying material to change shape while still being recognized by the system. Our evaluation demonstrated that the system achieves geometrically correct projection under extreme deformation. We envisage that the presented techniques are useful for domains including prototype development, design, entertainment, and information-based AR systems.
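As a rough 2D illustration of mapping a texture onto detected marker points (the full pipeline with invisible retro-reflective markers and 3D shape measurement is not reproduced here), the following sketch applies a piecewise-affine warp from a reference quad to its detected, deformed position. The point coordinates and texture are synthetic.

```python
# Minimal sketch (assumption): piecewise-affine warping of a texture onto
# detected marker points, as a 2D stand-in for the full projection pipeline.
# The reference grid, detected positions, and texture are synthetic.
import cv2
import numpy as np

# Synthetic texture to be mapped (a flat color with a label drawn on it).
texture = np.full((300, 400, 3), (40, 180, 240), dtype=np.uint8)
cv2.putText(texture, "TEX", (120, 170), cv2.FONT_HERSHEY_SIMPLEX, 3, (0, 0, 0), 8)

h, w = 480, 640
out = np.zeros((h, w, 3), dtype=np.uint8)           # image to be projected

# Reference marker points in texture coordinates and their detected positions
# on the deformed surface (normally produced by marker detection).
ref = np.float32([[0, 0], [400, 0], [400, 300], [0, 300]])
det = np.float32([[100, 120], [520, 90], [560, 400], [80, 430]])

# Split the quad into two triangles and warp each with its own affine map,
# so the mapping can follow a (mildly) non-rigid deformation.
for tri in ([0, 1, 2], [0, 2, 3]):
    M = cv2.getAffineTransform(ref[tri], det[tri])
    warped = cv2.warpAffine(texture, M, (w, h))
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillConvexPoly(mask, det[tri].astype(np.int32), 255)
    out[mask > 0] = warped[mask > 0]

cv2.imwrite("projector_image.png", out)
```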
Related achievements:
Yuichiro Fujimoto, Ross T. Smith, Takafumi Taketomi, Goshiro Yamamoto, Jun Miyazaki, Hirokazu Kato, and Bruce H. Thomas, “Geometrically-correct projection-based texture mapping onto a deformable object,” IEEE Transactions on Visualization and Computer Graphics (TVCG), Vol.20, No.4, pp.540-549, Mar. 2014.
Relation between Displaying Features of Augmented Reality and User’s Memorization
In this investigation, we verify the hypothesis that “using the features of Augmented Reality (AR) has positive effects on the user’s ability to memorize information.” This hypothesis is based on the following two features. One is a feature of AR: “AR can provide information associated with specific locations in the real world.” The other is a feature of human memory: “humans can easily memorize information if it is associated with specific locations.” To verify this hypothesis, we conducted three user studies. As a result, significant differences were found between the condition in which information is associated with the location of the target object in the real world and the condition in which information is attached to an unrelated location.
Related achievements:
Yuichiro Fujimoto, Goshiro Yamamoto, Jun Miyazaki, and Hirokazu Kato, “Relation between Location of Information Displayed by Augmented Reality and User’s Memorization,” In Proceedings of 3rd Augmented Human International Conference (AH2012), pp.93-100, Megeve, France, Mar. 2012.