# Yokoi Sotaro — Portfolio

> Personal portfolio of Sotaro Yokoi (横井総太朗), a researcher in Human-Computer Interaction, VR, haptics, robotics, and accessibility at The University of Tokyo. This file gives AI crawlers a plain-text summary of the site, since the visible UI is rendered with client-side internationalization.

The site is built with Next.js and uses client-side i18n, so HTML scraped without JavaScript will not contain the displayed copy. The content below is the canonical text for every public section of the site (in English; a Japanese version is linked at the bottom).

## About

Sotaro Yokoi is a researcher in Human-Computer Interaction, Robotics, and Accessibility. He is a second-year Master's student in the Kuzuoka-Narumi-Tanikawa Lab, Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo. He works on VR, haptics (redirection and encounter-type devices), impossible objects, and skill-training systems, and is also a research assistant at Sony Computer Science Laboratories Kyoto under Prof. Jun Rekimoto.

### Education

- April 2024 - Present — Master's Program, Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo (Kuzuoka-Narumi-Tanikawa Lab)
- April 2020 - March 2024 — Bachelor's Program, Department of Mechano-Informatics, School of Engineering, The University of Tokyo (Kuzuoka-Narumi Lab)
- April 2014 - March 2020 — Kaisei High School

### Experience

- May 2024 - Present — Sony Computer Science Laboratories Kyoto. Research and development as a research assistant under Prof. Jun Rekimoto.
### Accomplishments

- 2025 — Selected for the IPA MITOU Program 2025 (acceptance rate 9.2%): "A development platform for modular social robots for self-expression" (co-creators: Naoki Shitanda, Kento Matsuo)
- 2025 — Murata Science Foundation Research Grant, Interdisciplinary Field 2025 (acceptance rate 7%): "Designing and Evaluating Human-AI Teaming" — sole Master's-level recipient
- 2025 — Selected for The University of Tokyo SPRING GX (Green Transformation Overseas Dispatch Program)
- May 2024 - Present — Selected as a program member of the IIW: International Graduate Program of Innovation for Intelligent World
- June 2023 — Keyence Foundation 2023 "Ganbare! Japanese University Students" scholarship recipient

### Awards

- Best Presentation Award, IEEE VR Workshop VR-HSA 2025 (Jan 2025)
- Demo Presentation Award (program committee nomination), WISS2024 (https://www.wiss.org/WISS2024/award.html)
- Civic Tech Challenge 2022, 1st Prize (150 participants) (https://ccc2022.code4japan.org/)
- Yahoo Digital Hackday 2022, Pretia Award (77 teams) (https://hackday.yahoo.co.jp/)
- Yahoo Digital Hackday 2021, MONET Technologies Award (71 teams) (https://hackday.yahoo.co.jp/history/digital2021/)

## Works / Projects

### Tangible Impossible Object (触れる不可能立体)

The Penrose stairs, also known as the infinite staircase, is a fascinating optical illusion: a staircase that returns to its starting point despite continuous ascent. This motif, symbolizing infinity, has captivated artists such as Escher. Traditional illusions, however, are primarily visual, neglecting modalities like touch and proprioception. This project leverages VR to create an experience where users can "touch" the infinite staircase: despite feeling that their finger is ascending, they find themselves back at the start after one loop, producing a tactile illusion.
- Year: 2024
- Members: Sotaro Yokoi
- Contribution: Unity, C#, Fusion 360
- Link: https://www.iiiexhibition.com/?workId=Tangible_Impossible_Object
- Media: https://partner-web.jp/article/?id=3214
- Detail: https://yokoi-sotaro.com/en/works/tangible-impossible

### Prismatic Diary

People often forget that there are alternative perspectives when interpreting events. We record emotions and thoughts in diaries, but how would we react if those interpretations changed without our noticing? This project offers an experience of discovering alternative interpretations of past events through a diary that evolves in real time. As users write, an AI generates various interpretations, broadening the meaning of past occurrences. Discovering these interpretations transforms the way you view the world into a spectrum of colors.

- Year: 2024
- Members: Sotaro Yokoi, Yutaro Konishi, Isei Li, Haruma Hirabayashi, Tsubasa Yoshida, Ai Tsuzuki
- Contribution: Next.js, Flask, OpenAI API
- Link: https://www.iiiexhibition.com/?workId=prismatic
- Detail: https://yokoi-sotaro.com/en/works/prismatic-diary

### Jaku-in (寂隠)

When acquiring skills, it is crucial not only to mimic the movements of experts but also to understand the intentions behind them. This project introduces a training system called "Jaku-in," which uses depth cameras to record and visualize instructors' body movements in 3D while also tracking eye and hand movements, enabling learners to gain deeper insight into the intentions of experts.

- Year: 2024
- Members: Sotaro Yokoi, Kaishi Amitani, Natsuki Hamanishi, Jun Rekimoto
- Contribution: As a research assistant, responsible for system implementation (Unity, C#, Python), technology selection, and paper writing.
- Link: https://dl.acm.org/doi/10.1145/3681756.3697977
- Detail: https://yokoi-sotaro.com/en/works/jaku-in

### TableMorph

TableMorph combines "redirection" techniques that leverage user illusions with an encounter-type haptic device to expand the range of tactile stimuli that can be presented. A table with a textured surface is mounted on a small robot that moves through physical space, so the real table stays aligned with the user's position relative to the virtual table. In demonstrations, it provided an experience of exploring a maze seemingly larger than reality.

- Year: 2023
- Members: Amane Yamaguchi*, Sotaro Yokoi*, Keigo Matsumoto, Takuji Narumi
- Contribution: Developed the VR system using Unity and C#, managed communication and control systems for Arduino and omni-wheel robots, and contributed to paper writing.
- Link: https://dl.acm.org/doi/10.1145/3610541.3614574
- Detail: https://yokoi-sotaro.com/en/works/tablemorph

### #NUR01 (VJ)

As part of activities with the Keio-based circle "Ongaku Ban Senkai Doukoukai," performed as a VJ at "Next Uni Rhythm," an event gathering DJ circles from the Kanto region.

- Year: 2023
- Members: Sotaro Yokoi
- Contribution: Performed as a VJ using Resolume, creating projection videos and ensuring real-time synchronization with the DJ's music.
- Detail: https://yokoi-sotaro.com/en/works/nur01

### Tuzukuru

With the concept of "Creating Continuity," this website is designed to make it easier for high school students and beginner designers to start and continue learning design. By providing recipe-style tutorials for digital tools and a feedback system for works, it offers an accessible and sustainable learning platform.
- Year: 2023
- Members: Sotaro Yokoi, Atsushi Maruyama, Shunya Wada
- Contribution: Figma, Next.js, logo typography
- Link: https://speakerdeck.com/atsumaru1377/tuzukurujie-shuo-butuku
- Detail: https://yokoi-sotaro.com/en/works/tuzukuru

### Kaigi 2.0

Groupware that lets users freely place objects in AR space and communicate. Developed during the hackathon Yahoo Digital Hackday 2022, it provides a way to visually organize and share ideas: ideas are visualized as bubbles that can be moved and combined, aiding brainstorming.

- Year: 2023
- Members: Sotaro Yokoi, Kento Matsuo, Tanachai Anakewat
- Contribution: AR spatial design, feature implementation, UI/UX design
- Link: https://www.slideshare.net/slideshow/yahoohackdaypdf
- Detail: https://yokoi-sotaro.com/en/works/kaigi2

### Unlimited Corgi

As part of the third-year course "Intelligent Machine Information Exercises," developed a system integrating VR and real-world experiences. Users could touch the endlessly long back of a corgi in VR while a cushion-equipped robot moved according to the user's position, creating the illusion of touching an infinitely long dog's back.

- Year: 2022
- Members: Sotaro Yokoi
- Contribution: Created the VR system using Unity and C#, controlled three TurtleBot3 robots using ROS, and implemented group control with PID tuning. Also fabricated robot enclosures using a lathe for metal processing.
- Detail: https://yokoi-sotaro.com/en/works/unlimited-corgi

### EEIC/S Web

As a member of the circle "designing plus nine," undertook a project to renew The University of Tokyo's EEIC/EEIS website. The project included establishing new typography and a new tone and manner for the design, as well as implementing the CMS.

- Year: 2022
- Members: designing plus nine circle members
- Contribution: Served on the project management team, gathering and managing content for the site and liaising with clients.
  Also conducted interviews with alumni and wrote articles based on their stories.
- Link: https://www.ee.t.u-tokyo.ac.jp/
- Detail: https://yokoi-sotaro.com/en/works/eeic-s

### DP92023KV

As a member of the circle "designing plus nine," created the key visual for the year's activities. Each year, designing plus nine produces visuals inspired by Pantone's Color of the Year; for 2023 the color was "Viva Magenta," symbolizing vitality and strength. Created graphics using photos of magenta-colored vegetables to align with this theme.

- Year: 2022
- Members: Sotaro Yokoi
- Contribution: Developed the concept and performed color adjustments using Photoshop.
- Detail: https://yokoi-sotaro.com/en/works/dp92023

### Amarimeshi (あまりめし)

At Yahoo Digital Hackday, developed a service concept that uses the MONET API, a taxi/car-dispatch API, to provide leftover meals from restaurants and convenience stores at discounted prices. A key feature was building a delivery service on top of driver services by treating restaurants and users as pick-up and drop-off points.

- Year: 2022
- Members: Sotaro Yokoi, Tanachai Anakewat, Yuya Hidaka, Kaisei Kikuchi
- Contribution: Proposed the concept, designed the UI/UX, and developed the frontend using React. Also implemented the delivery service utilizing the MONET API.
- Link: https://hackday.yahoo.co.jp/history/digital2021/
- Detail: https://yokoi-sotaro.com/en/works/amarimeshi

## Publications

### Journal

- Y.-H. Hu, S. Yokoi, Y. Hatada, Y. Hiroi, T. Narumi, and T. Hiraki, "LUIDA: Large-scale Unified Infrastructure for Digital Assessments Based on a Commercial Metaverse Platform," (under review), 2026. (https://arxiv.org/abs/2504.17705)

### Conference

- S. Yokoi, K. Matsumoto, T. Narumi, and H. Kuzuoka, "Effects of the Combination of a Mobile Robot for Haptic Feedback and Hand Redirection on Detection Thresholds," in Proceedings of the Augmented Humans International Conference 2025 (AHs '25), pp. 115–123.
  (https://doi.org/10.1145/3745900.3746068)
- S. Yokoi*, R. Ohara*, K. Murayama, K. Nakano, and T. Narumi, "What Does an Angel's Halo Taste Like? Exploring the Structure of Gustatory Comedy Through a Case Analysis of Ajigiri," IFIP–ICEC 2025, LNCS vol. 16042, Springer, 2025. (*equal contribution) (https://doi.org/10.1007/978-3-032-02555-5_27)
- K. Murayama, S. Noguchi, S. Yokoi, T. Narumi, H. Kuzuoka, and K. Matsumoto, "Modeling Multisensory Integration in Hand Redirection: A Bayesian Causal Inference Framework for Understanding Individual Variability," ACM Symposium on Applied Perception 2025 (SAP '25), Article 13, pp. 1–11, 2025. (https://doi.org/10.1145/3736702.3744358)

### Workshop

- S. Yokoi, K. Matsumoto, and T. Narumi, "Make Impossible Objects Possible in VR by Displaying Distinct 3D Models," VR-HSA 2025 (workshop in conjunction with IEEE VR 2025), 2025.
- S. Yokoi*, S. Tokida*, Y. Hiroi, and T. Hiraki, "Verselyzer: A Statistical Analysis Interface Enabling Non-Experts to Assess Metaverse Branding Effectiveness," 4th Workshop on Seamless Reality (WSR), in conjunction with IEEE VR 2026, 2026. (*equal contribution) (https://yhiroi.github.io/assets/pdf/2026_IEEEVR_WS_KPI_Interface.pdf)
- S. Yokoi, "Toward Embodied AI that Promotes Trust Between Visually Impaired and Sighted Individuals," JST ASPIRE Workshop 2026, 2026. (https://koikelab-team.github.io/aspire-workshop2026-page/)

### Poster

- A. Yamaguchi*, S. Yokoi*, K. Matsumoto, and T. Narumi, "TableMorph: Haptic Experience with Movable Tables and Redirection," ACM SIGGRAPH Asia 2023 Emerging Technologies, Article 19, pp. 1–2, 2023. (*co-first authors, acceptance rate 24.2%) (https://doi.org/10.1145/3610541.3614574)
- S. Yokoi, T. Yoshida, K. Matsumoto, and T. Narumi, "Interactive Impossible Objects: Designing Physical Interaction with Impossible Objects Using Binocular Disparity Adjustment and Redirection," ACM SIGGRAPH 2025 Emerging Technologies, Article 11, pp. 1–2, 2025.
  (https://doi.org/10.1145/3721257.3734030)
- S. Yokoi et al., "Katakko: Embodiment of Modular Robots through Automatic Motion Mapping," ACM SIGGRAPH 2026 Emerging Technologies (accepted).
- S. Yokoi and J. Rekimoto, "MermaidLLM: Dataflow Diagrams for Explainable Skill Formalization and Real-time Support with Multimodal LLMs," in Adjunct Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology (UIST Adjunct '25), Article 103, pp. 1–3, 2025. (https://doi.org/10.1145/3746058.3758449)
- S. Yokoi, K. Amitani, N. Hamanishi, and J. Rekimoto, "Jaku-in: A Cultural Skills Training System for Recording and Reproducing Three-Dimensional Body, Eye, and Hand Movements," ACM SIGGRAPH Asia 2024 Posters, Article 23, pp. 1–2, 2024. (https://doi.org/10.1145/3681756.3697977)

### Domestic Conference

- 横井総太朗, 松本啓吾, 鳴海拓志, "A System for Visualizing Impossible Objects Using Perspective Projection in a Stereoscopic Environment," WISS2024, 2-A02, 2024. (Poster)
- 横井総太朗, 松本啓吾, 鳴海拓志, 葛岡英明, "Effects of Dynamic Haptic Stimuli Using a Mobile Robot on Perceptual Thresholds of Haptic Retargeting," 29th Annual Conference of the Virtual Reality Society of Japan, 1A-11, 2024.
- 中野萌士, 松倉悠, 谷川智洋, 横井総太朗, 大原嶺, 村山皓平, 柳田康幸, 鳴海拓志, 和田有史, 大野雅貴, "Creating the Taste of an Angel's Halo: The Future of Gustatory Interfaces Seen Through Content," 30th Annual Conference of the Virtual Reality Society of Japan, OS1H5, September 2025. (Organized session)
- 横井総太朗*, 時田聡実*, 廣井裕一, 平木剛史, "Development and Evaluation of a KPI Design Support Interface for Promoting Real-World Branding Effects in Metaverse Spaces," 30th Annual Conference of the Virtual Reality Society of Japan, 3C1-07, September 2025. (*equal contribution)
- 横井総太朗, 大原嶺, 村山皓平, 中野萌士, 鳴海拓志, "A Study Toward Realizing Gustatory Comedy Through Video Analysis of Ajigiri," 34th Meeting of the Research Group on Scent, Taste, and Biological Information (SBR/NSI joint meeting), The University of Tokyo, February 2025.
- 横井総太朗, 松本啓吾, 鳴海拓志, "A VR Experience with Interactive Impossible Objects," 19th Illusion Workshop "Elucidation, Modeling, and Artistic Realization of Illusions and Their Applications," Meiji University Nakano Campus, March 2025. (https://cmma.mims.meiji.ac.jp/events/jointresearch_seminars/index_2024.html)

### Contributed Articles

- 横井総太朗, "IEEE VR 2025" Participation Report, Virtual Reality Society of Japan Newsletter, Vol. 30, No. 4, April 2025. (https://vrsj.org/report/13127/)

### Exhibitions

- 横井総太朗, "A Robot That Translates Body Movements Between Visually Impaired and Sighted People," at ちょっと先のおもしろい未来 —CHANGE TOMORROW—, Tokyo Portcity Takeshiba, November 2–3, 2025.
- 横井総太朗, ウチダキョウカ, "At the Intersection (交差点にて)," Tomigaya, August 2025.
- 横井総太朗, "Tangible Impossible Object (触れる不可能立体)," The University of Tokyo iii Exhibition Extra 2024 "付いて離れて," November 2024.
  (https://www.iiiexhibition.com/)
- 横井総太朗, 小西優多郎, 李伊婧, 平林晴馬, 吉田翼, 都築あい, "Prismatic Diary," The University of Tokyo iii Exhibition 2024 "付いて離れて," November 2024. (https://www.iiiexhibition.com/)
- 横井総太朗, 小西優多郎, 李伊婧, 平林晴馬, 吉田翼, 都築あい, "Prismatic Diary," The University of Tokyo iii Exhibition Extra 2024 "なにいう展," July 2024. (https://iii-exhibition-2024-web.vercel.app///?workId=3)

### Awards

- Best Presentation Award, IEEE VR Workshop VR-HSA 2025, Jan 2025.
- Demo Presentation Award (program committee nomination), WISS2024, December 2024.

### Grants and selected programs

- Selected for the IPA MITOU Program 2025 (acceptance rate 9.2%) — "A development platform for modular social robots for self-expression" (co-creators: Naoki Shitanda, Kento Matsuo)
- Murata Science Foundation Research Grant, Interdisciplinary Field 2025 (acceptance rate 7%) — "Designing and Evaluating Human-AI Teaming" (sole Master's-level recipient)

### Scholarships

- Selected for The University of Tokyo SPRING GX (Green Transformation Overseas Dispatch Program)
- Selected for the IIW: International Graduate Program of Innovation for Intelligent World, May 2024.
- Keyence Foundation 2023 "Ganbare! Japanese University Students" scholarship, June 2023.

### Media coverage

- Sony XYN Web Portal Site, "SIGGRAPH 2025 | Solutions from Sony That Innovate the Production Process," September 17, 2025. — The joint exhibit "Interactive Impossible Objects" by The University of Tokyo and the University of Tsukuba was featured as a use case of the Spatial Reality Display. (https://xyn.sony.net/ja/news/20250912)

## Links

- Top: https://yokoi-sotaro.com/en
- About: https://yokoi-sotaro.com/en/about
- Works: https://yokoi-sotaro.com/en/works
- Publications: https://yokoi-sotaro.com/en/publication
- Contact: https://yokoi-sotaro.com/en/contact
- Sitemap: https://yokoi-sotaro.com/sitemap.xml
- Japanese version: https://yokoi-sotaro.com/ja