I am a third-year PhD student at the University of Tokyo, advised by Prof. Jun Rekimoto. My research interests center on voice user interaction, silent speech recognition, and multimodal input technologies. My background spans electrical and electronics engineering (B.S., four years at UESTC) and applied computer science (M.S., two years at the University of Tokyo). Since September 2023, I have been a visiting researcher at HiLab, UCLA, under the supervision of Prof. Yang Zhang. In summer 2024, I started a Research Scientist internship at Meta Reality Labs, working on wearable interactions in augmented reality.


Email: zxsu [at]

Address: Room A102, Daiwa Ubiquitous Computing Research Building, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-8654, Japan.


Conference Publications

Watch Your Mouth: Silent Speech Recognition with Depth Sensing

 CHI '24  CHI Conference on Human Factors in Computing Systems  2024

🏅 Best Paper Honorable Mention Award, top 5%

Xue Wang, Zixiong Su, Jun Rekimoto, Yang Zhang



LipLearner: Customizable Silent Speech Interactions on Mobile Devices

 CHI '23  CHI Conference on Human Factors in Computing Systems  2023

🏆 Best Paper Award, top 1%

Zixiong Su, Shitao Fang, Jun Rekimoto


[GitHub] (Source code)


SSR7000: A Synchronized Corpus of Ultrasound Tongue Imaging for End-to-End Silent Speech Recognition

 LREC '22  The International Conference on Language Resources and Evaluation 2022

Naoki Kimura, Zixiong Su, Takaaki Saeki, and Jun Rekimoto

[GitHub] (Dataset)


Aware: Intuitive Device Activation Using Prosody for Natural Voice Interactions

 CHI '22  CHI Conference on Human Factors in Computing Systems 2022

Zixiong Su*, Xinlei Zhang*, and Jun Rekimoto (*equal contribution)



[GitHub] (Dataset)


SilentSpeller: Towards mobile, hands-free, silent speech text entry using electropalatography

 CHI '22  CHI Conference on Human Factors in Computing Systems 2022

Naoki Kimura, Tan Gemicioglu, Jonathan Womack, Richard Li, Yuhui Zhao, Abdelkareem Bedri, Zixiong Su, Alex Olwal, Jun Rekimoto, and Thad Starner. 




Gaze+Lip: Rapid, Expressive Interactions Combining Gaze Input and Silent Speech Commands for Hands-free Smart TV Control

 ETRA '21  ACM Symposium on Eye Tracking Research & Applications 2021

Zixiong Su, Xinlei Zhang, Naoki Kimura, and Jun Rekimoto



Posters & Workshops

Customizing Silent Speech Commands from Voice Input using One-shot Lipreading

 UIST '22 Poster  Adjunct Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology 2022

Zixiong Su, Shitao Fang, Jun Rekimoto


End-to-End Deep Learning Speech Recognition Model for Silent Speech Challenge

 INTERSPEECH '20  21st Annual Conference of the International Speech Communication Association 2020

Naoki Kimura, Zixiong Su, and Takaaki Saeki

[GitHub] (Source code)


Awards & Honors

Apr. 2024

Sep. 2023

Aug. 2023

Apr. 2023

Mar. 2023

June 2022 - Mar. 2023

Apr. 2022

Apr. 2022 - Mar. 2025

Mar. 2022

Oct. 2020 - Mar. 2025

Oct. 2020

July 2019

Oct. 2017 - Sep. 2018


Research Projects

Research on ECoG-based brain-computer communication in the Moonshot Research and Development Program

Research on 3D mesh authoring using natural language

Development of an indoor autonomous driving robot

Jan. 2023 - Present

Apr. 2022 - Present

Nov. 2021 - Nov. 2022

July 2020 - July 2021