Eric Wang

(Cheng-Yao Wang)

I’m Eric Wang, a Ph.D. student in Information Science at Cornell University interested in Human-Computer Interaction, Computer-Supported Cooperative Work and Virtual/Mixed Reality.

About me

I’m Eric Wang, a Ph.D. student in Information Science at Cornell University advised by Prof. Andrea Stevenson Won. Previously, I was a research intern at Stanford University, primarily working with Prof. James Landay. I hold a B.S. and an M.S. in Computer Science from National Taiwan University, where I was advised by Prof. Mike Y. Chen and Prof. Bing-Yu Chen.

My research interests lie at the intersection of Human-Computer Interaction (HCI), Computer-Supported Cooperative Work (CSCW), and Virtual/Mixed Reality (VR/MR). My current research focuses on transforming the way people share knowledge and experiences beyond photo and video recordings. More specifically, I’m interested in (1) utilizing deep learning, computer vision, and VR to let people share experiences by socially reliving them in VR; (2) leveraging this time-machine-like VR reliving experience to support better remote collaboration and learning; and (3) designing privacy-preserving mechanisms so that content can be shared in VR without privacy concerns.

Education

  • Ph.D. in Information Science, Cornell University (Sep '16 – Present)
  • M.S. in Computer Science, National Taiwan University (Sep '12 – Jun '14)
  • B.S. in Computer Science, National Taiwan University (Feb '06 – Aug '10)

Experience

  • Research Assistant, VR as a Teaching Tool for Moon Phases and Beyond, Cornell University (Fall 2019)
  • Teaching Assistant, Introduction to Computing Using MATLAB, Cornell University (Fall 2018)
  • Research Assistant, Remote Touch for Telepresence Robots, Cornell University (Spring 2018)
  • Teaching Assistant, Crowdsourcing and Human Computation, Cornell University (Fall 2017)
  • Teaching Assistant, Introduction to Computing Using MATLAB, Cornell University (Spring 2017)
  • Teaching Assistant, Data Structures and Functional Programming, Cornell University (Fall 2016)
  • Research Intern, Human-Drone Interaction project, Stanford University (Apr '15 – Sep '15)
  • Teaching Assistant, Mobile HCI, National Taiwan University (Feb '14 – Jun '14)


Transforming how people share knowledge and experiences

projects

Again, Together: Socially Reliving Virtual Reality Experiences When Separated

Cheng-Yao Wang, Mose Sakashita, Jingjin Li, Upol Ehsan, and Andrea Stevenson Won

Under Submission

We describe ReliveInVR, a new time-machine-like method for sharing VR experiences. ReliveInVR allows multiple users to immerse themselves in a relived experience together while independently viewing it from any perspective. We conducted a within-subjects study with 26 dyads comparing three conditions: ReliveInVR, co-watching 360-degree videos on a desktop, and co-watching 360-degree videos in VR. Participants in ReliveInVR reported higher immersion and social presence, understood the shared experience better, discovered previously unnoticed details together, and found the sharing experience more fulfilling.

IEEE VR 2019
ReliveInVR: Capturing and Reliving Virtual Reality Experiences Together

Cheng-Yao Wang, Mose Sakashita, Upol Ehsan, Jingjin Li, and Andrea Stevenson Won

Published in IEEE VR 2019 (poster)

We present a new way of sharing VR experiences over distance that allows people to relive a recorded experience in VR together. We describe a pilot study examining the user experience when people remotely relive a shared VR experience together. Finally, we discuss the implications for sharing VR experiences across time and space.

IEEE VR 2019
VR-Replay: Capturing and Replaying Avatars in VR for Asynchronous 3D Collaborative Design

Cheng-Yao Wang, Logan Drumm, Christopher Troup, Yingjie Ding, and Andrea Stevenson Won

Published in IEEE VR 2019 (poster)

Distributed teams rely on asynchronous computer-mediated communication (CMC) tools to complete collaborative tasks because scheduling synchronous communication is difficult and costly. In this paper, we present VR-Replay, a new communication tool that records and replays avatars, with both nonverbal behavior and verbal communication, for asynchronous collaboration in VR.
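
The underlying idea can be sketched in a few lines of Unity C#: record a tracked transform's pose every frame, then drive an avatar joint from that recording during playback. This is a minimal illustration, not the actual VR-Replay implementation; the class and field names below are hypothetical.

    using System.Collections.Generic;
    using UnityEngine;

    // Hypothetical sketch: record a tracked transform's pose each frame,
    // then drive an avatar joint from the recording during playback.
    public class AvatarPoseRecorder : MonoBehaviour
    {
        public Transform source;        // e.g., the tracked head or hand
        public Transform replayTarget;  // the avatar joint driven during playback
        public bool recording;
        public bool replaying;

        private struct PoseSample
        {
            public float time;
            public Vector3 position;
            public Quaternion rotation;
        }

        private readonly List<PoseSample> samples = new List<PoseSample>();
        private float clock;
        private int index;

        void Update()
        {
            if (recording)
            {
                clock += Time.deltaTime;
                samples.Add(new PoseSample { time = clock, position = source.position, rotation = source.rotation });
            }
            else if (replaying && samples.Count > 0)
            {
                clock += Time.deltaTime;
                // Advance to the latest sample whose timestamp has passed.
                while (index < samples.Count - 1 && samples[index + 1].time <= clock)
                    index++;
                replayTarget.SetPositionAndRotation(samples[index].position, samples[index].rotation);
            }
        }

        public void StartRecording() { samples.Clear(); clock = 0f; recording = true; replaying = false; }
        public void StartReplay()    { clock = 0f; index = 0; replaying = true; recording = false; }
    }

Presumably the full system captures samples like these for every tracked joint and synchronizes them with recorded audio, so that nonverbal behavior and speech replay together.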

IEEE HRI 2019
Drone.io: A Gestural and Visual Interface for Human-Drone Interaction

Jessica R. Cauchard, Alex Tamkin, Cheng-Yao Wang, Luke Vink, Michelle Park, Tommy Fang, and James A. Landay

Published in HRI 2019 (full paper)

We introduce drone.io, a projected body-centric graphical user interface for human-drone interaction. Using two simple gestures, users can interact with a drone in a natural manner. drone.io is the first human-drone graphical user interface embedded on a drone to provide both input and output capabilities. This paper describes the design process of drone.io. We report drone.io's evaluation in three user studies (N=27) and show that people were able to use the interface with little prior training.

CHI 2018
RoMA: Interactive Fabrication with Augmented Reality and a Robotic 3D Printer

Huaishu Peng, Cheng-Yao Wang*, Jimmy Briggs*, Kevin Guo, Joseph Kider, Stefanie Mueller, Patrick Baudisch, François Guimbretière. (*equal contribution)

Published in CHI 2018 (full paper)

We present the Robotic Modeling Assistant (RoMA), an interactive fabrication system that provides a fast, precise, hands-on, in-situ modeling experience by combining an augmented reality CAD editor with a robotic-arm 3D printer. With RoMA, users can rapidly integrate real-world constraints into a design, allowing them to create well-proportioned tangible artifacts. Users can even design directly on and around an existing object, extending the artifact through in-situ fabrication.

CHI 2017
Teaching Programming with Gamified Semantics

Ian Arawjo, Cheng-Yao Wang, Andrew C. Myers, Erik Andersen, and François Guimbretière

Published in CHI 2017 (full paper)

We present Reduct, an educational game embodying a new, comprehension-first approach to teaching novices core programming concepts, including functions, Booleans, equality, conditionals, and mapping functions over sets. In this teaching strategy, the player executes code using reduction-based operational semantics. During gameplay, code representations fade from concrete, block-based graphics to the actual syntax of JavaScript ES2015. Our study shows that, after a short time playing Reduct, novices demonstrated promising learning of core concepts expressed in actual JavaScript code.

MobileHCI 2015
PalmType: Using Palms as Keyboards for Smart Glasses

Cheng-Yao Wang, Wei-Chen Chu, Po-Tsung Chiu, Min-Chieh Hsiu, Yih-Harn Chiang, Mike Y. Chen

Published in MobileHCI 2015 (full paper)

We present PalmType, which uses the palm as an interactive keyboard for smart wearable displays such as Google Glass. PalmType leverages users' innate ability to pinpoint specific areas of their palms and fingers without visual attention (i.e., proprioception) and provides visual feedback via the wearable display. With wrist-worn sensors and a wearable display, PalmType enables typing without requiring users to hold any device or look at their hands. We conducted design sessions with six participants to see how users map the QWERTY layout onto their hands based on proprioception.
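
At its core, the typing step reduces to mapping a sensed touch location on the palm to a QWERTY key. The C# sketch below illustrates that lookup with an evenly divided three-row grid; the row boundaries and normalized coordinates are illustrative assumptions, not PalmType's user-calibrated layout.

    using System;

    // Illustrative sketch: map a normalized (x, y) touch point on the palm
    // to a QWERTY key using three horizontal rows. The even grid is an
    // assumption, not PalmType's actual per-user calibration.
    public static class PalmKeyboard
    {
        private static readonly string[] Rows = { "qwertyuiop", "asdfghjkl", "zxcvbnm" };

        // x and y are normalized to [0, 1] across the palm area
        // detected by the wrist-worn sensor.
        public static char KeyAt(float x, float y)
        {
            int row = Math.Min((int)(y * Rows.Length), Rows.Length - 1);
            string keys = Rows[row];
            int col = Math.Min((int)(x * keys.Length), keys.Length - 1);
            return keys[col];
        }
    }

    // Example: a touch near the top-left of the palm area maps to 'q'.
    // Console.WriteLine(PalmKeyboard.KeyAt(0.03f, 0.1f));  // 'q'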

MobileHCI 2015
PalmGesture: Using Palms as Gesture Interfaces for Eyes-free Input

Cheng-Yao Wang, Min-Chieh Hsiu, Po-Tsung Chiu, Chiao-Hui Chang, Liwei Chan, Bing-Yu Chen, Mike Y. Chen

Published in MobileHCI 2015 (full paper)

With abundant tactile cues and proprioception, the palm can serve as an interface for eyes-free input, reducing the visual attention and cognitive/physical effort that interfaces demand. We explored eyes-free gesture interaction on the palm, enabling users to interact with devices by drawing stroke gestures on their palms without looking at them. To understand how users draw gestures on their palms and how those gestures are affected by palm characteristics, we conducted two 24-participant user studies. We also implemented EyeWrist, which turns the palm into a gesture interface by embedding a micro-camera and an IR laser line generator in a wristband, and we propose three interaction techniques that take advantage of palm characteristics.

CHI 2014
EverTutor: Automatically Creating Interactive Guided Tutorials on Smartphones by User Demonstration

Cheng-Yao Wang, Wei-Chen Chu, Hou-Ren Chen, Chun-Yen Hsu, Mike Y. Chen

Published in CHI 2014 (full paper)

We present EverTutor, which automatically generates interactive tutorials on smartphones from user demonstrations. It simplifies tutorial creation, provides tutorial users with contextual step-by-step guidance, and avoids the frequent context switching between tutorials and users' primary tasks. To generate tutorials automatically, EverTutor records low-level touch events to detect gestures and identify on-screen targets. When a tutorial is browsed, the system uses vision-based techniques to locate the target regions and contextually overlays the corresponding input prompts. It also checks the correctness of users' interactions to guide them step by step.
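
As a rough illustration of the recording side, the C# sketch below classifies a sequence of raw touch points into a tap or a directional swipe. The thresholds are arbitrary assumptions for illustration, not the values EverTutor uses.

    using System;
    using System.Collections.Generic;

    // Illustrative sketch: turn a trace of low-level touch events into a
    // simple gesture label (tap vs. directional swipe).
    public record TouchPoint(float X, float Y, double TimeMs);

    public static class GestureDetector
    {
        public static string Classify(IReadOnlyList<TouchPoint> trace)
        {
            if (trace.Count < 2) return "tap";

            var first = trace[0];
            var last = trace[trace.Count - 1];
            float dx = last.X - first.X;
            float dy = last.Y - first.Y;
            double distance = Math.Sqrt(dx * dx + dy * dy);
            double durationMs = last.TimeMs - first.TimeMs;

            // Short, nearly stationary contact -> tap; otherwise a directional swipe.
            if (distance < 20 && durationMs < 300) return "tap";
            return Math.Abs(dx) > Math.Abs(dy)
                ? (dx > 0 ? "swipe right" : "swipe left")
                : (dy > 0 ? "swipe down" : "swipe up");
        }
    }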

MM 2013
RealSense: Directional Interaction for Proximate Mobile Sharing

Chien-Pang Lin, Cheng-Yao Wang, Hou-Ren Chen, Wei-Chen Chu, Mike Y. Chen

Published in MM 2013 (short paper)

We present RealSense, a technology that enables users to easily share media files with nearby users by detecting each other's relative direction using only the built-in orientation sensors on smartphones. On the premise that users are arranged in a circle and each user faces its center, RealSense continuously collects each phone's compass heading to calculate each user's virtual position in real time during sharing.
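
The geometric premise can be sketched in a few lines of C#: if a user faces the circle's center, the phone's compass heading points from that user toward the center, so the user sits on the opposite side of the circle. The coordinate convention and radius below are illustrative assumptions, not RealSense's implementation.

    using System;

    // Illustrative sketch: users stand on a circle and face its center, so a
    // phone's compass heading tells us where its owner sits on that circle.
    // Coordinates are (east, north) in arbitrary units.
    public static class CircleLayout
    {
        // headingDeg: compass heading in degrees, clockwise from north.
        public static (double East, double North) PositionOnCircle(double headingDeg, double radius = 1.0)
        {
            double rad = headingDeg * Math.PI / 180.0;
            // The user looks toward the center, so their seat lies on the
            // opposite side of the center from where the phone points.
            return (-radius * Math.Sin(rad), -radius * Math.Cos(rad));
        }
    }

    // Example: a phone pointing due north (heading 0) belongs to the user
    // on the south side of the circle: (0, -1).

Once every user's position is estimated this way, the relative direction from one user to another follows from simple vector subtraction.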

EyeWrist: Enabling Gesture-Based Interaction on Palm with a Wrist-Worn Sensor

Cheng-Yao Wang, Po-Tsung Chiu, Min-Chieh Hsiu, Chiao-Hui Chang, Mike Y. Chen

Excellent Work, The 8th Acer Long-Term Smile Innovation Contest, 2014

We present EyeWrist, which uses the palm as a gesture interface for smart wearable displays such as Google Glass. With abundant tactile cues and proprioception on the palm, EyeWrist can also serve as a device-less, eyes-free remote for smart TVs. EyeWrist embeds a micro-camera and an IR laser line generator in a wristband and uses computer vision algorithms to calculate the finger's position on the palm. Because it does not require a line of sight to the user's fingertip, the camera can sit lower, making the whole device more portable. We also implemented a gesture recognizer that distinguishes symbols, letters, and touchscreen gestures (e.g., swipe, pinch) drawn on the palm; recognition results are sent to smart devices via Wi-Fi for gesture-based interaction.

EyeWatch: Touch Interaction on Back of the Hand for Smart Watches

Cheng-Yao Wang, Po-Tsung Chiu, Min-Chieh Hsiu

2nd Prize, MobileHero – user experience design competition, 2014

We present EyeWatch, which uses the back of the hand as a gesture interface for smartwatches. EyeWatch not only overcomes key smartwatch input problems (occlusion and the fat-finger problem) but also enables more powerful and natural interactions, such as quickly drawing a symbol to open an application or handwriting on the back of the hand to input a message. Our proof-of-concept implementation consists of a micro-camera and an IR laser line generator on the smartwatch, with computer vision algorithms used to calculate the finger's position on the back of the hand.

The Incredible Shrinking Adventure

Cheng-Yao Wang, Min-Chieh Hsiu, Chin-Yu Chien, Shuo Yang

Final Shortlist, ACM UIST 2014 Student Innovation Contest

Imagine being shrunk: you could explore your now-giant house and play with giant pets and family members. To realize this fantasy and give users an incredible shrinking adventure, we combine a robotic car with a Google Cardboard, which turns a smartphone into a VR headset. We built a robotic car and attached a smartphone to its pan/tilt servo bracket; stereo images generated from that smartphone's camera are streamed to a second smartphone inside the Google Cardboard. Seeing the world from the robotic car's viewpoint makes users feel as if they have been shrunk.

my skills

Computer Science

  • C#, Python, C/C++, Java, JavaScript
  • Deep learning techniques for computer vision
  • Human digitization, 3D human pose estimation, 3D reconstruction

Virtual Reality

  • Expertise in Unity3D development
  • Solid experience developing VR projects
  • 3D avatar creation, rigging, and animation

Research Design

  • Mixed-method research design
  • Quantitative analysis using R
  • Questionnaire design and qualitative analysis

publications

ReliveInVR: Capturing and Reliving Virtual Reality Experiences Together

IEEE VR 2019, Poster

Cheng-Yao Wang, Mose Sakashita, Upol Ehsan, Jingjin Li, and Andrea Stevenson Won

VR-Replay: Capturing and Replaying Avatars in VR for Asynchronous 3D Collaborative Design

IEEE VR 2019, Poster

Cheng-Yao Wang, Logan Drumm, Christopher Troup, Yingjie Ding, and Andrea Stevenson Won

Drone.io: A Gestural and Visual Interface for Human-Drone Interaction

IEEE HRI 2019, Full Paper

Jessica R. Cauchard, Alex Tamkin, Cheng-Yao Wang, Luke Vink, Michelle Park, Tommy Fang, and James A. Landay

RoMA: Interactive Fabrication with Augmented Reality and a Robotic 3D Printer

ACM CHI 2018, Full Paper

Huaishu Peng, Cheng-Yao Wang*, Jimmy Briggs*, Kevin Guo, Joseph Kider, Stefanie Mueller, Patrick Baudisch, François Guimbretière. (*equal contribution)

Teaching Programming with Gamified Semantics

ACM CHI 2017, Full Paper

Ian Arawjo, Cheng-Yao Wang, Andrew C. Myers, Erik Andersen, and François Guimbretière

PalmType: Using Palms as Keyboards for Smart Glasses

ACM MobileHCI 2015, Full Paper

Cheng-Yao Wang, Wei-Chen Chu, Po-Tsung Chiu, Min-Chieh Hsiu, Yih-Harn Chiang, Mike Y. Chen

PalmGesture: Using Palms as Gesture Interfaces for Eyes-free Input

ACM MobileHCI 2015, Full Paper

Cheng-Yao Wang, Min-Chieh Hsiu, Po-Tsung Chiu, Chiao-Hui Chang, Liwei Chan, Bing-Yu Chen, Mike Y. Chen

EverTutor: Automatically Creating Interactive Guided Tutorials on Smartphones by User Demonstration

ACM CHI 2014, Full Paper

Cheng-Yao Wang, Wei-Chen Chu, Hou-Ren Chen, Chun-Yen Hsu, Mike Y. Chen

RealSense: Directional Interaction for Proximate Mobile Sharing Using Built-in Orientation Sensors

ACM MM 2013, Short Paper

Chien-Pang Lin, Cheng-Yao Wang, Hou-Ren Chen, Wei-Chen Chu, Mike Y. Chen

awards

ReliveInVR: Capturing and Reliving Virtual Reality Experiences Together

Best Poster Honorable Mention, IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019

Selected Poster, The Cornell CIS 20th Anniversary Reception, 2019

EverTutor: Automatically Creating Interactive Guided Tutorials on Smartphones by User Demonstration

1st Prize, The 11th Deep Innovations with Impact, National Taiwan University, 2013

1st Prize, The 11th Y.S. National Innovation Software Application Contest, 2014

MagicWrist - Connect the world

Excellent Work, The 8th Acer Long-Term Smile Innovation Contest, 2014

The Incredible Shrinking Adventure

Final Shortlist, ACM UIST 2014 Student Innovation Contest

EyeWatch: Touch Interaction on Back of the Hand for Smart Watches

2nd Prize, MobileHero – user experience design competition, 2014

EyeWrist - Enabling Gesture-Based Interaction on Palm with a Wrist-Worn Sensor

Final Shortlist, MediaTek Wearable Device into IoT World Competition, 2014