Eric Wang

(Cheng-Yao Wang)

I’m Eric Wang, a Ph.D. student in Computer Science at Cornell University interested in Human-Computer Interaction, Virtual Reality and Augmented Reality.

About me

I’m Eric Wang, a Ph.D. student in Computer Science at Cornell University, advised by Prof. Andrea Won and Prof. François Guimbretière. Previously, I was a research intern at Stanford University, working primarily with Prof. James Landay. I hold a B.S. and an M.S. in Computer Science from National Taiwan University, where I was advised by Prof. Mike Y. Chen and Prof. Bing-Yu Chen.

My current research interest is Human-Computer Interaction, especially in Virtual Reality (VR) and Augmented Reality (AR). More specifically, I am interested in using VR and AR technologies to support better human-human, human-computer, and human-robot interactions in different scenarios. Recently, I developed a prototype system that allows people to capture the details of a cherished memory and fully recreate it so they can re-experience it with their friends afterward. Unlike viewing photos or videos, it provides a time-machine-like experience that brings people back into the memory, where they have full agency to view the experience from different perspectives, to pause, slow down, or rewind it, or even to intervene in the past. I have started two projects to explore the potential of “Reliving Experiences in VR” in two research areas: Social Interaction and Technology-Mediated Memory in VR (with Prof. Andrea Won), and Computer-Mediated Communication (with Prof. Susan Fussell).

On the AR side, I have been working with Dr. Huaishu Peng and Prof. François Guimbretière on RoMA: Interactive Fabrication with Augmented Reality and a Robotic 3D Printer, published at CHI 2018. I believe AR technologies offer new opportunities for robots to communicate their intentions and the environments they share with human teammates. In the future, I therefore plan to explore whether people can build better trust in robot partners by visualizing explanations of the robots' behavior in AR.

Education

  • Ph.D. in CS, Cornell University (Sep '16 – Present)
  • M.S. in CS, National Taiwan University (Sep '12 – Jun '14)
  • B.S. in CS, National Taiwan University (Feb '06 – Aug '10)

Experience

  • Research Assistant, Remote Touch for Telepresence Robots project, Cornell University (Spring 2018)
  • Teaching Assistant, CS 5306 - Crowdsourcing and Human Computation, Cornell University (Fall 2017)
  • Teaching Assistant, CS 1112 - Introduction to Computing Using MATLAB, Cornell University (Spring 2017)
  • Teaching Assistant, CS 3110 - Data Structures and Functional Programming, Cornell University (Fall 2016)
  • Research Intern, Human-Drone Interaction project, Stanford University (Apr '15 – Sep '15)
  • Teaching Assistant, Mobile HCI, National Taiwan University (Feb '14 – Jun '14)

Download CV

Novel technologies published at top conferences and awarded first prizes in competitions

projects

On-going
Capturing and Reliving Experience in VR

Cheng-Yao Wang

We present a prototype system that allows people to capture and relive memories in VR. While reliving an experience with our prototype, people have full agency to view it from different perspectives and to pause, slow down, or rewind it. Building on this prototype, we plan to design and conduct two user studies to explore two research questions. RQ1: How does reliving memories in VR affect our emotions? RQ2: What social interactions occur when people relive an experience with others?
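
To illustrate the replay mechanics described above, here is a minimal, self-contained C# sketch, not the prototype's actual Unity implementation, of how timestamped pose samples recorded during capture could be played back with pause, slow motion, and rewind by scrubbing a playback clock. All names (PoseSample, ReplayClock, Speed) are hypothetical.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: replaying recorded pose samples with pause/slow/rewind.
struct PoseSample
{
    public double Time;                          // seconds since the recording started
    public (double X, double Y, double Z) Position;
}

class ReplayClock
{
    private readonly List<PoseSample> samples;   // assumed sorted by Time
    private double playhead;                     // current replay time in seconds
    public double Speed = 1.0;                   // 1 = real time, 0.5 = slow motion, -1 = rewind, 0 = pause

    public ReplayClock(List<PoseSample> recorded) { samples = recorded; }

    // Advance (or rewind) the playhead by one frame of wall-clock time.
    public PoseSample Step(double deltaSeconds)
    {
        playhead += deltaSeconds * Speed;
        playhead = Math.Clamp(playhead, 0, samples[^1].Time);
        return SampleAt(playhead);
    }

    // Linearly interpolate between the two recorded samples surrounding time t.
    private PoseSample SampleAt(double t)
    {
        int i = samples.FindIndex(s => s.Time >= t);
        if (i <= 0) return samples[Math.Max(i, 0)];
        var a = samples[i - 1];
        var b = samples[i];
        double u = (t - a.Time) / (b.Time - a.Time);
        return new PoseSample
        {
            Time = t,
            Position = (a.Position.X + u * (b.Position.X - a.Position.X),
                        a.Position.Y + u * (b.Position.Y - a.Position.Y),
                        a.Position.Z + u * (b.Position.Z - a.Position.Z))
        };
    }
}
```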

CHI 2018
RoMA: Interactive Fabrication with Augmented Reality and a Robotic 3D Printer

Huaishu Peng, Cheng-Yao Wang*, Jimmy Briggs*, Kevin Guo, Joseph Kider, Stefanie Mueller, Patrick Baudisch, François Guimbretière. (*equal contribution)

Published in CHI 2018 (full paper)

We present the Robotic Modeling Assistant (RoMA), an interactive fabrication system that provides a fast, precise, hands-on, and in-situ modeling experience with an augmented reality CAD editor and a robotic-arm 3D printer. With RoMA, users can rapidly integrate real-world constraints into a design, allowing them to create well-proportioned tangible artifacts. Users can even design directly on and around an existing object, extending the artifact through in-situ fabrication.

CHI 2017
Teaching Programming with Gamified Semantics

Ian Arawjo, Cheng-Yao Wang, Andrew C. Myers, Erik Andersen, and François Guimbretière

Published in CHI 2017 (full paper)

We present Reduct, an educational game embodying a new, comprehension-first approach to teaching novices core programming concepts, including functions, Booleans, equality, conditionals, and mapping functions over sets. In this teaching strategy, the player executes code using reduction-based operational semantics. During gameplay, code representations fade from concrete, block-based graphics to the actual syntax of JavaScript ES2015. Our study results show that novices demonstrated promising learning of core concepts expressed in actual JavaScript code within a short timeframe.

MobileHCI 2015
PalmType: Using Palms as Keyboards for Smart Glasses

Cheng-Yao Wang, Wei-Chen Chu, Po-Tsung Chiu, Min-Chieh Hsiu, Yih-Harn Chiang, Mike Y. Chen

Published in MobileHCI 2015 (full paper)

We present PalmType, which uses the palm as an interactive keyboard for smart wearable displays such as Google Glass. PalmType leverages users' innate ability to pinpoint a specific area of the palm and fingers without visual attention (i.e., proprioception) and provides visual feedback via the wearable display. With wrist-worn sensors and wearable displays, PalmType enables typing without requiring users to hold any device or look at their hands. We conducted design sessions with 6 participants to see how users map the QWERTY layout onto their hands based on proprioception.
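
As a rough illustration of the key-lookup step, not PalmType's actual implementation, the sketch below maps a sensed touch position on the palm to the nearest key of a palm-mapped QWERTY layout. The KeyRegion type, normalized [0,1] coordinates, and nearest-center rule are all assumptions for illustration.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: mapping a sensed touch position on the palm to a key.
// Key regions would come from a user-defined QWERTY layout on the palm;
// coordinates are assumed normalized to [0,1] across the palm surface.
record KeyRegion(char Key, double CenterX, double CenterY);

static class PalmKeyboard
{
    // Picks the key whose region center is closest to the touch point.
    public static char Lookup(double touchX, double touchY, IEnumerable<KeyRegion> layout)
    {
        char best = '?';
        double bestDistance = double.MaxValue;
        foreach (var k in layout)
        {
            double dx = touchX - k.CenterX;
            double dy = touchY - k.CenterY;
            double d = dx * dx + dy * dy;
            if (d < bestDistance) { bestDistance = d; best = k.Key; }
        }
        return best;
    }
}
```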

MobileHCI 2015
PalmGesture: Using Palms as Gesture Interfaces for Eyes-free Input

Cheng-Yao Wang, Min-Chieh Hsiu, Po-Tsung Chiu, Chiao-Hui Chang, Liwei Chan, Bing-Yu Chen, Mike Y. Chen

Published in MobileHCI 2015 (full paper)

With abundant tactile cues and proprioception, the palm can be leveraged as an interface for eyes-free input, reducing visual attention to interfaces and minimizing cognitive and physical effort. We explored eyes-free gesture interactions that enable users to interact with devices by drawing stroke gestures on their palms without looking at them. To understand how users draw gestures on their palms and how those gestures are affected by palm characteristics, we conducted two 24-participant user studies. We also implemented EyeWrist, which turns the palm into a gesture interface by embedding a micro-camera and an IR laser line generator in a wristband, and proposed three interaction techniques that take advantage of palm characteristics.

CHI 2014
EverTutor: Automatically Creating Interactive Guided Tutorials on Smartphones by User Demonstration

Cheng-Yao Wang, Wei-Chen Chu, Hou-Ren Chen, Chun-Yen Hsu, Mike Y. Chen

Published in CHI 2014 (full paper)

We present EverTutor, which automatically generates interactive tutorials on smartphones from user demonstration. It simplifies tutorial creation, provides tutorial users with contextual step-by-step guidance, and avoids frequent context switching between tutorials and users' primary tasks. To generate tutorials automatically, EverTutor records low-level touch events to detect gestures and identify on-screen targets. When a tutorial is browsed, the system uses vision-based techniques to locate the target regions and contextually overlays the corresponding input prompt. It also checks the correctness of the user's interaction to guide them step by step.
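
To give a flavor of the gesture-detection step, here is a minimal C# sketch, not EverTutor's implementation, that classifies one recorded finger-down-to-finger-up sequence of touch points into a tap, long press, or swipe by thresholding duration and displacement. The thresholds and type names are hypothetical.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: classifying a recorded stream of touch points
// (one finger-down ... finger-up sequence) into a simple gesture.
record TouchPoint(double TimeMs, double X, double Y);

static class GestureClassifier
{
    const double TapMaxMs = 200;        // assumed thresholds, not EverTutor's
    const double LongPressMinMs = 600;
    const double SwipeMinPixels = 80;

    public static string Classify(IReadOnlyList<TouchPoint> stroke)
    {
        var first = stroke[0];
        var last = stroke[stroke.Count - 1];
        double duration = last.TimeMs - first.TimeMs;
        double distance = Math.Sqrt(Math.Pow(last.X - first.X, 2) +
                                    Math.Pow(last.Y - first.Y, 2));

        if (distance >= SwipeMinPixels)
            return Math.Abs(last.X - first.X) > Math.Abs(last.Y - first.Y)
                ? (last.X > first.X ? "swipe-right" : "swipe-left")
                : (last.Y > first.Y ? "swipe-down" : "swipe-up");
        if (duration <= TapMaxMs) return "tap";
        if (duration >= LongPressMinMs) return "long-press";
        return "unknown";
    }
}
```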

MM 2013
RealSense: Directional Interaction for Proximate Mobile Sharing

Chien-Pang Lin, Cheng-Yao Wang, Hou-Ren Chen, Wei-Chen Chu, Mike Y. Chen

Published in MM 2013 (short paper)

We present RealSense, a technology that enables users to easily share media files with nearby users by detecting their relative directions using only the built-in orientation sensors on smartphones. Under the premise that users are arranged in a circle and every user faces its center, RealSense continuously collects each phone's directional heading to calculate each user's virtual position in real time during sharing.
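
The geometry behind the circle premise can be sketched as follows. This is an illustration under the stated assumption (every user faces the circle's center), not the published implementation; the class and method names and the unit radius are hypothetical.

```csharp
using System;

// Hypothetical sketch of the circle assumption: each user faces the center,
// so a phone whose compass heading is h (degrees clockwise from north)
// belongs to a user standing on the circle in the direction opposite to h.
static class CircleLayout
{
    // Returns the user's (east, north) position on a unit circle around the center.
    public static (double East, double North) PositionFromHeading(double headingDegrees)
    {
        double rad = headingDegrees * Math.PI / 180.0;
        // Direction the phone faces in the east-north plane.
        double faceEast = Math.Sin(rad);
        double faceNorth = Math.Cos(rad);
        // The user stands one radius behind the direction they face.
        return (-faceEast, -faceNorth);
    }

    // Bearing (degrees clockwise from north) from user A toward user B,
    // which a sharing gesture's direction can be compared against.
    public static double BearingBetween(
        (double East, double North) a, (double East, double North) b)
    {
        return Math.Atan2(b.East - a.East, b.North - a.North) * 180.0 / Math.PI;
    }
}
```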

EyeWrist: Enabling Gesture-Based Interaction on Palm with a Wrist-Worn Sensor

Cheng-Yao Wang, Po-Tsung Chiu, Min-Chieh Hsiu, Chiao-Hui Chang, Mike Y. Chen

Excellent Work, The 8th Acer Long-Term Smile Innovation Contest, 2014

We present EyeWrist, which uses the palm as a gesture interface for smart wearable displays such as Google Glass. With abundant tactile cues and proprioception on the palm, EyeWrist can also serve as a device-less, eyes-free remote control for smart TVs. EyeWrist embeds a micro-camera and an IR laser line generator in a wristband and uses computer vision algorithms to calculate the finger's position on the palm. Because it does not require a direct line of sight to the fingertips, the camera can sit lower, making the whole device more portable. We also implemented a gesture recognizer that distinguishes symbols, letters, and touchscreen gestures (e.g., swipe, pinch) drawn on the palm. The recognition result is sent to smart devices via Wi-Fi for gesture-based interaction.

EyeWatch: Touch Interaction on Back of the Hand for Smart Watches

Cheng-Yao Wang, Po-Tsung Chiu, Min-Chieh Hsiu

2nd Prize, MobileHero – user experience design competition, 2014

We present EyeWatch, which uses the back of the hand as a gesture interface for smart watches. EyeWatch not only overcomes major smartwatch input problems (occlusion and the fat-finger problem), but also enables more powerful and natural interactions, such as quickly drawing a symbol to open an application or handwriting on the back of the hand to input a message. Our proof-of-concept implementation consists of a micro-camera and an IR laser line generator on the smart watch, with computer vision algorithms used to calculate the finger's position on the back of the hand.

The Incredible Shrinking Adventure

Cheng-Yao Wang, Min-Chieh Hsiu, Chin-Yu Chien, Shuo Yang

Final Shortlist, ACM UIST 2014 Student Innovation Contest

Imagine you were shrunk: you could explore your suddenly enormous house and play with giant pets and family members. To fulfill this imagination and provide users with an incredible shrinking adventure, we use a robotic car and a Google Cardboard, which turns a smartphone into a VR headset. We built a robotic car and attached a smartphone to its pan/tilt servo bracket. Stereo images are generated from that smartphone's camera and streamed to another smartphone inside the Google Cardboard. Seeing the world through the smartphone on the robotic car makes users feel as if they had been shrunk.

drone.io: Gesture Input and Projected Output for Collocated Human-Drone Interaction

Research intern project at Stanford University (2015)

We introduce drone.io, a body-centric interface that facilitates natural human-drone interaction (HDI): users interact with simple gestures above a projected radial menu. drone.io is a fully embedded input-output system that uses a depth camera to recognize the position and shape of the user's hand and a mobile projector to display the interface in real time. We show that it is an easy, effective, and enjoyable way for people to interact with drones.
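
The selection step of a radial menu like this can be sketched in a few lines. The code below is an assumed illustration, not the drone.io code: given the hand position reported by the depth camera relative to the projected menu's center, the selected item follows from the hand's angle, with a central dead zone. The function name, coordinate convention, and dead-zone radius are hypothetical.

```csharp
using System;

// Hypothetical sketch: mapping a detected hand position over a projected
// radial menu to a menu item index by angle, with a central dead zone.
static class RadialMenu
{
    // handX/handY are in the projected menu's plane, relative to its center.
    // Returns the selected item index in [0, itemCount), or -1 if in the dead zone.
    public static int Select(double handX, double handY, int itemCount,
                             double deadZoneRadius = 0.1)
    {
        double distance = Math.Sqrt(handX * handX + handY * handY);
        if (distance < deadZoneRadius) return -1;

        // Angle measured clockwise from "up", normalized to [0, 360).
        double angle = Math.Atan2(handX, handY) * 180.0 / Math.PI;
        if (angle < 0) angle += 360.0;

        double sector = 360.0 / itemCount;
        return (int)(angle / sector) % itemCount;
    }
}
```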

my skills

Computer Science

  • C#, Unity, web languages (JavaScript/CSS/Node.js), C/C++, Java, Python
  • Human-Computer Interaction, Virtual Reality, Augmented Reality, Computer Graphics, Computer Vision

Rapid Prototyping

  • Design and rapid prototyping with laser cutters and 3D printers
  • Arduino, Raspberry Pi, Processing

User Study Design

  • Mixed qualitative and quantitative study design
  • Statistical analysis

publications

RoMA: Interactive Fabrication with Augmented Reality and a Robotic 3D Printer

ACM CHI 2018, Full Paper

Huaishu Peng, Cheng-Yao Wang*, Jimmy Briggs*, Kevin Guo, Joseph Kider, Stefanie Mueller, Patrick Baudisch, François Guimbretière. (*equal contribution)

Teaching Programming with Gamified Semantics

ACM CHI 2017, Full Paper

Ian Arawjo, Cheng-Yao Wang, Andrew C. Myers, Erik Andersen, and François Guimbretière

PalmType: Using Palms as Keyboards for Smart Glasses

ACM MobileHCI 2015, Full Paper

Cheng-Yao Wang, Wei-Chen Chu, Po-Tsung Chiu, Min-Chieh Hsiu, Yih-Harn Chiang, Mike Y. Chen

PalmGesture: Using Palms as Gesture Interfaces for Eyes-free Input

ACM MobileHCI 2015, Full Paper

Cheng-Yao Wang, Min-Chieh Hsiu, Po-Tsung Chiu, Chiao-Hui Chang, Liwei Chan, Bing-Yu Chen, Mike Y. Chen

EverTutor: Automatically Creating Interactive Guided Tutorials on Smartphones by User Demonstration

ACM CHI 2014, Full Paper

Cheng-Yao Wang, Wei-Chen Chu, Hou-Ren Chen, Chun-Yen Hsu, Mike Y. Chen

RealSense: Directional Interaction for Proximate Mobile Sharing Using Built-in Orientation Sensors

ACM MM 2013, Short Paper

Chien-Pang Lin, Cheng-Yao Wang, Hou-Ren Chen, Wei-Chen Chu, Mike Y. Chen

drone.io: Gesture Input and Projected Output for Collocated Human-Drone Interaction

Under review for ACM CHI 2016

awards

EverTutor: Automatically Creating Interactive Guided Tutorials on Smartphones by User Demonstration

1st Prize, The 11th Deep Innovations with Impact, National Taiwan University, 2013

1st Prize, The 11th Y.S. National Innovation Software Application Contest, 2014

MagicWrist - Connect the world

Excellent Work, The 8th Acer Long-Term Smile Innovation Contest, 2014

The Incredible Shrinking Adventure

Final Shortlist, ACM UIST 2014 Student Innovation Contest

EyeWatch: Touch Interaction on Back of the Hand for Smart Watches

2nd Prize, MobileHero – user experience design competition, 2014

EyeWrist - Enabling Gesture-Based Interaction on Palm with a Wrist-Worn Sensor

Final Shortlist, MediaTek Wearable device into IoT world competition, 2014