Research

My research spans virtual and augmented environments, including 3D user interfaces for social media (Social Street View), 4D video-based rendering (Montage4D and Video Fields), foveated rendering (KFR), gestures (VRSurus), and visualization. Previously, I interned at Microsoft Research on Holoportation (4D reconstruction). I also worked in the Makeability Lab at HCIL, advised by Prof. Jon Froehlich, on tangible interfaces (AtmoSPHERE), real-time OCR with haptic feedback (HandSight), and curb ramp recognition (Tohme). I am passionate about inventing future interactive technologies with computer graphics, 3D vision, and HCI. Feel free to visit my ShaderToy page for fun demos!

Selected Publications

Selected Projects

Interactive Poisson Blending on GPU (ShaderToy)

Ruofei Du
Demo
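For readers curious about the idea behind this demo, below is a minimal CPU sketch in C++ of Poisson (seamless) blending solved with Jacobi iterations. It is an illustrative example only, not the ShaderToy shader itself; the Image struct, the jacobiSweep and blend helpers, and the iteration count are assumptions made for the sketch, and the source, target, and mask are assumed to share the same dimensions.

#include <vector>

struct Image {
    int w = 0, h = 0;
    std::vector<float> data;                       // single channel, row-major
    float& at(int x, int y)       { return data[y * w + x]; }
    float  at(int x, int y) const { return data[y * w + x]; }
};

// One Jacobi sweep: inside the mask, each pixel is pulled toward a value whose
// Laplacian matches the Laplacian of the source patch; pixels outside the mask
// keep their target values and act as the Dirichlet boundary condition.
void jacobiSweep(const Image& src, const std::vector<char>& mask, Image& out)
{
    Image next = out;
    for (int y = 1; y < out.h - 1; ++y) {
        for (int x = 1; x < out.w - 1; ++x) {
            if (!mask[y * out.w + x]) continue;    // boundary / untouched region
            // Guidance term: discrete Laplacian of the source patch.
            float divG = 4.0f * src.at(x, y)
                       - src.at(x - 1, y) - src.at(x + 1, y)
                       - src.at(x, y - 1) - src.at(x, y + 1);
            // Jacobi update from the current estimate's four neighbors.
            next.at(x, y) = 0.25f * (out.at(x - 1, y) + out.at(x + 1, y)
                                   + out.at(x, y - 1) + out.at(x, y + 1)
                                   + divG);
        }
    }
    out = next;
}

// Blend: start from the target image and iterate toward the Poisson solution.
Image blend(const Image& src, const Image& tgt,
            const std::vector<char>& mask, int iterations = 200)
{
    Image out = tgt;
    for (int i = 0; i < iterations; ++i)
        jacobiSweep(src, mask, out);
    return out;
}

The GPU version in the demo follows the same update rule, with each Jacobi sweep implemented as a fragment-shader pass over a ping-ponged texture.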

DuEngine: An Efficient Interactive C++ Renderer

Ruofei Du
GitHub

DuCrawler: Mining Datasets from Google and Bing

Ruofei Du
GitHub

AngryBirds: Deliberately Planning and Acting for Angry Birds with Refinement Methods

Ruofei Du, Zebao Gao, Zheng Xu

C-Flow: Visualizing Foot Traffic and Profit Data to Make Informative Decisions

Tiffany Chao, Ruofei Du, Jonathan Gluck, Hitesh Maidasani, Kent Wills, Ben Shneiderman

57aMP: Create physical stamps using the thermal power of touch

Tiffany Chao, Ruofei Du, Jon Froehlich

Cubot: An In-hand or Wearable Input Device Attached onto Everyday Objects

Ruofei Du, Fan Du, Jon Froehlich

57fire: Create a live fire illusion in a holographic display using heat

Ruofei Du, Tiffany Chao, Jon Froehlich

Technical Reports

Stay In Touch