My Projects

These are projects for which I am a primary developer and/or to which I have made a large contribution.

Overview of My Work

(This presentation is from 2018, so it does not show my recent projects; the rest of this page covers them.)

Ongoing Projects

Some Background for Redirected Walking

  • Redirected walking is a locomotion method that allows a user in VR to walk around the 3D virtual environment (VE) with their own two feet. It does so by slightly rotating the world around the user during head, eye, or other body rotations, keeping the injected rotation under a certain threshold so that the distortion is imperceptible (see the sketch after this list).

  • Much of my work tries to extend its functionality and practical applicability.
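
To make the core idea concrete, here is a minimal Python sketch of per-frame rotational redirection. The gain and per-second cap below are illustrative placeholders, not the perceptual thresholds used in any of the studies on this page.

    ROTATION_GAIN = 0.1               # assumed: 10% extra world rotation per degree of head rotation
    MAX_INJECTED_DEG_PER_SEC = 15.0   # assumed cap meant to keep the distortion imperceptible

    def redirect_world(world_yaw_deg, head_yaw_delta_deg, dt):
        """Rotate the virtual world slightly while the user turns their head.

        world_yaw_deg: current yaw offset applied to the virtual environment.
        head_yaw_delta_deg: how much the head rotated this frame (signed, degrees).
        dt: frame time in seconds.
        """
        # Inject rotation proportional to the user's own rotation...
        injected = ROTATION_GAIN * head_yaw_delta_deg
        # ...but never faster than the assumed imperceptibility cap.
        cap = MAX_INJECTED_DEG_PER_SEC * dt
        injected = max(-cap, min(cap, injected))
        return world_yaw_deg + injected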

Redirected Walking for Visually-Impaired Users with Robotic Mixed Agents

  • Goal: Since redirected walking (RDW) allows users to walk naturally through a virtual environment of any size, it's a good option for VR applications for the visually-impaired. However, since we need to stay under perceptual thresholds and VR tracking spaces are fairly small, we need to distract the user somehow. Thus, our goal is to adapt RDW for audio-only apps.

  • Current progress: We were able to show that RDW can work imperceptibly, in terms of both performance and quality, with only audio (see Completed Projects), but we still need to make a distractor that is more natural for the scenario (see the following robot dog entries).

Summary of these projects

  • Goal: Formalize the robot dog problem by generalizing the problem of robots in virtual environments and figuring out where the robot fits with respect to other problems in the area of haptic proxies.

  • Current progress: Mostly done and working on the submission. Info at its own site.

  • Goal: We want to create a VR simulation that helps train visually-impaired kids to navigate a large virtual environment like a city. We need real walking that permits free roaming in a large, open space, so redirected walking is required. We hypothesize that haptics will improve response time, which we need to study.

  • Current progress: See the next two entries, which are sub-projects of this one.

  • Technology that I'm using: Unreal 4 for the entire realtime implementation (Blueprint and C++). Vive Tracker 2.0 to track the dog and leash. Wireless Vive Pro with Lighthouse 2.0 for head tracking. Blender for editing and optimizing meshes. Leap Motion for hand tracking. Elegoo 4-wheeled Arduino robot with an HC-06 Bluetooth adapter.

SUB-STUDY: "A Walk to the Park": Virtual Locomotion with Robot-Guided Haptics and Persistent Distractors

  • Goal: Improve current distraction-based redirected walking systems to allow for a robot tethered to the user, providing haptic feedback, to be feasible for training scenarios.

  • Current progress: We succeeded, and now we need to do the real user study. Check out the current pending preprint below.

A_Walk_to_the_Park__Robot_Based_Active_Haptics_for_Virtual_Navigation_SIGGRAPH.pdf

"Walk a Robot Dog in VR!" -- SIGGRAPH '20 Immersive Pavilion Demo

  • Goal: Improve current distraction-based redirected walking systems to allow for a robot tethered to the user, providing haptic feedback, to be feasible for training scenarios.

  • Current progress: We succeeded, and now we need to run the real user study. Check out the preprint below.

walktopark.pdf

SUB-STUDY: Creating a Simulated User for Virtual Locomotion Simulation

  • Goal: Create a synthetic user that can be used in simulations of redirected walking, motion compression, etc. to accurately simulate how a real user would behave. It's important to simulate the randomness of both micro (twitchiness) and macro (swaying, overshooting the target, etc.) movement AND decision-making.

  • Current progress: We made 2 models of simulated users: one that tries to calculate the velocity needed to reach a target, and another that tries to accelerate/decelerate toward the target instead (see the sketch after this list). Their biomechanics are derived from real observed users (the audio and vision groups from the 2019 paper), and they then complete the same tasks the real users did. The acceleration model works quite well, and we see significant resemblance to the real users in many of the scenarios. The velocity model seems better at translation in some cases. We can mix the ideas from each model to create an even better representation.

  • Previous version: In our "Walk to the Park" paper, we successfully simulated a SYNTHETIC user that is good enough to produce results similar to those taken from REAL users in Razzaque's Firedrill Scene, so we're getting there! See the middle part of the video for our simulation.
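
For illustration, here is a heavily simplified Python sketch of the two kinds of simulated user described above (velocity-seeking vs. acceleration-based). The speeds, gains, and noise levels are made-up placeholders rather than the biomechanical parameters fit from the real users.

    import random

    def velocity_model_step(pos, target, speed=1.4, noise=0.05, dt=1.0 / 90.0):
        """Velocity-based synthetic user: head toward the target at a preferred
        walking speed, with small 'twitchiness' noise added each frame."""
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-6
        vx = speed * dx / dist + random.gauss(0.0, noise)
        vy = speed * dy / dist + random.gauss(0.0, noise)
        return (pos[0] + vx * dt, pos[1] + vy * dt)

    def acceleration_model_step(pos, vel, target, accel=0.8, damping=0.9,
                                noise=0.05, dt=1.0 / 90.0):
        """Acceleration-based synthetic user: accelerate toward the target and damp
        the velocity, which naturally produces sway and overshoot."""
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-6
        ax = accel * dx / dist + random.gauss(0.0, noise)
        ay = accel * dy / dist + random.gauss(0.0, noise)
        vx = (vel[0] + ax * dt) * damping
        vy = (vel[1] + ay * dt) * damping
        return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)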

automated_user_studies_2column (1).pdf

Low-cost VR/AR Applications for Training Surgeons

  • Goal: Many of the logistical difficulties of training surgeons can be overcome in VR and AR, since we can use virtual 3D objects instead of expensive physical alternatives. However, the technology isn't quite there yet, particularly in terms of subjective quality. We seek to improve this.

  • Current progress: We are working on multiple projects that can make virtual surgical training more technically feasible and subjectively good enough to induce skill transfer (see the following entries).

Shared Haptics in Multi-User Surgical Training Environments

  • Goal: We hypothesize that we could create a multi-user VR training scenario that is an effective learning tool provided we improve certain features, in particular, haptics and body representation. There are difficulties imposed by the fact that such features must be synchronized between users.

  • Current progress: See the attached videos. I was able to get a basic system working in which two players, a trainee/nurse and a doctor, connect to each other through Unreal 4's multiplayer system, and their virtual worlds are synchronized using the assumption that there is only a tiny chance of the pitch and roll of the two Vive lighthouse setups being identical. I also implemented multiplayer skeletal reconstruction with the Leap Motion so that users can see each other's finger movements.

  • Technology that I'm using: Unreal 4 for realtime implementation (all Blueprint). Leap Motion for hand tracking. Blender for mesh editing. Unreal 4 RPC calls for multiplayer.

Using AR to Improve Training for Laparoscopic Surgery

  • Co-investigators: Hao Jiang, Andrei State, and Andrei Illie

  • Goal: We hypothesize that having a 3D view of the surgery area through an AR display such as the Hololens will improve accuracy of movement, since the user has the additional stereoscopic depth cues to aid them in localizing the workspace.

  • Current progress: We are able to get a basic laparoscopic training scenario to work in the Hololens, with limitations in tracking and calibration that we are working on. I personally focus on the realtime simulation and user study scenario after the tracking data is received.

  • Technology that I'm using: Unity with C# for the realtime implementation. Hololens as the headset. Blender to build all 3D-printed models. OpenCV+Vuforia for tracking (Hao is using OpenCV to track the cube accurately while I use Vuforia for world calibration).

  • Attached are pictures of a theoretical portable laparoscopic training apparatus and some videos of our process (more or less ordered from most recent to oldest).

laparoposter (1).pdf

SUB-STUDY: Using Vuforia with Small Markers

  • Goal: Find a good method of tracking very small markers (<2cm^2). Vuforia and OpenCV are currently among the best marker-tracking methods, but here I focus specifically on Vuforia because it works with a wider range of markers, uses many features, and seems to have great tracking after acquisition (which seems to take much longer than OpenCV).

  • Current progress: Vuforia seems to fare very well with small markers and at long ranges after the initial acquisition, which is the hard part. Auto-focus and glare can easily break the tracking in these more difficult cases.

  • Technology that I'm using: Unity with C# and Android for realtime implementation. Vuforia for tracking.

  • This was expanded into the project below.

  • Goal: Tracking objects in motion usually requires the cameras to be synchronized, which most consumer cameras are not capable of; research cameras like PointGreys are very expensive, and cheaper alternatives like PiCams are hard to receive images from. This project tries to deal with all of these problems at once by using cheap $7-10 cameras that are "synchronized" in software with smart threading instead (see the sketch after this list). Two of them are attached to the dynamic manipulators themselves to deal with occlusion, which requires Vive Trackers as well.

  • Current progress: We can track Vive tech in Hololens space and the entire pipeline is complete, though some naive assumptions still need to be corrected, some interpolation needs to be added, and there is some delay that needs to be dealt with for better realtime performance.

  • Technology that I'm using: Unity with C# & Vuforia for marker-tracking, UE4 w/ Blueprint for ViveTrackers & pooling, Hololens/Unity/Vuforia/C# for application.

  • Documentation for replicability & source code will be provided after publication.
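
A rough sketch of the software "synchronization" idea, assuming OpenCV-readable USB webcams: each camera runs in its own capture thread, frames are timestamped on the host, and a frame set only counts as synchronized if the timestamps agree within an assumed skew. The real pipeline uses Unity/UE4 and Vuforia; this only illustrates the threading concept.

    import threading, time
    import cv2  # assumes the cheap USB cameras are readable through OpenCV

    def capture_loop(cam_id, frames, lock, stop):
        """Continuously grab frames from one camera and keep only the latest
        one together with a host timestamp."""
        cap = cv2.VideoCapture(cam_id)
        while not stop.is_set():
            ok, frame = cap.read()
            if ok:
                with lock:
                    frames[cam_id] = (time.monotonic(), frame)
        cap.release()

    def grab_synchronized(frames, lock, max_skew_s=0.02):
        """Return the latest frame set if all timestamps fall within the
        assumed acceptable skew; otherwise return None and try again later."""
        with lock:
            snapshot = dict(frames)
        if len(snapshot) < 2:
            return None
        times = [t for t, _ in snapshot.values()]
        return snapshot if max(times) - min(times) <= max_skew_s else None

    # Usage with two hypothetical cameras at indices 0 and 1:
    # frames, lock, stop = {}, threading.Lock(), threading.Event()
    # for cid in (0, 1):
    #     threading.Thread(target=capture_loop, args=(cid, frames, lock, stop), daemon=True).start()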

Short ISMAR paper/poster (more succinct; less technical detail)

Laparo_ISMAR2020_edits_camera.pdf

Longer, older paper draft that's not as well-written but has more technical detail and pipeline figures

ismar20a-sub1208-i7 (1) (1).pdf

Telepresence Demos

  • Goal: Show telepresence demos in the Hololens. Telepresence lets users who are physically far from each other be represented as if they were in the same physical space; e.g., if I'm in a video call with 3 other people, telepresence would represent those 3 people as if we were all sitting at a desk in the same place talking.

  • Current progress: Reconstruction is the main issue, which is what we're trying to show with the current demos. The Hololens no longer has Skype support, which is also a bit limiting...

  • Technology that I'm using: Unreal 4 (because of its Web Browser widget), Hololens, and UE4's Hololens streaming system.

Automatically generating massive 3D synthetic cloth motion datasets

  • Goal: Cloth motion datasets are relatively rare, especially ones that use point clouds or 3D models to represent the cloth as it moves, so I'm working on a method that can generate synthetic 3D datasets much more easily than before, which would provide more data for CV algorithms to learn from.

  • Current progress: I have a pipeline from Maya to Marvelous Designer to Unreal that can go from a human mesh+animation (Maya), to an avatar wearing clothes of specific material parameters and sizes (Marvelous Designer), to a realtime baked implementation for realistic rendering (UE4). Right now, I mostly need to optimize and finalize the pipeline. It's currently on hold.

  • Technology that I'm using: Maya+MEL to generate and animate meshes, Marvelous Designer for cloth simulation and design, UE4 for realistic realtime rendering

House Tours with Intelligent Virtual Distractors and Redirected Walking

  • Goal: House tours would be a great application of VR, since they would allow a potential buyer to walk through a house without physically visiting. However, without being able to physically walk in the VR simulation with something like RDW, it is difficult to get a sense of scale, and immersion breaks. We propose a simulation in which the real estate agent(s) giving the house tour is/are the distractor(s), using objects of interest in the house to guide the user and to decide when and how to distract.

  • Current progress: On hold. This would be a successor to the RDW robot dog project, since the real estate agent's behavior can extend the robot dog's dynamic and persistent behavior.

  • Technology that I'm using: Unreal 4 for the simulation. HTC Vive for the VR HMD.

Completed Research Projects (as first author)

These are projects that are 100% complete and that I am no longer working on in any capacity, so this section is sparser than the actual publication list, since many of my first-authored projects still exist in some form.

Stress Test HoloLens 2 Hand and Eye-tracking

  • Goal: exactly as the title states

  • Current progress: They're both great. I made a bunch of videos showing them in action and how to build for the HL2 in UE4 and Unity. However, the lens distortions are not great.

  • Technology that I'm using: UE4, Unity, HoloLens 2

Evaluating the Effectiveness of Redirected Walking with Auditory Distractors for Navigation in Virtual Environments (2019)

In this project, we prove for the first time that RDW with distractors is suitable for audio-only users. We design a distractor that successfully navigates users around obstacles and through different types of scenes and tasks. We also ran a 3-group study showing that audio-only users' immersion and performance are not significantly worse than those of users with vision. In fact, the audio-only group did much better on a few metrics, and the distractor was less active for them. See the attached videos for more information.

Evaluating_the_Effectiveness_of_Redirected_Walking_with_Auditory_Distractors_for_Navigation_in_Virtual_Environments__Revision_ (2).pdf

Static Pose Detection of Quadrupeds with TensorFlow and GoogleNet (2017)

This was an experiment done at CUHK with Prof. Kin Hong Wong to see if a CNN like GoogleNet could use basic image recognition to guess the pose of various synthetic animals, and if so, whether we could interpolate between predictions to get the exact pose. We concluded that we could, although there has been much better work on quadruped pose detection since then.
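
The interpolation idea can be illustrated with a small Python sketch: treat the network's output as a soft distribution over discrete pose bins and take a probability-weighted circular average. The bin layout and probabilities below are made up, and this is not the actual CUHK pipeline.

    import numpy as np

    def interpolate_pose(class_probs, bin_angles_deg):
        """Turn a soft classification over discrete pose bins into a continuous
        angle estimate via a probability-weighted circular average."""
        probs = np.asarray(class_probs, dtype=float)
        probs = probs / probs.sum()
        angles = np.radians(bin_angles_deg)
        # Average on the unit circle so that 350 and 10 degrees blend to ~0, not 180.
        x = np.sum(probs * np.cos(angles))
        y = np.sum(probs * np.sin(angles))
        return float(np.degrees(np.arctan2(y, x))) % 360.0

    # e.g. 30-degree bins and a network torn between the 60- and 90-degree bins:
    print(interpolate_pose([0, 0, 0.5, 0.5] + [0] * 8, list(range(0, 360, 30))))  # ~75.0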

cuhkpostervertical.pdf

Simulating a CAVE in VR (2018)

This was a project with Prof. Mary Whitton in collaboration with HIT Lab New Zealand in which I made a VR simulation of looking at a CAVE display. A CAVE is normally an array of screens that the user stands in the middle of to get a VR-like experience, but CAVEs are expensive and hard to set up, prompting this alternative. My Unreal 4 program creates a CAVE automatically given system specifications like screen size, angles, etc. The VR implementation is in Unity for compatibility reasons.
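
As a rough illustration of the "CAVE from specifications" idea, here is a small Python sketch that places flat screens in a ring around the user given a wall count and screen dimensions. The math is generic; the actual generator is an Unreal 4 Blueprint with more parameters.

    import math

    def cave_wall_transforms(num_walls=4, screen_width=3.0, screen_height=2.5):
        """Place flat screens in a ring around a user at the origin.
        Returns (center_xyz, yaw_deg) per wall; +Z is up."""
        half_angle = math.pi / num_walls
        # Distance from the user to each wall so adjacent screens meet at the corners.
        radius = (screen_width / 2.0) / math.tan(half_angle)
        walls = []
        for i in range(num_walls):
            yaw = 2.0 * math.pi * i / num_walls
            center = (radius * math.cos(yaw), radius * math.sin(yaw), screen_height / 2.0)
            walls.append((center, math.degrees(yaw) + 180.0))  # rotate the screen to face inward
        return walls

    for center, yaw in cave_wall_transforms():
        print(center, yaw)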

Completed Personal (Non-Research) Projects

Learning Blender (2015)

I made some very simple animations in order to learn the basics of Blender. My work has gotten much more advanced since this!

Rendering UNC's Old Well in Blender (2015)

This was a project to learn Blender. I used a famous reference image (see 2nd image) and built the Old Well mesh from scratch. Then I used built-in Blender foliage + Cycles PBR materials to make the rest look good. Check out the process in the slideshow!

Perspective matching (2015)

This was a project to learn how perspective matching works in order to make convincing virtual additions (e.g. that glass model on the left) to a real image (e.g. UNC's Pit). Perspective matching is a technique in which you estimate the 3D orientation of an object in a 2D image/video, especially something like the floor. This was very useful to learn, as I use the method in many of my research videos to line up my visualizers correctly with the real floor. This project was done in Blender, but my perspective matching tasks since then were done in Unreal 4 (see Graphics I Made for more).
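
The same idea can also be done programmatically. Below is a minimal OpenCV sketch using solvePnP with made-up floor correspondences and roughly guessed intrinsics, just to show how a camera pose falls out of a few 2D-3D point pairs; my actual perspective matching was done interactively in Blender and later Unreal 4.

    import numpy as np
    import cv2

    # Hypothetical example: four floor points whose real-world layout is known
    # (a 2 m x 2 m square on the ground) and where they appear in the photo.
    object_pts = np.array([[0, 0, 0], [2, 0, 0], [2, 2, 0], [0, 2, 0]], dtype=np.float32)
    image_pts = np.array([[420, 710], [910, 695], [1180, 980], [240, 1000]], dtype=np.float32)

    # Rough pinhole intrinsics guessed from the image size and focal length.
    w, h, f = 1920, 1080, 1500.0
    K = np.array([[f, 0, w / 2], [0, f, h / 2], [0, 0, 1]], dtype=np.float32)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)
    print("camera rotation:\n", R, "\ncamera translation:\n", tvec.ravel())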

Learn Unity and basic VR development (2015)

I used Quill18's Unity tutorials to learn Unity basics, which I then used to make a basic VR demo and my COMP89H course project (See Course Projects below)

Educational VR + Hand Tracking Game (2015)

Made at HackDuke 2015 with Ami Zou in Unity 4. We made an Oculus DK2 VR game in which you use your hands, tracked by the Leap Motion, to place animal and plant cell components into the appropriate cell. This was my first VR demo ever!

UNC Pit recreation demo (2016-2017)

I built UNC's Pit in Blender and used it to make an Oculus DK2+Leap Motion hand-tracking VR app in Unity 4. I later ported it to Unreal 4 and made it multiplayer to work on the Vive using the tracking space alignment tech from my multi-user surgery project.

Original traffic sim for Maze Day (2016)

The predecessor to my audio redirected walking work, which helped me learn Unreal 4. This was a demo made for the visually-impaired kids at Maze Day in which you must cross the street without getting hit. It's possible to beat it with only the 3D spatialized audio, and in fact, most of the kids were able to complete it despite their visual impairments. This influenced my design of future work.

Implementing the Intelligent Driver Model in Unreal 4 (2016)

I ported code written in Matlab by Weizi Li that executes the Intelligent Driver Model into Unreal 4's Blueprint system so that we could have a realtime game engine implementation that could be combined with other features like spatialized audio and VR. We never got a chance to use it, but I'm planning on finally bringing it back for use in my new work on navigating VR cities.
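
For reference, the model itself is compact. Here is the standard IDM acceleration rule in Python with typical textbook parameter values; this is a generic sketch, not Weizi's Matlab code or my Blueprint port.

    import math

    def idm_acceleration(v, v_lead, gap,
                         v0=30.0,    # desired speed (m/s)
                         T=1.5,      # desired time headway (s)
                         a_max=1.0,  # maximum acceleration (m/s^2)
                         b=1.5,      # comfortable braking deceleration (m/s^2)
                         s0=2.0,     # minimum gap (m)
                         delta=4.0):
        """Standard Intelligent Driver Model: acceleration of a follower at speed v,
        with the car ahead at speed v_lead and a bumper-to-bumper gap `gap`."""
        dv = v - v_lead
        s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
        return a_max * (1.0 - (v / v0) ** delta - (s_star / max(gap, 0.1)) ** 2)

    # e.g. following at 25 m/s behind a car doing 20 m/s that is 40 m ahead:
    print(idm_acceleration(25.0, 20.0, 40.0))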


I plan on recording a video eventually (when I figure out what I did with the project).

3D environment sampler for audio propagation preprocessing (2018)

This project was made for Atul Rungta so that we could generate a large number of valid sample points in a 3D environment that are reachable from a given point in the scene, giving us a set of valid sound source locations. In Blender, I create icospheres of specific sizes (as determined by Atul) and extract their vertices; in Unreal 4, I spherecast around those vertices and test other parameters to decide whether each point is valid given our requirements (e.g. must be indoors, must be outdoors, must be close to something, limited number of vertices in a navigable path to it, etc.). Those points are then used to precompute reverb parameters.
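
A simplified Python sketch of the sampling idea, using a bare icosahedron in place of the subdivided icospheres from Blender and a stand-in validity test in place of the UE4 spherecasts and navigation checks:

    import math

    def icosahedron_vertices(radius=1.0):
        """The 12 vertices of an icosahedron, a coarse stand-in for the icosphere
        vertices exported from Blender."""
        phi = (1.0 + math.sqrt(5.0)) / 2.0
        raw = [(-1, phi, 0), (1, phi, 0), (-1, -phi, 0), (1, -phi, 0),
               (0, -1, phi), (0, 1, phi), (0, -1, -phi), (0, 1, -phi),
               (phi, 0, -1), (phi, 0, 1), (-phi, 0, -1), (-phi, 0, 1)]
        norm = math.sqrt(1.0 + phi * phi)
        return [(radius * x / norm, radius * y / norm, radius * z / norm)
                for x, y, z in raw]

    def valid_sample_points(center, radii, is_valid):
        """Offset sphere vertices from a listener position and keep only the points
        that pass the scene-specific validity test (indoors, reachable, etc.)."""
        points = []
        for r in radii:
            for vx, vy, vz in icosahedron_vertices(r):
                p = (center[0] + vx, center[1] + vy, center[2] + vz)
                if is_valid(p):
                    points.append(p)
        return points

    # e.g. keep only points above the floor (a stand-in for the real checks in UE4):
    print(len(valid_sample_points((0.0, 0.0, 1.7), [2.0, 5.0], lambda p: p[2] > 0.0)))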

Course Projects

COMP89H masterpiece assignment (2015)

The assignment was to make an animation incorporating multiple principles of computer animation. I made this simple physics animation with Unity 4.

COMP89H final project (2015)

I made a demo with Unity 4, the Google Cardboard, Samsung Galaxy S5, and the Leap Motion in which you use your hands to put some virtual objects in the appropriate bins which are positioned around you. I was able to get the Leap Android SDK (which I think has been abandoned) directly from the developers and use it to get my project to work on mobile.

COMP585 Serious Games (honors) + 872 Virtual Worlds course project (2017)

This is the original version of the multi-user haptics-based surgery project mentioned above. In Serious Games, I focused on design and UX. In Virtual Worlds, I focused on implementation and technical limitations. The two players' 3D scenes are aligned so that the players are in the right place relative to each other, by making assumptions about the uniqueness of lighthouse orientation. The Leap Motion hands are networked between players so they can see each other's finger motions. NVIDIA Flex is used for the networked cloth physics to (very inaccurately) approximate skin. I use UE4's built-in RPC networking.

COMP872 Virtual Worlds VR assignment (2017)

The assignment was to make a VR simulation that evokes physiological responses. I made this in Unreal 4. The player walks up a physical staircase that's aligned to the virtual one (so they feel like they're walking up the virtual stairs), then they use their Leap Motion hands to push off the blue blocks, which causes the stairs to fly down into a chasm. The scene is a modified version of Epic Games' Infiltrator demo.

COMP872 Virtual Worlds AR assignment (2017)

I don't remember the exact parameters of the assignment except that we had to make something for the Hololens. I made a virtual agent with Mixamo's animations that walks around the hallway of our building. His footsteps have 3D audio and he stops and stares at the player when they get too close. When he is occluded by a real wall, he is represented as a red wireframe.

COMP495 Independent Research course project (2017)

This was the original RDW work that went into my current RDW projects. I investigated the effectiveness of previous distractors for a blinded user (e.g. a bee buzzing around, a voice calling). I determined that they were not sufficient for a blinded user because they made it obvious that the world was being distorted. Thus, I decided to implement the virtual robot dog and talking EVE robot distractors. The EVE distractor didn't work out because any voice seemed to be easily localizable. The panting dog worked better. Either of them was better than prior work because distractors in motion are harder for a blinded user to pinpoint but still possible to locate at all (as opposed to something like a bee). Made in UE4.

COMP541 Digital Logic course project (2019)

I made a 1000-line MIPS program for the Nexys board in which the player moves around with the keyboard and must destroy enemies moving around the scene using a cursor controlled with the accelerometer. The cursor needs to be powered up with powerups in the scene before it can hurt anything. Hitting an enemy with the cursor at all will aggro it and cause it to follow the player forever. Enemies can only be hurt by powerups of the same type (e.g. a sun enemy gets hurt by a sun powerup). Enemies have different strengths, and each enemy has a different path (e.g. star > sun > moon). The player dies if an enemy touches them at all. Enemies move at half of the player's speed. If the player aggros an enemy by mistake and dies, the place where they aggroed the enemy becomes a trap that automatically aggroes the enemy in future attempts, making the game harder. We made our own processors in Vivado.


COMP562 Machine Learning course project (2019)

I precomputed some features for images in a collection of paintings and used them to cluster the paintings based on style. The accuracy was surprisingly good for a naive, non-deep learning method, according to my pseudo-intersection-over-union checking method.
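
A minimal sketch of the general approach, with hypothetical precomputed features and a rough cluster-purity check standing in for my pseudo-IoU metric:

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical precomputed features: one row per painting
    # (e.g. color histogram stats, edge density, texture energy).
    rng = np.random.default_rng(0)
    features = rng.normal(size=(200, 12))

    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
    labels = kmeans.labels_

    def cluster_style_overlap(labels, styles):
        """For each cluster, report how dominated it is by its most common style."""
        scores = {}
        for c in set(labels):
            members = [s for l, s in zip(labels, styles) if l == c]
            best = max(set(members), key=members.count)
            scores[c] = members.count(best) / len(members)
        return scores

    # Hypothetical ground-truth style labels for the same 200 paintings:
    styles = list(rng.integers(0, 5, size=200))
    print(cluster_style_overlap(labels, styles))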

562finalNickRewkowski _1_.pdf

COMP776 Computer Vision course project (2020)

This project was basically the pipeline for the project above about using cheap, unsynchronized, & dynamic cameras for AR surgical training.

c776_final_report (9).pdf

COMP755 Machine Learning course project (2020)

This project was about generating synthetic datasets to train an eye-tracking-based network to recognize a user's object of interest, with applications such as auto-designing accessible keyboards, making AR guides, giving my robot dog a better prediction of user behavior, etc.

My job was to do everything in the game engine (setting up the VEs and generating the images/labels that the network could train on). I also did most of the writeups and the proposal.

COMP755_Final_Project_Writeup__Copy_.pdf

CMSC818B Decision-Making for Robotics course project (2020)

This project was about creating a robotic assistant that can help with a task optimally given that it is tethered to the human user. I tested it in this collection task and wrote a report comparing the performance of different versions of the robot with different values for adaptability to the real user's behavior. I created a reward function that normalizes a few different metrics related to distance from the human, how much the human follows the robot, etc.
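
An illustrative sketch of that kind of normalized, weighted reward; the terms, weights, and tether length below are placeholders rather than the values from the report:

    def tethered_robot_reward(dist_to_human, follow_ratio, progress,
                              tether_length=1.5, weights=(0.4, 0.3, 0.3)):
        """Illustrative reward combining normalized terms: stay within tether range,
        keep the human following, and make progress on the collection task.
        All constants here are made-up placeholders."""
        # 1.0 when right next to the human, 0.0 at (or beyond) the tether limit.
        proximity = max(0.0, 1.0 - dist_to_human / tether_length)
        # follow_ratio and progress are assumed to already be normalized to [0, 1].
        w_prox, w_follow, w_prog = weights
        return w_prox * proximity + w_follow * follow_ratio + w_prog * progress

    print(tethered_robot_reward(dist_to_human=0.8, follow_ratio=0.6, progress=0.25))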

I worked on this by myself.

CMSC818B_Final_Report.pdf
CMSC 818B Project Presentation.pdf

CMSC740 Advanced 3D Graphics course project (2021)

This project was about using synthetic datasets in UE4 to train some GANs to apply/remove some complicated post-processing effects like depth of field, ambient occlusion, etc. It kind of worked, sometimes in unexpected ways. I probably could have done better with more data and training.

The attached report has plenty of image comparisons. I did not embed the entire report PDF because it is very large (82mb). Click the link or the picture for the full report.

CMSC730 Human-Computer Interaction course project (2021)

This project was about creating a mobile accessory for the HoloLens 2 that would provide haptic feedback to a handheld device like a pen by locking it in place with motors when the physical device touched a virtual surface. It worked pretty well as a prototype, although we focused on a simple drawing task when much more can potentially be done with it.

The below report has more information and progress images.

annotated-haptic_pen_for_xr.pdf

CMSC727 Machine Learning course project (2022)

This project was about using various types of generative networks to figure out details of animals like texture and shape, and then using the learned information to transfer features between animals, e.g. tiger2shark would take a tiger shape and put a shark texture on it. I worked on the training data generation and debugging. We tested multiple pairs of animals, and I needed to manually modify their UV maps for the training data, but for a prototype we got reasonable results with the multiple GANs that we tried, such as pix2pix.

The included report contains details and images.

cmsc727 final report (1).pdf


CMSC724 Databases course project (2022)

This project was about evaluating the storage needs of an informational metaverse, such as the ones I was exploring with UMD and Adobe Research. We wanted to see how to structure content such that queries/searches (e.g. a query about the information in the room the HMD user is currently in) touch the smallest amount of data before returning the necessary values. We tried everything from naive methods to methods that are more along the lines of 3D navigation, which is unusual for database problems. We found that combining a bounding volume hierarchy (BVH) with some precomputation of factors like visibility and environment structure (e.g. floors) helped immensely compared to naive solutions like Euclidean distance, which means that a large-scale informational metaverse should absolutely use techniques from 3D graphics and robotics in order to have the best link between the real and virtual worlds.
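
A toy Python sketch of why the hierarchical approach wins: a query only touches the content shards whose bounding boxes contain the user's position, pruning whole subtrees instead of scoring every record by distance. The node layout and payloads below are made up.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Node:
        lo: tuple                         # min corner of the axis-aligned bounding box
        hi: tuple                         # max corner
        payload: Optional[str] = None     # e.g. a content shard for one room
        children: Optional[List["Node"]] = None

    def contains(node, p):
        return all(node.lo[i] <= p[i] <= node.hi[i] for i in range(3))

    def query(node, p):
        """Walk the BVH and return payloads whose boxes contain the user's position,
        pruning whole subtrees along the way."""
        if not contains(node, p):
            return []
        if node.children is None:
            return [node.payload]
        hits = []
        for child in node.children:
            hits.extend(query(child, p))
        return hits

    # Toy hierarchy: a building containing two rooms.
    building = Node((0, 0, 0), (20, 10, 3), children=[
        Node((0, 0, 0), (10, 10, 3), payload="room A info"),
        Node((10, 0, 0), (20, 10, 3), payload="room B info"),
    ])
    print(query(building, (12.0, 4.0, 1.5)))  # only room B's shard is touched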

The included report contains details and images.

cmsc724_final (1).pdf