Co-Authored Projects

These are projects I worked on where I'm not the primary author and was brought in for a specific part (usually making realtime demos and figures).

The primary author is usually the first person in the list of collaborators (I'll bold them if there is a clear primary author at this stage of the project).

Ongoing

Tracking and reconstructing full-body processes like surgery or physical therapy

Co-investigators: Husam Shaik, Ritika Tejwani, Andrei State

Project summary: Reconstruct a prescan of a person's body, then track their motions during a process like surgery or physical therapy and apply those motions to the prescan

My involvement: Advising Husam and Ritika on the reconstruction/tracking pipeline and making a set of tutorials (found on the left).

Realtime hand-based task performance tracking for teaching sign language and surgical knot-tying

Co-investigators: Austin Hale, Manuel Sanchez-Casalongue

Project summary: Track users' hands during hand-based dexterity tasks and provide realtime feedback with a good AR UX so they can train more effectively

My involvement: Advising Austin on methods for tracking accuracy, skeletal animation, reconstructing objects like the rope, the user study, etc.

(See Austin's YouTube channel, linked in the videos on the left, for more details on the process.)

Redirected Walking with AI-based Deterrents and Distractors for House Tours

Co-investigators: Andrew Fulmer, Surya Poddutoori, Andrei State

Project summary: Populate the virtual house with smart virtual avatars that move into place to stop the user from leaving the real VR bounds, or that distract the user to induce extra head rotation (a rough sketch of the distractor logic is below).

My involvement: Advising Surya and Andrew on RDW methods, navigation, the user study, etc.
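
Since the avatars exist mainly to keep the user inside the physical bounds, the core behavior is simple to summarize. Below is a minimal Python sketch of that idea, assuming a circular tracked space and made-up thresholds; it is not the project's actual implementation.

```python
# Minimal sketch of the distractor idea: when the user nears the physical
# tracking boundary, place a distractor avatar off to the side so the user
# turns their head, giving the RDW controller room to apply rotation gains.
# All names and thresholds here are illustrative assumptions.
import math

BOUNDARY_RADIUS = 2.0                 # assumed circular tracked space, in meters
DANGER_MARGIN = 0.5                   # start intervening this close to the edge
DISTRACTOR_ANGLE = math.radians(70)   # put the distractor in the periphery
DISTRACTOR_DISTANCE = 1.5             # meters away from the user

def place_distractor(user_pos, user_heading):
    """Return a distractor position if the user is near the boundary, else None.

    user_pos: (x, z) in tracked-space meters; user_heading: radians.
    """
    dist_from_center = math.hypot(*user_pos)
    if dist_from_center < BOUNDARY_RADIUS - DANGER_MARGIN:
        return None  # user is safe, no distractor needed

    # Place the distractor to the side of the user's current view so they
    # rotate their head toward it (extra rotation the RDW gains can exploit).
    angle = user_heading + DISTRACTOR_ANGLE
    return (user_pos[0] + DISTRACTOR_DISTANCE * math.cos(angle),
            user_pos[1] + DISTRACTOR_DISTANCE * math.sin(angle))

if __name__ == "__main__":
    print(place_distractor((1.7, 0.3), math.radians(10)))
```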

Ongoing but I'm no longer involved

Eye-tracking for egocentric displays

Co-investigators: Conny Lu, Praneeth Chakravarthula, Andrei State

Project summary: An extension of the egocentric reconstruction project that uses eye tracking in an egocentric display, such as glasses with cameras.

My involvement: Making figures and videos (for now)

Opaque AR displays

Co-investigators: Kishore Rathinavel

Project summary: Making an AR display that can render completely opaque pixels, which also allows shadowing between virtual and physical objects

My involvement: Realtime demo, especially having a physical object cast a shadow on virtual objects.

Customizing walk patterns and locomotion visualizer

Co-investigators: Brennora Cameron, Mary Whitton 

Project summary: Collecting data and building visualizers that can be used to customize the locomotion method for a particular user, especially walking-in-place

My involvement: Making the Python visualizer, mentoring/advising, helping with data collection, making the data collection system in Unreal 4, and helping Brennora with the Unity version. The old visualizer used matplotlib, but I have a newer one that uses Qt5; a minimal sketch of the matplotlib-style view is below.
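
For context, here is a minimal sketch of the kind of matplotlib view the old visualizer produced: a head-height trace with a per-user step threshold for walking-in-place. The data and threshold below are synthetic stand-ins, not the collected study data.

```python
# Toy walking-in-place visualizer sketch: plot head height over time and mark
# where it dips below a per-user step threshold. Real data would come from the
# logged tracker files; this uses a synthetic trace instead.
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0, 10, 900)                                   # time in seconds
head_height = 1.70 + 0.03 * np.sin(2 * np.pi * 1.8 * t)       # ~1.8 steps/sec bob

threshold = 1.70 - 0.015   # per-user dip threshold for counting a step
dips = head_height < threshold

plt.figure(figsize=(8, 3))
plt.plot(t, head_height, label="head height")
plt.axhline(threshold, color="r", linestyle="--", label="step threshold")
plt.fill_between(t, head_height.min(), head_height.max(), where=dips,
                 alpha=0.2, color="r", label="below threshold")
plt.xlabel("time (s)")
plt.ylabel("height (m)")
plt.legend(loc="upper right")
plt.tight_layout()
plt.show()
```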

Completed Research Projects

See associated papers under Publications

Optimizing placement of commodity depth cameras for known 3D dynamic scene capture (2015-2016)

Co-investigators: Rohan Chabra, Adrian Ilie

Project summary: Reconstructing surgeries using an array of Kinects around the surgery room

My involvement: Making the synthetic data with Blender, making the realtime implementation and synthetic ground truth with Unity, making the video, and playing a nurse who gets reconstructed

Effects of virtual acoustics on dynamic auditory distance perception (2017)

Co-investigators: Atul Rungta, Roberta Klatzky, Ming Lin

Project summary: User studies on how auditory distance perception changes with 3D audio, as in VR

My involvement: Helping with user study, making figures in Unreal 4

Supporting free walking in a large virtual environment: imperceptible redirected walking with an immersive distractor (2016-2017)

Co-investigators: Haiwei Chen

Project summary: Getting RDW to work in small tracking areas, like those of commercial VR setups, using exciting distractors

My involvement: Helping with user study and making video

Glass half full: sound synthesis for fluid–structure coupling using added mass operator (2017)

Co-investigators: Justin Wilson, Auston Sterling

Project summary: Using physically-based modelling and sound synthesis to let users play music by hitting virtual glasses filled with various amounts of liquid in specific ways

My involvement: Making the entire realtime implementation in Unreal 4 and the video

Diffraction Kernels for Interactive Sound Propagation in Dynamic Environments (2017-2018)

Co-investigators: Atul Rungta, Carl Schissler

Project summary: Efficiently simulating diffraction using diffraction kernels: precomputed files that describe how propagated sound will be distorted based on the positions of the listener and sound source around a 3D object (a toy sketch of the lookup idea is below).

My involvement: Entire realtime implementation in Unreal 4, including 3 demos with predefined paths, a modified version of Oculus' First Contact demo, and a new implementation of Oculus' Toybox demo
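
As an illustration of the kernel idea (not the paper's actual data structures or math), here is a toy Python lookup: a table precomputed per object, indexed by the directions of the source and listener around it, and queried at runtime.

```python
# Toy diffraction-kernel lookup. A real kernel would be computed offline per
# object (and per frequency band); here the table contents and resolution are
# made up purely to show the runtime lookup pattern.
import numpy as np

N_BINS = 36  # 10-degree direction bins around the object

# Pretend precomputed kernel: attenuation (0..1) for each (source_bin, listener_bin).
kernel = np.random.default_rng(0).uniform(0.2, 1.0, size=(N_BINS, N_BINS))

def angle_to_bin(angle_rad):
    """Map an angle around the object (radians) to a table bin."""
    return int((angle_rad % (2 * np.pi)) / (2 * np.pi) * N_BINS) % N_BINS

def diffraction_gain(source_angle, listener_angle):
    """Look up how much the propagated sound is attenuated at runtime."""
    return kernel[angle_to_bin(source_angle), angle_to_bin(listener_angle)]

if __name__ == "__main__":
    print(diffraction_gain(np.pi / 4, 3 * np.pi / 2))
```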

Effects of virtual acoustics on target-word identification performance in multi-talker environments (2018)

Co-investigators: Atul Rungta, Carl Schissler

Project summary: Seeing if the cocktail party effect exists in VR

My involvement: Making the entire realtime implementation in Unity, the video, and some figures

Towards Fully Mobile 3D Face, Body, and Environment Capture Using Only Head-worn Cameras (2018)

Co-investigators: True Price, Zhen Wei, Xinran Lu, Rohan Chabra, Adrian Ilie, Andrei State, Zhenlin Xu

Project summary: Initial implementation of an egocentric display that captures information about the user (body and face) and the environment so that both can be reconstructed and sent to a different display for telepresence

My involvement: Making the realtime implementation using Unreal 4, Maya, and Alembic; making the video

P-Reverb: Perceptual Characterization of Early and Late Reflections for Auditory Displays (2018-2019)

Co-investigators: Atul Rungta

Project summary: Optimizing reverb so that it's suitable for mobile devices

My involvement: Making the figures, the video, the realtime implementation in Unreal 4, and multiple tools that compute valid sample points for a scene given certain parameters (see the paper)

Audio-Material Reconstruction for Virtualized Reality Using a Probabilistic Damping Model (2019)

Co-investigators: Auston Sterling

Project summary: Using parameters processed from collected material data to enable realtime sound synthesis when users hit objects in a 3D scene (a rough synthesis sketch is below)

My involvement: Making the video and the realtime implementation in Unreal 4
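
As a rough illustration of the synthesis side (assumptions throughout; this is not the paper's implementation), an impact sound can be rendered as a sum of damped sinusoids whose frequencies, dampings, and amplitudes come from the reconstructed material data:

```python
# Modal-style synthesis sketch: render an impact as a sum of damped sinusoids.
# The mode list below is made up to loosely resemble a struck ceramic object.
import numpy as np

SAMPLE_RATE = 44100

def synthesize_impact(modes, duration=1.0):
    """modes: list of (frequency_hz, damping_per_sec, amplitude)."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    signal = np.zeros_like(t)
    for freq, damping, amp in modes:
        signal += amp * np.exp(-damping * t) * np.sin(2 * np.pi * freq * t)
    return signal / max(1e-9, np.max(np.abs(signal)))  # normalize to [-1, 1]

audio = synthesize_impact([(800, 6.0, 1.0), (1900, 9.0, 0.6), (3100, 14.0, 0.3)])
print(audio.shape)
```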

Generating Emotive Gaits for Virtual Agents Using Affect-Based Autoregression (2020)

Co-investigators: Uttaran Bhattacharya, Pooja Guhan, Niall L. Williams

Project summary: Teach a deep network to generate gait animations expressing different emotions, then test whether humans recognize the generated gaits as the intended emotion

My involvement: Making the realtime AR implementation, rigging characters, and making the 3D parts of the video

Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents

Co-investigators: Uttaran Bhattacharya, Abhishek Banerjee, Pooja Guhan

Project summary: Teach a deep network to create gesture animations expressing different emotions, given the text the character should be gesturing for.

My involvement: Rigging characters, making the 3D parts of the video, and rendering the synthetic output

Speech2AffectiveGestures: Synthesizing co-speech gestures with generative adversarial affective expression learning

Co-investigators: Uttaran Bhattacharya, Elizabeth Childs

Project summary: Using computer vision and ML to generate emotive animations for a character speaking a particular phrase whose content has varying emotional features

My involvement: Rigging characters, making the 3D parts of the video, and rendering the synthetic output

Echo-Reconstruction: Audio-Augmented Scene Reconstruction with Mobile Devices / Audio-Visual Depth and Material Estimation for Robot Navigation

Co-investigator: Justin Wilson

Project summary: Using ML methods and sound reverberation to reconstruct translucent objects like glass based on whether or not audio reflects off of them.

My involvement: Making the video, creating reconstructions, and making the realtime VR video (the bottom-left one is much older)

Course Projects

CodeQuest (COMP585 Serious Games) (2017)

Co-investigators: Jarrett Grimm, Justin Leonard, Diane Brauner, Diane Pozefsky

Project summary: Making an accessible iOS game that teaches visually impaired kids programming. It was extended by a later Serious Games class with my help as an LA

My involvement: Rewrote pretty much all of the previous team's code and wrote most of the new Swift code for the final implementation

AR Ghost Stories (COMP523 Software Development) (2018)

Co-investigators: Gabriel Timotei, Anna Reece, Brian Moynihan

Project summary: Teaching people about the Dorothea Dix hospital by displaying, in the HoloLens, a ghost that tells its story. The goal was for the user to walk through different rooms of the real hospital and experience different stories there. This was extended in a later Serious Games class with my help as an LA

My involvement: Wrote most of the code and made most of the realtime implementation in Unity. Designed the entire pipeline that takes a reconstruction from ItSeez3D and eventually produces an animated, lip-synced 3D model.

Website: http://comp523ghoststories.web.unc.edu/functional-spec/