Publications

As First Author

Small Marker Tracking with Low-Cost, Unsynchronized, Movable Consumer Cameras For Augmented Reality Surgical Training

  • Rewkowski, N., State, A., & Fuchs, H. Small Marker Tracking with Low-Cost, Unsynchronized, Movable Consumer Cameras For Augmented Reality Surgical Training. In ISMAR 2020 Adjunct Proceedings.

  • Goal: Tracking objects in motion usually requires synchronized cameras, which most consumer cameras do not support; research cameras like PointGreys are very expensive, and cheaper alternatives like PiCams make it hard to get images off the device. This project addresses all of these problems at once by using cheap $7-10 cameras that are "synchronized" in software through careful threading (see the capture-threading sketch after this list). Two of the cameras are attached to the moving manipulators themselves to deal with occlusion, which additionally requires ViveTrackers.

  • Current progress: We can track Vive devices in HoloLens space, and the entire pipeline is complete, though it still rests on some naive assumptions that should eventually be corrected and some interpolation that still needs to be implemented. There is also some delay that needs to be reduced for better realtime performance.

  • Technology I'm using: Unity with C# & Vuforia for marker tracking, UE4 with Blueprint for ViveTrackers & pooling, and HoloLens/Unity/Vuforia/C# for the application.

  • Documentation for replicability & source code will be provided after publication.
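
A minimal sketch of the software-"synchronization" approach described in the Goal bullet, assuming a .NET environment: each camera gets its own capture thread that timestamps frames on arrival, and a matcher pairs frames whose timestamps fall within a tolerance window. `GrabFrame` is a hypothetical stand-in for the real camera API; the camera count, resolution, and tolerance are illustrative assumptions, not values from the paper.

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading;

// One stamped frame from one camera.
record StampedFrame(int CameraId, long TimestampMs, byte[] Pixels);

class SoftSync
{
    // One queue per camera; four cameras assumed for illustration.
    static readonly ConcurrentQueue<StampedFrame>[] Queues =
        { new(), new(), new(), new() };
    static readonly Stopwatch Clock = Stopwatch.StartNew();

    // Hypothetical capture call; swap in the actual camera SDK read here.
    static byte[] GrabFrame(int cameraId) => new byte[640 * 480 * 3];

    // Each camera thread blocks on capture and stamps frames on arrival.
    static void CaptureLoop(int cameraId)
    {
        while (true)
        {
            byte[] pixels = GrabFrame(cameraId);
            long t = Clock.ElapsedMilliseconds;
            Queues[cameraId].Enqueue(new StampedFrame(cameraId, t, pixels));
        }
    }

    static void Main()
    {
        for (int id = 0; id < Queues.Length; id++)
        {
            int captured = id; // avoid capturing the loop variable
            new Thread(() => CaptureLoop(captured)) { IsBackground = true }.Start();
        }

        const long toleranceMs = 16; // ~one frame at 60 fps, assumed
        for (int iter = 0; iter < 1000; iter++)
        {
            if (Queues[0].TryDequeue(out var reference))
            {
                // Drop frames from the other cameras that are too old to pair
                // with the reference; the survivors form a "synchronized" set.
                for (int q = 1; q < Queues.Length; q++)
                    while (Queues[q].TryPeek(out var f) &&
                           reference.TimestampMs - f.TimestampMs > toleranceMs)
                        Queues[q].TryDequeue(out _);
            }
            Thread.Sleep(1);
        }
    }
}
```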

Walk a Robot Dog in VR!

  • Rewkowski, N., & Lin, M. (2020). Walk a Robot Dog in VR!. In ACM SIGGRAPH 2020 Immersive Pavilion (pp. 1-2).

Exploring Automated Evaluation of Virtual Locomotion with Simulated Users (IN REVIEW)

  • We want to make it easier to evaluate small changes in locomotion methods by building a simulated user, based on real user behavior, that can reasonably estimate how a new method would affect a real user's performance. We test a velocity-based model, in which the user sets their velocity to the value needed to reach a waypoint, and an acceleration-based model, in which the user accelerates and decelerates toward the waypoint instead (sketched below). We find that the acceleration model matches how real users behaved in the same user study much more closely.
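
A hedged sketch of the two simulated-user models, reduced to 1D for brevity; the step size, walking speed, and acceleration limit are illustrative assumptions, not the paper's parameters.

```csharp
using System;

class SimulatedUser
{
    const float Dt = 1f / 90f;    // simulation step (s), assumed
    const float MaxSpeed = 1.4f;  // typical walking speed (m/s), assumed
    const float MaxAccel = 0.8f;  // acceleration limit (m/s^2), assumed

    // Velocity-based: velocity jumps directly to whatever reaches the waypoint.
    static float StepVelocityModel(float pos, float waypoint)
    {
        float v = Math.Clamp((waypoint - pos) / Dt, -MaxSpeed, MaxSpeed);
        return pos + v * Dt;
    }

    // Acceleration-based: ramp speed up toward the waypoint, brake near it.
    static (float pos, float vel) StepAccelModel(float pos, float vel, float waypoint)
    {
        float toGo = waypoint - pos;
        // Distance needed to stop from the current speed: v^2 / (2a).
        float stopDist = vel * vel / (2f * MaxAccel);
        float accel = Math.Abs(toGo) > stopDist
            ? MaxAccel * Math.Sign(toGo)   // speed up toward the waypoint
            : -MaxAccel * Math.Sign(vel);  // brake before overshooting
        vel = Math.Clamp(vel + accel * Dt, -MaxSpeed, MaxSpeed);
        return (pos + vel * Dt, vel);
    }

    static void Main()
    {
        float pA = 0f, pV = 0f, vel = 0f;
        for (int i = 0; i < 900; i++)   // 10 simulated seconds
        {
            pV = StepVelocityModel(pV, 5f);
            (pA, vel) = StepAccelModel(pA, vel, 5f);
        }
        Console.WriteLine($"velocity model: {pV:F2} m, accel model: {pA:F2} m");
    }
}
```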

Evaluating the Effectiveness of Redirected Walking with Auditory Distractors for Navigation in Virtual Environments

  • Rewkowski, N., Rungta, A., Whitton, M. & Lin, M. (2019). Evaluating the Effectiveness of Redirected Walking with Auditory Distractors for Navigation in Virtual Environments. In Virtual Reality (VR), 2019 IEEE. IEEE.

  • I built a redirected walking system that allows hearing-impaired users to navigate a 3D environment using bimodal cues without noticing rotational distortion (sketched below). I completed a user study with 60 subjects and performed extensive data analysis and visualization. Built in Unreal 4 with the Vive and Leap Motion.
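
A minimal sketch of the rotational-distortion mechanism behind redirected walking, not the study's actual controller: the virtual camera turns by a gained version of the user's real head rotation, with the gain kept inside a band users are assumed not to notice. The gain values here are placeholders.

```csharp
using System;

class RedirectedWalking
{
    // Rotation gains often treated as imperceptible fall roughly in this band;
    // treat these numbers as placeholder assumptions.
    const float MinGain = 0.8f;
    const float MaxGain = 1.2f;

    // realHeadDeltaDeg: how far the user actually turned this frame.
    // steerSign: +1 to inject rotation one way, -1 the other.
    static float VirtualRotation(float realHeadDeltaDeg, int steerSign)
    {
        float gain = steerSign > 0 ? MaxGain : MinGain;
        return realHeadDeltaDeg * gain; // applied to the virtual camera yaw
    }

    static void Main()
    {
        float virtualYaw = 0f, realYaw = 0f;
        for (int frame = 0; frame < 90; frame++) // user turns 1 deg per frame
        {
            realYaw += 1f;
            virtualYaw += VirtualRotation(1f, +1);
        }
        Console.WriteLine($"real: {realYaw} deg, virtual: {virtualYaw} deg, " +
                          $"injected: {virtualYaw - realYaw} deg");
    }
}
```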

"A Walk to the Park": Virtual Locomotion with Robot-Guided Haptics and Persistent Distractors (IN REVIEW)

  • We want to allow free roaming in an arbitrarily large VE using RDW together with a robot distractor that provides haptics to the user and naturally restricts their motion. We propose and evaluate changes to current distractor-based RDW that significantly improve user performance and should make our robot dog project feasible. We use a randomized synthetic user to test the design and find that our methods work well.

A Future for Hermits

  • Carolina Scientific, Fall 2015

  • An old article I wrote about how VR was showing potential but lacked the funding it needed. The state of the industry has changed significantly since then...

A New Age of Accessibility

  • Carolina Scientific, Spring 2016

  • An old article I wrote about Gary Bishop's work on accessible technologies and Maze Day, a day on which kids with various disabilities come to the UNC CS department and test new technology.

As Co-Author

Optimizing placement of commodity depth cameras for known 3D dynamic scene capture

  • Chabra, R., Ilie, A., Rewkowski, N., Cha, Y. W., & Fuchs, H. (2017, March). Optimizing placement of commodity depth cameras for known 3D dynamic scene capture. In Virtual Reality (VR), 2017 IEEE (pp. 157-166). IEEE.

  • This work proposes and evaluates an algorithm that decides the optimal placement of an array of Kinect sensors to achieve the best possible 3D reconstruction of a dynamic scene (see the sketch after this entry).

  • Using Blender and Unity, I built the 3D animated bodies for the nurse, doctor, and patient that served as synthetic validation for the proposed optimization algorithm.
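
Purely illustrative: a greedy stand-in for the camera-placement problem the paper optimizes (the paper's actual algorithm is more sophisticated). Candidate poses are scored by how many scene sample points they would cover, and cameras are placed one at a time wherever the most new coverage is gained; the visibility test here is a random placeholder.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class GreedyPlacement
{
    static void Main()
    {
        var rng = new Random(0);
        // Hypothetical inputs: scene sample points and candidate camera poses;
        // covers[c, p] stands in for a real visibility test against the scene.
        int nPoints = 500, nCandidates = 40, camerasToPlace = 4;
        var covers = new bool[nCandidates, nPoints];
        for (int c = 0; c < nCandidates; c++)
            for (int p = 0; p < nPoints; p++)
                covers[c, p] = rng.NextDouble() < 0.15; // random placeholder

        var covered = new bool[nPoints];
        var chosen = new List<int>();
        for (int k = 0; k < camerasToPlace; k++)
        {
            int best = -1, bestGain = -1;
            for (int c = 0; c < nCandidates; c++)
            {
                if (chosen.Contains(c)) continue;
                // Count only points this candidate would newly cover.
                int gain = Enumerable.Range(0, nPoints)
                                     .Count(p => covers[c, p] && !covered[p]);
                if (gain > bestGain) { bestGain = gain; best = c; }
            }
            chosen.Add(best);
            for (int p = 0; p < nPoints; p++) covered[p] |= covers[best, p];
        }
        Console.WriteLine($"placed cameras: {string.Join(", ", chosen)}; " +
                          $"covered {covered.Count(x => x)}/{nPoints} points");
    }
}
```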

Effects of virtual acoustics on dynamic auditory distance perception

  • Rungta, A., Rewkowski, N., Klatzky, R., Lin, M., & Manocha, D. (2017). Effects of virtual acoustics on dynamic auditory distance perception. The Journal of the Acoustical Society of America, 141(4), EL427-EL432.

  • This work evaluates how virtual acoustic rendering affects listeners' perception of the distance to moving sound sources.

Glass half full: sound synthesis for fluid–structure coupling using added mass operator

  • Wilson, J., Sterling, A., Rewkowski, N., & Lin, M. C. (2017). Glass half full: sound synthesis for fluid–structure coupling using added mass operator. The Visual Computer, 33(6-8), 1039-1048.

  • This work uses physically-based modeling to synthesize the sounds of objects containing various amounts and types of liquid, allowing musical notes to be played using only synthetic audio (see the toy example after this entry).

  • I worked on the realtime UE4 implementation and UX design of the demo

  • website: http://gamma.cs.unc.edu/GlassHalfFull/
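
A back-of-the-envelope illustration of the added-mass effect named in the title, not the paper's operator: for a simple resonator with angular frequency sqrt(k/m), coupling extra fluid mass lowers the pitch by a factor of sqrt(m / (m + m_added)). All numbers below are assumed.

```csharp
using System;

class AddedMassToy
{
    static void Main()
    {
        double baseFreqHz = 880.0; // empty-glass mode frequency, assumed
        double modalMass = 0.010;  // effective modal mass in kg, assumed
        // More coupled fluid mass -> lower resonant frequency -> lower note.
        foreach (double added in new[] { 0.0, 0.005, 0.010, 0.020 })
        {
            double f = baseFreqHz * Math.Sqrt(modalMass / (modalMass + added));
            Console.WriteLine($"added mass {added:F3} kg -> mode at {f:F1} Hz");
        }
    }
}
```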

Diffraction Kernels for Interactive Sound Propagation in Dynamic Environments

  • Rungta, A., Schissler, C., Rewkowski, N., Mehra, R., & Manocha, D. (2018). Diffraction Kernels for Interactive Sound Propagation in Dynamic Environments. IEEE transactions on visualization and computer graphics, 24(4), 1613-1622.

  • This work proposes a precomputation algorithm that makes diffraction usable in realtime audio scenarios through diffraction kernels, which characterize an audio obstacle from all possible angles while exploiting symmetry and other properties of the object (see the toy example after this entry).

  • I built all three non-realtime demos, modified Oculus demos to work with our improved UE4 audio engine, and made most of the figures and the video.

  • website: http://gamma.cs.unc.edu/diffractionkernel/
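
A toy illustration of the precompute-then-look-up idea described above, not the paper's actual kernel construction: attenuation around an obstacle is tabulated offline over discretized source and listener angles, so the runtime query is a cheap table read. The angular resolution and the attenuation formula are stand-ins.

```csharp
using System;

class DiffractionTableToy
{
    const int Bins = 36; // 10-degree angular resolution, assumed
    static readonly float[,] Kernel = new float[Bins, Bins];

    // Offline step: fill the table once per obstacle shape.
    static void Precompute()
    {
        for (int s = 0; s < Bins; s++)
            for (int l = 0; l < Bins; l++)
            {
                // Stand-in cost: more attenuation the farther sound must bend
                // around the obstacle (shortest angular path, wrapping at 360).
                int bend = Math.Min(Math.Abs(s - l), Bins - Math.Abs(s - l));
                Kernel[s, l] = 1f / (1f + 0.3f * bend);
            }
    }

    // Runtime query is just a table read, cheap enough for interactive audio.
    static float Attenuation(float srcAngleDeg, float lisAngleDeg) =>
        Kernel[(int)(srcAngleDeg / 10f) % Bins, (int)(lisAngleDeg / 10f) % Bins];

    static void Main()
    {
        Precompute();
        Console.WriteLine($"attenuation at (0, 180): {Attenuation(0f, 180f):F3}");
    }
}
```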

Effects of virtual acoustics on target-word identification performance in multi-talker environments

  • Rungta, A., Rewkowski, N., Schissler, C., Robinson, P., Mehra, R., & Manocha, D. (2018, August). Effects of virtual acoustics on target-word identification performance in multi-talker environments. In Proceedings of the 15th ACM Symposium on Applied Perception (p. 16). ACM.

  • This work evaluates the reproducibility of the cocktail party effect in synthetic environments and conducts a user study establishing the limits of a listener's ability to pick out specific sounds in a crowd.

  • I worked on the study design, the 3D user study environment (Unity), figures, and video.

  • website: http://gamma.cs.unc.edu/ckp/

Towards Fully Mobile 3D Face, Body, and Environment Capture Using Only Head-worn Cameras

  • Cha, Y. W., Price, T., Wei, Z., Lu, X., Rewkowski, N., Chabra, R., ... & Ilie, A. (2018). Towards Fully Mobile 3D Face, Body, and Environment Capture Using Only Head-worn Cameras. IEEE transactions on visualization and computer graphics.

  • This work proposes and implements a head-worn camera rig capable of reproducing the wearer’s bodily actions, including limb movement and facial expressions.

  • I worked on the realtime implementation of the system output as an Alembic animation in UE4

Audio-Material Reconstruction for Virtualized Reality Using a Probabilistic Damping Model

  • Sterling, A., Rewkowski, N., Klatzky, R. & Lin, M. (2019). Audio-Material Reconstruction for Virtualized Reality Using a Probabilistic Damping Model. IEEE transactions on visualization and computer graphics.

  • I built the realtime implementation in UE4, including all demos, and made the video.

P-Reverb: Perceptual Characterization of Early and Late Reflections for Auditory Displays

  • Rungta, A., Rewkowski, N., Klatzky, R. & Lin, M. (2019). P-Reverb: Perceptual Characterization of Early and Late Reflections for Auditory Displays. In Virtual Reality (VR), 2019 IEEE. IEEE.

  • I worked on integrating our in-house reverberation engine, the realtime system built in Unity, and the figures. I also built tools in Unreal 4 and Blender that extract valid sample points used for computing reverb parameters.

Generating Emotive Gaits for Virtual Agents Using Affect-Based Autoregression

  • Bhattacharya, U., Rewkowski, N., Guhan, P., Williams, N. L., Mittal, T., Bera, A., & Manocha, D. (2020). Generating Emotive Gaits for Virtual Agents Using Affect-Based Autoregression. In ISMAR 2020 (arXiv preprint arXiv:2010.01615).

  • I rigged the characters to work with the generated animations, rendered most of the 3D videos, and made the realtime AR implementation on the HoloLens.

Acknowledged

Supporting free walking in a large virtual environment: imperceptible redirected walking with an immersive distractor

  • Chen, H., & Fuchs, H. (2017, June). Supporting free walking in a large virtual environment: imperceptible redirected walking with an immersive distractor. In Proceedings of the Computer Graphics International Conference (p. 22). ACM.

  • Helped with user study and video

MPTC: video rendering for virtual screens using compressed textures

  • Pratapa, S., Krajcevski, P., & Manocha, D. (2017, February). MPTC: video rendering for virtual screens using compressed textures. In Proceedings of the 21st ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (p. 14). ACM.

  • Made video

  • website: http://gamma.cs.unc.edu/MPTC/