To enhance the realistic appearance of virtual environments, we project video onto a 3D model, a technique known as Video Flashlight. Our work advances this concept by replacing the traditional 3D polygonal model with a LIDAR-scanned point cloud, unlocking new possibilities while presenting unique, largely unexplored challenges.

To investigate these challenges, we employ compute-shader rasterization, enabling real-time rendering of massive point clouds (~0.5 billion points) while simultaneously decoding multiple video streams, projecting them onto pixel-sized points, and algorithmically resolving conflicts between overlapping projections.
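To illustrate the principle behind such a rasterizer, the following is a minimal CPU sketch, not the GPU implementation described above: each point is projected into the image and a per-pixel depth test keeps the nearest point, the role a compute shader's atomic depth update (e.g. `imageAtomicMin`) plays on the GPU. The function name and conventions here are hypothetical.

```python
import numpy as np

def rasterize_points(points, colors, view_proj, width, height):
    """CPU analogue of a compute-shader point rasterizer: project each
    point to a single pixel and keep the nearest one via a depth test
    (the GPU equivalent would use an atomic min on a depth image)."""
    depth = np.full((height, width), np.inf, dtype=np.float32)
    image = np.zeros((height, width, 3), dtype=np.float32)

    # Project to clip space (points is Nx3, view_proj is a 4x4 matrix).
    ones = np.ones((points.shape[0], 1), dtype=np.float32)
    clip = np.hstack([points, ones]) @ view_proj.T
    w = clip[:, 3]
    valid = w > 1e-6
    ndc = clip[valid, :3] / w[valid, None]
    cols = colors[valid]

    # NDC -> integer pixel coordinates.
    x = ((ndc[:, 0] * 0.5 + 0.5) * width).astype(int)
    y = ((0.5 - ndc[:, 1] * 0.5) * height).astype(int)
    z = ndc[:, 2]
    inside = (x >= 0) & (x < width) & (y >= 0) & (y < height)

    for xi, yi, zi, ci in zip(x[inside], y[inside], z[inside], cols[inside]):
        if zi < depth[yi, xi]:  # keep only the nearest point per pixel
            depth[yi, xi] = zi
            image[yi, xi] = ci
    return image, depth
```

On the GPU this loop becomes one shader invocation per point, with the depth test performed atomically so that billions of points can be resolved in parallel each frame.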

Our application integrates real-world CCTV footage from a sports arena’s surveillance system with a LIDAR scan of the same arena, creating a dynamic, immersive representation of live events. We address several complex challenges, including conflicting viewpoints, camera alignment, projection bleeding through floors, video-mosaic blending, and seam correction. Each solution is designed to balance three goals: maintaining scene accuracy, ensuring aesthetic realism, and sustaining a real-time frame rate for practical, live applications.
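One of the challenges above, projection through floors, can be sketched with standard projective texturing plus a shadow-map-style visibility test: a point only receives video color if its depth matches the nearest surface seen by that camera. This is a generic sketch of the technique, not the specific algorithm used in our system; the function name, depth-bias value, and conventions are illustrative.

```python
import numpy as np

def project_video_onto_points(points, cam_view_proj, video_frame,
                              cam_depth, bias=1e-3):
    """Projective texturing with an occlusion test: each point is
    projected into the camera image; it receives video color only if
    its depth is within `bias` of the camera's nearest-surface depth,
    which prevents the projection from bleeding through floors."""
    h, w = cam_depth.shape
    ones = np.ones((points.shape[0], 1), dtype=np.float32)
    clip = np.hstack([points, ones]) @ cam_view_proj.T
    zw = clip[:, 3]
    ndc = clip[:, :3] / np.where(zw[:, None] > 1e-6, zw[:, None], np.inf)

    # NDC -> pixel coordinates in the camera's video frame.
    u = ((ndc[:, 0] * 0.5 + 0.5) * w).astype(int)
    v = ((0.5 - ndc[:, 1] * 0.5) * h).astype(int)
    inside = (zw > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    colors = np.zeros((points.shape[0], 3), dtype=np.float32)
    mask = np.zeros(points.shape[0], dtype=bool)
    ui, vi = u[inside], v[inside]
    visible = ndc[inside, 2] <= cam_depth[vi, ui] + bias
    idx = np.flatnonzero(inside)[visible]
    colors[idx] = video_frame[vi[visible], ui[visible]]
    mask[idx] = True
    return colors, mask
```

Where several cameras pass this test for the same point, the per-camera colors must then be blended and seam-corrected, as described above.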

This technique has broad potential applications in security monitoring, virtual reconstruction, immersive event visualization, and digital twin development, offering new ways to merge real-world video with high-fidelity 3D environments.