Drones are transforming military defense more than almost any other recent technology. They keep getting smaller, cheaper, and easier to deploy; but as deployments grow, analyzing their data becomes increasingly challenging. Turning a wall of 20 or more simultaneous video feeds into a coherent operational picture is extremely difficult, and without that situational awareness commanders cannot lead effectively. The goal of the VARLab’s drone project is to replace the wall of video feeds with an integrated digital twin of the real-world environment the drones observe.

Here’s how: a 3D model is created through LIDAR scans or photogrammetric reconstruction, and is then updated in real time from the drones’ video feeds. We explore different levels of detail for these updates, driven by video alone and/or by full 3D reconstruction.

Because the visual representation is updated programmatically, we can also use it to solve the second major problem of drone swarms: control. Drones typically require one pilot each, which does not scale to large swarms. In our approach, we track when each part of the digital twin was last updated and automatically task drones to refresh stale regions, keeping the twin current. This allows an arbitrary number of drones to be controlled without manual intervention.

We demonstrate the feasibility of these approaches in a simulated environment, using virtual drones observing an animated virtual scene. The results show the potential of the proposed methods, which could significantly improve the safety, efficiency, and reliability of physical systems and applications in military operations, disaster recovery, search and rescue, safety and emergency training, and other areas.
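The staleness-driven tasking described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the project’s actual implementation: it assumes the digital twin is partitioned into discrete regions, each carrying a last-update timestamp, and assigns each idle drone to one of the regions that has gone longest without an observation.

```python
from dataclasses import dataclass

# Hypothetical sketch: the digital twin is divided into regions, each with
# a timestamp of its most recent observation. Idle drones are tasked to
# the stalest regions so the twin stays up to date without manual piloting.

@dataclass
class Region:
    region_id: int
    last_updated: float  # time of the most recent observation of this region

@dataclass
class Twin:
    regions: list

    def stalest(self, n):
        """Return the n regions that have gone longest without an update."""
        return sorted(self.regions, key=lambda r: r.last_updated)[:n]

def assign_tasks(twin, idle_drones):
    """Pair each idle drone with one of the stalest regions."""
    targets = twin.stalest(len(idle_drones))
    return {drone: region.region_id for drone, region in zip(idle_drones, targets)}

# Usage: three regions, two idle drones.
twin = Twin([Region(0, 100.0), Region(1, 50.0), Region(2, 75.0)])
tasks = assign_tasks(twin, ["drone_a", "drone_b"])
# drone_a is sent to region 1 (stalest), drone_b to region 2
```

Because the scheduler only compares timestamps, adding more drones simply means handing more of the stalest regions out per cycle, which is what makes the approach scale to arbitrarily large swarms.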