Professional Overview
My name is Taoyuan Wang. I have 7+ years of industry experience in software development, specializing in autonomous systems. I hold both Master's and Bachelor's degrees in Computer Science.
Currently, I’m a backend engineer at Wing, an Alphabet moonshot from Google X. I work on the Uncrewed Traffic Management (UTM) team, where we are building a "Google Maps for the sky" that helps autonomous aircraft navigate safely. Our system determines when and where drones can fly, enabling a reliable and scalable delivery network.
I have been granted Google C++ readability and received a National Interest Waiver (NIW) in recognition of my contributions to Highly Automated, Autonomous, and Uncrewed Systems (UxS) and Robotics. My work focuses on building and optimizing the infrastructure that powers autonomous systems.
Previously, I was a full-stack developer at Appen, where I led the design and development of 3D point cloud annotation tools for autonomous vehicles. My work supported 3D visualization and multi-sensor fusion across LiDAR, radar, and camera data to produce large-scale, high-quality training datasets for machine learning.
Selected Projects
Wing OpenSky Operator Interfaces
The interfaces enable operators to oversee and control multiple aircraft safely and efficiently. Integrated with Google Maps and internal data sources, the systems provide real-time aircraft location tracking, flight path visualization, and compliance monitoring in accordance with guidelines from the FAA and other international regulators. The systems enable timely enforcement of four-dimensional (3D plus time) airspace restrictions, ensuring adherence to safety protocols.
I developed the backend components for OpenSky, working closely with product managers to translate requirements into design docs. I implemented the backend handlers and validators to ensure compliance with regulatory standards and operational safety. Additionally, I coordinated planning, task allocation, and testing efforts, contributing to robust UTM systems that significantly increased operator capacity and supported the scaling of autonomous delivery services.
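To make the restriction checks concrete, here is a minimal sketch of a four-dimensional containment test, assuming a restriction modeled as a 2D polygon with an altitude band and an active time window. All names and types are illustrative, not Wing's actual API, and a production system would use a geodesic geometry library rather than a flat ray-casting test.

```python
# Illustrative sketch of a 4D (3D + time) airspace restriction check.
# All types and names are hypothetical, not Wing's implementation.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Restriction4D:
    polygon: list          # (lat, lon) vertices of the restricted area
    min_alt_m: float       # lower altitude bound, meters
    max_alt_m: float       # upper altitude bound, meters
    start: datetime        # restriction becomes active
    end: datetime          # restriction expires

def point_in_polygon(lat, lon, polygon):
    """Ray-casting test: count crossings of a ray heading east."""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        if (lat1 > lat) != (lat2 > lat):
            # Longitude where this edge crosses the point's latitude
            cross = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < cross:
                inside = not inside
    return inside

def violates(r: Restriction4D, lat, lon, alt_m, at: datetime) -> bool:
    """A position violates the restriction only if it is inside the
    polygon, within the altitude band, and within the active window."""
    return (r.start <= at <= r.end
            and r.min_alt_m <= alt_m <= r.max_alt_m
            and point_in_polygon(lat, lon, r.polygon))
```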
Wing Operational Configuration Systems
The systems, built on Alphabet’s dynamic server configuration push infrastructure, empower Solution Managers to manage compliance-related flight rules for multinational operations. They enable automated enforcement of geographical flight restrictions and ensure that operators hold up-to-date certifications. They also replace our previous authentication modules, based on Google Zanzibar and Baggins, with role-based configurations and permission checks optimized for low-latency applications such as pilot-issued emergency landing commands.
I led the design and implementation of configuration schemas and validation mechanisms, preventing user errors while ensuring semantic correctness. Additionally, I integrated the systems as a module across multiple components to support rapid, reliable deployment of configuration updates. This work significantly enhanced the compliance, scalability, and operational efficiency of autonomous drone operations.
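As an illustration of the validation involved, the sketch below checks a hypothetical flight-rule configuration for semantic correctness before it is pushed; the schema and field names are invented for this example and do not reflect Wing's actual configuration format.

```python
# Hypothetical semantic validation for a configuration update;
# the schema and field names are illustrative only.
from dataclasses import dataclass
from datetime import date

KNOWN_REGIONS = {"US", "AU", "FI", "IE"}  # illustrative operating regions

@dataclass
class FlightRuleConfig:
    region: str            # region code the rule applies to
    max_altitude_m: float  # operational ceiling under this rule
    effective_from: date
    effective_until: date

def validate(config: FlightRuleConfig) -> list:
    """Return human-readable errors; an empty list means valid.
    Catching these before a config push keeps bad rules out of
    production systems."""
    errors = []
    if config.region not in KNOWN_REGIONS:
        errors.append(f"unknown region {config.region!r}")
    if config.max_altitude_m <= 0:
        errors.append("max_altitude_m must be positive")
    if config.effective_until <= config.effective_from:
        errors.append("effective_until must be after effective_from")
    return errors
```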
Wing Robot Coordinator by Gemini
A centralized LLM/VLA (vision-language-action) coordination system that allows users to control multiple drones using natural language, enabling them to perform general-purpose tasks without pre-programmed instructions. The system fuses real-time drone sensor data with object detection and tracking to build a rich, shared understanding of the environment. Natural language inputs are processed to contextualize human intent, which is then translated into executable drone tasks. An integrated 3D visualization and simulation interface supports intuitive human-AI-robot interaction, making it easier to test, monitor, and guide multi-agent behavior.
To accelerate development and testing, the system can also generate custom 3D environments from simple text descriptions, significantly reducing the engineering effort for scenario creation. All interactions, including user commands, AI reasoning, drone actions, and human feedback, are logged to build a comprehensive dataset for reinforcement learning, enabling continuous performance improvement. Designed for real-world deployment, the coordinator can interface with physical drones through low-latency control models, paving the way for advanced applications in autonomous delivery, disaster response, and multi-robot collaboration.
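A minimal sketch of the language-to-task step: the model's structured output is parsed and validated against an allow-list of actions before anything is dispatched to a drone. The task schema, action names, and hard-coded "model response" below are hypothetical stand-ins for the real LLM call and Wing's task types.

```python
# Illustrative sketch: validate a model's structured output before it
# becomes an executable drone task. Schema and values are hypothetical.
from __future__ import annotations
import json
from dataclasses import dataclass

ALLOWED_ACTIONS = {"goto", "inspect", "return_home"}

@dataclass
class DroneTask:
    drone_id: str
    action: str                          # must be in ALLOWED_ACTIONS
    target: tuple[float, float] | None   # (lat, lon) if the action needs one

def parse_task(model_output: str) -> DroneTask:
    """Reject unknown actions rather than executing free-form text."""
    raw = json.loads(model_output)
    if raw["action"] not in ALLOWED_ACTIONS:
        raise ValueError(f"unsupported action: {raw['action']}")
    target = tuple(raw["target"]) if raw.get("target") else None
    return DroneTask(raw["drone_id"], raw["action"], target)

# Stand-in for a response to "send drone 7 to inspect the north field"
response = '{"drone_id": "wing-07", "action": "inspect", "target": [37.42, -122.09]}'
print(parse_task(response))
```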
Appen AI-Assisted 3D Point Cloud Annotation Tools
The tools enhance the perception capabilities of autonomous systems by allowing precise labeling of 3D point cloud data from LiDAR and radar sensors. Features like semantic segmentation enable annotators to classify individual points, improving training data quality for ML models.
I led the full-stack development of these tools from prototype to production, integrating real-time AI-assisted annotation via point cloud clustering. This approach significantly improved efficiency and accuracy, reducing annotation time and costs for autonomous vehicle and robotics perception models.
Appen Sensor Fusion Experiments
Sensor fusion integrates data from LiDAR, radar, and cameras to enhance multimodal annotation, object detection, and tracking, which are crucial for self-driving cars and unmanned aircraft.
I identified the various camera models used in client autonomous systems, developed sensor fusion and visualization scripts, and ensured compatibility between client coordinate systems and our fusion algorithms. My work enabled real-time projection of 3D points into 2D image space, enhancing annotation accuracy and giving annotators a clearer understanding of the environment.
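At the core of that projection step is the standard pinhole camera model. The sketch below assumes the points are already in the camera coordinate frame and ignores lens distortion; a real pipeline would first apply the LiDAR-to-camera extrinsic transform and a distortion model, and the intrinsics here are illustrative values.

```python
# Pinhole projection of camera-frame 3D points to pixel coordinates.
# Intrinsics are illustrative; real pipelines also handle extrinsics
# and lens distortion.
import numpy as np

# Focal lengths (fx, fy) and principal point (cx, cy)
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

def project(points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    pts = points_cam[points_cam[:, 2] > 0]  # keep points in front of camera
    uvw = pts @ K.T                         # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]         # perspective divide

points = np.array([[1.0, 0.5, 10.0], [-2.0, 0.0, 5.0]])
print(project(points, K))  # (u, v) pixel location for each 3D point
```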
Key Achievements
Wing
I addressed the challenge of integrating autonomous drones into regulated airspace. I developed the configuration systems and the backend for OpenSky, a set of Google Maps-based operational interfaces that allow operators to monitor multiple drones efficiently. Previously, one operator could oversee only one aircraft, creating a bottleneck in scaling our service. My contributions met regulatory requirements from agencies such as the FAA and CASA, enhancing the scalability and safety of our drone delivery operations.
Appen
As the lead developer of the 3D point cloud annotation tools, I aimed to reduce annotation costs through machine learning. State-of-the-art object detection models initially required extensive manual labeling, which limited their effectiveness. To improve efficiency, I implemented a real-time AI-assisted interaction inspired by academic research: an annotator clicks on the point cloud, triggering DBSCAN clustering to generate a bounding box around the selected object. This approach improved annotation efficiency by 35%.
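A minimal sketch of that interaction, with illustrative parameters rather than production values: the click seeds DBSCAN on a cropped neighborhood of the cloud, and the extent of the resulting cluster becomes an axis-aligned bounding box.

```python
# Click-to-box sketch: cluster the points near a click with DBSCAN and
# return the cluster's bounding box. Parameters are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

def box_from_click(points: np.ndarray, click: np.ndarray,
                   radius: float = 5.0, eps: float = 0.5,
                   min_samples: int = 10):
    """Return (min_corner, max_corner) of the cluster under the click,
    or None if the clicked region is empty or classified as noise."""
    # Crop to a neighborhood so clustering stays interactive
    nearby = points[np.linalg.norm(points - click, axis=1) < radius]
    if len(nearby) < min_samples:
        return None
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(nearby)
    # Take the cluster containing the point closest to the click
    seed = np.argmin(np.linalg.norm(nearby - click, axis=1))
    if labels[seed] == -1:
        return None
    cluster = nearby[labels == labels[seed]]
    return cluster.min(axis=0), cluster.max(axis=0)
```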

