About me
I completed my Master of Engineering (Mechanical) at the University of Melbourne in 2016. In 2015 I worked as a research assistant in the Fluid Dynamics lab at Melbourne before completing an exchange semester at ETH Zurich. My final-year Master's thesis was a joint project between the University of Melbourne and The Northern Hospital, performing computational fluid dynamics studies on patient-specific coronary arteries under the supervision of Andrew Ooi.
In 2017 I commenced my PhD in Robotic Vision under the supervision of Rob Mahony and Nick Barnes at the ANU. My PhD focused on understanding and processing visual data from novel, biologically inspired sensors called 'event cameras'. Event cameras do not capture images using a shutter like conventional cameras; instead, each pixel responds to changes in brightness, reporting those changes as they occur and staying silent otherwise. As a result, event cameras are very fast and do not suffer from motion blur or under-/overexposure, enabling robots to see in a range of challenging conditions such as high speed and low light.
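The per-pixel behaviour described above is often modelled with a simple contrast threshold: a pixel fires an event whenever its log-brightness has changed by more than a fixed amount since that pixel's last event. Here is a minimal sketch of that idealized model (function and parameter names are my own, not from any particular library), simulating events from a sequence of intensity frames:

```python
import numpy as np

def generate_events(frames, timestamps, threshold=0.2):
    """Simulate an idealized event camera from intensity frames.

    A pixel fires an event whenever its log-brightness changes by more
    than `threshold` since that pixel's last event. Events are tuples
    (t, x, y, polarity): polarity +1 for brightening, -1 for dimming.
    """
    log_ref = np.log(frames[0] + 1e-6)  # per-pixel reference log-brightness
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_frame = np.log(frame + 1e-6)
        diff = log_frame - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)  # pixels that fire
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, int(x), int(y), polarity))
            log_ref[y, x] = log_frame[y, x]  # reset reference at fired pixel
    return events
```

A real sensor produces events asynchronously with microsecond resolution rather than at frame times, but this frame-based sketch captures the key idea: output is sparse and driven entirely by brightness change.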
In 2018 I moved to Zurich for a 12-month research visit at the University of Zurich's Robotics and Perception Group under the supervision of Davide Scaramuzza. While there, I researched image reconstruction, optical flow, and deep learning with event cameras.
In January 2021 I completed my PhD thesis, How to See with an Event Camera.
From 2020 to 2021 I worked on a short animated film, Finding X, supported by a Screen Australia Skip Ahead grant.
From 2021 to 2024 I worked as a senior deep learning engineer at Skydio. I was responsible for the entire training pipeline for obstacle avoidance: generating synthetic training data, collecting and annotating real-world training/evaluation data, data augmentation, training infrastructure, model architecture, loss, hyperparameters, evaluation/metrics, hard example mining, visualization, and validating and deploying new models. I often worked in a tiny team of 2 or 3 (sometimes just me!), so I had to pick up many different parts of a deep learning project. An example of a successful project was the night obstacle avoidance model, a long-standing dream for the company: the capability to fly and avoid obstacles autonomously in the dark. I was the key engineer who took the project from the initial idea to deploying a robust night model that now runs at the core of the NightSense technology. The project required careful testing and evaluation of night flight, an understanding of the various sensor modalities and visual artifacts that arise in night vision, specialized data augmentation, extensive hard example mining, annotation and fine-tuning, and many rigorous evaluation iterations to reach the bar for safety and robustness under adverse conditions.
After leaving Skydio, I created a fun open-source project for image segmentation of corals. My aim was to complete the project quickly, so while basic, it covers everything from data preprocessing (the input data was challenging: images ~1 GB in size), through training, to a dashboard for visualization.
In 2024 I started a new role at Zoox. I work on data mining: training models to extract only the most useful data from a large pool of available robotaxi fleet data, for use in training and evaluating models that predict the future behavior of surrounding vehicles, pedestrians, etc. Accurate predictions enable the robotaxi to plan the safest route for fully autonomous driving!
When I’m not doing research I enjoy playing underwater rugby!
Photo: Philipp Schmidli (Lucerne, 3 November 2018).