Road Sense

11.10.2025

For my EECS 473 (Advanced Embedded Systems) major design experience project, my team and I set out to build an embedded system that can automatically detect, classify, and report potholes while driving. Using a Jetson Orin Nano, a depth-sensing camera, GPS, and a custom PCB, we’ve been building a device that mounts on a car, runs a YOLO model in real time, tags each pothole with location data, and uploads everything to a web dashboard for visualization and analysis. It’s the most “real” embedded system I’ve worked on so far — hardware, software, computer vision, power, networking, all talking together in one pipeline.

The Idea

The whole project started with a simple constraint: cities shouldn’t rely on people manually reporting potholes. It’s slow, inconsistent, and misses a ton of road damage. Our idea was to automatically detect potholes using a combination of RGB images and depth data while a vehicle is driving normally. Every time a pothole is detected, the system records its GPS location, saves an image, estimates its severity, and uploads it to a central server. Over time, this would give municipalities a constantly updated map of road conditions — no human input required.

System Diagram

Our hardware setup is built around the NVIDIA Jetson Orin Nano, mounted inside a custom enclosure that sits on the front of the car. We pair it with a stereo depth camera — originally the Intel RealSense D435i, though we're now testing the Luxonis OAK-D Lite for its onboard VPU and built-in depth support. For location sensing and power management, we designed a custom PCB around a Raspberry Pi RP2350 microcontroller, a GPS module, and an IMU. One cool part: the MCU monitors the car's ignition line and automatically puts the Jetson into sleep mode when the engine turns off, saving power. We originally planned to use a UPS battery, but in Milestone 1 we switched to pulling 12 V directly from the car battery, which turned out to be simpler and more reliable.
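The ignition-monitoring idea boils down to a small debounced state machine: we don't want a momentary voltage dip on the ignition line to put the Jetson to sleep mid-drive. Here's a minimal Python sketch of that logic; the class name, `debounce_count` parameter, and sample-based interface are illustrative, not our actual RP2350 firmware.

```python
# Illustrative sketch of the ignition-line logic: require several
# consecutive "off" samples before signaling the Jetson to sleep,
# so brief voltage blips don't trigger a shutdown.

class IgnitionMonitor:
    def __init__(self, debounce_count=5):
        self.debounce_count = debounce_count  # consecutive off samples needed
        self._off_streak = 0
        self.jetson_awake = True

    def sample(self, ignition_high: bool) -> bool:
        """Feed one reading of the ignition line.

        Returns True while the Jetson should stay awake.
        """
        if ignition_high:
            # Any "on" reading resets the streak and wakes the Jetson.
            self._off_streak = 0
            self.jetson_awake = True
        else:
            self._off_streak += 1
            if self._off_streak >= self.debounce_count:
                self.jetson_awake = False
        return self.jetson_awake
```

On the real board this would be a GPIO read inside a timer interrupt, but the debounce structure is the same.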

(Figure: hardware setup)

Software Pipeline

Our detection pipeline runs a YOLOv8n model that we trained on a dataset of roughly 1,500 pothole images (plus about 400 validation images). The model runs on the Jetson in real time as the vehicle moves, outputting bounding boxes for each pothole. We pair it with ByteTrack so that each pothole is logged only once instead of multiple times as the car drives past. Right now we're detecting potholes off the RGB feed only, but next we're pulling in stereo depth data so we can estimate the actual depth and size of each pothole. That's a huge part of our final goal: letting cities prioritize repairs by severity, not just presence.
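The "log each pothole once" part falls out naturally from the persistent track IDs ByteTrack assigns across frames: we just record the first frame in which each ID appears. A minimal sketch of that deduplication step, with a made-up `(track_id, bbox)` tuple format standing in for the real detection objects:

```python
# Illustrative once-per-track logging on top of ByteTrack IDs.
# ByteTrack gives each pothole a persistent track ID across frames;
# we keep only the first observation of each ID.

def dedupe_detections(frames):
    """frames: iterable of per-frame lists of (track_id, bbox) tuples.

    Returns one logged detection per unique track ID.
    """
    seen = set()
    logged = []
    for detections in frames:
        for track_id, bbox in detections:
            if track_id not in seen:
                seen.add(track_id)
                logged.append((track_id, bbox))
    return logged
```

(With the ultralytics package, tracking itself can be enabled via `model.track(..., tracker="bytetrack.yaml")`; the dedup bookkeeping above sits downstream of that.)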

Web Dashboard

To make everything visual and user-friendly, we built a Node.js backend + SQLite database and a lightweight web dashboard. When the Jetson reconnects to Wi-Fi (like pulling into a city garage), it automatically sends its saved detections to the server with POST requests. The dashboard shows:

  • A map with pins for every detected pothole
  • A table of pothole images, timestamps, GPS locations
  • Filters and sorting for analysis

During testing, we successfully transmitted pothole data from the Jetson to the server over a mobile hotspot, and the dashboard updated in seconds.
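Because the car spends most of its time offline, the upload step is really a store-and-forward queue: detections accumulate locally and get flushed when connectivity returns, with failures kept for the next attempt. Here's a small sketch of that pattern; the payload field names and the injectable `post` callable are placeholders, not our real server API.

```python
import json

# Hedged sketch of the store-and-forward upload: detections queue up
# locally and are flushed to the server once Wi-Fi comes back.

def flush_queue(queue, post):
    """POST each queued detection; keep any that fail for the next try.

    `post` is any callable(payload_json) -> bool indicating success,
    so the network layer (requests, urllib, ...) stays swappable.
    """
    remaining = []
    for det in queue:
        payload = json.dumps({
            "lat": det["lat"],
            "lon": det["lon"],
            "timestamp": det["timestamp"],
            "image": det["image_path"],
        })
        if not post(payload):
            remaining.append(det)  # retry on the next flush
    return remaining
```

Injecting `post` also makes the flush logic trivially testable without a live server, which matters when most development happens away from the car.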

(Figure: web dashboard)

Field Testing

By Milestone 2, we had the whole camera system mounted on a real car and spent several hours driving around Ann Arbor streets in both rain and clear weather. The mount held perfectly, the electronics stayed stable, and the detection model worked better than we expected for an early version. It definitely missed some potholes and sometimes mistook shadows for defects, but after retraining with different epoch counts and batch sizes, accuracy started improving. Our next big integration step is merging detections with depth data — the part that actually lets us estimate real pothole severity.

(Figure: system diagram)

What's Next

The next phase of the project focuses on:

  • Integrating stereo depth to compute pothole depth and width
  • Improving model accuracy through more training and augmentation
  • Polishing the data pipeline so uploads are automatic and reliable
  • Adding severity classification so the dashboard can sort potholes by priority
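Once depth and width estimates exist, severity classification can start as simple threshold bucketing. The sketch below shows the shape of that idea; the cutoff values (in cm) and the three-tier labels are pure placeholders that real field calibration would replace.

```python
# Illustrative severity bucketing from stereo depth measurements.
# Thresholds are placeholders, not calibrated values.

def classify_severity(depth_cm: float, width_cm: float) -> str:
    """Map a pothole's estimated depth and width to a repair priority."""
    if depth_cm >= 7 or width_cm >= 60:
        return "high"
    if depth_cm >= 3 or width_cm >= 30:
        return "medium"
    return "low"
```

Even a crude bucketing like this would let the dashboard sort the pothole table by priority rather than just recency.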

We’re also evaluating whether the OAK-D Lite can replace the RealSense entirely. If it performs well, it could simplify our hardware and shift more work off the Jetson. Overall, the project has been a deep dive into real-world embedded AI — balancing hardware constraints, networking issues, power delivery, and computer vision all at once. Easily one of the most complex (and fun) systems I’ve worked on.