# Setup Guide: Physical AI Edge Kit

Estimated Time: 3 hours | Difficulty: Advanced
## Overview

The Physical AI Edge Kit uses the NVIDIA Jetson Orin Nano for on-robot computation. Unlike the workstation and cloud setups, which process data off-robot, it runs all AI models directly on the robot, providing:
- Low latency: No network delays for real-time control
- Privacy: Data stays on-device
- Portability: Deploy robots without WiFi/cloud dependency
- Power efficiency: 5-15 W typical power draw
## Use Cases
- Mobile robots requiring real-time obstacle avoidance
- Autonomous drones with onboard perception
- Warehouse robots with edge AI inference
- Educational robotics kits (TurtleBot, DuckieBot)
## Hardware Requirements

### Core Components
| Item | Specification | Cost (USD) |
|---|---|---|
| Jetson Orin Nano Developer Kit | 8GB RAM, 1024 CUDA cores | $499 |
| microSD Card | 128GB UHS-I (for JetPack OS) | $20 |
| Power Supply | 19V 3.42A (65W) barrel jack | Included |
| USB-C Cable | For initial flashing | $10 |
| Cooling Fan | Active cooling (recommended) | $15 |
| WiFi Module (optional) | M.2 Key E slot for wireless | $25 |
| **Total** | | ~$550-$600 |
### Optional Sensors/Actuators
- Camera: Intel RealSense D435i ($200) for RGB-D perception
- LiDAR: RPLidar A1M8 ($100) for 2D mapping
- IMU: MPU6050 ($5) for motion sensing
- Motors: Dynamixel servos ($50-200 each) for manipulation
## Installation Steps

### Step 1: Flash JetPack 6.0
1. **Download the JetPack SD card image:**
   - Visit: https://developer.nvidia.com/embedded/jetpack
   - Select: JetPack 6.0 for Orin Nano
   - Download the `.img.zip` file (~8 GB)

2. **Flash to microSD:**

   ```bash
   # On your host computer (Linux/Windows/Mac).
   # Using balenaEtcher (recommended): https://www.balena.io/etcher/
   # OR using dd on Linux -- unzip the image first, and replace /dev/sdX
   # with your SD card's device (check with lsblk):
   sudo dd if=jetpack-6.0-orin-nano.img of=/dev/sdX bs=4M status=progress
   sudo sync
   ```

3. **Boot the Jetson:**
   - Insert the microSD into the Jetson Orin Nano
   - Connect a monitor (HDMI), keyboard, and mouse
   - Power on (barrel jack)
   - Complete the Ubuntu setup wizard
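Before flashing, it is worth verifying the download against the SHA-256 checksum NVIDIA publishes on the download page. A minimal sketch (the `sha256sum` helper is ours, written to stream the multi-gigabyte image rather than load it into RAM):

```python
import hashlib

def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a large file in 1 MiB chunks to keep memory use flat."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

# Usage on the host machine, comparing to NVIDIA's published hash:
#   print(sha256sum("jetpack-6.0-orin-nano.img.zip"))
```

If the printed hash does not match the one on the download page, re-download the image before flashing.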
### Step 2: Configure Network

**Option A: Ethernet (recommended for setup):**
- Connect ethernet cable
- DHCP should auto-configure
**Option B: WiFi (if M.2 WiFi module installed):**

```bash
# Scan for networks
nmcli device wifi list

# Connect
nmcli device wifi connect "SSID" password "PASSWORD"

# Verify connectivity
ping -c 3 google.com
```
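For robot startup scripts, a programmatic connectivity check can be handier than an interactive `ping`. A sketch of one approach (the `network_up` helper and its defaults of Google's public DNS on port 53 are illustrative, not part of the kit's software):

```python
import socket

def network_up(host: str = "8.8.8.8", port: int = 53, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to a well-known host succeeds."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

if __name__ == "__main__":
    print("network up:", network_up())
```

A launch script could call this before starting nodes that need a network, and fall back to offline behavior otherwise.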
### Step 3: Install ROS 2 Humble (ARM64)

```bash
# Update system
sudo apt update && sudo apt upgrade -y

# Add the ROS 2 apt key and repository (ARM64, Ubuntu 22.04 "jammy")
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key \
  -o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [arch=arm64 signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu jammy main" | sudo tee /etc/apt/sources.list.d/ros2.list

# Install ROS 2 Humble Base (lighter than Desktop for edge use)
sudo apt update
sudo apt install ros-humble-ros-base python3-colcon-common-extensions -y

# Source ROS 2 in every new shell
echo "source /opt/ros/humble/setup.bash" >> ~/.bashrc
source ~/.bashrc

# Verify
ros2 run demo_nodes_cpp talker
```
### Step 4: Enable GPIO and I2C

```bash
# Install the Jetson GPIO library
sudo pip3 install Jetson.GPIO

# Add your user to the gpio group
sudo groupadd -f -r gpio
sudo usermod -a -G gpio $USER

# Enable I2C tooling (for I2C sensors such as the MPU6050 IMU)
sudo apt install python3-smbus i2c-tools -y

# Scan I2C bus 1 for connected devices
sudo i2cdetect -y -r 1

# Reboot to apply the group change
sudo reboot
```
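The `i2cdetect` grid can also be parsed programmatically, for example in a launch script that refuses to start if a sensor is missing. A sketch, assuming the standard `i2cdetect -y -r` output format; the sample grid below is illustrative (an MPU6050 typically responds at its default address 0x68):

```python
def parse_i2cdetect(grid: str) -> list[int]:
    """Extract detected device addresses from `i2cdetect -y -r` output.

    Cells show the address in hex when a device responds, `--` when none
    does, and `UU` when a kernel driver already claims the address (UU
    entries are skipped here for simplicity).
    """
    found = []
    for line in grid.strip().splitlines()[1:]:  # skip the column header
        _, _, cells = line.partition(":")
        for cell in cells.split():
            if cell not in ("--", "UU"):
                found.append(int(cell, 16))
    return found

SAMPLE = """\
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- 68 -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --
"""

print(parse_i2cdetect(SAMPLE))  # the MPU6050's 0x68 is 104 in decimal
```

On the Jetson itself, the grid would come from `subprocess.run(["i2cdetect", "-y", "-r", "1"], ...)` rather than a hard-coded string.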
### Step 5: Install Lightweight Tools

```bash
# Minimal-footprint tools for edge robotics
sudo apt install -y \
  python3-pip \
  python3-opencv \
  nano \
  htop \
  can-utils \
  v4l-utils

# Python packages for robotics
pip3 install numpy scipy transforms3d

# Optional: TensorRT for optimized AI inference
sudo apt install nvidia-tensorrt -y
```
## Verification Checklist

- **Boot**: Jetson boots from the microSD and shows the Ubuntu desktop
- **Network**: `ping google.com` succeeds
- **ROS 2**: `ros2 run demo_nodes_cpp talker` runs
- **GPIO**: `sudo python3 -c "import Jetson.GPIO"` prints no errors
- **I2C**: `sudo i2cdetect -y -r 1` shows connected I2C devices
- **CUDA**: `nvcc --version` shows the CUDA toolkit version
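Part of this checklist can be scripted as a quick smoke test that reports which command-line tools are installed. A sketch (the `smoke_test` helper and its check list are ours; it does not replace the hands-on checks such as booting or the GPIO import):

```python
import shutil

# Commands the checklist expects on a fully provisioned Jetson.
CHECKS = {
    "ROS 2 CLI": "ros2",
    "I2C tools": "i2cdetect",
    "CUDA toolkit": "nvcc",
    "Video4Linux utils": "v4l2-ctl",
}

def smoke_test(checks: dict = CHECKS) -> list:
    """Print which checklist tools are on PATH; return the missing commands."""
    missing = []
    for label, cmd in checks.items():
        path = shutil.which(cmd)
        print(f"{label:18s} ({cmd}): {'OK at ' + path if path else 'MISSING'}")
        if path is None:
            missing.append(cmd)
    return missing

if __name__ == "__main__":
    smoke_test()
```

Running it right after Step 5 catches a missed install before you start wiring up hardware.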
## Example: Connecting a RealSense Camera

```bash
# Install the ROS 2 RealSense driver (pulls in librealsense2)
sudo apt install ros-humble-realsense2-camera -y

# Launch the camera node
ros2 launch realsense2_camera rs_launch.py

# Verify (in another terminal)
ros2 topic list   # should include /camera/color/image_raw
```
## Limitations
| Limitation | Impact | Mitigation |
|---|---|---|
| No Isaac Sim | Cannot run GPU-accelerated simulation | Use workstation or cloud for simulation, deploy to Jetson |
| Limited RAM (8GB) | Large neural networks may not fit | Use TensorRT optimization, quantization |
| ARM64 Architecture | Some packages unavailable | Build from source or use Docker containers |
| Power Constraints (15W) | Thermal throttling under load | Use active cooling fan, optimize workloads |
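To see why quantization matters on an 8 GB board, a back-of-the-envelope sketch (the 7-billion-parameter count is illustrative, and this counts weights only, ignoring activations and runtime overhead):

```python
def model_size_gib(params: float, bytes_per_weight: int) -> float:
    """Approximate weight-memory footprint of a network, in GiB."""
    return params * bytes_per_weight / 1024**3

params = 7e9  # e.g., a 7B-parameter network
for label, nbytes in [("FP32", 4), ("FP16", 2), ("INT8", 1)]:
    print(f"{label}: {model_size_gib(params, nbytes):.1f} GiB")
```

At FP32 such a model needs roughly 26 GiB for weights alone, far beyond the Orin Nano's 8 GB; only after INT8 quantization (about 6.5 GiB) does it come close to fitting, which is why TensorRT optimization is listed as the mitigation above.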
## Cost Comparison
| Setup | Initial Cost | Ongoing Cost | Best For |
|---|---|---|---|
| Workstation | $600-$2,500 | $0/month | Development, simulation, training |
| Edge Kit | $550-$800 | $0/month | On-robot deployment, mobile systems |
| Cloud | $0-$500 | $50-$200/month | No hardware, pay-as-you-go |
## Next Steps

- **Attach sensors**: Connect the camera, LiDAR, and IMU to the Jetson I/O
- **Test ROS 2 nodes**: Run the sensor drivers and verify data streams
- **Start learning**: Begin Module 1: ROS 2

Questions? Refer to the Glossary for terminology or consult the NVIDIA Jetson forums.

Alternatives:
- Workstation Setup - for simulation and development
- Cloud Setup - for a no-hardware option