Setup Guide: Physical AI Edge Kit

Estimated Time: 3 hours | Difficulty: Advanced


Overview

The Physical AI Edge Kit uses the NVIDIA Jetson Orin Nano for on-robot computation. Unlike the workstation and cloud setups, which process data off-robot, the Edge Kit runs all AI models directly on the robot, providing:

  • Low latency: No network delays for real-time control
  • Privacy: Data stays on-device
  • Portability: Deploy robots without WiFi/cloud dependency
  • Power efficiency: 5-15 W total power draw

Use Cases

  • Mobile robots requiring real-time obstacle avoidance
  • Autonomous drones with onboard perception
  • Warehouse robots with edge AI inference
  • Educational robotics kits (TurtleBot, DuckieBot)

Hardware Requirements

Core Components

| Item | Specification | Cost (USD) |
|---|---|---|
| Jetson Orin Nano Developer Kit | 8GB RAM, 1024 CUDA cores | $499 |
| microSD Card | 128GB UHS-I (for JetPack OS) | $20 |
| Power Supply | 19V 3.42A (65W) barrel jack | Included |
| USB-C Cable | For initial flashing | $10 |
| Cooling Fan | Active cooling (recommended) | $15 |
| WiFi Module (optional) | M.2 Key E slot for wireless | $25 |
| **Total** | | ~$550-$600 |

Optional Sensors/Actuators

  • Camera: Intel RealSense D435i ($200) for RGB-D perception
  • LiDAR: RPLidar A1M8 ($100) for 2D mapping
  • IMU: MPU6050 ($5) for motion sensing
  • Motors: Dynamixel servos ($50-200 each) for manipulation
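
Once I2C is enabled (Step 4 below), the MPU6050 listed above can be read with a few register accesses. The following is a minimal sketch, not a full driver: 0x68 is the chip's default I2C address, the register constants come from the MPU6050 datasheet, and it assumes the `python3-smbus` package installed later in this guide.

```python
# Sketch: read one accelerometer axis from an MPU6050 over I2C.
# MPU_ADDR 0x68 is the default address (AD0 pin low).

MPU_ADDR = 0x68        # default I2C address
PWR_MGMT_1 = 0x6B      # power management register
ACCEL_XOUT_H = 0x3B    # accel X-axis high byte

def raw_to_g(raw, lsb_per_g=16384.0):
    """Convert a 16-bit two's-complement reading to g (+/-2g full scale)."""
    if raw >= 0x8000:
        raw -= 0x10000
    return raw / lsb_per_g

def read_accel_x(bus_num=1):
    import smbus  # provided by python3-smbus; only present on the Jetson
    bus = smbus.SMBus(bus_num)
    bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)  # wake the chip from sleep
    hi = bus.read_byte_data(MPU_ADDR, ACCEL_XOUT_H)
    lo = bus.read_byte_data(MPU_ADDR, ACCEL_XOUT_H + 1)
    return raw_to_g((hi << 8) | lo)
```

With the sensor wired to bus 1, `sudo i2cdetect -y -r 1` (Step 4) should show a device at 0x68 before this sketch will work.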

Installation Steps

Step 1: Flash JetPack 6.0

  1. Download the JetPack SD card image from NVIDIA's JetPack page (developer.nvidia.com/embedded/jetpack):

  2. Flash to microSD:

    # On your host computer (Linux/Windows/Mac)
    # Using balenaEtcher (recommended): https://www.balena.io/etcher/
    # OR using dd on Linux (replace /dev/sdX with your SD card device; check with lsblk):
    sudo dd if=jetpack-6.0-orin-nano.img of=/dev/sdX bs=4M status=progress
    sudo sync
  3. Boot Jetson:

    • Insert microSD into Jetson Orin Nano
    • Connect monitor (HDMI), keyboard, mouse
    • Power on (barrel jack)
    • Complete Ubuntu setup wizard
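
Before flashing (item 2 above), it is worth verifying the downloaded image against the SHA256 checksum NVIDIA publishes on the download page. A sketch; the filename is a placeholder, and the stand-in file exists only so the commands run end to end:

```shell
# Placeholder name; substitute the image you actually downloaded.
IMG=jetpack-6.0-orin-nano.img
# Stand-in so this sketch runs end to end; skip this line with a real image.
[ -f "$IMG" ] || echo "demo contents" > "$IMG"

# Compare against the published checksum (computed locally here for the demo;
# with a real image, paste the value from NVIDIA's download page instead).
SUM=$(sha256sum "$IMG" | awk '{print $1}')
echo "$SUM  $IMG" | sha256sum -c - && echo "image OK"
```

A corrupted download is a common cause of boot failures, so this one check can save a reflash cycle.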

Step 2: Configure Network

Option A: Ethernet (Recommended for Setup):

  • Connect ethernet cable
  • DHCP should auto-configure

Option B: WiFi (if M.2 WiFi module installed):

# Scan for networks
nmcli device wifi list

# Connect
nmcli device wifi connect "SSID" password "PASSWORD"

# Verify
ping -c 3 google.com

Step 3: Install ROS 2 Humble (ARM64)

# Update system
sudo apt update && sudo apt upgrade -y

# Add ROS 2 repository (ARM64); curl may not be preinstalled
sudo apt install -y curl gnupg
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key \
-o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [arch=arm64 signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] \
http://packages.ros.org/ros2/ubuntu jammy main" | \
sudo tee /etc/apt/sources.list.d/ros2.list

# Install ROS 2 Humble Base (lighter than Desktop for edge)
sudo apt update
sudo apt install ros-humble-ros-base python3-colcon-common-extensions -y

# Source ROS 2
echo "source /opt/ros/humble/setup.bash" >> ~/.bashrc
source ~/.bashrc

# Verify (the demo nodes ship with Desktop, not ros-base, so install them first)
sudo apt install ros-humble-demo-nodes-cpp -y
ros2 run demo_nodes_cpp talker

Step 4: Enable GPIO and I2C

# Install Jetson GPIO library
sudo pip3 install Jetson.GPIO

# Add user to gpio group
sudo groupadd -f -r gpio
sudo usermod -a -G gpio $USER

# Enable I2C (for sensors like IMU, LiDAR)
sudo apt install python3-smbus i2c-tools -y

# Verify I2C devices
sudo i2cdetect -y -r 1

# Reboot to apply group changes
sudo reboot
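
After the reboot, a quick blink test confirms GPIO access without sudo. A sketch only: the LED on BOARD pin 7, the cycle count, and the timing are illustrative choices, not part of this guide's hardware list.

```python
# Sketch: blink an LED on BOARD pin 7 using Jetson.GPIO (installed in Step 4).
import time

def blink_pattern(cycles):
    """Pure helper: the HIGH/LOW sequence the blink loop drives."""
    return ["HIGH", "LOW"] * cycles

def blink(pin=7, cycles=3, period=0.5):
    try:
        import Jetson.GPIO as GPIO  # only importable on the Jetson itself
    except ImportError:
        print("Jetson.GPIO unavailable; would drive:", blink_pattern(cycles))
        return
    GPIO.setmode(GPIO.BOARD)       # use physical header pin numbering
    GPIO.setup(pin, GPIO.OUT)
    try:
        for state in blink_pattern(cycles):
            GPIO.output(pin, GPIO.HIGH if state == "HIGH" else GPIO.LOW)
            time.sleep(period / 2)
    finally:
        GPIO.cleanup()             # release the pin even on error

if __name__ == "__main__":
    blink()
```

If this raises a permissions error, the gpio group change from Step 4 has not taken effect yet; log out and back in or reboot again.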

Step 5: Install Lightweight Tools

# Minimal footprint tools for edge robotics
sudo apt install -y \
python3-pip \
python3-opencv \
nano \
htop \
can-utils \
v4l-utils

# Python packages for robotics
pip3 install numpy scipy transforms3d

# Optional: TensorRT for optimized AI inference
sudo apt install nvidia-tensorrt -y
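
The numpy/scipy stack installed above covers the frame math most robot code needs. For example, converting a sensor quaternion to a rotation matrix with scipy (note scipy's `[x, y, z, w]` quaternion ordering):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# A 90-degree yaw (rotation about z), expressed as a quaternion [x, y, z, w]
q = [0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)]
R = Rotation.from_quat(q).as_matrix()

# Rotating the x-axis by 90 degrees about z yields the y-axis
v = R @ np.array([1.0, 0.0, 0.0])
print(np.round(v, 6))  # -> [0. 1. 0.]
```

The transforms3d package installed alongside offers similar conversions, but be aware it uses `[w, x, y, z]` ordering, so mixing the two is a common source of bugs.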

Verification Checklist

  • Boot: Jetson boots from microSD and shows Ubuntu desktop
  • Network: ping google.com succeeds
  • ROS 2: ros2 run demo_nodes_cpp talker runs
  • GPIO: sudo python3 -c "import Jetson.GPIO" (no errors)
  • I2C: sudo i2cdetect -y -r 1 shows I2C devices
  • CUDA: nvcc --version shows the CUDA toolkit (if not found, add /usr/local/cuda/bin to your PATH)
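
The checklist above can be partially automated. A sketch that only confirms each binary is on the PATH (presence is not a full functional test, so still run the commands above at least once):

```shell
# Sketch: check that the verification tools from this guide are installed.
for cmd in ros2 i2cdetect nvcc python3; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "found:   $cmd"
  else
    echo "missing: $cmd"
  fi
done
```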

Example: Connecting RealSense Camera

# Install the RealSense ROS 2 wrapper (pulls in librealsense)
sudo apt install ros-humble-realsense2-camera -y

# Launch camera node
ros2 launch realsense2_camera rs_launch.py

# Verify (in another terminal)
ros2 topic list # Should see /camera/color/image_raw

Limitations

| Limitation | Impact | Mitigation |
|---|---|---|
| No Isaac Sim | Cannot run GPU-accelerated simulation | Use workstation or cloud for simulation, deploy to Jetson |
| Limited RAM (8GB) | Large neural networks may not fit | Use TensorRT optimization, quantization |
| ARM64 Architecture | Some packages unavailable | Build from source or use Docker containers |
| Power Constraints (15W) | Thermal throttling under load | Use active cooling fan, optimize workloads |

Cost Comparison

| Setup | Initial Cost | Ongoing Cost | Best For |
|---|---|---|---|
| Workstation | $600-$2,500 | $0/month | Development, simulation, training |
| Edge Kit | $550-$800 | $0/month | On-robot deployment, mobile systems |
| Cloud | $0-$500 | $50-$200/month | No hardware, pay-as-you-go |

Next Steps

  1. Attach Sensors: Connect camera, LiDAR, IMU to Jetson I/O
  2. Test ROS 2 Nodes: Run sensor drivers and verify data streams
  3. Start Learning: Begin Module 1: ROS 2

Questions? Refer to the Glossary for terminology or consult Jetson forums.

Alternatives: