
Adding readme

tags/v0.3.12-rc0
haixuantao committed 1b2b76085a 7 months ago
5 changed files with 100 additions and 7 deletions
  1. +94 -0 examples/so100-remote/README.md
  2. +2 -3 examples/so100-remote/no_torque.yml
  3. +1 -1 examples/so100-remote/qwenvl-compression.yml
  4. +1 -1 examples/so100-remote/qwenvl-remote.yml
  5. +2 -2 examples/so100-remote/qwenvl.yml

examples/so100-remote/README.md (+94 -0)

@@ -0,0 +1,94 @@
# SO100 and SO101 Remote Example

## Hardware requirements

- RealSense camera
- SO101 robotic arm

## Download the 3D model of the SO100

```bash
[ -f "$HOME/Downloads/so100_urdf.zip" ] || (wget -O "$HOME/Downloads/so100_urdf.zip" https://huggingface.co/datasets/haixuantao/urdfs/resolve/main/so100/so100_urdf.zip && unzip -o "$HOME/Downloads/so100_urdf.zip" -d "$HOME/Downloads/so100_urdf")
```

## To get started

```bash
uv venv --seed
dora build no_torque.yml --uv
```

## Make sure that both the RealSense camera and the robotic arm are connected

On Linux, you can check the arm's connection with:

```bash
ls /dev/ttyACM*
```

This should show something like:

```bash
/dev/ttyACM0
```

Make sure to enable read/write access with:

```bash
sudo chmod 777 /dev/ttyACM0
```
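Note that `chmod` only persists until the device is re-plugged. A more permanent option is a udev rule (a sketch; the vendor ID below is a placeholder — look up the real one with `udevadm info -a -n /dev/ttyACM0`):

```
# /etc/udev/rules.d/99-so101.rules (placeholder vendor ID - verify first)
SUBSYSTEM=="tty", ATTRS{idVendor}=="xxxx", MODE="0666"
```

Then reload with `sudo udevadm control --reload-rules && sudo udevadm trigger`.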

On Linux, make sure the camera is properly connected and check with:

```bash
ls /dev/video*
```

Result should be as follows:

```bash
/dev/video0 /dev/video2 /dev/video4 /dev/video6 /dev/video8
/dev/video1 /dev/video3 /dev/video5 /dev/video7 /dev/video9
```

## To run the no torque demo:

```bash
dora run no_torque.yml --uv
```

If the placement of the virtual robot arm is wrong, you can move it by adjusting the `so100_transform` environment variable.
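In `no_torque.yml` this lives in the dora-rerun node's environment; the seven values are a translation followed by a rotation quaternion (the exact component order follows dora-rerun's convention). The values below are the ones shipped with this example:

```yaml
env:
  so100_urdf: $HOME/Downloads/so100_urdf/so100.urdf
  # translation followed by a rotation quaternion - tweak these values
  # until the virtual arm lines up with the real one
  so100_transform: -0.2 -0.01 -0.57 0.7 0 0 0.7
  CAMERA_PITCH: -3.1415
```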

## To run the qwenvl demo:

```bash
dora run qwenvl.yml --uv
```

## To run the qwenvl remote demo:

On a remote machine:

```bash
dora coordinator &
dora daemon --machine-id gpu
```

On the local machine:

```bash
dora daemon --coordinator-addr <IP_COORDINATOR_ADDR>
dora start qwenvl-remote.yml --uv --coordinator-addr <IP_COORDINATOR_ADDR>
```
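`<IP_COORDINATOR_ADDR>` must be an address of the remote machine that the local machine can reach. On Linux, one way to list candidates (assuming the common `hostname` binary with the `-I` flag) is:

```shell
# On the remote machine: print its IP addresses; pick one reachable from
# the local machine and use it as <IP_COORDINATOR_ADDR>.
hostname -I
```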

## To run the qwenvl compression demo:

On a remote machine:

```bash
dora coordinator &
dora daemon --machine-id gpu
```

On the local machine:

```bash
dora daemon --coordinator-addr <IP_COORDINATOR_ADDR>
dora start qwenvl-compression.yml --uv --coordinator-addr <IP_COORDINATOR_ADDR>
```

examples/so100-remote/no_torque.yml (+2 -3)

@@ -30,7 +30,7 @@ nodes:
env:
# Link to your installation of so100-urdf.
# https://huggingface.co/datasets/haixuantao/urdfs/resolve/main/so100/so100_urdf.zip
- URDF_PATH: /home/xavier/Downloads/so100_urdf/so100.urdf
+ URDF_PATH: $HOME/Downloads/so100_urdf/so100.urdf
END_EFFECTOR_LINK: "Moving Jaw"
TRANSFORM: -0.2 -0.01 -0.57 0.7 0 0 0.7

@@ -38,13 +38,12 @@ nodes:
build: pip install -e ../../node-hub/dora-rerun
path: dora-rerun
inputs:
- series_fk: pytorch-kinematics/pose
jointstate_so100: so100/pose
camera/image: camera/image
camera/depth: camera/depth
env:
# Link to your installation of so100-urdf.
# https://huggingface.co/datasets/haixuantao/urdfs/resolve/main/so100/so100_urdf.zip
- so100_urdf: /home/xavier/Downloads/so100_urdf/so100.urdf
+ so100_urdf: $HOME/Downloads/so100_urdf/so100.urdf
so100_transform: -0.2 -0.01 -0.57 0.7 0 0 0.7
CAMERA_PITCH: -3.1415

examples/so100-remote/qwenvl-compression.yml (+1 -1)

@@ -84,7 +84,7 @@ nodes:
camera/boxes2d: parse_bbox/bbox
camera/masks: sam2/masks
env:
- so100_urdf: /home/xavier/Downloads/so100_urdf/so100.urdf
+ so100_urdf: $HOME/Downloads/so100_urdf/so100.urdf
so100_transform: -0.2 -0.01 -0.57 0.7 0 0 0.7
CAMERA_PITCH: -3.1415



examples/so100-remote/qwenvl-remote.yml (+1 -1)

@@ -56,7 +56,7 @@ nodes:
env:
# Link to your installation of so100-urdf.
# https://huggingface.co/datasets/haixuantao/urdfs/resolve/main/so100/so100_urdf.zip
- so100_urdf: /home/xavier/Downloads/so100_urdf/so100.urdf
+ so100_urdf: $HOME/Downloads/so100_urdf/so100.urdf
so100_transform: -0.2 -0.01 -0.57 0.7 0 0 0.7
so100_inference_transform: -0.2 -0.01 -0.57 0.7 0 0 0.7
CAMERA_PITCH: -3.1415


examples/so100-remote/qwenvl.yml (+2 -2)

@@ -36,7 +36,7 @@ nodes:
env:
# Link to your installation of so100-urdf.
# https://huggingface.co/datasets/haixuantao/urdfs/resolve/main/so100/so100_urdf.zip
- URDF_PATH: /home/xavier/Downloads/so100_urdf/so100.urdf
+ URDF_PATH: $HOME/Downloads/so100_urdf/so100.urdf
END_EFFECTOR_LINK: "Moving Jaw"
TRANSFORM: -0.2 -0.01 -0.57 0.7 0 0 0.7

@@ -55,7 +55,7 @@ nodes:
camera/boxes2d: parse_bbox/bbox
camera/masks: sam2/masks
env:
- so100_urdf: /home/xavier/Downloads/so100_urdf/so100.urdf
+ so100_urdf: $HOME/Downloads/so100_urdf/so100.urdf
so100_transform: -0.2 -0.01 -0.57 0.7 0 0 0.7
so100_inference_transform: -0.2 -0.01 -0.57 0.7 0 0 0.7
CAMERA_PITCH: -3.1415

