
Reconfigure the folder structure

tags/v0.3.11-rc1
Rahat2134 10 months ago
commit f5ca7cbd86
100 changed files with 7654 additions and 0 deletions
  1. examples/alexk-lcr/ASSEMBLING.md (+40, -0)
  2. examples/alexk-lcr/CONFIGURING.md (+90, -0)
  3. examples/alexk-lcr/INSTALLATION.md (+82, -0)
  4. examples/alexk-lcr/README.md (+174, -0)
  5. examples/alexk-lcr/RECORDING.md (+80, -0)
  6. examples/alexk-lcr/assets/simulation/lift_cube.xml (+135, -0)
  7. examples/alexk-lcr/assets/simulation/pick_place_cube.xml (+135, -0)
  8. examples/alexk-lcr/assets/simulation/push_cube.xml (+137, -0)
  9. examples/alexk-lcr/assets/simulation/reach_cube.xml (+135, -0)
  10. examples/alexk-lcr/assets/simulation/stack_two_cubes.xml (+141, -0)
  11. examples/alexk-lcr/bus.py (+328, -0)
  12. examples/alexk-lcr/configs/.gitkeep (+0, -0)
  13. examples/alexk-lcr/configure.py (+213, -0)
  14. examples/alexk-lcr/graphs/bi_teleop_real.yml (+74, -0)
  15. examples/alexk-lcr/graphs/mono_replay_real.yml (+40, -0)
  16. examples/alexk-lcr/graphs/mono_teleop_real.yml (+37, -0)
  17. examples/alexk-lcr/graphs/mono_teleop_real_and_simu.yml (+70, -0)
  18. examples/alexk-lcr/graphs/mono_teleop_simu.yml (+43, -0)
  19. examples/alexk-lcr/graphs/record_mono_teleop_real.yml (+119, -0)
  20. examples/alexk-lcr/nodes/interpolate_lcr_to_lcr.py (+148, -0)
  21. examples/alexk-lcr/nodes/interpolate_lcr_to_record.py (+114, -0)
  22. examples/alexk-lcr/nodes/interpolate_lcr_to_simu_lcr.py (+133, -0)
  23. examples/alexk-lcr/nodes/interpolate_replay_to_lcr.py (+83, -0)
  24. examples/aloha/ASSEMBLING.md (+64, -0)
  25. examples/aloha/CONFIGURING.md (+95, -0)
  26. examples/aloha/INSTALLATION.md (+87, -0)
  27. examples/aloha/README.md (+42, -0)
  28. examples/aloha/RECORDING.md (+17, -0)
  29. examples/aloha/benchmark/python/README.md (+13, -0)
  30. examples/aloha/benchmark/python/dynamixel.py (+325, -0)
  31. examples/aloha/benchmark/python/robot.py (+191, -0)
  32. examples/aloha/benchmark/python/teleoperate_real_robot.py (+24, -0)
  33. examples/aloha/benchmark/ros2/README.md (+103, -0)
  34. examples/aloha/benchmark/ros2/config/master_modes_left.yaml (+9, -0)
  35. examples/aloha/benchmark/ros2/config/master_modes_right.yaml (+9, -0)
  36. examples/aloha/benchmark/ros2/config/puppet_modes_left.yaml (+17, -0)
  37. examples/aloha/benchmark/ros2/config/puppet_modes_right.yaml (+17, -0)
  38. examples/aloha/benchmark/ros2/dataflow.yml (+4, -0)
  39. examples/aloha/benchmark/ros2/setup_ros2.sh (+22, -0)
  40. examples/aloha/benchmark/ros2/teleop.py (+90, -0)
  41. examples/aloha/benchmark/rust/README.md (+9, -0)
  42. examples/aloha/graphs/eval.yml (+141, -0)
  43. examples/aloha/graphs/gym.yml (+38, -0)
  44. examples/aloha/graphs/record_2arms_teleop.yml (+162, -0)
  45. examples/aloha/graphs/record_teleop.yml (+32, -0)
  46. examples/aloha/graphs/replay.yml (+61, -0)
  47. examples/aloha/hardware_config/99-fixed-interbotix-udev.rules (+4, -0)
  48. examples/aloha/hardware_config/99-interbotix-udev.rules (+24, -0)
  49. examples/aloha/nodes/aloha-client/Cargo.toml (+13, -0)
  50. examples/aloha/nodes/aloha-client/src/main.rs (+104, -0)
  51. examples/aloha/nodes/aloha-teleop/Cargo.toml (+15, -0)
  52. examples/aloha/nodes/aloha-teleop/src/_main.rs (+38, -0)
  53. examples/aloha/nodes/aloha-teleop/src/main.rs (+242, -0)
  54. examples/aloha/nodes/gym_dora_node.py (+66, -0)
  55. examples/aloha/nodes/keyboard_node.py (+31, -0)
  56. examples/aloha/nodes/lerobot_webcam_saver.py (+78, -0)
  57. examples/aloha/nodes/llm_op.py (+234, -0)
  58. examples/aloha/nodes/plot_node.py (+53, -0)
  59. examples/aloha/nodes/policy.py (+17, -0)
  60. examples/aloha/nodes/realsense_node.py (+28, -0)
  61. examples/aloha/nodes/replay.py (+27, -0)
  62. examples/aloha/nodes/webcam.py (+45, -0)
  63. examples/aloha/nodes/whisper_node.py (+59, -0)
  64. examples/aloha/tests/test_realsense.py (+24, -0)
  65. examples/lebai/graphs/dataflow.yml (+63, -0)
  66. examples/lebai/graphs/dataflow_full.yml (+128, -0)
  67. examples/lebai/graphs/keyboard_teleop.yml (+88, -0)
  68. examples/lebai/graphs/qwenvl2.yml (+115, -0)
  69. examples/lebai/graphs/train.sh (+1, -0)
  70. examples/lebai/graphs/voice_teleop.yml (+96, -0)
  71. examples/lebai/nodes/interpolation.py (+60, -0)
  72. examples/lebai/nodes/key_interpolation.py (+49, -0)
  73. examples/lebai/nodes/prompt_interpolation.py (+21, -0)
  74. examples/lebai/nodes/vlm_prompt.py (+1, -0)
  75. examples/lebai/nodes/voice_interpolation.py (+22, -0)
  76. examples/reachy/README.md (+103, -0)
  77. examples/reachy/nodes/gym_dora_node.py (+86, -0)
  78. examples/reachy/nodes/keyboard_node.py (+31, -0)
  79. examples/reachy/nodes/lerobot_webcam_saver.py (+78, -0)
  80. examples/reachy/nodes/plot_node.py (+56, -0)
  81. examples/reachy/nodes/reachy_client.py (+156, -0)
  82. examples/reachy/nodes/reachy_vision_client.py (+73, -0)
  83. examples/reachy/nodes/replay_node.py (+21, -0)
  84. examples/reachy1/README.md (+103, -0)
  85. examples/reachy1/graphs/eval.yml (+80, -0)
  86. examples/reachy1/graphs/qwenvl2.yml (+74, -0)
  87. examples/reachy1/graphs/qwenvl2_recorder.yml (+73, -0)
  88. examples/reachy1/nodes/gym_dora_node.py (+86, -0)
  89. examples/reachy1/nodes/key_interpolation.py (+41, -0)
  90. examples/reachy1/nodes/keyboard_node.py (+31, -0)
  91. examples/reachy1/nodes/lerobot_webcam_saver.py (+78, -0)
  92. examples/reachy1/nodes/plot_node.py (+56, -0)
  93. examples/reachy1/nodes/reachy_client.py (+156, -0)
  94. examples/reachy1/nodes/reachy_vision_client.py (+73, -0)
  95. examples/reachy1/nodes/replay_node.py (+21, -0)
  96. examples/reachy1/nodes/text_interpolation.py (+72, -0)
  97. examples/so100/ASSEMBLING.md (+12, -0)
  98. examples/so100/CONFIGURING.md (+75, -0)
  99. examples/so100/INSTALLATION.md (+82, -0)
  100. examples/so100/README.md (+68, -0)

examples/alexk-lcr/ASSEMBLING.md (+40, -0)

@@ -0,0 +1,40 @@
# Dora pipeline Robots

AlexK Low Cost Robot is a low-cost robotic arm that can be teleoperated using a similar arm. This repository contains
the Dora pipeline to manipulate the arms, the camera, and record/replay episodes with LeRobot.

## Assembling

**Please read the instructions carefully before buying or printing the parts.**

You will need to get the parts for a Follower arm and a Leader arm:

- [AlexK Follower Low Cost Robot](https://github.com/AlexanderKoch-Koch/low_cost_robot/?tab=readme-ov-file#follower-arm)
- [AlexK Leader Low Cost Robot](https://github.com/AlexanderKoch-Koch/low_cost_robot/?tab=readme-ov-file#follower-arm)

You **must** assemble the arm **with the extension** to be able to do some of the tasks.

You then need to print the Follower arm and the Leader arm. The STL files are:

- [AlexK Follower Low Cost Robot](https://github.com/AlexanderKoch-Koch/low_cost_robot/tree/main/hardware/follower/stl)
- [AlexK Leader Low Cost Robot](https://github.com/AlexanderKoch-Koch/low_cost_robot/tree/main/hardware/leader/stl)

Some parts **must** be replaced by the ones in this repository:

- [Dora-LeRobot Base Leader Low Cost Robot](assets/stl/LEADER_Base.stl)

If you struggle buying XL330 Frame or XL330/XL430 Idler Wheel, here are STL files that can be printed instead:

- [XL330 Frame](assets/stl/XL330_Frame.stl)
- [XL330 Idler Wheel](assets/stl/XL330_Idler_Wheel.stl)
- [XL430 Idler Wheel](assets/stl/XL430_Idler_Wheel.stl)

Please then follow the [YouTube Tutorial by Alexander Koch](https://youtu.be/RckrXOEoWrk?si=ZXDnnlF6BQd_o7v8) to
assemble the arm correctly.
Note that the tutorial covers the arm without the extension, so you will have to adapt the assembly accordingly.
Then you can place the two cameras on your desk, following this [image]()

## License

This library is licensed under the [Apache License 2.0](../../LICENSE).

examples/alexk-lcr/CONFIGURING.md (+90, -0)

@@ -0,0 +1,90 @@
# Dora pipeline Robots

AlexK Low Cost Robot is a low-cost robotic arm that can be teleoperated using a similar arm. This repository contains
the Dora pipeline to manipulate the arms, the camera, and record/replay episodes with LeRobot.

## Configuring

Once you have assembled the robot and installed the required software, you can configure it. Correct configuration is
essential for the robot to work as expected, for two reasons:

- You may have made some mistakes during assembly, such as inverting the motors or rotating them before fitting, which
  changes their offsets. Your configuration will therefore differ from the one we used to record the dataset.
- The recording pipeline needs to know the positions of the motors to record the dataset correctly. If the motors are
  calibrated differently, the dataset will be incorrect.

**Please read the instructions carefully before configuring the robot.**

The first thing to do is to configure the servo bus:

- Set all the servos to the same baud rate (1M).
- Set the servo IDs from the base (1) to the gripper (6) for both the Follower and Leader arms.

These steps can be done using the official wizard provided by the
manufacturer: [ROBOTIS Dynamixel Wizard 2.0](https://emanual.robotis.com/docs/en/software/dynamixel/dynamixel_wizard2/).
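As a rough illustration, the target bus state can be captured in a small Python plan. The joint names below are hypothetical: they are not taken from `configure.py` or `bus.py`, only the ID range (1 to 6, base to gripper) and the 1M baud rate come from the text above.

```python
# Hypothetical checklist of the target bus settings described above.
TARGET_BAUD_RATE = 1_000_000  # 1M baud for every servo

# IDs run from the base (1) to the gripper (6), identically on both arms.
# These joint names are illustrative placeholders.
JOINT_IDS = {
    "shoulder_pan": 1,
    "shoulder_lift": 2,
    "elbow_flex": 3,
    "wrist_flex": 4,
    "wrist_roll": 5,
    "gripper": 6,
}

def bus_plan(arm: str) -> list[tuple[str, int, int]]:
    """Return (joint, id, baud) triples for one arm, base first."""
    return [
        (f"{arm}.{name}", servo_id, TARGET_BAUD_RATE)
        for name, servo_id in sorted(JOINT_IDS.items(), key=lambda kv: kv[1])
    ]
```

Walking through such a plan in the Dynamixel Wizard, one servo at a time, helps avoid assigning the same ID twice.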

After that, you need to configure the homing offsets and drive modes so that the arms behave the same for every user. We
recommend using our bundled tool to set all of this automatically:

- Connect the Follower arm to your computer.
- Retrieve the device port from the official wizard.
- Run the configuration tool with the following command and follow the instructions:

```bash
cd dora-lerobot/

# If you are using a custom environment, you will have to activate it before running the command
source [your_custom_env_bin]/activate

# If you followed the installation instructions, you can run the following command
source venv/bin/activate # On Linux
source venv/Scripts/activate # On Windows bash
venv\Scripts\activate.bat # On Windows cmd
venv\Scripts\activate.ps1 # On Windows PowerShell

python ./robots/alexk-lcr/configure.py --port /dev/ttyUSB0 --follower --left # (or right)
```

**Note:** Change `/dev/ttyUSB0` to the device port you retrieved from the official wizard (like `COM3` on Windows).
**Note:** The configuration tool will disable all torque so you can move the arm freely to Position 1.
**Note:** You will be asked to set the arm in two different positions. The two positions are:

![image](https://github.com/Hennzau/Hennzau/blob/main/assets/Koch_arm_positions.png)

**Note:** You will be asked for the path of the configuration file; you can press Enter to use the default one.

- Repeat the same steps for the Leader arm:

```bash
python ./robots/alexk-lcr/configure.py --port /dev/ttyUSB1 --leader --left # (or right)
```

**Note:** Change `/dev/ttyUSB1` to the device port you retrieved from the official wizard (like `COM4` on Windows).
**Note:** The configuration tool will disable all torque so you can move the arm freely to Position 1.
**Note:** You will be asked for the path of the configuration file; you can press Enter to use the default one.

After following the guide, you should have the following configuration:

![image](https://github.com/Hennzau/Hennzau/blob/main/assets/Koch_arm_wanted_configuration.png)
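The idea behind the homing-offset and drive-mode calibration can be sketched in a few lines of Python. This is an illustrative model of the procedure, not the actual logic of `configure.py`; the 12-bit position range and the sign conventions are assumptions based on how Dynamixel XL-series servos typically report positions.

```python
# Dynamixel XL-series positions are assumed 12-bit: 0..4095, 2048 at center.
FULL_TURN = 4096

def homing_offset(measured: int, nominal: int) -> int:
    """Offset to add so that `measured` reads back as `nominal`."""
    return (nominal - measured) % FULL_TURN

def drive_mode_inverted(pos1: int, pos2: int, expected_delta: int) -> bool:
    """True if the joint moved opposite to the expected direction
    between Position 1 and Position 2, i.e. its drive mode should flip."""
    delta = pos2 - pos1
    return (delta > 0) != (expected_delta > 0)
```

This is why the tool asks for two positions: one fixes each joint's offset, and the direction of travel between the two reveals whether a motor was mounted mirrored.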

This configuration has to be exported into environment variables inside the graph file. Here is an example of the
configuration:

```YAML
nodes:
  - id: lcr-follower
    env:
      PORT: /dev/ttyUSB0
      CONFIG: ../configs/follower.left.json # relative path to `./robots/alexk-lcr/configs/follower.json`

  - id: lcr-to-lcr
    env:
      LEADER_CONTROL: ../configs/leader.left.json
      FOLLOWER_CONTROL: ../configs/follower.left.json
```
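Inside a node, these variables are typically read from the process environment at startup. A minimal sketch follows; the variable names match the YAML above, but the function name and the defaults are made up for illustration:

```python
import os

def load_node_config() -> dict:
    """Read the serial port and control-file paths exported in the graph.

    PORT, LEADER_CONTROL, and FOLLOWER_CONTROL are the env vars set in
    the graph YAML; the /dev/ttyUSB0 fallback is purely illustrative.
    """
    return {
        "port": os.environ.get("PORT", "/dev/ttyUSB0"),
        "leader_control": os.environ.get("LEADER_CONTROL"),
        "follower_control": os.environ.get("FOLLOWER_CONTROL"),
    }
```

Keeping all hardware-specific values in the graph's `env` section means the node code itself never needs editing when you switch ports or arms.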

## License

This library is licensed under the [Apache License 2.0](../../LICENSE).

examples/alexk-lcr/INSTALLATION.md (+82, -0)

@@ -0,0 +1,82 @@
# Dora pipeline Robots

AlexK Low Cost Robot is a low-cost robotic arm that can be teleoperated using a similar arm. This repository contains
the Dora pipeline to manipulate the arms, the camera, and record/replay episodes with LeRobot.

## Installation

Dora (Dataflow-Oriented Robotic Application) is a framework that makes building robotic applications fast and simple.
See the [Dora repository](https://github.com/dora-rs/dora).

**Please read the instructions carefully before installing the required software and environment to run the robot.**

You must install Dora before attempting to run the AlexK Low Cost Robot pipeline. Here are the steps to install Dora:

- Install Rust by following the instructions at [Rustup](https://rustup.rs/). (You may need to install Visual Studio C++
build tools on Windows.)
- Install Dora by running the following command:

```bash
cargo install dora-cli
```

Now you're ready to run Rust dataflow applications! We decided to provide only Python dataflows for the AlexK Low Cost
Robot, so you may need to set up your Python environment:

- Install Python 3.12 or later by following the instructions at [Python](https://www.python.org/downloads/).
- Clone this repository by running the following command:

```bash
git clone https://github.com/dora-rs/dora-lerobot
```

- Open a bash terminal and navigate to the repository by running the following command:

```bash
cd dora-lerobot
```

- Create a virtual environment by running the following command (you can locate your Python executables with
  `whereis python3` on Linux; on Windows, Python is installed by default
  at `C:\Users\<User>\AppData\Local\Programs\Python\Python3.12\python.exe`):

```bash
path_to_your_python3_executable -m venv venv
```

- Activate the virtual environment and install the required Python packages by running the following command:

```bash
# If you are using a custom environment, you will have to activate it before running the command
source [your_custom_env_bin]/activate

# If you followed the installation instructions, you can run the following command
source venv/bin/activate # On Linux
source venv/Scripts/activate # On Windows bash
venv\Scripts\activate.bat # On Windows cmd
venv\Scripts\activate.ps1 # On Windows PowerShell

pip install -r robots/alexk-lcr/requirements.txt
```

If you want to install the required Python packages in development mode, you can run the following command instead, but
then you must not use `dora build` when running the graphs:

```bash
pip install -r robots/alexk-lcr/development.txt # You **MUST** run this from the dora-lerobot root, not from robots/alexk-lcr
```

**Note**: You're free to use your own Python environment, a Conda environment, or whatever you prefer; just activate
your custom Python environment before running `dora up && dora start [graph].yml`.

In order to record episodes, you need ffmpeg installed on your system. You can download it from
the [official website](https://ffmpeg.org/download.html). On Windows, you can download the latest build
from [here](https://www.gyan.dev/ffmpeg/builds/), extract the zip file, and add the `bin` folder to your PATH.
On Linux, install ffmpeg using your distribution's package manager (e.g. `sudo apt install ffmpeg` on Ubuntu); on
macOS, use `brew install ffmpeg`.

## License

This library is licensed under the [Apache License 2.0](../../LICENSE).

examples/alexk-lcr/README.md (+174, -0)

@@ -0,0 +1,174 @@
# Dora pipeline Robots

AlexK Low Cost Robot is a low-cost robotic arm that can be teleoperated using a similar arm. This repository contains
the Dora pipeline to record episodes for LeRobot.

## Assembling

Check the [ASSEMBLING.md](ASSEMBLING.md) file for instructions on how to assemble the robot from scratch using the
provided parts from the [AlexK Low Cost Robot](https://github.com/AlexanderKoch-Koch/low_cost_robot)

## Installation

Check the [INSTALLATION.md](INSTALLATION.md) file for instructions on how to install the required software and
environment
to run the robot.

## Configuring

Check the [CONFIGURING.md](CONFIGURING.md) file for instructions on how to configure the robot to record episodes for
LeRobot and teleoperate the robot.

## Recording

It's probably better to check the [examples](#examples) below before trying to record episodes. It will give you a
better
understanding of how Dora works.

Check the [RECORDING.md](RECORDING.md) file for instructions on how to record episodes for LeRobot.

## Examples

There are also some other example applications in the `graphs` folder. Have fun!

Here is a list of the available examples:

- `mono_teleop_real.yml`: A simple real teleoperation pipeline that lets you control a follower arm using a leader arm.
  It does not record episodes, so you don't need a camera.

You must configure the arms, retrieve the device ports, and modify the file `mono_teleop_real.yml` to set the correct
environment variables (e.g. `PORT` and `CONFIG`, `LEADER_CONTROL` and `FOLLOWER_CONTROL`).

```bash
cd dora-lerobot/

# If you are using a custom environment, you will have to activate it before running the command
source [your_custom_env_bin]/activate

# If you followed the installation instructions, you can run the following command
source venv/bin/activate # On Linux
source venv/Scripts/activate # On Windows bash
venv\Scripts\activate.bat # On Windows cmd
venv\Scripts\activate.ps1 # On Windows PowerShell

dora build ./robots/alexk-lcr/graphs/mono_teleop_real.yml # Only the first time, it will install all the requirements if needed

dora up
dora start ./robots/alexk-lcr/graphs/mono_teleop_real.yml
```

[![](https://mermaid.ink/img/pako:eNqVUsFOxCAQ_RUy591Urz14MF496W0xZCzTlkihmUI2ZrP_LtDtutomRg4w83jvMcCcoPGaoAZxGa31x6ZHDuL1UTohbMPKEmriJTMuEI_eYqAFar1NskyZ4nvHOPZCKaU9Y1rEIQdvmXu7G8xAfJkzqUSFJUQWVAWoBmOtmar7u4OU17gqPHJaujJtK8R-L8ZorRr9ZILxLgEPGxdaqi_8hYqTWPC1fuMJZsvfFjP6p8H_qv9-7dWHZFHn8UaUijiyCaR-wmsv2EE6f0CjUzecsreE0NNAEuoUauQPCdKdEw9j8C-froE6cKQdsI9dD3WLdkpZHHWq5Mlg-urhipI2wfPz3Gyl585fka3hkA?type=png)](https://mermaid.live/edit#pako:eNqVUsFOxCAQ_RUy591Urz14MF496W0xZCzTlkihmUI2ZrP_LtDtutomRg4w83jvMcCcoPGaoAZxGa31x6ZHDuL1UTohbMPKEmriJTMuEI_eYqAFar1NskyZ4nvHOPZCKaU9Y1rEIQdvmXu7G8xAfJkzqUSFJUQWVAWoBmOtmar7u4OU17gqPHJaujJtK8R-L8ZorRr9ZILxLgEPGxdaqi_8hYqTWPC1fuMJZsvfFjP6p8H_qv9-7dWHZFHn8UaUijiyCaR-wmsv2EE6f0CjUzecsreE0NNAEuoUauQPCdKdEw9j8C-froE6cKQdsI9dD3WLdkpZHHWq5Mlg-urhipI2wfPz3Gyl585fka3hkA)

- `bi_teleop_real.yml`: A simple real teleoperation pipeline that lets you control two follower arms using two leader
  arms (left and right). It does not record episodes, so you don't need a camera.

You must configure the arms, retrieve the device ports, and modify the file `bi_teleop_real.yml` to set the correct
environment variables (e.g. `PORT` and `CONFIG`).

```bash
cd dora-lerobot/

# If you are using a custom environment, you will have to activate it before running the command
source [your_custom_env_bin]/activate

# If you followed the installation instructions, you can run the following command
source venv/bin/activate # On Linux
source venv/Scripts/activate # On Windows bash
venv\Scripts\activate.bat # On Windows cmd
venv\Scripts\activate.ps1 # On Windows PowerShell

dora build ./robots/alexk-lcr/graphs/bi_teleop_real.yml # Only the first time, it will install all the requirements if needed

dora up
dora start ./robots/alexk-lcr/graphs/bi_teleop_real.yml
```

[![](https://mermaid.ink/img/pako:eNqlVMFugzAM_ZUo51ZsVw47TLvutN2aKsqIgWghQSZRNVX99yWhtAXBNjoOxrz4vdgmzpEWVgLNKTk_pbaHohboyPszM4ToArmG0gUjJOAIUsYBtlYLByO8tDqoXINRVfVUoMdmFPqFq0TnPyoUbU0459KiCC-yi84-Mm5XnWoAzzYGJS9FERIJWQKyRmmtuuzxYcfYxc9SHBjJTDLzDLLdktZrzVvbKaesCcDTjy0a6kjMgSQ6MuALSkud7XeYivXo36TuKGv6O6eykV5ZcUMPOR1QOeBjeFF1XVLLx2l9t385huv6PSt2T23zA_Sflk916YaGjBqhZJj9Y9yHUVdDA4zmwZUCPxll5hTihHf27csUNHfoYUPR-qqmeSl0F758K0M-L0qEMWwuKEjlLL72V0u6YU7fOOqbHg?type=png)](https://mermaid.live/edit#pako:eNqlVMFugzAM_ZUo51ZsVw47TLvutN2aKsqIgWghQSZRNVX99yWhtAXBNjoOxrz4vdgmzpEWVgLNKTk_pbaHohboyPszM4ToArmG0gUjJOAIUsYBtlYLByO8tDqoXINRVfVUoMdmFPqFq0TnPyoUbU0459KiCC-yi84-Mm5XnWoAzzYGJS9FERIJWQKyRmmtuuzxYcfYxc9SHBjJTDLzDLLdktZrzVvbKaesCcDTjy0a6kjMgSQ6MuALSkud7XeYivXo36TuKGv6O6eykV5ZcUMPOR1QOeBjeFF1XVLLx2l9t385huv6PSt2T23zA_Sflk916YaGjBqhZJj9Y9yHUVdDA4zmwZUCPxll5hTihHf27csUNHfoYUPR-qqmeSl0F758K0M-L0qEMWwuKEjlLL72V0u6YU7fOOqbHg)

- `mono_teleop_simu.yml`: A simple simulated teleoperation pipeline that lets you control a simulated follower arm
  using a leader arm. It does not record episodes, so you don't need a camera.

You must configure the arm, retrieve the device port, and modify the file `mono_teleop_simu.yml` to set the correct
environment variables (e.g. `PORT` and `CONFIG`).

```bash
cd dora-lerobot/


# If you are using a custom environment, you will have to activate it before running the command
source [your_custom_env_bin]/activate

# If you followed the installation instructions, you can run the following command
source venv/bin/activate # On Linux
source venv/Scripts/activate # On Windows bash
venv\Scripts\activate.bat # On Windows cmd
venv\Scripts\activate.ps1 # On Windows PowerShell

dora build ./robots/alexk-lcr/graphs/mono_teleop_simu.yml # Only the first time, it will install all the requirements if needed

dora up
dora start ./robots/alexk-lcr/graphs/mono_teleop_simu.yml
```

[![](https://mermaid.ink/img/pako:eNp1UstuwyAQ_JUV50Rurz70UPXaU3sLFdqatY2CwcKgqIry711w4ubhcoDdYWZ3eBxF4zWJWsB5tNYfmh5DhM9X6QBsE5Ql1BQumXGRwugtRrpArbcsy5QpfXcBxx6UUtoH5AV2OfjK3OvdaAYK5zmTSlRYAFlQFaAajLVmqp6fdlIucVV45LR0Zbp1AdstRNPsAScYk7Vq9JOJxjveeFk50Jxl1UJk5Yw-au-Ov2a1lFpt_HdR_yuL9TXBXffM7TxedWHXh2AiqVv4sZbYCG47oNH88sdcW4rY00BS1BxqDHsppDsxD1P0Hz-uEXUMiTYi-NT1om7RTpylUbOTN4P8rMOCkjbRh_f5Y5X_dfoF5ZjY9g?type=png)](https://mermaid.live/edit#pako:eNp1UstuwyAQ_JUV50Rurz70UPXaU3sLFdqatY2CwcKgqIry711w4ubhcoDdYWZ3eBxF4zWJWsB5tNYfmh5DhM9X6QBsE5Ql1BQumXGRwugtRrpArbcsy5QpfXcBxx6UUtoH5AV2OfjK3OvdaAYK5zmTSlRYAFlQFaAajLVmqp6fdlIucVV45LR0Zbp1AdstRNPsAScYk7Vq9JOJxjveeFk50Jxl1UJk5Yw-au-Ov2a1lFpt_HdR_yuL9TXBXffM7TxedWHXh2AiqVv4sZbYCG47oNH88sdcW4rY00BS1BxqDHsppDsxD1P0Hz-uEXUMiTYi-NT1om7RTpylUbOTN4P8rMOCkjbRh_f5Y5X_dfoF5ZjY9g)

- `mono_teleop_real_and_simu.yml`: A simple pipeline that lets you control both a simulated and a real follower arm
  using a real leader arm. It does not record episodes, so you don't need a camera.

You must configure the arms, retrieve the device ports, and modify the file `mono_teleop_real_and_simu.yml` to set the
correct environment variables (e.g. `PORT` and `CONFIG`).

```bash
cd dora-lerobot/


# If you are using a custom environment, you will have to activate it before running the command
source [your_custom_env_bin]/activate

# If you followed the installation instructions, you can run the following command
source venv/bin/activate # On Linux
source venv/Scripts/activate # On Windows bash
venv\Scripts\activate.bat # On Windows cmd
venv\Scripts\activate.ps1 # On Windows PowerShell

dora build ./robots/alexk-lcr/graphs/mono_teleop_real_and_simu.yml # Only the first time, it will install all the requirements if needed

dora up
dora start ./robots/alexk-lcr/graphs/mono_teleop_real_and_simu.yml
```

[![](https://mermaid.ink/img/pako:eNqdU8luwyAQ_RXEOZHbqw89VL321N5ChajBMQqLxaKoivLvHXCM3IS0lX3Aw-O9YRbmhDvLBW4xuny9ssduYC6g92diEFKdo0owLty8kyYIN1rFgpih3iqQVSnUSx2veRfQx8-9Y-OAKKXcOgY_tEvGRxIsT4PUoJrWRMpWZiGUBE0GGi2Vkr55fNgRUuwm84ThxOSlEgrablGQ3QExj8aoFB2tl0FaAwdPlRLM4qQrVNAWpzf6StEml9cuJvRfDm5SgPQKf9mSWoXyvdVUf2lmEu0tW4gg4qOT0Oaf8D1fq3Muz2hdLn_Kc_fvqmrBrK5FVuMNhhg0kxxm75TuIDgMQguCWzA5cweCiTkDj8Vg375Mh9vgothgZ-N-wG3PlIddHDlE9CIZzIouqOAyWPc6jXae8PM3I_doSQ?type=png)](https://mermaid.live/edit#pako:eNqdU8luwyAQ_RXEOZHbqw89VL321N5ChajBMQqLxaKoivLvHXCM3IS0lX3Aw-O9YRbmhDvLBW4xuny9ssduYC6g92diEFKdo0owLty8kyYIN1rFgpih3iqQVSnUSx2veRfQx8-9Y-OAKKXcOgY_tEvGRxIsT4PUoJrWRMpWZiGUBE0GGi2Vkr55fNgRUuwm84ThxOSlEgrablGQ3QExj8aoFB2tl0FaAwdPlRLM4qQrVNAWpzf6StEml9cuJvRfDm5SgPQKf9mSWoXyvdVUf2lmEu0tW4gg4qOT0Oaf8D1fq3Muz2hdLn_Kc_fvqmrBrK5FVuMNhhg0kxxm75TuIDgMQguCWzA5cweCiTkDj8Vg375Mh9vgothgZ-N-wG3PlIddHDlE9CIZzIouqOAyWPc6jXae8PM3I_doSQ)

- `mono_replay_real.yml`: A simple real replay pipeline that lets you replay a recorded episode.

You must set the dataset path and episode index in the file `mono_replay_real.yml` via the corresponding environment
variables (e.g. `PATH`, `EPISODE`). You must also configure the follower arm, retrieve the device port, and set the
correct environment variables (e.g. `PORT` and `CONFIG`) in the same file.

```bash
cd dora-lerobot/

# If you are using a custom environment, you will have to activate it before running the command
source [your_custom_env_bin]/activate

# If you followed the installation instructions, you can run the following command
source venv/bin/activate # On Linux
source venv/Scripts/activate # On Windows bash
venv\Scripts\activate.bat # On Windows cmd
venv\Scripts\activate.ps1 # On Windows PowerShell

dora build ./robots/alexk-lcr/graphs/mono_replay_real.yml # Only the first time, it will install all the requirements if needed

dora up
dora start ./robots/alexk-lcr/graphs/mono_replay_real.yml
```

[![](https://mermaid.ink/img/pako:eNptkbFuAyEMhl_F8pzohmw3dKiydmq3UCH38N2hcoB8oCiK8u4BmkZNWwbz8_PZCPuMQzCMPcJtjS4ch5kkwduz8gDC0dFJD86yT9Vwg-gxuIKxKL_mj0kozqC1NkGobHCo4r2yP2-TXVhusUJNNQqgJnTN6BbrnF273e6g1F13jWNvlG_h_wzYbiFm53QMq002-GI8_f3Ag9FyvnFa4Sg2sZ4C_ary-GvcYHl5IWtK5861qMI088IK-yINyadC5S-Fo5zC68kP2CfJvEEJeZqxH8mt5ZSjocR7S6VNy91lY1OQl6_BtPlcrmjBlKg?type=png)](https://mermaid.live/edit#pako:eNptkbFuAyEMhl_F8pzohmw3dKiydmq3UCH38N2hcoB8oCiK8u4BmkZNWwbz8_PZCPuMQzCMPcJtjS4ch5kkwduz8gDC0dFJD86yT9Vwg-gxuIKxKL_mj0kozqC1NkGobHCo4r2yP2-TXVhusUJNNQqgJnTN6BbrnF273e6g1F13jWNvlG_h_wzYbiFm53QMq002-GI8_f3Ag9FyvnFa4Sg2sZ4C_ary-GvcYHl5IWtK5861qMI088IK-yINyadC5S-Fo5zC68kP2CfJvEEJeZqxH8mt5ZSjocR7S6VNy91lY1OQl6_BtPlcrmjBlKg)

## License

This library is licensed under the [Apache License 2.0](../../LICENSE).

examples/alexk-lcr/RECORDING.md (+80, -0)

@@ -0,0 +1,80 @@
# Dora pipeline Robots

AlexK Low Cost Robot is a low-cost robotic arm that can be teleoperated using a similar arm. This repository contains
the Dora pipeline to manipulate the arms, the camera, and record/replay episodes with LeRobot.

## Recording

This section explains how to record episodes for LeRobot using the AlexK Low Cost Robot.

Recording is the process of teleoperating the robot and saving the episodes to a dataset. The dataset is used to train
the robot to perform tasks autonomously.

To record episodes with Dora, you have to configure the Dataflow `record_mono_teleop_real.yml` file to integrate the
arms and the camera. The graph file is located in the `graphs` folder.

Make sure to:

- Adjust the serial ports of `lcr-leader` and `lcr-follower` in the `record_mono_teleop_real.yml` file.
- Adjust the camera PATH in the `record_mono_teleop_real.yml` file.
- Adjust image and video WIDTH and HEIGHT in the `record_mono_teleop_real.yml` file, if needed.
- Adjust recording framerate with your camera framerate in the `record_mono_teleop_real.yml` file.
- Adjust CONFIG path environment variables in the `record_mono_teleop_real.yml` file for both arms if needed.
- Adjust `LEADER_CONTROL` and `FOLLOWER_CONTROL` environment variables in the `record_mono_teleop_real.yml` file if
needed.

You can now start the Dora pipeline to record episodes for LeRobot:

```bash
cd dora-lerobot

# If you are using a custom environment, you will have to activate it before running the command
source [your_custom_env_bin]/activate

# If you followed the installation instructions, you can run the following command
source venv/bin/activate # On Linux
source venv/Scripts/activate # On Windows bash
venv\Scripts\activate.bat # On Windows cmd
venv\Scripts\activate.ps1 # On Windows PowerShell

dora build ./robots/alexk-lcr/graphs/record_mono_teleop_real.yml # Only the first time, it will install all the requirements if needed

dora up
dora start ./robots/alexk-lcr/graphs/record_mono_teleop_real.yml
```

Then, you can teleoperate the follower with the leader. A window will pop up showing the camera feed and some text.

1. Press Space to start or stop recording.
2. Press Return to mark the current episode as failed (or the previous episode, if you have not started a new one).
3. Close the window to stop the recording.
4. Write down the location of the logs (e.g. `018fc3a8-3b76-70f5-84a2-22b84df24739`); this is where the
   dataset (and logs) are stored.
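The keyboard controls above amount to a small state machine. Here is an illustrative sketch of that logic, not the actual recorder code; in particular, whether pressing Return also stops the current recording is an assumption:

```python
class RecorderState:
    """Tracks episodes as Space/Return key presses arrive, mirroring the
    controls listed above (illustrative model, not the real recorder)."""

    def __init__(self):
        self.recording = False
        self.episodes = []  # list of {"failed": bool}, in recording order

    def press_space(self):
        # Space toggles recording; stopping stores the finished episode.
        if self.recording:
            self.episodes.append({"failed": False})
        self.recording = not self.recording

    def press_return(self):
        # Return marks the current episode as failed, or the previous
        # one if no episode is currently being recorded.
        if self.recording:
            self.episodes.append({"failed": True})
            self.recording = False
        elif self.episodes:
            self.episodes[-1]["failed"] = True
```

Marking failures at record time, rather than pruning afterwards, keeps the dataset build step a simple filter over the `failed` flag.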

You can now use our script to convert the logs to an understandable dataset:

```bash
cd dora-lerobot

# If you are using a custom environment, you will have to activate it before running the command
source [your_custom_env_bin]/activate

# If you followed the installation instructions, you can run the following command
source venv/bin/activate # On Linux
source venv/Scripts/activate # On Windows bash
venv\Scripts\activate.bat # On Windows cmd
venv\Scripts\activate.ps1 # On Windows PowerShell

python ./datasets/build_dataset.py --record-path [path_to_recorded_logs] --dataset-name [dataset_name] --framerate [framerate]
```

**Note:** By default, the framerate is 30. If you recorded with a different framerate, you will have to adjust it.
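Why the framerate matters: converting logs to a dataset means mapping each log timestamp to a video frame index, so a mismatched rate skews every episode. A hypothetical illustration (not the actual `build_dataset.py` logic):

```python
def frame_index(timestamp_s: float, framerate: float = 30.0) -> int:
    """Map a log timestamp (in seconds) to the nearest video frame index."""
    return round(timestamp_s * framerate)
```

If the camera recorded at 60 fps but the converter assumes 30, every timestamp lands on a frame half as far into the video, desynchronizing joint states from images.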

## The Dora graph

[![](https://mermaid.ink/img/pako:eNqdVMFu2zAM_RVB57berjn0MOzaU3eLCoGR6ESobBqSnK4o-u-j5NizE6Np64NBPfE9Us-03qQhi3IjxempPb2YA4Qk_vxSrRDeBO0RLIZx5dqEoSMPCUeoJs-0IYU6bM1RG2gwQAaOziJpBmkUwUA7SjqgoWAzYinAabmtZgulnlQb-90-QHcQWuuyp7XY5uApU-e7yXHN0zsnlahkDSWqAlSN897F6uePrVJTXJU8bLmf8jpvU9zeiuTMs4Aout573VF0yVHLG_crLo2WZN6UytwRv-Sv-DpInksM6HWBi_75YFPy_JN99aQL7rJwJu8JZiRWeQkuoV7C3-jDNbDHQryYsZWzdi7ywGXyKeQ2Lf4t_IuRXAhm-v9an83lQiXgj1an4Xirc34-hNMx1ymf8RfMZOn85_m6L1fZNTiPtsxxifRVjYV9C7doFzEcIbd-V8B4x57qvltt5YNfai4U02DSQkDeSLa8AWf5onvLckqmAzao5IZDC-FZSdW-cx70iR5fWyM3KfR4IwP1-4Pc1OAjr_rOsvxvB3zlNBOK1iUKD8M9Wq7T93-SiOfx?type=png)](https://mermaid.live/edit#pako:eNqdVMFu2zAM_RVB57berjn0MOzaU3eLCoGR6ESobBqSnK4o-u-j5NizE6Np64NBPfE9Us-03qQhi3IjxempPb2YA4Qk_vxSrRDeBO0RLIZx5dqEoSMPCUeoJs-0IYU6bM1RG2gwQAaOziJpBmkUwUA7SjqgoWAzYinAabmtZgulnlQb-90-QHcQWuuyp7XY5uApU-e7yXHN0zsnlahkDSWqAlSN897F6uePrVJTXJU8bLmf8jpvU9zeiuTMs4Aout573VF0yVHLG_crLo2WZN6UytwRv-Sv-DpInksM6HWBi_75YFPy_JN99aQL7rJwJu8JZiRWeQkuoV7C3-jDNbDHQryYsZWzdi7ywGXyKeQ2Lf4t_IuRXAhm-v9an83lQiXgj1an4Xirc34-hNMx1ymf8RfMZOn85_m6L1fZNTiPtsxxifRVjYV9C7doFzEcIbd-V8B4x57qvltt5YNfai4U02DSQkDeSLa8AWf5onvLckqmAzao5IZDC-FZSdW-cx70iR5fWyM3KfR4IwP1-4Pc1OAjr_rOsvxvB3zlNBOK1iUKD8M9Wq7T93-SiOfx)

## License

This library is licensed under the [Apache License 2.0](../../LICENSE).

examples/alexk-lcr/assets/simulation/lift_cube.xml (+135, -0)

@@ -0,0 +1,135 @@
<mujoco model="bd1 scene">
  <option timestep="0.005"/>

  <compiler angle="radian" autolimits="true"/>

  <asset>
    <mesh name="base" file="base.stl"/>
    <mesh name="dc11_a01_spacer_dummy" file="dc11_a01_spacer_dummy.stl"/>
    <mesh name="dc11_a01_dummy" file="dc11_a01_dummy.stl"/>
    <mesh name="rotation_connector" file="rotation_connector.stl"/>
    <mesh name="arm_connector" file="arm_connector.stl"/>
    <mesh name="dc15_a01_horn_idle2_dummy" file="dc15_a01_horn_idle2_dummy.stl"/>
    <mesh name="dc15_a01_case_m_dummy" file="dc15_a01_case_m_dummy.stl"/>
    <mesh name="dc15_a01_case_f_dummy" file="dc15_a01_case_f_dummy.stl"/>
    <mesh name="dc15_a01_horn_dummy" file="dc15_a01_horn_dummy.stl"/>
    <mesh name="dc15_a01_case_b_dummy" file="dc15_a01_case_b_dummy.stl"/>
    <mesh name="connector" file="connector.stl"/>
    <mesh name="shoulder_rotation" file="shoulder_rotation.stl"/>
    <mesh name="static_side" file="static_side.stl"/>
    <mesh name="moving_side" file="moving_side.stl"/>
    <texture type="skybox" builtin="gradient" rgb1="0.3 0.5 0.7" rgb2="0 0 0" width="512" height="3072"/>
    <texture type="2d" name="groundplane" builtin="checker" mark="edge" rgb1="0.2 0.3 0.4" rgb2="0.1 0.2 0.3" markrgb="0.8 0.8 0.8" width="300" height="300"/>
    <material name="groundplane" texture="groundplane" texuniform="true" texrepeat="5 5" reflectance="0.2"/>
  </asset>

  <visual>
    <headlight diffuse="0.6 0.6 0.6" ambient="0.3 0.3 0.3" specular="0 0 0"/>
    <rgba haze="0.15 0.25 0.35 1"/>
    <global azimuth="150" elevation="-20" offheight="640"/>
  </visual>

  <worldbody>
    <light pos="0 0 3" dir="0 0 -1" directional="false"/>
    <body name="floor">
      <geom pos="0 0 0" name="floor" size="0 0 .125" type="plane" material="groundplane" conaffinity="1" contype="1"/>
    </body>

    <body name="cube" pos="0.1 0.1 0.01">
      <freejoint name="cube"/>
      <inertial pos="0 0 0" mass="0.1" diaginertia="0.00001125 0.00001125 0.00001125"/>
      <geom friction="0.5" condim="3" pos="0 0 0" size="0.015 0.015 0.015" type="box" name="cube" rgba="0.5 0 0 1" priority="1"/>
    </body>

    <camera name="camera_front" pos="0.049 0.888 0.317" xyaxes="-0.998 0.056 -0.000 -0.019 -0.335 0.942"/>
    <camera name="camera_top" pos="0 0 1" euler="0 0 0" mode="fixed"/>
    <camera name="camera_vizu" pos="-0.1 0.6 0.3" quat="-0.15 -0.1 0.6 1"/>
    <geom pos="0.0401555 -0.0353754 -0.0242427" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="base"/>
    <geom pos="0.0511555 0.0406246 0.0099573" quat="0 0 0 1" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
    <geom pos="0.0291555 0.000624643 0.0099573" quat="0 0 0 1" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.000624643 -0.0184427" quat="0 0 1 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0401555 0.0326246 -0.0042427" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc11_a01_dummy"/>
<geom pos="0.0291555 0.000624643 -0.0184427" quat="0 0.707107 0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0291555 0.0406246 0.0099573" quat="0.707107 0 0 -0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.0406246 -0.0184427" quat="0 0.707107 -0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0291555 0.0406246 -0.0184427" quat="0 -1 0 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.000624643 0.0099573" quat="0.707107 0 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>

<body name="pitch1_assembly" pos="0.0401555 0.0326246 0.0166573">

<inertial pos="-0.000767103 -0.0121505 0.0134241" quat="0.498429 0.53272 -0.473938 0.493113" mass="0.0606831" diaginertia="1.86261e-05 1.72746e-05 1.11693e-05"/>
<joint name="shoulder_pan_joint" pos="0 0 0" axis="0 0 1" range="-3.14159 3.14159" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="0 0 -0.0209" type="mesh" rgba="0.231373 0.380392 0.705882 1" mesh="rotation_connector"/>
<geom pos="-0.014 0.008 0.0264" quat="0 -0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 -0.032 0.0044" quat="0 -0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 -0.032 0.0264" quat="0.707107 0 0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0002 0 0.0154" quat="0.707107 0 -0.707107 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc11_a01_dummy"/>
<geom pos="0.0144 -0.032 0.0044" quat="0.5 0.5 0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 0.008 0.0044" quat="0.5 0.5 -0.5 -0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 0.008 0.0264" quat="0.5 -0.5 0.5 -0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 0.008 0.0044" quat="0 0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 -0.032 0.0264" quat="0.5 -0.5 -0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>

<body name="pitch2_assembly" pos="-0.0188 0 0.0154" quat="0 0.707107 0 0.707107">

<inertial pos="0.0766242 -0.00031229 0.0187402" quat="0.52596 0.513053 0.489778 0.469319" mass="0.0432446" diaginertia="7.21796e-05 7.03107e-05 1.07533e-05"/>
<joint name="shoulder_lift_joint" pos="0 0 0" axis="0 0 1" range="-1.5708 1.22173" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="0 0 0.019" quat="0.5 -0.5 -0.5 -0.5" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="arm_connector"/>
<geom pos="0.1083 -0.0148 0.03035" quat="1 0 0 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="0.1083 -0.0148 0.01075" quat="0 -1 0 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="0.1083 -0.0148 0.01075" quat="0 -1 0 0" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="0.1083 -0.0148 0.00715" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="0.1083 -0.0148 0.03025" quat="0 -1 0 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="pitch3_assembly" pos="0.1083 -0.0148 0.00425" quat="0.707107 0 0 0.707107">
<inertial pos="-0.0551014 -0.00287792 0.0144813" quat="0.500323 0.499209 0.499868 0.5006" mass="0.0788335" diaginertia="6.80912e-05 6.45748e-05 9.84479e-06"/>
<joint name="elbow_flex_joint" pos="0 0 0" axis="0 0 1" range="-1.48353 1.74533" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00863031 0.00847376 0.0145" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="connector"/>
<geom pos="-0.100476 -0.00269986 0.02635" quat="0.707107 0 0 -0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00675" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00675" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00315" quat="0.5 -0.5 -0.5 0.5" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="-0.100476 -0.00269986 0.02625" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="effector_roll_assembly" pos="-0.100476 -0.00269986 0.02925" quat="0 -0.707107 -0.707107 0">
<inertial pos="-1.65017e-05 -0.02659 0.0195388" quat="0.936813 0.349829 -0.00055331 -0.000300569" mass="0.0240506" diaginertia="6.03208e-06 4.12894e-06 3.3522e-06"/>
<joint name="wrist_flex_joint" pos="0 0 0" axis="0 0 1" range="-1.91986 1.91986" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.0109998 -0.0190002 0.039" quat="0.707107 -0.707107 0 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="shoulder_rotation"/>
<geom pos="-7.44154e-06 -0.0385002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="-7.44154e-06 -0.0385002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="-7.44154e-06 -0.0421002 0.0133967" quat="0 1 0 0" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="-7.44154e-06 -0.0190002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="gripper_assembly" pos="-7.44154e-06 -0.0450002 0.0133967" quat="0.5 -0.5 -0.5 -0.5">
<inertial pos="-0.00548595 -0.000433143 -0.0190793" quat="0.700194 0.164851 0.167361 0.674197" mass="0.0360627" diaginertia="1.3261e-05 1.231e-05 5.3532e-06"/>
<joint name="wrist_roll_joint" pos="0 0 0" axis="0 0 1" range="-2.96706 2.96706" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00075 -0.01475 -0.02" quat="0.707107 -0.707107 0 0" type="mesh" rgba="0.917647 0.917647 0.917647 1" mesh="static_side"/>
<geom pos="0.00755 0.01135 -0.013" quat="0.5 -0.5 0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="0.00755 -0.00825 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="0.00755 -0.00825 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="0.00755 -0.01185 -0.013" quat="0 -0.707107 0 -0.707107" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="0.00755 0.01125 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="moving_side" pos="0.00755 -0.01475 -0.013" quat="0.707107 -0.707107 0 0">
<inertial pos="-0.000395599 0.022415 0.0145636" quat="0.722353 0.689129 0.0389102 0.0423547" mass="0.0089856" diaginertia="3.28451e-06 2.24898e-06 1.41539e-06"/>
<joint name="gripper_joint" pos="0 0 0" axis="0 0 1" range="-1.74533 0.0523599" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00838199 -0.000256591 -0.003" type="mesh" rgba="0.768627 0.886275 0.952941 1" mesh="moving_side"/>
</body>
</body>
</body>
</body>
</body>
</body>
</worldbody>

<actuator>
<position joint="shoulder_pan_joint" ctrlrange="-3.14159 3.14159" ctrllimited="true" kp="20" />
<position joint="shoulder_lift_joint" ctrlrange="-1.5708 1.22173" ctrllimited="true" kp="20" />
<position joint="elbow_flex_joint" ctrlrange="-1.48353 1.74533" ctrllimited="true" kp="20" />
<position joint="wrist_flex_joint" ctrlrange="-1.91986 1.91986" ctrllimited="true" kp="20" />
<position joint="wrist_roll_joint" ctrlrange="-2.96706 2.96706" ctrllimited="true" kp="20" />
<position joint="gripper_joint" ctrlrange="-1.74533 0.0523599" ctrllimited="true" kp="20" />
</actuator>
</mujoco>
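The `<actuator>` block above attaches a clamped position servo (`kp="20"`) to each of the six joints, so the valid command for a joint is exactly its `ctrlrange`. As a quick sanity check of those limits, here is a standard-library sketch that extracts the ranges; the inline string is an abbreviated copy of the block above, not a new file:

```python
import xml.etree.ElementTree as ET

# Abbreviated inline copy of the <actuator> block from lift_cube.xml, for illustration.
ACTUATORS = """
<actuator>
  <position joint="shoulder_pan_joint" ctrlrange="-3.14159 3.14159" ctrllimited="true" kp="20" />
  <position joint="gripper_joint" ctrlrange="-1.74533 0.0523599" ctrllimited="true" kp="20" />
</actuator>
"""

def ctrl_ranges(xml_text):
    """Return {joint_name: (lo, hi)} for every <position> actuator in the XML."""
    root = ET.fromstring(xml_text)
    out = {}
    for pos in root.iter("position"):
        lo, hi = (float(v) for v in pos.attrib["ctrlrange"].split())
        out[pos.attrib["joint"]] = (lo, hi)
    return out

ranges = ctrl_ranges(ACTUATORS)
print(ranges["gripper_joint"])  # (-1.74533, 0.0523599)
```

Clamping commands to these ranges before writing them into `data.ctrl` avoids the position servos saturating against `actuatorfrcrange`.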

+ 135
- 0
examples/alexk-lcr/assets/simulation/pick_place_cube.xml

@@ -0,0 +1,135 @@
<mujoco model="bd1 scene">
<option timestep="0.005"/>

<compiler angle="radian" autolimits="true"/>

<asset>
<mesh name="base" file="base.stl"/>
<mesh name="dc11_a01_spacer_dummy" file="dc11_a01_spacer_dummy.stl"/>
<mesh name="dc11_a01_dummy" file="dc11_a01_dummy.stl"/>
<mesh name="rotation_connector" file="rotation_connector.stl"/>
<mesh name="arm_connector" file="arm_connector.stl"/>
<mesh name="dc15_a01_horn_idle2_dummy" file="dc15_a01_horn_idle2_dummy.stl"/>
<mesh name="dc15_a01_case_m_dummy" file="dc15_a01_case_m_dummy.stl"/>
<mesh name="dc15_a01_case_f_dummy" file="dc15_a01_case_f_dummy.stl"/>
<mesh name="dc15_a01_horn_dummy" file="dc15_a01_horn_dummy.stl"/>
<mesh name="dc15_a01_case_b_dummy" file="dc15_a01_case_b_dummy.stl"/>
<mesh name="connector" file="connector.stl"/>
<mesh name="shoulder_rotation" file="shoulder_rotation.stl"/>
<mesh name="static_side" file="static_side.stl"/>
<mesh name="moving_side" file="moving_side.stl"/>
<texture type="skybox" builtin="gradient" rgb1="0.3 0.5 0.7" rgb2="0 0 0" width="512" height="3072" />
<texture type="2d" name="groundplane" builtin="checker" mark="edge" rgb1="0.2 0.3 0.4" rgb2="0.1 0.2 0.3" markrgb="0.8 0.8 0.8" width="300" height="300" />
<material name="groundplane" texture="groundplane" texuniform="true" texrepeat="5 5" reflectance="0.2" />
</asset>

<visual>
<headlight diffuse="0.6 0.6 0.6" ambient="0.3 0.3 0.3" specular="0 0 0" />
<rgba haze="0.15 0.25 0.35 1" />
<global azimuth="150" elevation="-20" offheight="640" />
</visual>

<worldbody>
<light pos="0 0 3" dir="0 0 -1" directional="false" />
<body name="floor">
<geom pos="0 0 0" name="floor" size="0 0 .125" type="plane" material="groundplane" conaffinity="1" contype="1" />
</body>

<body name="cube" pos="0.1 0.1 0.01">
<freejoint name="cube"/>
<inertial pos="0 0 0" mass="0.1" diaginertia="0.00001125 0.00001125 0.00001125"/>
<geom friction="0.5" condim="3" pos="0 0 0" size="0.015 0.015 0.015" type="box" name="cube" rgba="0.5 0 0 1" priority="1"/>
</body>


<camera name="camera_front" pos="0.049 0.888 0.317" xyaxes="-0.998 0.056 -0.000 -0.019 -0.335 0.942"/>
<camera name="camera_top" pos="0 0 1" euler="0 0 0" mode="fixed"/>
<camera name="camera_vizu" pos="-0.1 0.6 0.3" quat="-0.15 -0.1 0.6 1"/>
<geom pos="0.0401555 -0.0353754 -0.0242427" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="base"/>
<geom pos="0.0511555 0.0406246 0.0099573" quat="0 0 0 1" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0291555 0.000624643 0.0099573" quat="0 0 0 1" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.000624643 -0.0184427" quat="0 0 1 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0401555 0.0326246 -0.0042427" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc11_a01_dummy"/>
<geom pos="0.0291555 0.000624643 -0.0184427" quat="0 0.707107 0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0291555 0.0406246 0.0099573" quat="0.707107 0 0 -0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.0406246 -0.0184427" quat="0 0.707107 -0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0291555 0.0406246 -0.0184427" quat="0 -1 0 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.000624643 0.0099573" quat="0.707107 0 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>

<body name="pitch1_assembly" pos="0.0401555 0.0326246 0.0166573">

<inertial pos="-0.000767103 -0.0121505 0.0134241" quat="0.498429 0.53272 -0.473938 0.493113" mass="0.0606831" diaginertia="1.86261e-05 1.72746e-05 1.11693e-05"/>
<joint name="shoulder_pan_joint" pos="0 0 0" axis="0 0 1" range="-3.14159 3.14159" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="0 0 -0.0209" type="mesh" rgba="0.231373 0.380392 0.705882 1" mesh="rotation_connector"/>
<geom pos="-0.014 0.008 0.0264" quat="0 -0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 -0.032 0.0044" quat="0 -0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 -0.032 0.0264" quat="0.707107 0 0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0002 0 0.0154" quat="0.707107 0 -0.707107 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc11_a01_dummy"/>
<geom pos="0.0144 -0.032 0.0044" quat="0.5 0.5 0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 0.008 0.0044" quat="0.5 0.5 -0.5 -0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 0.008 0.0264" quat="0.5 -0.5 0.5 -0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 0.008 0.0044" quat="0 0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 -0.032 0.0264" quat="0.5 -0.5 -0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>

<body name="pitch2_assembly" pos="-0.0188 0 0.0154" quat="0 0.707107 0 0.707107">

<inertial pos="0.0766242 -0.00031229 0.0187402" quat="0.52596 0.513053 0.489778 0.469319" mass="0.0432446" diaginertia="7.21796e-05 7.03107e-05 1.07533e-05"/>
<joint name="shoulder_lift_joint" pos="0 0 0" axis="0 0 1" range="-1.5708 1.22173" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="0 0 0.019" quat="0.5 -0.5 -0.5 -0.5" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="arm_connector"/>
<geom pos="0.1083 -0.0148 0.03035" quat="1 0 0 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="0.1083 -0.0148 0.01075" quat="0 -1 0 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="0.1083 -0.0148 0.01075" quat="0 -1 0 0" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="0.1083 -0.0148 0.00715" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="0.1083 -0.0148 0.03025" quat="0 -1 0 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="pitch3_assembly" pos="0.1083 -0.0148 0.00425" quat="0.707107 0 0 0.707107">
<inertial pos="-0.0551014 -0.00287792 0.0144813" quat="0.500323 0.499209 0.499868 0.5006" mass="0.0788335" diaginertia="6.80912e-05 6.45748e-05 9.84479e-06"/>
<joint name="elbow_flex_joint" pos="0 0 0" axis="0 0 1" range="-1.48353 1.74533" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00863031 0.00847376 0.0145" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="connector"/>
<geom pos="-0.100476 -0.00269986 0.02635" quat="0.707107 0 0 -0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00675" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00675" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00315" quat="0.5 -0.5 -0.5 0.5" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="-0.100476 -0.00269986 0.02625" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="effector_roll_assembly" pos="-0.100476 -0.00269986 0.02925" quat="0 -0.707107 -0.707107 0">
<inertial pos="-1.65017e-05 -0.02659 0.0195388" quat="0.936813 0.349829 -0.00055331 -0.000300569" mass="0.0240506" diaginertia="6.03208e-06 4.12894e-06 3.3522e-06"/>
<joint name="wrist_flex_joint" pos="0 0 0" axis="0 0 1" range="-1.91986 1.91986" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.0109998 -0.0190002 0.039" quat="0.707107 -0.707107 0 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="shoulder_rotation"/>
<geom pos="-7.44154e-06 -0.0385002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="-7.44154e-06 -0.0385002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="-7.44154e-06 -0.0421002 0.0133967" quat="0 1 0 0" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="-7.44154e-06 -0.0190002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="gripper_assembly" pos="-7.44154e-06 -0.0450002 0.0133967" quat="0.5 -0.5 -0.5 -0.5">
<inertial pos="-0.00548595 -0.000433143 -0.0190793" quat="0.700194 0.164851 0.167361 0.674197" mass="0.0360627" diaginertia="1.3261e-05 1.231e-05 5.3532e-06"/>
<joint name="wrist_roll_joint" pos="0 0 0" axis="0 0 1" range="-2.96706 2.96706" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00075 -0.01475 -0.02" quat="0.707107 -0.707107 0 0" type="mesh" rgba="0.917647 0.917647 0.917647 1" mesh="static_side"/>
<geom pos="0.00755 0.01135 -0.013" quat="0.5 -0.5 0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="0.00755 -0.00825 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="0.00755 -0.00825 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="0.00755 -0.01185 -0.013" quat="0 -0.707107 0 -0.707107" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="0.00755 0.01125 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="moving_side" pos="0.00755 -0.01475 -0.013" quat="0.707107 -0.707107 0 0">
<inertial pos="-0.000395599 0.022415 0.0145636" quat="0.722353 0.689129 0.0389102 0.0423547" mass="0.0089856" diaginertia="3.28451e-06 2.24898e-06 1.41539e-06"/>
<joint name="gripper_joint" pos="0 0 0" axis="0 0 1" range="-1.74533 0.0523599" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00838199 -0.000256591 -0.003" type="mesh" rgba="0.768627 0.886275 0.952941 1" mesh="moving_side"/>
</body>
</body>
</body>
</body>
</body>
</body>
</worldbody>

<actuator>
<position joint="shoulder_pan_joint" ctrlrange="-3.14159 3.14159" ctrllimited="true" kp="20" />
<position joint="shoulder_lift_joint" ctrlrange="-1.5708 1.22173" ctrllimited="true" kp="20" />
<position joint="elbow_flex_joint" ctrlrange="-1.48353 1.74533" ctrllimited="true" kp="20" />
<position joint="wrist_flex_joint" ctrlrange="-1.91986 1.91986" ctrllimited="true" kp="20" />
<position joint="wrist_roll_joint" ctrlrange="-2.96706 2.96706" ctrllimited="true" kp="20" />
<position joint="gripper_joint" ctrlrange="-1.74533 0.0523599" ctrllimited="true" kp="20" />
</actuator>
</mujoco>

+ 137
- 0
examples/alexk-lcr/assets/simulation/push_cube.xml

@@ -0,0 +1,137 @@
<mujoco model="bd1 scene">
<option timestep="0.005"/>

<compiler angle="radian" autolimits="true"/>

<asset>
<mesh name="base" file="base.stl"/>
<mesh name="dc11_a01_spacer_dummy" file="dc11_a01_spacer_dummy.stl"/>
<mesh name="dc11_a01_dummy" file="dc11_a01_dummy.stl"/>
<mesh name="rotation_connector" file="rotation_connector.stl"/>
<mesh name="arm_connector" file="arm_connector.stl"/>
<mesh name="dc15_a01_horn_idle2_dummy" file="dc15_a01_horn_idle2_dummy.stl"/>
<mesh name="dc15_a01_case_m_dummy" file="dc15_a01_case_m_dummy.stl"/>
<mesh name="dc15_a01_case_f_dummy" file="dc15_a01_case_f_dummy.stl"/>
<mesh name="dc15_a01_horn_dummy" file="dc15_a01_horn_dummy.stl"/>
<mesh name="dc15_a01_case_b_dummy" file="dc15_a01_case_b_dummy.stl"/>
<mesh name="connector" file="connector.stl"/>
<mesh name="shoulder_rotation" file="shoulder_rotation.stl"/>
<mesh name="static_side" file="static_side.stl"/>
<mesh name="moving_side" file="moving_side.stl"/>
<texture type="skybox" builtin="gradient" rgb1="0.3 0.5 0.7" rgb2="0 0 0" width="512" height="3072" />
<texture type="2d" name="groundplane" builtin="checker" mark="edge" rgb1="0.2 0.3 0.4" rgb2="0.1 0.2 0.3" markrgb="0.8 0.8 0.8" width="300" height="300" />
<material name="groundplane" texture="groundplane" texuniform="true" texrepeat="5 5" reflectance="0.2" />
</asset>

<visual>
<headlight diffuse="0.6 0.6 0.6" ambient="0.3 0.3 0.3" specular="0 0 0" />
<rgba haze="0.15 0.25 0.35 1" />
<global azimuth="150" elevation="-20" offheight="640" />
</visual>

<worldbody>
<light pos="0 0 3" dir="0 0 -1" directional="false" />
<body name="floor">
<geom pos="0 0 0" name="floor" size="0 0 .125" type="plane" material="groundplane" conaffinity="1" contype="1" />
</body>

<body name="cube" pos="0.1 0.1 0.01">
<freejoint name="cube"/>
<inertial pos="0 0 0" mass="0.1" diaginertia="0.00001125 0.00001125 0.00001125"/>
<geom friction="0.5" condim="3" pos="0 0 0" size="0.015 0.015 0.015" type="box" name="cube" rgba="0.5 0 0 1" priority="1"/>
</body>

<camera name="camera_front" pos="0.049 0.888 0.317" xyaxes="-0.998 0.056 -0.000 -0.019 -0.335 0.942"/>
<camera name="camera_top" pos="0 0 1" euler="0 0 0" mode="fixed"/>
<camera name="camera_vizu" pos="-0.1 0.6 0.3" quat="-0.15 -0.1 0.6 1"/>
<geom pos="0.0401555 -0.0353754 -0.0242427" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="base"/>
<geom pos="0.0511555 0.0406246 0.0099573" quat="0 0 0 1" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0291555 0.000624643 0.0099573" quat="0 0 0 1" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.000624643 -0.0184427" quat="0 0 1 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0401555 0.0326246 -0.0042427" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc11_a01_dummy"/>
<geom pos="0.0291555 0.000624643 -0.0184427" quat="0 0.707107 0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0291555 0.0406246 0.0099573" quat="0.707107 0 0 -0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.0406246 -0.0184427" quat="0 0.707107 -0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0291555 0.0406246 -0.0184427" quat="0 -1 0 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.000624643 0.0099573" quat="0.707107 0 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>

<body name="pitch1_assembly" pos="0.0401555 0.0326246 0.0166573">

<inertial pos="-0.000767103 -0.0121505 0.0134241" quat="0.498429 0.53272 -0.473938 0.493113" mass="0.0606831" diaginertia="1.86261e-05 1.72746e-05 1.11693e-05"/>
<joint name="shoulder_pan_joint" pos="0 0 0" axis="0 0 1" range="-3.14159 3.14159" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="0 0 -0.0209" type="mesh" rgba="0.231373 0.380392 0.705882 1" mesh="rotation_connector"/>
<geom pos="-0.014 0.008 0.0264" quat="0 -0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 -0.032 0.0044" quat="0 -0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 -0.032 0.0264" quat="0.707107 0 0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0002 0 0.0154" quat="0.707107 0 -0.707107 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc11_a01_dummy"/>
<geom pos="0.0144 -0.032 0.0044" quat="0.5 0.5 0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 0.008 0.0044" quat="0.5 0.5 -0.5 -0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 0.008 0.0264" quat="0.5 -0.5 0.5 -0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 0.008 0.0044" quat="0 0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 -0.032 0.0264" quat="0.5 -0.5 -0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>

<body name="pitch2_assembly" pos="-0.0188 0 0.0154" quat="0 0.707107 0 0.707107">

<inertial pos="0.0766242 -0.00031229 0.0187402" quat="0.52596 0.513053 0.489778 0.469319" mass="0.0432446" diaginertia="7.21796e-05 7.03107e-05 1.07533e-05"/>
<joint name="shoulder_lift_joint" pos="0 0 0" axis="0 0 1" range="-1.5708 1.22173" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="0 0 0.019" quat="0.5 -0.5 -0.5 -0.5" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="arm_connector"/>
<geom pos="0.1083 -0.0148 0.03035" quat="1 0 0 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="0.1083 -0.0148 0.01075" quat="0 -1 0 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="0.1083 -0.0148 0.01075" quat="0 -1 0 0" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="0.1083 -0.0148 0.00715" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="0.1083 -0.0148 0.03025" quat="0 -1 0 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="pitch3_assembly" pos="0.1083 -0.0148 0.00425" quat="0.707107 0 0 0.707107">
<inertial pos="-0.0551014 -0.00287792 0.0144813" quat="0.500323 0.499209 0.499868 0.5006" mass="0.0788335" diaginertia="6.80912e-05 6.45748e-05 9.84479e-06"/>
<joint name="elbow_flex_joint" pos="0 0 0" axis="0 0 1" range="-1.48353 1.74533" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00863031 0.00847376 0.0145" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="connector"/>
<geom pos="-0.100476 -0.00269986 0.02635" quat="0.707107 0 0 -0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00675" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00675" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00315" quat="0.5 -0.5 -0.5 0.5" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="-0.100476 -0.00269986 0.02625" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="effector_roll_assembly" pos="-0.100476 -0.00269986 0.02925" quat="0 -0.707107 -0.707107 0">
<inertial pos="-1.65017e-05 -0.02659 0.0195388" quat="0.936813 0.349829 -0.00055331 -0.000300569" mass="0.0240506" diaginertia="6.03208e-06 4.12894e-06 3.3522e-06"/>
<joint name="wrist_flex_joint" pos="0 0 0" axis="0 0 1" range="-1.91986 1.91986" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.0109998 -0.0190002 0.039" quat="0.707107 -0.707107 0 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="shoulder_rotation"/>
<geom pos="-7.44154e-06 -0.0385002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="-7.44154e-06 -0.0385002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="-7.44154e-06 -0.0421002 0.0133967" quat="0 1 0 0" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="-7.44154e-06 -0.0190002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="gripper_assembly" pos="-7.44154e-06 -0.0450002 0.0133967" quat="0.5 -0.5 -0.5 -0.5">
<inertial pos="-0.00548595 -0.000433143 -0.0190793" quat="0.700194 0.164851 0.167361 0.674197" mass="0.0360627" diaginertia="1.3261e-05 1.231e-05 5.3532e-06"/>
<joint name="wrist_roll_joint" pos="0 0 0" axis="0 0 1" range="-2.96706 2.96706" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00075 -0.01475 -0.02" quat="0.707107 -0.707107 0 0" type="mesh" rgba="0.917647 0.917647 0.917647 1" mesh="static_side"/>
<geom pos="0.00755 0.01135 -0.013" quat="0.5 -0.5 0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="0.00755 -0.00825 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="0.00755 -0.00825 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="0.00755 -0.01185 -0.013" quat="0 -0.707107 0 -0.707107" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="0.00755 0.01125 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="moving_side" pos="0.00755 -0.01475 -0.013" quat="0.707107 -0.707107 0 0">
<inertial pos="-0.000395599 0.022415 0.0145636" quat="0.722353 0.689129 0.0389102 0.0423547" mass="0.0089856" diaginertia="3.28451e-06 2.24898e-06 1.41539e-06"/>
<joint name="gripper_joint" pos="0 0 0" axis="0 0 1" range="-1.74533 0.0523599" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00838199 -0.000256591 -0.003" type="mesh" rgba="0.768627 0.886275 0.952941 1" mesh="moving_side"/>
</body>
</body>
</body>
</body>
</body>
</body>

<site name="cube_target" pos="1 0 0" size="0.01" rgba="1 0 0 1" />

</worldbody>

<actuator>
<position joint="shoulder_pan_joint" ctrlrange="-3.14159 3.14159" ctrllimited="true" kp="20" />
<position joint="shoulder_lift_joint" ctrlrange="-1.5708 1.22173" ctrllimited="true" kp="20" />
<position joint="elbow_flex_joint" ctrlrange="-1.48353 1.74533" ctrllimited="true" kp="20" />
<position joint="wrist_flex_joint" ctrlrange="-1.91986 1.91986" ctrllimited="true" kp="20" />
<position joint="wrist_roll_joint" ctrlrange="-2.96706 2.96706" ctrllimited="true" kp="20" />
<position joint="gripper_joint" ctrlrange="-1.74533 0.0523599" ctrllimited="true" kp="20" />
</actuator>
</mujoco>
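Each scene drives the six joints through MuJoCo `<position>` actuators with `kp="20"`, and each joint's torque is clamped to ±0.5 by `actuatorfrcrange`. A minimal sketch of the resulting servo law, assuming MuJoCo's default zero velocity gain and ignoring the joint's own damping term:

```python
# Sketch of the torque applied by a <position> actuator as configured above:
# tau = kp * (ctrl - qpos), clamped to the joint's actuatorfrcrange.
def position_servo_torque(ctrl: float, qpos: float, kp: float = 20.0,
                          frc_limit: float = 0.5) -> float:
    tau = kp * (ctrl - qpos)
    return max(-frc_limit, min(frc_limit, tau))

# A large position error saturates at the force limit; a small one does not.
print(position_servo_torque(0.25, 0.0))      # 0.5 (saturated)
print(position_servo_torque(0.015625, 0.0))  # 0.3125
```

This is why the arms track targets softly: with `kp=20` and a 0.5 torque cap, any error beyond 0.025 rad already saturates the actuator.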

+135 -0  examples/alexk-lcr/assets/simulation/reach_cube.xml

@@ -0,0 +1,135 @@
<mujoco model="bd1 scene">
<option timestep="0.005"/>

<compiler angle="radian" autolimits="true"/>

<asset>
<mesh name="base" file="base.stl"/>
<mesh name="dc11_a01_spacer_dummy" file="dc11_a01_spacer_dummy.stl"/>
<mesh name="dc11_a01_dummy" file="dc11_a01_dummy.stl"/>
<mesh name="rotation_connector" file="rotation_connector.stl"/>
<mesh name="arm_connector" file="arm_connector.stl"/>
<mesh name="dc15_a01_horn_idle2_dummy" file="dc15_a01_horn_idle2_dummy.stl"/>
<mesh name="dc15_a01_case_m_dummy" file="dc15_a01_case_m_dummy.stl"/>
<mesh name="dc15_a01_case_f_dummy" file="dc15_a01_case_f_dummy.stl"/>
<mesh name="dc15_a01_horn_dummy" file="dc15_a01_horn_dummy.stl"/>
<mesh name="dc15_a01_case_b_dummy" file="dc15_a01_case_b_dummy.stl"/>
<mesh name="connector" file="connector.stl"/>
<mesh name="shoulder_rotation" file="shoulder_rotation.stl"/>
<mesh name="static_side" file="static_side.stl"/>
<mesh name="moving_side" file="moving_side.stl"/>
<texture type="skybox" builtin="gradient" rgb1="0.3 0.5 0.7" rgb2="0 0 0" width="512" height="3072" />
<texture type="2d" name="groundplane" builtin="checker" mark="edge" rgb1="0.2 0.3 0.4" rgb2="0.1 0.2 0.3" markrgb="0.8 0.8 0.8" width="300" height="300" />
<material name="groundplane" texture="groundplane" texuniform="true" texrepeat="5 5" reflectance="0.2" />
</asset>

<visual>
<headlight diffuse="0.6 0.6 0.6" ambient="0.3 0.3 0.3" specular="0 0 0" />
<rgba haze="0.15 0.25 0.35 1" />
<global azimuth="150" elevation="-20" offheight="640" />
</visual>

<worldbody>
<light pos="0 0 3" dir="0 0 -1" directional="false" />
<body name="floor">
<geom pos="0 0 0" name="floor" size="0 0 .125" type="plane" material="groundplane" conaffinity="1" contype="1" />
</body>

<body name="cube" pos="0.1 0.1 0.01">
<freejoint name="cube"/>
<inertial pos="0 0 0" mass="0.1" diaginertia="0.00001125 0.00001125 0.00001125"/>
<geom friction="0.5" condim="3" pos="0 0 0" size="0.015 0.015 0.015" type="box" name="cube" rgba="0.5 0 0 1" priority="1"/>
</body>


<camera name="camera_front" pos="0.049 0.888 0.317" xyaxes="-0.998 0.056 -0.000 -0.019 -0.335 0.942"/>
<camera name="camera_top" pos="0 0 1" euler="0 0 0" mode="fixed"/>
<camera name="camera_vizu" pos="-0.1 0.6 0.3" quat="-0.15 -0.1 0.6 1"/>
<geom pos="0.0401555 -0.0353754 -0.0242427" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="base"/>
<geom pos="0.0511555 0.0406246 0.0099573" quat="0 0 0 1" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0291555 0.000624643 0.0099573" quat="0 0 0 1" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.000624643 -0.0184427" quat="0 0 1 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0401555 0.0326246 -0.0042427" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc11_a01_dummy"/>
<geom pos="0.0291555 0.000624643 -0.0184427" quat="0 0.707107 0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0291555 0.0406246 0.0099573" quat="0.707107 0 0 -0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.0406246 -0.0184427" quat="0 0.707107 -0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0291555 0.0406246 -0.0184427" quat="0 -1 0 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.000624643 0.0099573" quat="0.707107 0 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>

<body name="pitch1_assembly" pos="0.0401555 0.0326246 0.0166573">

<inertial pos="-0.000767103 -0.0121505 0.0134241" quat="0.498429 0.53272 -0.473938 0.493113" mass="0.0606831" diaginertia="1.86261e-05 1.72746e-05 1.11693e-05"/>
<joint name="shoulder_pan_joint" pos="0 0 0" axis="0 0 1" range="-3.14159 3.14159" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="0 0 -0.0209" type="mesh" rgba="0.231373 0.380392 0.705882 1" mesh="rotation_connector"/>
<geom pos="-0.014 0.008 0.0264" quat="0 -0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 -0.032 0.0044" quat="0 -0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 -0.032 0.0264" quat="0.707107 0 0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0002 0 0.0154" quat="0.707107 0 -0.707107 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc11_a01_dummy"/>
<geom pos="0.0144 -0.032 0.0044" quat="0.5 0.5 0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 0.008 0.0044" quat="0.5 0.5 -0.5 -0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 0.008 0.0264" quat="0.5 -0.5 0.5 -0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 0.008 0.0044" quat="0 0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 -0.032 0.0264" quat="0.5 -0.5 -0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>

<body name="pitch2_assembly" pos="-0.0188 0 0.0154" quat="0 0.707107 0 0.707107">

<inertial pos="0.0766242 -0.00031229 0.0187402" quat="0.52596 0.513053 0.489778 0.469319" mass="0.0432446" diaginertia="7.21796e-05 7.03107e-05 1.07533e-05"/>
<joint name="shoulder_lift_joint" pos="0 0 0" axis="0 0 1" range="-1.5708 1.22173" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="0 0 0.019" quat="0.5 -0.5 -0.5 -0.5" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="arm_connector"/>
<geom pos="0.1083 -0.0148 0.03035" quat="1 0 0 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="0.1083 -0.0148 0.01075" quat="0 -1 0 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="0.1083 -0.0148 0.01075" quat="0 -1 0 0" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="0.1083 -0.0148 0.00715" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="0.1083 -0.0148 0.03025" quat="0 -1 0 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="pitch3_assembly" pos="0.1083 -0.0148 0.00425" quat="0.707107 0 0 0.707107">
<inertial pos="-0.0551014 -0.00287792 0.0144813" quat="0.500323 0.499209 0.499868 0.5006" mass="0.0788335" diaginertia="6.80912e-05 6.45748e-05 9.84479e-06"/>
<joint name="elbow_flex_joint" pos="0 0 0" axis="0 0 1" range="-1.48353 1.74533" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00863031 0.00847376 0.0145" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="connector"/>
<geom pos="-0.100476 -0.00269986 0.02635" quat="0.707107 0 0 -0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00675" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00675" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00315" quat="0.5 -0.5 -0.5 0.5" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="-0.100476 -0.00269986 0.02625" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="effector_roll_assembly" pos="-0.100476 -0.00269986 0.02925" quat="0 -0.707107 -0.707107 0">
<inertial pos="-1.65017e-05 -0.02659 0.0195388" quat="0.936813 0.349829 -0.00055331 -0.000300569" mass="0.0240506" diaginertia="6.03208e-06 4.12894e-06 3.3522e-06"/>
<joint name="wrist_flex_joint" pos="0 0 0" axis="0 0 1" range="-1.91986 1.91986" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.0109998 -0.0190002 0.039" quat="0.707107 -0.707107 0 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="shoulder_rotation"/>
<geom pos="-7.44154e-06 -0.0385002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="-7.44154e-06 -0.0385002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="-7.44154e-06 -0.0421002 0.0133967" quat="0 1 0 0" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="-7.44154e-06 -0.0190002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="gripper_assembly" pos="-7.44154e-06 -0.0450002 0.0133967" quat="0.5 -0.5 -0.5 -0.5">
<inertial pos="-0.00548595 -0.000433143 -0.0190793" quat="0.700194 0.164851 0.167361 0.674197" mass="0.0360627" diaginertia="1.3261e-05 1.231e-05 5.3532e-06"/>
<joint name="wrist_roll_joint" pos="0 0 0" axis="0 0 1" range="-2.96706 2.96706" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00075 -0.01475 -0.02" quat="0.707107 -0.707107 0 0" type="mesh" rgba="0.917647 0.917647 0.917647 1" mesh="static_side"/>
<geom pos="0.00755 0.01135 -0.013" quat="0.5 -0.5 0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="0.00755 -0.00825 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="0.00755 -0.00825 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="0.00755 -0.01185 -0.013" quat="0 -0.707107 0 -0.707107" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="0.00755 0.01125 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="moving_side" pos="0.00755 -0.01475 -0.013" quat="0.707107 -0.707107 0 0">
<inertial pos="-0.000395599 0.022415 0.0145636" quat="0.722353 0.689129 0.0389102 0.0423547" mass="0.0089856" diaginertia="3.28451e-06 2.24898e-06 1.41539e-06"/>
<joint name="gripper_joint" pos="0 0 0" axis="0 0 1" range="-1.74533 0.0523599" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00838199 -0.000256591 -0.003" type="mesh" rgba="0.768627 0.886275 0.952941 1" mesh="moving_side"/>
</body>
</body>
</body>
</body>
</body>
</body>
</worldbody>

<actuator>
<position joint="shoulder_pan_joint" ctrlrange="-3.14159 3.14159" ctrllimited="true" kp="20" />
<position joint="shoulder_lift_joint" ctrlrange="-1.5708 1.22173" ctrllimited="true" kp="20" />
<position joint="elbow_flex_joint" ctrlrange="-1.48353 1.74533" ctrllimited="true" kp="20" />
<position joint="wrist_flex_joint" ctrlrange="-1.91986 1.91986" ctrllimited="true" kp="20" />
<position joint="wrist_roll_joint" ctrlrange="-2.96706 2.96706" ctrllimited="true" kp="20" />
<position joint="gripper_joint" ctrlrange="-1.74533 0.0523599" ctrllimited="true" kp="20" />
</actuator>
</mujoco>

+141 -0  examples/alexk-lcr/assets/simulation/stack_two_cubes.xml

@@ -0,0 +1,141 @@
<mujoco model="bd1 scene">
<option timestep="0.005"/>

<compiler angle="radian" autolimits="true"/>

<asset>
<mesh name="base" file="base.stl"/>
<mesh name="dc11_a01_spacer_dummy" file="dc11_a01_spacer_dummy.stl"/>
<mesh name="dc11_a01_dummy" file="dc11_a01_dummy.stl"/>
<mesh name="rotation_connector" file="rotation_connector.stl"/>
<mesh name="arm_connector" file="arm_connector.stl"/>
<mesh name="dc15_a01_horn_idle2_dummy" file="dc15_a01_horn_idle2_dummy.stl"/>
<mesh name="dc15_a01_case_m_dummy" file="dc15_a01_case_m_dummy.stl"/>
<mesh name="dc15_a01_case_f_dummy" file="dc15_a01_case_f_dummy.stl"/>
<mesh name="dc15_a01_horn_dummy" file="dc15_a01_horn_dummy.stl"/>
<mesh name="dc15_a01_case_b_dummy" file="dc15_a01_case_b_dummy.stl"/>
<mesh name="connector" file="connector.stl"/>
<mesh name="shoulder_rotation" file="shoulder_rotation.stl"/>
<mesh name="static_side" file="static_side.stl"/>
<mesh name="moving_side" file="moving_side.stl"/>
<texture type="skybox" builtin="gradient" rgb1="0.3 0.5 0.7" rgb2="0 0 0" width="512" height="3072" />
<texture type="2d" name="groundplane" builtin="checker" mark="edge" rgb1="0.2 0.3 0.4" rgb2="0.1 0.2 0.3" markrgb="0.8 0.8 0.8" width="300" height="300" />
<material name="groundplane" texture="groundplane" texuniform="true" texrepeat="5 5" reflectance="0.2" />
</asset>

<visual>
<headlight diffuse="0.6 0.6 0.6" ambient="0.3 0.3 0.3" specular="0 0 0" />
<rgba haze="0.15 0.25 0.35 1" />
<global azimuth="150" elevation="-20" offheight="640" />
</visual>

<worldbody>
<light pos="0 0 3" dir="0 0 -1" directional="false" />
<body name="floor">
<geom pos="0 0 0" name="floor" size="0 0 .125" type="plane" material="groundplane" conaffinity="1" contype="1" />
</body>

<body name="cube_red" pos="0.1 0.1 0.01">
<freejoint name="cube_red"/>
<inertial pos="0 0 0" mass="0.1" diaginertia="0.00001125 0.00001125 0.00001125"/>
<geom friction="0.5" condim="3" pos="0 0 0" size="0.015 0.015 0.015" type="box" name="cube_red" rgba="0.5 0 0 1" priority="1"/>
</body>

<body name="cube_blue" pos="-0.1 -0.1 0.01">
<freejoint name="cube_blue"/>
<inertial pos="0 0 0" mass="0.1" diaginertia="0.00001125 0.00001125 0.00001125"/>
<geom friction="0.5" condim="3" pos="0 0 0" size="0.015 0.015 0.015" type="box" name="cube_blue" rgba="0 0 0.5 1" priority="1"/>
</body>


<camera name="camera_front" pos="0.049 0.888 0.317" xyaxes="-0.998 0.056 -0.000 -0.019 -0.335 0.942"/>
<camera name="camera_top" pos="0 0 1" euler="0 0 0" mode="fixed"/>
<camera name="camera_vizu" pos="-0.1 0.6 0.3" quat="-0.15 -0.1 0.6 1"/>
<geom pos="0.0401555 -0.0353754 -0.0242427" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="base"/>
<geom pos="0.0511555 0.0406246 0.0099573" quat="0 0 0 1" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0291555 0.000624643 0.0099573" quat="0 0 0 1" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.000624643 -0.0184427" quat="0 0 1 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0401555 0.0326246 -0.0042427" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc11_a01_dummy"/>
<geom pos="0.0291555 0.000624643 -0.0184427" quat="0 0.707107 0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0291555 0.0406246 0.0099573" quat="0.707107 0 0 -0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.0406246 -0.0184427" quat="0 0.707107 -0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0291555 0.0406246 -0.0184427" quat="0 -1 0 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0511555 0.000624643 0.0099573" quat="0.707107 0 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>

<body name="pitch1_assembly" pos="0.0401555 0.0326246 0.0166573">

<inertial pos="-0.000767103 -0.0121505 0.0134241" quat="0.498429 0.53272 -0.473938 0.493113" mass="0.0606831" diaginertia="1.86261e-05 1.72746e-05 1.11693e-05"/>
<joint name="shoulder_pan_joint" pos="0 0 0" axis="0 0 1" range="-3.14159 3.14159" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="0 0 -0.0209" type="mesh" rgba="0.231373 0.380392 0.705882 1" mesh="rotation_connector"/>
<geom pos="-0.014 0.008 0.0264" quat="0 -0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 -0.032 0.0044" quat="0 -0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 -0.032 0.0264" quat="0.707107 0 0.707107 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0002 0 0.0154" quat="0.707107 0 -0.707107 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc11_a01_dummy"/>
<geom pos="0.0144 -0.032 0.0044" quat="0.5 0.5 0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 0.008 0.0044" quat="0.5 0.5 -0.5 -0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 0.008 0.0264" quat="0.5 -0.5 0.5 -0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="0.0144 0.008 0.0044" quat="0 0.707107 0 0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>
<geom pos="-0.014 -0.032 0.0264" quat="0.5 -0.5 -0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc11_a01_spacer_dummy"/>

<body name="pitch2_assembly" pos="-0.0188 0 0.0154" quat="0 0.707107 0 0.707107">

<inertial pos="0.0766242 -0.00031229 0.0187402" quat="0.52596 0.513053 0.489778 0.469319" mass="0.0432446" diaginertia="7.21796e-05 7.03107e-05 1.07533e-05"/>
<joint name="shoulder_lift_joint" pos="0 0 0" axis="0 0 1" range="-1.5708 1.22173" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="0 0 0.019" quat="0.5 -0.5 -0.5 -0.5" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="arm_connector"/>
<geom pos="0.1083 -0.0148 0.03035" quat="1 0 0 0" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="0.1083 -0.0148 0.01075" quat="0 -1 0 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="0.1083 -0.0148 0.01075" quat="0 -1 0 0" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="0.1083 -0.0148 0.00715" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="0.1083 -0.0148 0.03025" quat="0 -1 0 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="pitch3_assembly" pos="0.1083 -0.0148 0.00425" quat="0.707107 0 0 0.707107">
<inertial pos="-0.0551014 -0.00287792 0.0144813" quat="0.500323 0.499209 0.499868 0.5006" mass="0.0788335" diaginertia="6.80912e-05 6.45748e-05 9.84479e-06"/>
<joint name="elbow_flex_joint" pos="0 0 0" axis="0 0 1" range="-1.48353 1.74533" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00863031 0.00847376 0.0145" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="connector"/>
<geom pos="-0.100476 -0.00269986 0.02635" quat="0.707107 0 0 -0.707107" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00675" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00675" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="-0.100476 -0.00269986 0.00315" quat="0.5 -0.5 -0.5 0.5" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="-0.100476 -0.00269986 0.02625" quat="0 -0.707107 0.707107 0" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="effector_roll_assembly" pos="-0.100476 -0.00269986 0.02925" quat="0 -0.707107 -0.707107 0">
<inertial pos="-1.65017e-05 -0.02659 0.0195388" quat="0.936813 0.349829 -0.00055331 -0.000300569" mass="0.0240506" diaginertia="6.03208e-06 4.12894e-06 3.3522e-06"/>
<joint name="wrist_flex_joint" pos="0 0 0" axis="0 0 1" range="-1.91986 1.91986" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.0109998 -0.0190002 0.039" quat="0.707107 -0.707107 0 0" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="shoulder_rotation"/>
<geom pos="-7.44154e-06 -0.0385002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="-7.44154e-06 -0.0385002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="-7.44154e-06 -0.0421002 0.0133967" quat="0 1 0 0" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="-7.44154e-06 -0.0190002 0.0133967" quat="0 0 0.707107 -0.707107" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="gripper_assembly" pos="-7.44154e-06 -0.0450002 0.0133967" quat="0.5 -0.5 -0.5 -0.5">
<inertial pos="-0.00548595 -0.000433143 -0.0190793" quat="0.700194 0.164851 0.167361 0.674197" mass="0.0360627" diaginertia="1.3261e-05 1.231e-05 5.3532e-06"/>
<joint name="wrist_roll_joint" pos="0 0 0" axis="0 0 1" range="-2.96706 2.96706" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00075 -0.01475 -0.02" quat="0.707107 -0.707107 0 0" type="mesh" rgba="0.917647 0.917647 0.917647 1" mesh="static_side"/>
<geom pos="0.00755 0.01135 -0.013" quat="0.5 -0.5 0.5 0.5" type="mesh" rgba="0.647059 0.647059 0.647059 1" mesh="dc15_a01_horn_idle2_dummy"/>
<geom pos="0.00755 -0.00825 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.615686 0.811765 0.929412 1" mesh="dc15_a01_case_m_dummy"/>
<geom pos="0.00755 -0.00825 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.980392 0.713726 0.00392157 1" mesh="dc15_a01_case_f_dummy"/>
<geom pos="0.00755 -0.01185 -0.013" quat="0 -0.707107 0 -0.707107" type="mesh" rgba="0.972549 0.529412 0.00392157 1" mesh="dc15_a01_horn_dummy"/>
<geom pos="0.00755 0.01125 -0.013" quat="0.5 0.5 0.5 -0.5" type="mesh" rgba="0.498039 0.498039 0.498039 1" mesh="dc15_a01_case_b_dummy"/>

<body name="moving_side" pos="0.00755 -0.01475 -0.013" quat="0.707107 -0.707107 0 0">
<inertial pos="-0.000395599 0.022415 0.0145636" quat="0.722353 0.689129 0.0389102 0.0423547" mass="0.0089856" diaginertia="3.28451e-06 2.24898e-06 1.41539e-06"/>
<joint name="gripper_joint" pos="0 0 0" axis="0 0 1" range="-1.74533 0.0523599" damping="0.1" actuatorfrcrange="-0.5 0.5" actuatorfrclimited="true"/>
<geom pos="-0.00838199 -0.000256591 -0.003" type="mesh" rgba="0.768627 0.886275 0.952941 1" mesh="moving_side"/>
</body>
</body>
</body>
</body>
</body>
</body>
</worldbody>

<actuator>
<position joint="shoulder_pan_joint" ctrlrange="-3.14159 3.14159" ctrllimited="true" kp="20" />
<position joint="shoulder_lift_joint" ctrlrange="-1.5708 1.22173" ctrllimited="true" kp="20" />
<position joint="elbow_flex_joint" ctrlrange="-1.48353 1.74533" ctrllimited="true" kp="20" />
<position joint="wrist_flex_joint" ctrlrange="-1.91986 1.91986" ctrllimited="true" kp="20" />
<position joint="wrist_roll_joint" ctrlrange="-2.96706 2.96706" ctrllimited="true" kp="20" />
<position joint="gripper_joint" ctrlrange="-1.74533 0.0523599" ctrllimited="true" kp="20" />
</actuator>
</mujoco>

+328 -0  examples/alexk-lcr/bus.py

@@ -0,0 +1,328 @@
import enum

import pyarrow as pa

from typing import Union

from dynamixel_sdk import (
PacketHandler,
PortHandler,
COMM_SUCCESS,
GroupSyncRead,
GroupSyncWrite,
)
from dynamixel_sdk import DXL_HIBYTE, DXL_HIWORD, DXL_LOBYTE, DXL_LOWORD

PROTOCOL_VERSION = 2.0
BAUD_RATE = 1_000_000
TIMEOUT_MS = 1000


def wrap_joints_and_values(
joints: Union[list[str], pa.Array],
values: Union[int, list[int], pa.Array],
) -> pa.StructArray:
"""
Wraps joints and their corresponding values into a structured array.

:param joints: A list or pyarrow array of joint names.
:type joints: Union[list[str], pa.Array]
:param values: A single integer, or a list or pyarrow array of integer values.
If a single integer is provided, it is broadcast to all joints.
:type values: Union[int, list[int], pa.Array]

:return: A structured array with two fields:
- "joints": A string field containing the names of the joints.
- "values": An Int32Array containing the values corresponding to the joints.
:rtype: pa.StructArray

:raises ValueError: If lengths of joints and values do not match.

Example:
--------
joints = ["shoulder_pan", "shoulder_lift", "elbow_flex"]
values = [100, 200, 300]
struct_array = wrap_joints_and_values(joints, values)

This example wraps the given joints and their corresponding values into a structured array.

Another example with a single integer value:
joints = ["shoulder_pan", "shoulder_lift", "elbow_flex"]
value = 150
struct_array = wrap_joints_and_values(joints, value)

This example broadcasts the single integer value to all joints and wraps them into a structured array.
"""

if isinstance(values, int):
values = [values] * len(joints)

if len(joints) != len(values):
raise ValueError("joints and values must have the same length")

mask = pa.array([False] * len(values), type=pa.bool_())

if isinstance(values, list):
mask = pa.array([value is None for value in values])

if isinstance(values, pa.Array):
mask = values.is_null()

return pa.StructArray.from_arrays(
arrays=[joints, values], names=["joints", "values"], mask=mask
).drop_null()


class TorqueMode(enum.Enum):
ENABLED = pa.scalar(1, pa.uint32())
DISABLED = pa.scalar(0, pa.uint32())


class OperatingMode(enum.Enum):
VELOCITY = pa.scalar(1, pa.uint32())
POSITION = pa.scalar(3, pa.uint32())
EXTENDED_POSITION = pa.scalar(4, pa.uint32())
CURRENT_CONTROLLED_POSITION = pa.scalar(5, pa.uint32())
PWM = pa.scalar(16, pa.uint32())


X_SERIES_CONTROL_TABLE = [
("Model_Number", 0, 2),
("Model_Information", 2, 4),
("Firmware_Version", 6, 1),
("ID", 7, 1),
("Baud_Rate", 8, 1),
("Return_Delay_Time", 9, 1),
("Drive_Mode", 10, 1),
("Operating_Mode", 11, 1),
("Secondary_ID", 12, 1),
("Protocol_Type", 13, 1),
("Homing_Offset", 20, 4),
("Moving_Threshold", 24, 4),
("Temperature_Limit", 31, 1),
("Max_Voltage_Limit", 32, 2),
("Min_Voltage_Limit", 34, 2),
("PWM_Limit", 36, 2),
("Current_Limit", 38, 2),
("Acceleration_Limit", 40, 4),
("Velocity_Limit", 44, 4),
("Max_Position_Limit", 48, 4),
("Min_Position_Limit", 52, 4),
("Shutdown", 63, 1),
("Torque_Enable", 64, 1),
("LED", 65, 1),
("Status_Return_Level", 68, 1),
("Registered_Instruction", 69, 1),
("Hardware_Error_Status", 70, 1),
("Velocity_I_Gain", 76, 2),
("Velocity_P_Gain", 78, 2),
("Position_D_Gain", 80, 2),
("Position_I_Gain", 82, 2),
("Position_P_Gain", 84, 2),
("Feedforward_2nd_Gain", 88, 2),
("Feedforward_1st_Gain", 90, 2),
("Bus_Watchdog", 98, 1),
("Goal_PWM", 100, 2),
("Goal_Current", 102, 2),
("Goal_Velocity", 104, 4),
("Profile_Acceleration", 108, 4),
("Profile_Velocity", 112, 4),
("Goal_Position", 116, 4),
("Realtime_Tick", 120, 2),
("Moving", 122, 1),
("Moving_Status", 123, 1),
("Present_PWM", 124, 2),
("Present_Current", 126, 2),
("Present_Velocity", 128, 4),
("Present_Position", 132, 4),
("Velocity_Trajectory", 136, 4),
("Position_Trajectory", 140, 4),
("Present_Input_Voltage", 144, 2),
("Present_Temperature", 146, 1),
]

MODEL_CONTROL_TABLE = {
"x_series": X_SERIES_CONTROL_TABLE,
"xl330-m077": X_SERIES_CONTROL_TABLE,
"xl330-m288": X_SERIES_CONTROL_TABLE,
"xl430-w250": X_SERIES_CONTROL_TABLE,
"xm430-w350": X_SERIES_CONTROL_TABLE,
"xm540-w270": X_SERIES_CONTROL_TABLE,
}
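The control table maps each register name to an (address, size-in-bytes) pair, which `DynamixelBus.__init__` flattens into a per-motor dict. A minimal stand-alone sketch of that lookup, with an abbreviated table whose entries match the one above (`lookup` is a hypothetical helper for illustration, not part of the module):

```python
# Abbreviated copy of the X-series control table: (name, address, bytes).
X_SERIES_CONTROL_TABLE = [
    ("Torque_Enable", 64, 1),
    ("Goal_Position", 116, 4),
    ("Present_Position", 132, 4),
]
MODEL_CONTROL_TABLE = {"xl330-m288": X_SERIES_CONTROL_TABLE}

def lookup(model: str, register: str) -> tuple[int, int]:
    """Return (address, bytes_size) for a register of the given model."""
    for name, addr, size in MODEL_CONTROL_TABLE[model]:
        if name == register:
            return addr, size
    raise KeyError(register)

print(lookup("xl330-m288", "Goal_Position"))  # (116, 4)
```

The address and size feed directly into `GroupSyncRead`/`GroupSyncWrite`, which is why all motors in one group must share the same register layout.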


class DynamixelBus:

def __init__(self, port: str, description: dict[str, tuple[int, str]]):
self.port = port
self.descriptions = description
self.motor_ctrl = {}

for motor_name, (motor_id, motor_model) in description.items():
if motor_model not in MODEL_CONTROL_TABLE:
raise ValueError(f"Model {motor_model} is not supported.")

self.motor_ctrl[motor_name] = {}

self.motor_ctrl[motor_name]["id"] = motor_id
for data_name, address, bytes_size in MODEL_CONTROL_TABLE[motor_model]:
self.motor_ctrl[motor_name][data_name] = {
"addr": address,
"bytes_size": bytes_size,
}

self.port_handler = PortHandler(self.port)
self.packet_handler = PacketHandler(PROTOCOL_VERSION)

if not self.port_handler.openPort():
raise OSError(f"Failed to open port {self.port}")

self.port_handler.setBaudRate(BAUD_RATE)
self.port_handler.setPacketTimeoutMillis(TIMEOUT_MS)

self.group_readers = {}
self.group_writers = {}

def close(self):
self.port_handler.closePort()

def write(self, data_name: str, data: pa.StructArray):
motor_ids = [
self.motor_ctrl[motor_name.as_py()]["id"]
for motor_name in data.field("joints")
]

# Reinterpret the int32 "values" buffers as uint32 without copying.
values = pa.Array.from_buffers(
pa.uint32(),
length=len(data.field("values")),
buffers=data.field("values").buffers(),
)

group_key = f"{data_name}_" + "_".join([str(idx) for idx in motor_ids])

# All supported models share the X-series control table, so the register
# address and size can be looked up via any motor; use the first one.
first_motor_name = list(self.motor_ctrl.keys())[0]

packet_address = self.motor_ctrl[first_motor_name][data_name]["addr"]
packet_bytes_size = self.motor_ctrl[first_motor_name][data_name]["bytes_size"]

init_group = group_key not in self.group_writers

if init_group:
self.group_writers[group_key] = GroupSyncWrite(
self.port_handler,
self.packet_handler,
packet_address,
packet_bytes_size,
)

for idx, value in zip(motor_ids, values):
value = value.as_py()
if value is None:
continue

# Split the value into little-endian bytes for the sync-write packet.
if packet_bytes_size == 1:
payload = [
DXL_LOBYTE(DXL_LOWORD(value)),
]
elif packet_bytes_size == 2:
payload = [
DXL_LOBYTE(DXL_LOWORD(value)),
DXL_HIBYTE(DXL_LOWORD(value)),
]
elif packet_bytes_size == 4:
payload = [
DXL_LOBYTE(DXL_LOWORD(value)),
DXL_HIBYTE(DXL_LOWORD(value)),
DXL_LOBYTE(DXL_HIWORD(value)),
DXL_HIBYTE(DXL_HIWORD(value)),
]
else:
raise NotImplementedError(
f"Expected packet_bytes_size to be 1, 2 or 4, but got {packet_bytes_size}."
)

if init_group:
self.group_writers[group_key].addParam(idx, payload)
else:
self.group_writers[group_key].changeParam(idx, payload)

comm = self.group_writers[group_key].txPacket()
if comm != COMM_SUCCESS:
raise ConnectionError(
f"Write failed due to communication error on port {self.port} for group_key {group_key}: "
f"{self.packet_handler.getTxRxResult(comm)}"
)

def read(self, data_name: str, motor_names: pa.Array) -> pa.StructArray:
motor_ids = [
self.motor_ctrl[motor_name.as_py()]["id"] for motor_name in motor_names
]

group_key = f"{data_name}_" + "_".join([str(idx) for idx in motor_ids])

first_motor_name = list(self.motor_ctrl.keys())[0]

packet_address = self.motor_ctrl[first_motor_name][data_name]["addr"]
packet_bytes_size = self.motor_ctrl[first_motor_name][data_name]["bytes_size"]

        if group_key not in self.group_readers:
self.group_readers[group_key] = GroupSyncRead(
self.port_handler,
self.packet_handler,
packet_address,
packet_bytes_size,
)

for idx in motor_ids:
self.group_readers[group_key].addParam(idx)

comm = self.group_readers[group_key].txRxPacket()
if comm != COMM_SUCCESS:
raise ConnectionError(
f"Read failed due to communication error on port {self.port} for group_key {group_key}: "
f"{self.packet_handler.getTxRxResult(comm)}"
)

values = pa.array(
[
self.group_readers[group_key].getData(
idx, packet_address, packet_bytes_size
)
for idx in motor_ids
],
type=pa.uint32(),
)
        # Reinterpret the raw uint32 buffer as signed int32.
        values = pa.Array.from_buffers(pa.int32(), len(values), values.buffers())

return wrap_joints_and_values(motor_names, values)

def write_torque_enable(self, torque_mode: pa.StructArray):
self.write("Torque_Enable", torque_mode)

def write_operating_mode(self, operating_mode: pa.StructArray):
self.write("Operating_Mode", operating_mode)

def read_position(self, motor_names: pa.Array) -> pa.StructArray:
return self.read("Present_Position", motor_names)

def read_velocity(self, motor_names: pa.Array) -> pa.StructArray:
return self.read("Present_Velocity", motor_names)

def read_current(self, motor_names: pa.Array) -> pa.StructArray:
return self.read("Present_Current", motor_names)

def write_goal_position(self, goal_position: pa.StructArray):
self.write("Goal_Position", goal_position)

def write_goal_current(self, goal_current: pa.StructArray):
self.write("Goal_Current", goal_current)

def write_position_p_gain(self, position_p_gain: pa.StructArray):
self.write("Position_P_Gain", position_p_gain)

def write_position_i_gain(self, position_i_gain: pa.StructArray):
self.write("Position_I_Gain", position_i_gain)

def write_position_d_gain(self, position_d_gain: pa.StructArray):
self.write("Position_D_Gain", position_d_gain)
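The 1-, 2-, and 4-byte branches in `write` above split a register value into little-endian bytes via the Dynamixel SDK's `DXL_LOBYTE`/`DXL_HIBYTE` macros. The same logic can be sketched in plain Python (`split_le` is a hypothetical helper, not part of the SDK):

```python
def split_le(value: int, num_bytes: int) -> list[int]:
    """Split an unsigned register value into little-endian bytes,
    mirroring the DXL_LOBYTE/DXL_HIBYTE/DXL_LOWORD/DXL_HIWORD macros."""
    if num_bytes not in (1, 2, 4):
        raise NotImplementedError(f"Unsupported register size: {num_bytes}")
    return [(value >> (8 * i)) & 0xFF for i in range(num_bytes)]

# A 4-byte goal position of 2048 becomes [0, 8, 0, 0] on the wire.
print(split_le(2048, 4))  # -> [0, 8, 0, 0]
```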

+ 0
- 0
examples/alexk-lcr/configs/.gitkeep View File


+ 213
- 0
examples/alexk-lcr/configure.py View File

@@ -0,0 +1,213 @@
"""
LCR Configuration Tool: automatically configures the Low Cost Robot (LCR) for the user.

The program will:
1. Disable torque on all motors of the provided LCR.
2. Ask the user to move the LCR to position 1 (see CONFIGURING.md for more details).
3. Record the position of the LCR.
4. Ask the user to move the LCR to position 2 (see CONFIGURING.md for more details).
5. Record the position of the LCR.
6. Calculate the interpolation functions.
7. Let the user verify in real time that the LCR is working properly.

It will also enable the appropriate operating modes for the LCR.
"""

import argparse
import time
import json

import pyarrow as pa

from bus import DynamixelBus, TorqueMode, OperatingMode

from pwm_position_control.transform import pwm_to_logical_arrow, wrap_joints_and_values

from pwm_position_control.tables import (
construct_logical_to_pwm_conversion_table_arrow,
construct_pwm_to_logical_conversion_table_arrow,
)

from pwm_position_control.functions import construct_control_table

FULL_ARM = pa.array(
[
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll",
"gripper",
],
type=pa.string(),
)

ARM_WITHOUT_GRIPPER = pa.array(
["shoulder_pan", "shoulder_lift", "elbow_flex", "wrist_flex", "wrist_roll"],
type=pa.string(),
)

GRIPPER = pa.array(["gripper"], type=pa.string())


def pause():
input("Press Enter to continue...")


def configure_servos(bus: DynamixelBus):
bus.write_torque_enable(
wrap_joints_and_values(FULL_ARM, [TorqueMode.DISABLED.value] * 6)
)

bus.write_operating_mode(
wrap_joints_and_values(
ARM_WITHOUT_GRIPPER, [OperatingMode.EXTENDED_POSITION.value] * 5
)
)

bus.write_operating_mode(
wrap_joints_and_values(
GRIPPER, [OperatingMode.CURRENT_CONTROLLED_POSITION.value]
)
)


def main():
parser = argparse.ArgumentParser(
description="LCR Auto Configure: This program is used to automatically configure the Low Cost Robot (LCR) for "
"the user."
)

parser.add_argument("--port", type=str, required=True, help="The port of the LCR.")
    parser.add_argument(
        "--right",
        action="store_true",
        help="Use if the LCR is on the user's right side.",
    )
    parser.add_argument(
        "--left",
        action="store_true",
        help="Use if the LCR is on the user's left side.",
    )
    parser.add_argument(
        "--follower",
        action="store_true",
        help="Use if this LCR is a follower arm.",
    )
    parser.add_argument(
        "--leader", action="store_true", help="Use if this LCR is a leader arm."
    )

args = parser.parse_args()

if args.right and args.left:
raise ValueError("You cannot specify both --right and --left.")

if args.follower and args.leader:
raise ValueError("You cannot specify both --follower and --leader.")

targets = (
wrap_joints_and_values(FULL_ARM, [0, -90, 90, 0, -90, 0]),
wrap_joints_and_values(FULL_ARM, [90, 0, 0, 90, 0, -90]),
)

arm = DynamixelBus(
args.port,
{
"shoulder_pan": (1, "x_series"),
"shoulder_lift": (2, "x_series"),
"elbow_flex": (3, "x_series"),
"wrist_flex": (4, "x_series"),
"wrist_roll": (5, "x_series"),
"gripper": (6, "x_series"),
},
)

configure_servos(arm)

print("Please move the LCR to the first position.")
pause()
pwm_position_1 = arm.read_position(FULL_ARM)

print("Please move the LCR to the second position.")
pause()
pwm_position_2 = arm.read_position(FULL_ARM)

print("Configuration completed.")

pwm_positions = (pwm_position_1, pwm_position_2)

pwm_to_logical_conversion_table = construct_pwm_to_logical_conversion_table_arrow(
pwm_positions, targets
)
logical_to_pwm_conversion_table = construct_logical_to_pwm_conversion_table_arrow(
pwm_positions, targets
)

control_table_json = {}
for i in range(len(FULL_ARM)):
model = (
"xl430-w250"
if i <= 1 and args.follower
else "xl330-m288" if args.follower else "xl330-m077"
)

control_table_json[FULL_ARM[i].as_py()] = {
"id": i + 1,
"model": model,
            "torque": args.follower or (args.leader and i == 5),
"goal_current": (
500
if args.follower and i == 5
else 40 if args.leader and i == 5 else None
),
"goal_position": -40.0 if args.leader and i == 5 else None,
"pwm_to_logical": pwm_to_logical_conversion_table[FULL_ARM[i].as_py()],
"logical_to_pwm": logical_to_pwm_conversion_table[FULL_ARM[i].as_py()],
"P": (
640
if model == "xl430-w250"
else 1500 if model == "xl330-m288" and i != 5 else 250
),
"I": 0,
"D": 3600 if model == "xl430-w250" else 600,
}

left = "left" if args.left else "right"
leader = "leader" if args.leader else "follower"

path = (
input(
f"Please enter the path of the configuration file (default is ./robots/alexk-lcr/configs/{leader}.{left}.json): "
)
or f"./robots/alexk-lcr/configs/{leader}.{left}.json"
)

with open(path, "w") as file:
json.dump(control_table_json, file)

control_table = construct_control_table(
pwm_to_logical_conversion_table, logical_to_pwm_conversion_table
)

while True:
try:
pwm_position = arm.read_position(FULL_ARM)
logical_position = pwm_to_logical_arrow(
pwm_position, control_table, ranged=True
).field("values")

print(f"Logical Position: {logical_position}")

except ConnectionError:
print(
"Connection error occurred. Please check the connection and try again."
)

time.sleep(0.5)


if __name__ == "__main__":
main()
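The two recorded PWM positions and the two known logical targets per joint are all the conversion tables need: each direction reduces to a linear fit through two points. A minimal sketch of that idea (a hypothetical helper, independent of the `pwm_position_control` package; calibration values are illustrative):

```python
def linear_map(pwm_1: float, logical_1: float, pwm_2: float, logical_2: float):
    """Return a function mapping a raw PWM reading to a logical angle,
    fitted through two (pwm, logical) calibration pairs."""
    slope = (logical_2 - logical_1) / (pwm_2 - pwm_1)
    return lambda pwm: logical_1 + slope * (pwm - pwm_1)

# Illustrative calibration: PWM 1024 reads -90 degrees, PWM 3072 reads +90 degrees.
pwm_to_logical = linear_map(1024, -90.0, 3072, 90.0)
print(pwm_to_logical(2048))  # midpoint -> 0.0
```

The inverse (`logical_to_pwm`) is the same fit with the pairs swapped.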

+ 74
- 0
examples/alexk-lcr/graphs/bi_teleop_real.yml View File

@@ -0,0 +1,74 @@
nodes:
- id: lcr-left-leader
build: pip install ../../../node-hub/dynamixel-client
path: dynamixel-client
inputs:
pull_position: dora/timer/millis/10 # pull the position every 10ms
write_goal_position: lcr-to-lcr-left/leader_goal
outputs:
- position
env:
PORT: /dev/tty.usbmodem575E0030111
CONFIG: ../configs/leader.left.json

- id: lcr-to-lcr-left
build: pip install git+https://github.com/Hennzau/pwm-position-control
path: ../nodes/interpolate_lcr_to_lcr.py
inputs:
leader_position: lcr-left-leader/position
follower_position: lcr-left-follower/position
outputs:
- follower_goal
- leader_goal
env:
LEADER_CONTROL: ../configs/leader.left.json
FOLLOWER_CONTROL: ../configs/follower.left.json

- id: lcr-left-follower
build: pip install ../../../node-hub/dynamixel-client
path: dynamixel-client
inputs:
pull_position: dora/timer/millis/10
write_goal_position: lcr-to-lcr-left/follower_goal
outputs:
- position
env:
PORT: /dev/tty.usbmodem575E0031141
CONFIG: ../configs/follower.left.json

- id: lcr-right-leader
build: pip install ../../../node-hub/dynamixel-client
path: dynamixel-client
inputs:
pull_position: dora/timer/millis/10 # pull the position every 10ms
write_goal_position: lcr-to-lcr-right/leader_goal
outputs:
- position
env:
PORT: /dev/tty.usbmodem575E0030531
CONFIG: ../configs/leader.right.json

- id: lcr-to-lcr-right
build: pip install git+https://github.com/Hennzau/pwm-position-control
path: ../nodes/interpolate_lcr_to_lcr.py
inputs:
leader_position: lcr-right-leader/position
follower_position: lcr-right-follower/position
outputs:
- follower_goal
- leader_goal
env:
LEADER_CONTROL: ../configs/leader.right.json
FOLLOWER_CONTROL: ../configs/follower.right.json

- id: lcr-right-follower
build: pip install ../../../node-hub/dynamixel-client
path: dynamixel-client
inputs:
pull_position: dora/timer/millis/10
write_goal_position: lcr-to-lcr-right/follower_goal
outputs:
- position
env:
PORT: /dev/tty.usbmodem575E0032531
CONFIG: ../configs/follower.right.json

+ 40
- 0
examples/alexk-lcr/graphs/mono_replay_real.yml View File

@@ -0,0 +1,40 @@
nodes:
- id: replay-client
build: pip install ../../../node-hub/replay-client
path: replay-client
inputs:
pull_position: dora/timer/millis/33
outputs:
- position
- end
env:
PATH: ../../../datasets/enzo2
EPISODE: 1

- id: replay-to-lcr
build: pip install git+https://github.com/Hennzau/pwm-position-control
path: ../nodes/interpolate_replay_to_lcr.py
inputs:
leader_position:
source: replay-client/position
queue_size: 1
follower_position:
source: lcr-follower/position
queue_size: 1
outputs:
- follower_goal
env:
FOLLOWER_CONTROL: ../configs/follower.left.json

- id: lcr-follower
build: pip install ../../../node-hub/dynamixel-client
path: dynamixel-client
inputs:
pull_position: dora/timer/millis/33
write_goal_position: replay-to-lcr/follower_goal
end: replay-client/end
outputs:
- position
env:
PORT: /dev/tty.usbmodem575E0031141
CONFIG: ../configs/follower.left.json

+ 37
- 0
examples/alexk-lcr/graphs/mono_teleop_real.yml View File

@@ -0,0 +1,37 @@
nodes:
- id: lcr-leader
build: pip install ../../../node-hub/dynamixel-client
path: dynamixel-client
inputs:
pull_position: dora/timer/millis/10
write_goal_position: lcr-to-lcr/leader_goal
outputs:
- position
env:
PORT: /dev/tty.usbmodem575E0030111
CONFIG: ../configs/leader.left.json

- id: lcr-to-lcr
build: pip install git+https://github.com/Hennzau/pwm-position-control
path: ../nodes/interpolate_lcr_to_lcr.py
inputs:
leader_position: lcr-leader/position
follower_position: lcr-follower/position
outputs:
- follower_goal
- leader_goal
env:
LEADER_CONTROL: ../configs/leader.left.json
FOLLOWER_CONTROL: ../configs/follower.left.json

- id: lcr-follower
build: pip install ../../../node-hub/dynamixel-client
path: dynamixel-client
inputs:
pull_position: dora/timer/millis/10
write_goal_position: lcr-to-lcr/follower_goal
outputs:
- position
env:
PORT: /dev/tty.usbmodem575E0031141
CONFIG: ../configs/follower.left.json

+ 70
- 0
examples/alexk-lcr/graphs/mono_teleop_real_and_simu.yml View File

@@ -0,0 +1,70 @@
nodes:
- id: lcr-leader
build: pip install ../../../node-hub/dynamixel-client
path: dynamixel-client
inputs:
pull_position: simu-lcr-follower/tick
write_goal_position: lcr-to-lcr/leader_goal
end: simu-lcr-follower/end
outputs:
- position
env:
PORT: /dev/tty.usbmodem575E0030111
CONFIG: ../configs/leader.left.json

- id: lcr-to-lcr
build: pip install git+https://github.com/Hennzau/pwm-position-control
path: ../nodes/interpolate_lcr_to_lcr.py
inputs:
leader_position: lcr-leader/position
follower_position: lcr-follower/position
outputs:
- follower_goal
- leader_goal
env:
LEADER_CONTROL: ../configs/leader.left.json
FOLLOWER_CONTROL: ../configs/follower.left.json

- id: lcr-follower
build: pip install ../../../node-hub/dynamixel-client
path: dynamixel-client
inputs:
pull_position: simu-lcr-follower/tick
write_goal_position: lcr-to-lcr/follower_goal
end: simu-lcr-follower/end
outputs:
- position
env:
PORT: /dev/tty.usbmodem575E0031141
CONFIG: ../configs/follower.left.json

- id: lcr-to-simu-lcr
build: pip install git+https://github.com/Hennzau/pwm-position-control
path: ../nodes/interpolate_lcr_to_simu_lcr.py
inputs:
leader_position:
source: lcr-leader/position
queue_size: 1
outputs:
- follower_goal
- leader_goal
env:
LEADER_CONTROL: ../configs/leader.left.json

- id: simu-lcr-follower
build: pip install ../../../node-hub/mujoco-client
# for windows
# path: mujoco-client
# for macos
path: mjpython
args: ../../../node-hub/mujoco-client/mujoco_client/main.py
inputs:
write_goal_position: lcr-to-simu-lcr/follower_goal
tick: dora/timer/millis/10
outputs:
- position
- tick
- end
env:
SCENE: ../assets/simulation/reach_cube.xml
CONFIG: ../configs/follower.left.json

+ 43
- 0
examples/alexk-lcr/graphs/mono_teleop_simu.yml View File

@@ -0,0 +1,43 @@
nodes:
- id: lcr-leader
build: pip install ../../../node-hub/dynamixel-client
path: dynamixel-client
inputs:
pull_position: simu-lcr-follower/tick
write_goal_position: lcr-to-simu-lcr/leader_goal
end: simu-lcr-follower/end
outputs:
- position
env:
PORT: /dev/tty.usbmodem575E0030111
CONFIG: ../configs/leader.left.json

- id: lcr-to-simu-lcr
build: pip install git+https://github.com/Hennzau/pwm-position-control
path: ../nodes/interpolate_lcr_to_simu_lcr.py
inputs:
leader_position: lcr-leader/position
outputs:
- follower_goal
- leader_goal
env:
LEADER_CONTROL: ../configs/leader.left.json

- id: simu-lcr-follower
build: pip install ../../../node-hub/mujoco-client
# for windows
# path: mujoco-client
# for macos
path: mjpython
args: ../../../node-hub/mujoco-client/mujoco_client/main.py
inputs:
write_goal_position: lcr-to-simu-lcr/follower_goal
tick: dora/timer/millis/10
outputs:
- position
- tick
- end

env:
SCENE: ../assets/simulation/reach_cube.xml
CONFIG: ../configs/follower.left.json

+ 119
- 0
examples/alexk-lcr/graphs/record_mono_teleop_real.yml View File

@@ -0,0 +1,119 @@
nodes:
- id: lcr-leader
build: pip install ../../../node-hub/dynamixel-client
path: dynamixel-client
inputs:
pull_position:
source: lerobot-dashboard/tick
queue_size: 1
write_goal_position: lcr-to-lcr/leader_goal
end: lerobot-dashboard/end
outputs:
- position
env:
PORT: /dev/tty.usbmodem575E0030111
CONFIG: ../configs/leader.left.json

- id: lcr-to-lcr
build: pip install git+https://github.com/Hennzau/pwm-position-control
path: ../nodes/interpolate_lcr_to_lcr.py
inputs:
leader_position:
source: lcr-leader/position
queue_size: 1
follower_position:
source: lcr-follower/position
queue_size: 1
outputs:
- follower_goal
- leader_goal
env:
LEADER_CONTROL: ../configs/leader.left.json
FOLLOWER_CONTROL: ../configs/follower.left.json

- id: lcr-to-record
build: pip install git+https://github.com/Hennzau/pwm-position-control
path: ../nodes/interpolate_lcr_to_record.py
inputs:
leader_position:
source: lcr-leader/position
queue_size: 1
follower_position:
source: lcr-follower/position
queue_size: 1
outputs:
- logical_goal
- logical_position
env:
LEADER_CONTROL: ../configs/leader.left.json
FOLLOWER_CONTROL: ../configs/follower.left.json

- id: lcr-follower
build: pip install ../../../node-hub/dynamixel-client
path: dynamixel-client
inputs:
pull_position:
source: lerobot-dashboard/tick
queue_size: 1
write_goal_position:
source: lcr-to-lcr/follower_goal
queue_size: 1
end: lerobot-dashboard/end
outputs:
- position
env:
PORT: /dev/tty.usbmodem575E0031141
CONFIG: ../configs/follower.left.json

- id: opencv-video-capture
build: pip install ../../../node-hub/opencv-video-capture
path: opencv-video-capture
inputs:
tick:
source: lerobot-dashboard/tick
queue_size: 1
outputs:
- image
env:
PATH: 1
IMAGE_WIDTH: 860
IMAGE_HEIGHT: 540

- id: video-encoder
build: pip install ../../../node-hub/video-encoder
path: video-encoder
inputs:
image: opencv-video-capture/image
episode_index: lerobot-dashboard/episode
outputs:
- image
env:
VIDEO_NAME: cam_up
FPS: 30

- id: lerobot-dashboard
build: pip install ../../../node-hub/lerobot-dashboard
path: lerobot-dashboard
inputs:
tick:
source: dora/timer/millis/16
queue_size: 1
image_left: opencv-video-capture/image
outputs:
- tick
- episode
- failed
- end
env:
WINDOW_WIDTH: 1720
WINDOW_HEIGHT: 540

- id: dora-record
build: cargo install dora-record
path: dora-record
inputs:
action: lcr-to-record/logical_goal
observation.state: lcr-to-record/logical_position
episode_index: lerobot-dashboard/episode
failed_episode_index: lerobot-dashboard/failed
observation.images.cam_up: video-encoder/image

+ 148
- 0
examples/alexk-lcr/nodes/interpolate_lcr_to_lcr.py View File

@@ -0,0 +1,148 @@
import os
import argparse
import json

import pyarrow as pa
import pyarrow.compute as pc

from dora import Node

from pwm_position_control.transform import (
wrap_joints_and_values,
pwm_to_logical_arrow,
logical_to_pwm_with_offset_arrow,
)
from pwm_position_control.load import load_control_table_from_json_conversion_tables


def main():
    parser = argparse.ArgumentParser(
        description="Interpolation LCR Node: this Dora node calculates appropriate goal positions for the "
        "LCR follower, given a leader position and a follower position."
    )

parser.add_argument(
"--name",
type=str,
required=False,
help="The name of the node in the dataflow.",
default="lcr-to-lcr",
)
parser.add_argument(
"--leader-control",
type=str,
help="The configuration file for controlling the leader.",
default=None,
)
parser.add_argument(
"--follower-control",
type=str,
help="The configuration file for controlling the follower.",
default=None,
)

args = parser.parse_args()

if not os.environ.get("LEADER_CONTROL") and args.leader_control is None:
raise ValueError(
"The leader control is not set. Please set the configuration of the leader in the environment variables or "
"as an argument."
)

if not os.environ.get("FOLLOWER_CONTROL") and args.follower_control is None:
raise ValueError(
"The follower control is not set. Please set the configuration of the follower in the environment "
"variables or as an argument."
)

with open(
os.environ.get("LEADER_CONTROL")
if args.leader_control is None
else args.leader_control
) as file:
leader_control = json.load(file)
load_control_table_from_json_conversion_tables(leader_control, leader_control)

with open(
os.environ.get("FOLLOWER_CONTROL")
if args.follower_control is None
else args.follower_control
) as file:
follower_control = json.load(file)
load_control_table_from_json_conversion_tables(
follower_control, follower_control
)

    initial_mask = [
        leader_control[joint]["goal_position"] is not None
        for joint in leader_control.keys()
    ]
logical_leader_initial_goal = wrap_joints_and_values(
[
joint
for joint in leader_control.keys()
if leader_control[joint]["goal_position"] is not None
],
[
leader_control[joint]["goal_position"]
for joint in leader_control.keys()
if leader_control[joint]["goal_position"] is not None
],
)

node = Node(args.name)

leader_initialized = False
follower_initialized = False

follower_position = None

for event in node:
event_type = event["type"]

if event_type == "INPUT":
event_id = event["id"]

if event_id == "leader_position":
leader_position = event["value"]

if not leader_initialized:
leader_initialized = True

pwm_goal = logical_to_pwm_with_offset_arrow(
leader_position.filter(initial_mask),
logical_leader_initial_goal,
leader_control,
)

node.send_output("leader_goal", pwm_goal, event["metadata"])

if not follower_initialized:
continue

leader_position = pwm_to_logical_arrow(leader_position, leader_control)

interpolation = pa.array([1, 1, 1, 1, 1, 700 / 450], type=pa.float32())

logical_goal = wrap_joints_and_values(
leader_position.field("joints"),
pc.multiply(leader_position.field("values"), interpolation),
)

pwm_goal = logical_to_pwm_with_offset_arrow(
follower_position, logical_goal, follower_control
)

node.send_output("follower_goal", pwm_goal, event["metadata"])

elif event_id == "follower_position":
follower_position = event["value"]
follower_initialized = True

elif event_type == "ERROR":
print("[lcr-to-lcr] error: ", event["error"])
break


if __name__ == "__main__":
main()
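The `700 / 450` factor above rescales only the gripper so the follower's travel matches the leader's. Stripped of the pyarrow wrapping, the per-joint scaling amounts to this plain-Python sketch (joint values are illustrative):

```python
# Leader logical positions in degrees, one per joint (illustrative values).
leader = [10.0, -45.0, 30.0, 0.0, 15.0, 45.0]

# Arm joints map 1:1; only the gripper (last joint) is rescaled.
scale = [1, 1, 1, 1, 1, 700 / 450]

follower_goal = [value * factor for value, factor in zip(leader, scale)]
print(round(follower_goal[-1], 6))  # 45 * 700/450 -> 70.0
```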

+ 114
- 0
examples/alexk-lcr/nodes/interpolate_lcr_to_record.py View File

@@ -0,0 +1,114 @@
import os
import argparse
import json

import pyarrow as pa
import pyarrow.compute as pc

from dora import Node

from pwm_position_control.load import load_control_table_from_json_conversion_tables
from pwm_position_control.transform import (
wrap_joints_and_values,
pwm_to_logical_arrow,
)


def main():
    parser = argparse.ArgumentParser(
        description="Interpolation LCR Node: this Dora node converts leader and follower PWM positions into "
        "logical positions suitable for recording."
    )

parser.add_argument(
"--name",
type=str,
required=False,
help="The name of the node in the dataflow.",
default="lcr-to-record",
)
parser.add_argument(
"--leader-control",
type=str,
help="The configuration file for controlling the leader.",
default=None,
)
parser.add_argument(
"--follower-control",
type=str,
help="The configuration file for controlling the follower.",
default=None,
)

args = parser.parse_args()

if not os.environ.get("LEADER_CONTROL") and args.leader_control is None:
raise ValueError(
"The leader control is not set. Please set the configuration of the leader in the environment variables or "
"as an argument."
)

if not os.environ.get("FOLLOWER_CONTROL") and args.follower_control is None:
raise ValueError(
"The follower control is not set. Please set the configuration of the follower in the environment "
"variables or as an argument."
)

with open(
os.environ.get("LEADER_CONTROL")
if args.leader_control is None
else args.leader_control
) as file:
leader_control = json.load(file)
load_control_table_from_json_conversion_tables(leader_control, leader_control)

with open(
os.environ.get("FOLLOWER_CONTROL")
if args.follower_control is None
else args.follower_control
) as file:
follower_control = json.load(file)
load_control_table_from_json_conversion_tables(
follower_control, follower_control
)

node = Node(args.name)

for event in node:
event_type = event["type"]

if event_type == "INPUT":
event_id = event["id"]

if event_id == "leader_position":
leader_position = event["value"]

leader_position = pwm_to_logical_arrow(leader_position, leader_control)

interpolation = pa.array([1, 1, 1, 1, 1, 700 / 450], type=pa.float32())

logical_goal = wrap_joints_and_values(
leader_position.field("joints"),
pc.multiply(leader_position.field("values"), interpolation),
)

node.send_output("logical_goal", logical_goal, event["metadata"])

elif event_id == "follower_position":
follower_position = event["value"]

follower_position = pwm_to_logical_arrow(
follower_position, follower_control
)

node.send_output(
"logical_position", follower_position, event["metadata"]
)

elif event_type == "ERROR":
print("[lcr-to-record] error: ", event["error"])
break


if __name__ == "__main__":
main()

+ 133
- 0
examples/alexk-lcr/nodes/interpolate_lcr_to_simu_lcr.py View File

@@ -0,0 +1,133 @@
import os
import argparse
import json

import numpy as np
import pyarrow as pa
import pyarrow.compute as pc

from dora import Node

from pwm_position_control.load import load_control_table_from_json_conversion_tables
from pwm_position_control.transform import (
wrap_joints_and_values,
pwm_to_logical_arrow,
logical_to_pwm_with_offset_arrow,
)


def main():
    parser = argparse.ArgumentParser(
        description="Interpolation LCR Node: this Dora node calculates appropriate simulation goal positions "
        "from the leader position."
    )

parser.add_argument(
"--name",
type=str,
required=False,
help="The name of the node in the dataflow.",
default="lcr-to-simu-lcr",
)
parser.add_argument(
"--leader-control",
type=str,
help="The configuration file for controlling the leader.",
default=None,
)

args = parser.parse_args()

if not os.environ.get("LEADER_CONTROL") and args.leader_control is None:
raise ValueError(
"The leader control is not set. Please set the configuration of the leader in the environment variables or "
"as an argument."
)

with open(
os.environ.get("LEADER_CONTROL")
if args.leader_control is None
else args.leader_control
) as file:
leader_control = json.load(file)
load_control_table_from_json_conversion_tables(leader_control, leader_control)

    initial_mask = [
        leader_control[joint]["goal_position"] is not None
        for joint in leader_control.keys()
    ]
logical_leader_initial_goal = wrap_joints_and_values(
[
joint
for joint in leader_control.keys()
if leader_control[joint]["goal_position"] is not None
],
[
leader_control[joint]["goal_position"]
for joint in leader_control.keys()
if leader_control[joint]["goal_position"] is not None
],
)

node = Node(args.name)

leader_initialized = False

for event in node:
event_type = event["type"]

if event_type == "INPUT":
event_id = event["id"]

if event_id == "leader_position":
leader_position = event["value"]

if not leader_initialized:
leader_initialized = True

physical_goal = logical_to_pwm_with_offset_arrow(
leader_position.filter(initial_mask),
logical_leader_initial_goal,
leader_control,
)

node.send_output("leader_goal", physical_goal, event["metadata"])

leader_position = pwm_to_logical_arrow(leader_position, leader_control)

interpolation_m = pa.array(
[
np.pi / 180,
np.pi / 180,
np.pi / 180,
np.pi / 180,
np.pi / 180,
np.pi / 180 * 700 / 450,
],
type=pa.float32(),
)

interpolation_a = pa.array([0, 0, 0, 0, 90, 0], type=pa.float32())

logical_goal = wrap_joints_and_values(
pa.array(
[
joint.as_py() + "_joint"
for joint in leader_position.field("joints")
]
),
pc.multiply(
pc.add(leader_position.field("values"), interpolation_a),
interpolation_m,
),
)

node.send_output("follower_goal", logical_goal, event["metadata"])

elif event_type == "ERROR":
print("[lcr-to-simu] error: ", event["error"])
break


if __name__ == "__main__":
main()
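The `interpolation_m`/`interpolation_a` arrays above add a fixed 90° offset on `wrist_roll`, convert degrees to radians for the MuJoCo joints, and apply the extra gripper scale. A plain-Python sketch of the same transform (input values are illustrative):

```python
import math

# Leader logical positions in degrees, one per joint (illustrative values).
leader_deg = [0.0, -90.0, 90.0, 0.0, 0.0, 45.0]

offsets = [0, 0, 0, 0, 90, 0]  # degrees added before conversion (wrist_roll offset)
scales = [math.pi / 180] * 5 + [math.pi / 180 * 700 / 450]  # extra gripper scale

simu_goal = [(v + a) * m for v, a, m in zip(leader_deg, offsets, scales)]
print(round(simu_goal[4], 6))  # (0 + 90) degrees -> pi/2 ~= 1.570796
```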

+ 83
- 0
examples/alexk-lcr/nodes/interpolate_replay_to_lcr.py View File

@@ -0,0 +1,83 @@
import os
import argparse
import json

from dora import Node

from pwm_position_control.load import load_control_table_from_json_conversion_tables
from pwm_position_control.transform import logical_to_pwm_with_offset_arrow


def main():
    parser = argparse.ArgumentParser(
        description="Interpolation LCR Node: this Dora node converts replayed logical positions into PWM goal "
        "positions for the LCR follower."
    )

parser.add_argument(
"--name",
type=str,
required=False,
help="The name of the node in the dataflow.",
default="replay-to-lcr",
)
parser.add_argument(
"--follower-control",
type=str,
help="The configuration file for controlling the follower.",
default=None,
)

args = parser.parse_args()

if not os.environ.get("FOLLOWER_CONTROL") and args.follower_control is None:
raise ValueError(
"The follower control is not set. Please set the configuration of the follower in the environment "
"variables or as an argument."
)

with open(
os.environ.get("FOLLOWER_CONTROL")
if args.follower_control is None
else args.follower_control
) as file:
follower_control = json.load(file)
load_control_table_from_json_conversion_tables(
follower_control, follower_control
)

node = Node(args.name)

follower_initialized = False

follower_position = None

for event in node:
event_type = event["type"]

if event_type == "INPUT":
event_id = event["id"]

if event_id == "leader_position":
leader_position = event["value"]

if not follower_initialized:
continue

physical_goal = logical_to_pwm_with_offset_arrow(
follower_position, leader_position, follower_control
)

node.send_output("follower_goal", physical_goal, event["metadata"])

elif event_id == "follower_position":
follower_position = event["value"]
follower_initialized = True

elif event_type == "ERROR":
print("[replay-to-lcr] error: ", event["error"])
break


if __name__ == "__main__":
main()

+ 64
- 0
examples/aloha/ASSEMBLING.md View File

@@ -0,0 +1,64 @@
# Dora pipeline Robots

Aloha is a bimanual robot that can be teleoperated using a similar pair of arms. This repository contains
the Dora pipeline to manipulate the arms and cameras, and to record/replay episodes with LeRobot.

- To check whether the robot is connected, install Dynamixel
  Wizard [here](https://emanual.robotis.com/docs/en/software/dynamixel/dynamixel_wizard2/)
- Dynamixel Wizard is a very helpful debugging tool that connects to the individual motors of the robot. It allows
  things such as rebooting a motor (very useful!), toggling torque on/off, and sending commands.
  However, it has no knowledge of the robot's kinematics, so be careful about collisions.
  The robot _will_ collapse if the motors are torqued off, i.e., there are no automatically engaged brakes in the joints.
- Open Dynamixel wizard, go into `options` and select:
  - Protocol 2.0
- All ports
- 1000000 bps
- ID range from 0-10
  - Note: repeat the above every time before you scan.
- Then hit `Scan`. There should be 4 devices showing up, each with 9 motors.
- One issue that arises is the port each robot binds to can change over time, e.g. a robot that
is initially `ttyUSB0` might suddenly become `ttyUSB5`. To resolve this, we bind each robot to a fixed symlink
port with the following mapping:
- `ttyDXL_master_right`: right master robot (master: the robot that the operator would be holding)
- `ttyDXL_puppet_right`: right puppet robot (puppet: the robot that performs the task)
- `ttyDXL_master_left`: left master robot
- `ttyDXL_puppet_left`: left puppet robot
- Take `ttyDXL_master_right` (the right master robot) as an example:

1. Find the port that the right master robot is currently binding to, e.g. `ttyUSB0`
2. Run `udevadm info --name=/dev/ttyUSB0 --attribute-walk | grep serial` to obtain the serial number. Use the first
   one that shows up; the format should look similar to `FT6S4DSP`.
3. `sudo vim /etc/udev/rules.d/99-fixed-interbotix-udev.rules` and add the following line:

SUBSYSTEM=="tty", ATTRS{serial}=="<serial number here>", ENV{ID_MM_DEVICE_IGNORE}="1",
ATTR{device/latency_timer}="1", SYMLINK+="ttyDXL_master_right"

4. This will make sure the right master robot is _always_ binding to `ttyDXL_master_right`
5. Repeat for the remaining 3 arms.

> An example of these rules is given in `hardware_config.yml`.

```bash
cd dora-lerobot

sudo cp robots/aloha/hardware_config/99-interbotix-udev.rules /etc/udev/rules.d
sudo cp robots/aloha/hardware_config/99-fixed-interbotix-udev.rules /etc/udev/rules.d
```

- To apply the changes, run `sudo udevadm control --reload && sudo udevadm trigger`
- If successful, you should be able to find `ttyDXL*` in your `/dev`

## Documentation

https://github.com/Interbotix/interbotix_ros_toolboxes/blob/humble/interbotix_xs_toolbox/interbotix_xs_modules/interbotix_xs_modules/xs_robot/core.py

https://github.com/Interbotix/interbotix_ros_toolboxes/blob/c187bcea89b60391244bb19943ebd78f770aa975/interbotix_xs_toolbox/interbotix_xs_modules/interbotix_xs_modules/xs_robot/core.py#L380-L398

## Acknowledgement

This work is inspired by [tonyzhaozh/aloha](https://github.com/tonyzhaozh/aloha), and we are trying to bring performance
improvements.

## License

This library is licensed under the [Apache License 2.0](../../LICENSE).

+ 95
- 0
examples/aloha/CONFIGURING.md View File

@@ -0,0 +1,95 @@
# Dora pipeline Robots

Aloha is a bimanual robot that can be teleoperated using a similar pair of arms. This repository contains
the Dora pipeline to manipulate the arms and cameras, and to record/replay episodes with LeRobot.

## Configuring

Once you have assembled the robot and installed the required software, you can configure the robot. It's essential to
configure it correctly for the robot to work as expected. Here are the reasons why you need to configure the robot:

- You may have made some mistakes during assembly, like inverting the motors or changing the offsets by rotating
  the motors before assembling the robot, so your configuration will differ from the one we used to record the
  data set.
- The recording pipeline needs to know the position of the motors to record the data set correctly. If the motors are
  calibrated differently, the data set will be incorrect.

**Please read the instructions carefully before configuring the robot.**

The first thing to do is to configure the servo bus:

- Setting all the servos to the same baud rate (1M).
- Setting the ID of the servos from the base (1) to the gripper (9) for the Follower and Leader arms.

Those steps can be done using the official wizard provided by the
manufacturer [ROBOTIS](https://emanual.robotis.com/docs/en/software/dynamixel/dynamixel_wizard2/).

After that, you need to configure the homing offsets and drive mode to have the same behavior for every user. We
recommend using our on-board tool to set all of that automatically:

- Connect the Follower arm to your computer.
- Retrieve the device port from the official wizard.
- Run the configuration tool with the following command and follow the instructions:

```bash
cd dora-lerobot/

# If you are using a custom environment, you will have to activate it before running the command
source [your_custom_env_bin]/activate

# If you followed the installation instructions, you can run the following command
source venv/bin/activate # On Linux
source venv/Scripts/activate # On Windows bash
venv\Scripts\activate.bat # On Windows cmd
venv\Scripts\activate.ps1 # On Windows PowerShell

python ./robots/aloha/configure.py --port /dev/ttyUSB0 --follower --left # (or right)
```

**Note:** Change `/dev/ttyUSB0` to the device port you retrieved from the official wizard (like `COM3` on Windows).
**Note:** The configuration tool will disable all torque so you can move the arm freely to Position 1.
**Note:** You will be asked to set the arm in two different positions. The two positions are:

TODO: image for aloha

**Note:** You will be asked for the path of the configuration file; you can press Enter to use the default one.

- Repeat the same steps for the Leader arm:

```bash
python ./robots/aloha/configure.py --port /dev/ttyUSB1 --leader --left # (or right)
```

**Note:** Change `/dev/ttyUSB1` to the device port you retrieved from the official wizard (like `COM4` on Windows).
**Note:** The configuration tool will disable all torque so you can move the arm freely to Position 1.
**Note:** You will be asked for the path of the configuration file; you can press Enter to use the default one.

After following the guide, you should have the following configuration:

TODO: image for aloha

This configuration has to be exported as environment variables inside the graph file. Here is an example of the
configuration:

```yaml
nodes:
- id: aloha-follower
env:
PORT: /dev/ttyUSB0
CONFIG: ../configs/follower.left.json # relative path to `./robots/aloha/configs/follower.json`

- id: aloha-to-aloha
env:
LEADER_CONTROL: ../configs/leader.left.json
FOLLOWER_CONTROL: ../configs/follower.left.json
```
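A node can then read these variables at startup. A minimal sketch (the helper name and the set of keys shown are illustrative, not the actual node implementation):

```python
import os

def read_graph_env(environ=os.environ):
    """Collect the settings a graph file exports via a node's `env:` block.

    Missing keys come back as None so the node can fall back to defaults
    or fail with a clear message.
    """
    return {
        "port": environ.get("PORT"),
        "config": environ.get("CONFIG"),
        "leader_control": environ.get("LEADER_CONTROL"),
        "follower_control": environ.get("FOLLOWER_CONTROL"),
    }

# Example with an explicit dict standing in for the real process environment.
settings = read_graph_env({"PORT": "/dev/ttyUSB0",
                           "CONFIG": "../configs/follower.left.json"})
print(settings["port"])  # /dev/ttyUSB0
```

Note that relative paths like `../configs/follower.left.json` are resolved against the node's working directory, so keep the graph layout consistent with the `robots/aloha` folder structure.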

## Acknowledgement

This work is inspired by [tonyzhaozh/aloha](https://github.com/tonyzhaozh/aloha), and we're trying to bring performance
improvements.

## License

This library is licensed under the [Apache License 2.0](../../LICENSE).

+ 87
- 0
examples/aloha/INSTALLATION.md View File

@@ -0,0 +1,87 @@
# Dora pipeline Robots

Aloha is a bimanual robot that can be teleoperated using a similar arm. This repository contains
the Dora pipeline to manipulate arms, cameras, and record/replay episodes with LeRobot.

## Installation

Dataflow-oriented robotic application (Dora) is a framework that makes creation of robotic applications fast and simple.
See [Dora repository](https://github.com/dora-rs/dora).

**Please read the instructions carefully before installing the required software and environment to run the robot.**

You must install Dora before attempting to run the Aloha robot pipeline. Here are the steps to install Dora:

- Install Rust by following the instructions at [Rustup](https://rustup.rs/). (You may need to install Visual Studio C++
build tools on Windows.)
- Install Dora by running the following command:

```bash
cargo install dora-cli
```

Now you're ready to run Rust dataflow applications! We decided to provide only a Python dataflow for the Aloha robot, so
you may need to set up your Python environment:

- Install Python 3.12 or later by following the instructions at [Python](https://www.python.org/downloads/).
- Clone this repository by running the following command:

```bash
git clone https://github.com/dora-rs/dora-lerobot
```

- Open a bash terminal and navigate to the repository by running the following command:

```bash
cd dora-lerobot
```

- Create a virtual environment by running the following command (you can find all your Python executables with
  the command `whereis python3` on Linux; on Windows the default location
  is `C:\Users\<User>\AppData\Local\Programs\Python\Python3.12\python.exe`):

```bash
path_to_your_python3_executable -m venv venv
```

- Activate the virtual environment and install the required Python packages by running the following command:

```bash
# If you are using a custom environment, you will have to activate it before running the command
source [your_custom_env_bin]/activate

# If you followed the installation instructions, you can run the following command
source venv/bin/activate # On Linux
source venv/Scripts/activate # On Windows bash
venv\Scripts\activate.bat # On Windows cmd
venv\Scripts\activate.ps1 # On Windows PowerShell

pip install -r robots/aloha/requirements.txt
```

If you want to install the required Python packages in development mode, you can run the following command, but you will
then have to avoid using `dora build` during execution:

```bash
pip install -r robots/aloha/development.txt # You **MUST** be inside dora-lerobot to run this command, not robots/aloha
```

**Note**: You're totally free to use your own Python environment, a Conda environment, or whatever you prefer; you will
just have to activate your custom Python environment before running `dora up && dora start [graph].yml`.

In order to record episodes, you need ffmpeg installed on your system. You can download it from
the [official website](https://ffmpeg.org/download.html). If you're on Windows, you can download the latest build
from [here](https://www.gyan.dev/ffmpeg/builds/), extract the zip file, and add the `bin` folder to your PATH.
If you're on Linux, you can install ffmpeg using your distribution's package manager
(e.g. `sudo apt install ffmpeg` on Ubuntu); on macOS, use `brew install ffmpeg`.
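Before recording, you can verify that ffmpeg is actually reachable from your environment with a small stdlib-only check (a sketch; it only assumes ffmpeg should be on PATH):

```python
import shutil

def ffmpeg_path():
    """Return the resolved path of the ffmpeg executable, or None if it is not on PATH."""
    return shutil.which("ffmpeg")

if ffmpeg_path() is None:
    print("ffmpeg not found: install it and/or add its bin folder to PATH")
else:
    print(f"using ffmpeg at {ffmpeg_path()}")
```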

## Acknowledgement

This work is inspired by [tonyzhaozh/aloha](https://github.com/tonyzhaozh/aloha), and we're trying to bring performance
improvements.

## License

This library is licensed under the [Apache License 2.0](../../LICENSE).

+ 42
- 0
examples/aloha/README.md View File

@@ -0,0 +1,42 @@
# Dora pipeline Robots

Aloha is a bimanual robot that can be teleoperated using a similar arm. This repository contains
the Dora pipeline to manipulate arms, cameras, and record/replay episodes with LeRobot.

## Assembling

Check the [ASSEMBLING.md](ASSEMBLING.md) file for instructions on how to assemble the robot.

## Installation

Check the [INSTALLATION.md](INSTALLATION.md) file for instructions on how to install the required software and
environment
to run the robot.

## Configuring

Check the [CONFIGURING.md](CONFIGURING.md) file for instructions on how to configure the robot to record episodes for
LeRobot and teleoperate the robot.

## Recording

It's probably better to check the [examples](#examples) below before trying to record episodes. It will give you a
better understanding of how Dora works.

Check the [RECORDING.md](RECORDING.md) file for instructions on how to record episodes for LeRobot.

## Examples

There are also some other example applications in the `graphs` folder. Have fun!

Here is a list of the available examples:

## Acknowledgement

This work is inspired by [tonyzhaozh/aloha](https://github.com/tonyzhaozh/aloha), and we're trying to bring performance
improvements.

## License

This library is licensed under the [Apache License 2.0](../../LICENSE).

+ 17
- 0
examples/aloha/RECORDING.md View File

@@ -0,0 +1,17 @@
# Dora pipeline Robots

Aloha is a bimanual robot that can be teleoperated using a similar arm. This repository contains
the Dora pipeline to manipulate arms, cameras, and record/replay episodes with LeRobot.

## Recording

There is no recording section for this robot at the moment.

## Acknowledgement

This work is inspired by [tonyzhaozh/aloha](https://github.com/tonyzhaozh/aloha), and we're trying to bring performance
improvements.

## License

This library is licensed under the [Apache License 2.0](../../LICENSE).

+ 13
- 0
examples/aloha/benchmark/python/README.md View File

@@ -0,0 +1,13 @@
# Python API Powered Aloha inspired by [AlexanderKoch-Koch/low_cost_robot](https://github.com/AlexanderKoch-Koch/low_cost_robot)

## Getting started

Modify port and servo motor configuration.

```bash
python teleoperate_real_robot.py
```
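The positions exchanged by the teleoperation loop are raw 12-bit Dynamixel ticks (0 to 4096, with 2048 at center). A quick sketch of the tick/degree conversion, mirroring `read_position_degrees` in `dynamixel.py`:

```python
def ticks_to_degrees(ticks: int) -> float:
    """Convert a Dynamixel position (0..4096, 2048 = center) to degrees."""
    return ticks / 4096 * 360

def degrees_to_ticks(degrees: float) -> int:
    """Inverse conversion, rounded to the nearest tick."""
    return round(degrees / 360 * 4096)

print(ticks_to_degrees(2048))  # 180.0
```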

## Results

On my run, I get about 50 Hz teleoperation, even though the Python calls themselves do not block for that long.
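To reproduce the measurement, here is a hedged sketch of a loop-rate helper (the function is illustrative; substitute the real `leader.read_position()` / `follower.set_goal_pos(pos)` pair for the no-op step):

```python
import time

def measure_rate(step, n=100):
    """Call `step` n times and return the average frequency in Hz."""
    start = time.perf_counter()
    for _ in range(n):
        step()
    elapsed = time.perf_counter() - start
    return n / elapsed

# With the real read/write pair as `step`, this should land near the
# teleoperation rate reported above; with a no-op it only measures loop overhead.
print(f"{measure_rate(lambda: None):.0f} Hz")
```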

+ 325
- 0
examples/aloha/benchmark/python/dynamixel.py View File

@@ -0,0 +1,325 @@
from __future__ import annotations
import math
import os
from dynamixel_sdk import * # Uses Dynamixel SDK library
from dataclasses import dataclass
import enum
import time


class ReadAttribute(enum.Enum):
TEMPERATURE = 146
VOLTAGE = 145
VELOCITY = 128
POSITION = 132
CURRENT = 126
PWM = 124
HARDWARE_ERROR_STATUS = 70
HOMING_OFFSET = 20
BAUDRATE = 8


class OperatingMode(enum.Enum):
VELOCITY = 1
POSITION = 3
CURRENT_CONTROLLED_POSITION = 5
PWM = 16
UNKNOWN = -1


class Dynamixel:
ADDR_TORQUE_ENABLE = 64
ADDR_GOAL_POSITION = 116
ADDR_VELOCITY_LIMIT = 44
ADDR_GOAL_PWM = 100
OPERATING_MODE_ADDR = 11
POSITION_I = 82
POSITION_P = 84
ADDR_ID = 7

@dataclass
class Config:
def instantiate(self):
return Dynamixel(self)

baudrate: int = 57600
protocol_version: float = 2.0
device_name: str = ""  # e.g. /dev/tty.usbserial-1120
dynamixel_id: int = 1

def __init__(self, config: Config):
self.config = config
self.connect()

def connect(self):
if self.config.device_name == "":
for port_name in os.listdir("/dev"):
if "ttyUSB" in port_name or "ttyACM" in port_name:
self.config.device_name = "/dev/" + port_name
print(f"using device {self.config.device_name}")
self.portHandler = PortHandler(self.config.device_name)
# self.portHandler.LA
self.packetHandler = PacketHandler(self.config.protocol_version)
if not self.portHandler.openPort():
raise Exception(f"Failed to open port {self.config.device_name}")

if not self.portHandler.setBaudRate(self.config.baudrate):
raise Exception(f"failed to set baudrate to {self.config.baudrate}")

# self.operating_mode = OperatingMode.UNKNOWN
# self.torque_enabled = False
# self._disable_torque()

self.operating_modes = [None for _ in range(32)]
self.torque_enabled = [None for _ in range(32)]
return True

def disconnect(self):
self.portHandler.closePort()

def set_goal_position(self, motor_id, goal_position):
# if self.operating_modes[motor_id] is not OperatingMode.POSITION:
# self._disable_torque(motor_id)
# self.set_operating_mode(motor_id, OperatingMode.POSITION)

# if not self.torque_enabled[motor_id]:
# self._enable_torque(motor_id)

# self._enable_torque(motor_id)
dxl_comm_result, dxl_error = self.packetHandler.write4ByteTxRx(
self.portHandler, motor_id, self.ADDR_GOAL_POSITION, goal_position
)
# self._process_response(dxl_comm_result, dxl_error)
# print(f'set position of motor {motor_id} to {goal_position}')

def set_pwm_value(self, motor_id: int, pwm_value, tries=3):
if self.operating_modes[motor_id] is not OperatingMode.PWM:
self._disable_torque(motor_id)
self.set_operating_mode(motor_id, OperatingMode.PWM)

if not self.torque_enabled[motor_id]:
self._enable_torque(motor_id)
# print(f'enabling torque')
dxl_comm_result, dxl_error = self.packetHandler.write2ByteTxRx(
self.portHandler, motor_id, self.ADDR_GOAL_PWM, pwm_value
)
# self._process_response(dxl_comm_result, dxl_error)
# print(f'set pwm of motor {motor_id} to {pwm_value}')
if dxl_comm_result != COMM_SUCCESS:
if tries <= 1:
raise ConnectionError(
f"dxl_comm_result: {self.packetHandler.getTxRxResult(dxl_comm_result)}"
)
else:
print(
f"dynamixel pwm setting failure trying again with {tries - 1} tries"
)
self.set_pwm_value(motor_id, pwm_value, tries=tries - 1)
elif dxl_error != 0:
print(f"dxl error {dxl_error}")
raise ConnectionError(
f"dynamixel error: {self.packetHandler.getTxRxResult(dxl_error)}"
)

def read_temperature(self, motor_id: int):
return self._read_value(motor_id, ReadAttribute.TEMPERATURE, 1)

def read_velocity(self, motor_id: int):
pos = self._read_value(motor_id, ReadAttribute.VELOCITY, 4)
if pos > 2**31:
pos -= 2**32
# print(f'read position {pos} for motor {motor_id}')
return pos

def read_position(self, motor_id: int):
pos = self._read_value(motor_id, ReadAttribute.POSITION, 4)
if pos > 2**31:
pos -= 2**32
# print(f'read position {pos} for motor {motor_id}')
return pos

def read_position_degrees(self, motor_id: int) -> float:
return (self.read_position(motor_id) / 4096) * 360

def read_position_radians(self, motor_id: int) -> float:
return (self.read_position(motor_id) / 4096) * 2 * math.pi

def read_current(self, motor_id: int):
current = self._read_value(motor_id, ReadAttribute.CURRENT, 2)
if current > 2**15:
current -= 2**16
return current

def read_present_pwm(self, motor_id: int):
return self._read_value(motor_id, ReadAttribute.PWM, 2)

def read_hardware_error_status(self, motor_id: int):
return self._read_value(motor_id, ReadAttribute.HARDWARE_ERROR_STATUS, 1)

def set_id(self, old_id, new_id, use_broadcast_id: bool = False):
"""
sets the id of the dynamixel servo
@param old_id: current id of the servo
@param new_id: new id
@param use_broadcast_id: set ids of all connected dynamixels if True.
If False, change only servo with self.config.id
@return:
"""
if use_broadcast_id:
current_id = 254
else:
current_id = old_id
dxl_comm_result, dxl_error = self.packetHandler.write1ByteTxRx(
self.portHandler, current_id, self.ADDR_ID, new_id
)
self._process_response(dxl_comm_result, dxl_error, old_id)
self.config.dynamixel_id = new_id  # fix: was `self.config.id = id`, which referenced the builtin `id`

def _enable_torque(self, motor_id):
dxl_comm_result, dxl_error = self.packetHandler.write1ByteTxRx(
self.portHandler, motor_id, self.ADDR_TORQUE_ENABLE, 1
)
self._process_response(dxl_comm_result, dxl_error, motor_id)
self.torque_enabled[motor_id] = True

def _disable_torque(self, motor_id):
dxl_comm_result, dxl_error = self.packetHandler.write1ByteTxRx(
self.portHandler, motor_id, self.ADDR_TORQUE_ENABLE, 0
)
self._process_response(dxl_comm_result, dxl_error, motor_id)
self.torque_enabled[motor_id] = False

def _process_response(self, dxl_comm_result: int, dxl_error: int, motor_id: int):
if dxl_comm_result != COMM_SUCCESS:
raise ConnectionError(
f"dxl_comm_result for motor {motor_id}: {self.packetHandler.getTxRxResult(dxl_comm_result)}"
)
elif dxl_error != 0:
print(f"dxl error {dxl_error}")
raise ConnectionError(
f"dynamixel error for motor {motor_id}: {self.packetHandler.getTxRxResult(dxl_error)}"
)

def set_operating_mode(self, motor_id: int, operating_mode: OperatingMode):
dxl_comm_result, dxl_error = self.packetHandler.write2ByteTxRx(
self.portHandler, motor_id, self.OPERATING_MODE_ADDR, operating_mode.value
)
self._process_response(dxl_comm_result, dxl_error, motor_id)
self.operating_modes[motor_id] = operating_mode

def set_pwm_limit(self, motor_id: int, limit: int):
dxl_comm_result, dxl_error = self.packetHandler.write2ByteTxRx(
self.portHandler, motor_id, 36, limit
)
self._process_response(dxl_comm_result, dxl_error, motor_id)

def set_velocity_limit(self, motor_id: int, velocity_limit):
dxl_comm_result, dxl_error = self.packetHandler.write4ByteTxRx(
self.portHandler, motor_id, self.ADDR_VELOCITY_LIMIT, velocity_limit
)
self._process_response(dxl_comm_result, dxl_error, motor_id)

def set_P(self, motor_id: int, P: int):
dxl_comm_result, dxl_error = self.packetHandler.write2ByteTxRx(
self.portHandler, motor_id, self.POSITION_P, P
)
self._process_response(dxl_comm_result, dxl_error, motor_id)

def set_I(self, motor_id: int, I: int):
dxl_comm_result, dxl_error = self.packetHandler.write2ByteTxRx(
self.portHandler, motor_id, self.POSITION_I, I
)
self._process_response(dxl_comm_result, dxl_error, motor_id)

def read_home_offset(self, motor_id: int):
self._disable_torque(motor_id)
# dxl_comm_result, dxl_error = self.packetHandler.write4ByteTxRx(self.portHandler, motor_id,
# ReadAttribute.HOMING_OFFSET.value, home_position)
home_offset = self._read_value(motor_id, ReadAttribute.HOMING_OFFSET, 4)
# self._process_response(dxl_comm_result, dxl_error)
self._enable_torque(motor_id)
return home_offset

def set_home_offset(self, motor_id: int, home_position: int):
self._disable_torque(motor_id)
dxl_comm_result, dxl_error = self.packetHandler.write4ByteTxRx(
self.portHandler, motor_id, ReadAttribute.HOMING_OFFSET.value, home_position
)
self._process_response(dxl_comm_result, dxl_error, motor_id)
self._enable_torque(motor_id)

def set_baudrate(self, motor_id: int, baudrate):
# translate baudrate into dynamixel baudrate setting id
if baudrate == 57600:
baudrate_id = 1
elif baudrate == 1_000_000:
baudrate_id = 3
elif baudrate == 2_000_000:
baudrate_id = 4
elif baudrate == 3_000_000:
baudrate_id = 5
elif baudrate == 4_000_000:
baudrate_id = 6
else:
raise Exception("baudrate not implemented")

self._disable_torque(motor_id)
dxl_comm_result, dxl_error = self.packetHandler.write1ByteTxRx(
self.portHandler, motor_id, ReadAttribute.BAUDRATE.value, baudrate_id
)
self._process_response(dxl_comm_result, dxl_error, motor_id)

def _read_value(self, motor_id, attribute: ReadAttribute, num_bytes: int, tries=10):
try:
if num_bytes == 1:
value, dxl_comm_result, dxl_error = self.packetHandler.read1ByteTxRx(
self.portHandler, motor_id, attribute.value
)
elif num_bytes == 2:
value, dxl_comm_result, dxl_error = self.packetHandler.read2ByteTxRx(
self.portHandler, motor_id, attribute.value
)
elif num_bytes == 4:
value, dxl_comm_result, dxl_error = self.packetHandler.read4ByteTxRx(
self.portHandler, motor_id, attribute.value
)
except Exception:
if tries == 0:
raise  # re-raise the original exception once retries are exhausted
else:
return self._read_value(motor_id, attribute, num_bytes, tries=tries - 1)
if dxl_comm_result != COMM_SUCCESS:
if tries <= 1:
# print("%s" % self.packetHandler.getTxRxResult(dxl_comm_result))
raise ConnectionError(
f"dxl_comm_result {dxl_comm_result} for servo {motor_id} value {value}"
)
else:
print(
f"dynamixel read failure for servo {motor_id} trying again with {tries - 1} tries"
)
time.sleep(0.02)
return self._read_value(motor_id, attribute, num_bytes, tries=tries - 1)
elif dxl_error != 0:
# print("%s" % self.packetHandler.getRxPacketError(dxl_error))
# raise ConnectionError(f'dxl_error {dxl_error} binary ' + "{0:b}".format(37))
if tries == 0 and dxl_error != 128:
raise Exception(
f"Failed to read value from motor {motor_id} error is {dxl_error}"
)
else:
return self._read_value(motor_id, attribute, num_bytes, tries=tries - 1)
return value

def set_home_position(self, motor_id: int):
print(f"setting home position for motor {motor_id}")
self.set_home_offset(motor_id, 0)
current_position = self.read_position(motor_id)
print(f"position before {current_position}")
self.set_home_offset(motor_id, -current_position)
# dynamixel.set_home_offset(motor_id, -4096)
# dynamixel.set_home_offset(motor_id, -4294964109)
current_position = self.read_position(motor_id)
# print(f'signed position {current_position - 2** 32}')
print(f"position after {current_position}")

+ 191
- 0
examples/aloha/benchmark/python/robot.py View File

@@ -0,0 +1,191 @@
import numpy as np
from dynamixel import Dynamixel, OperatingMode, ReadAttribute
import time
from dynamixel_sdk import (
GroupSyncRead,
GroupSyncWrite,
DXL_LOBYTE,
DXL_HIBYTE,
DXL_LOWORD,
DXL_HIWORD,
)
from enum import Enum, auto
from typing import Union


class MotorControlType(Enum):
PWM = auto()
POSITION_CONTROL = auto()
DISABLED = auto()
UNKNOWN = auto()


class Robot:
# def __init__(self, device_name: str, baudrate=1_000_000, servo_ids=[1, 2, 3, 4, 5]):
def __init__(self, dynamixel, baudrate=1_000_000, servo_ids=[1, 2, 3, 4, 5]):
self.servo_ids = servo_ids
self.dynamixel = dynamixel
# self.dynamixel = Dynamixel.Config(baudrate=baudrate, device_name=device_name).instantiate()
self.position_reader = GroupSyncRead(
self.dynamixel.portHandler,
self.dynamixel.packetHandler,
ReadAttribute.POSITION.value,
4,
)
for id in self.servo_ids:
self.position_reader.addParam(id)

self.velocity_reader = GroupSyncRead(
self.dynamixel.portHandler,
self.dynamixel.packetHandler,
ReadAttribute.VELOCITY.value,
4,
)
for id in self.servo_ids:
self.velocity_reader.addParam(id)

self.pos_writer = GroupSyncWrite(
self.dynamixel.portHandler,
self.dynamixel.packetHandler,
self.dynamixel.ADDR_GOAL_POSITION,
4,
)
for id in self.servo_ids:
self.pos_writer.addParam(id, [2048])

self.pwm_writer = GroupSyncWrite(
self.dynamixel.portHandler,
self.dynamixel.packetHandler,
self.dynamixel.ADDR_GOAL_PWM,
2,
)
for id in self.servo_ids:
self.pwm_writer.addParam(id, [2048])
self._disable_torque()
self.motor_control_state = MotorControlType.DISABLED

def read_position(self, tries=2):
"""
Reads the joint positions of the robot. 2048 is the center position. 0 and 4096 are 180 degrees in each direction.
:param tries: maximum number of tries to read the position
:return: list of joint positions in range [0, 4096]
"""
result = self.position_reader.txRxPacket()
if result != 0:
if tries > 0:
return self.read_position(tries=tries - 1)
else:
print("failed to read position")
positions = []
for id in self.servo_ids:
position = self.position_reader.getData(id, ReadAttribute.POSITION.value, 4)
if position > 2**31:
position -= 2**32
positions.append(position)
return positions

def read_velocity(self):
"""
Reads the joint velocities of the robot.
:return: list of joint velocities,
"""
self.velocity_reader.txRxPacket()
velocities = []
for id in self.servo_ids:
velocity = self.velocity_reader.getData(id, ReadAttribute.VELOCITY.value, 4)
if velocity > 2**31:
velocity -= 2**32
velocities.append(velocity)
return velocities

def set_goal_pos(self, action):
"""

:param action: list or numpy array of target joint positions in range [0, 4096]
"""
if self.motor_control_state is not MotorControlType.POSITION_CONTROL:
self._set_position_control()
for i, motor_id in enumerate(self.servo_ids):
data_write = [
DXL_LOBYTE(DXL_LOWORD(action[i])),
DXL_HIBYTE(DXL_LOWORD(action[i])),
DXL_LOBYTE(DXL_HIWORD(action[i])),
DXL_HIBYTE(DXL_HIWORD(action[i])),
]
self.pos_writer.changeParam(motor_id, data_write)

self.pos_writer.txPacket()

def set_pwm(self, action):
"""
Sets the pwm values for the servos.
:param action: list or numpy array of pwm values in range [0, 885]
"""
if self.motor_control_state is not MotorControlType.PWM:
self._set_pwm_control()
for i, motor_id in enumerate(self.servo_ids):
data_write = [
DXL_LOBYTE(DXL_LOWORD(action[i])),
DXL_HIBYTE(DXL_LOWORD(action[i])),
]
self.pwm_writer.changeParam(motor_id, data_write)

self.pwm_writer.txPacket()

def set_trigger_torque(self):
"""
Sets a constant torque for the last servo in the chain. This is useful for the trigger of the leader arm.
"""
self.dynamixel._enable_torque(self.servo_ids[-1])
self.dynamixel.set_pwm_value(self.servo_ids[-1], 200)

def limit_pwm(self, limit: Union[int, list, np.ndarray]):
"""
Limits the pwm values for the servos in for position control
@param limit: 0 ~ 885
@return:
"""
if isinstance(limit, int):
limits = [
limit,
] * 5
else:
limits = limit
self._disable_torque()
for motor_id, limit in zip(self.servo_ids, limits):
self.dynamixel.set_pwm_limit(motor_id, limit)
self._enable_torque()

def _disable_torque(self):
print(f"disabling torque for servos {self.servo_ids}")
for motor_id in self.servo_ids:
self.dynamixel._disable_torque(motor_id)

def _enable_torque(self):
print(f"enabling torque for servos {self.servo_ids}")
for motor_id in self.servo_ids:
self.dynamixel._enable_torque(motor_id)

def _set_pwm_control(self):
self._disable_torque()
for motor_id in self.servo_ids:
self.dynamixel.set_operating_mode(motor_id, OperatingMode.PWM)
self._enable_torque()
self.motor_control_state = MotorControlType.PWM

def _set_position_control(self):
self._disable_torque()
for motor_id in self.servo_ids:
self.dynamixel.set_operating_mode(motor_id, OperatingMode.POSITION)
self._enable_torque()
self.motor_control_state = MotorControlType.POSITION_CONTROL


if __name__ == "__main__":
dynamixel = Dynamixel.Config(device_name="/dev/tty.usbmodem57380045631").instantiate()
robot = Robot(dynamixel)  # Robot takes a Dynamixel instance, not a device name
robot._disable_torque()
for _ in range(10000):
s = time.time()
pos = robot.read_position()
elapsed = time.time() - s
print(f"read took {elapsed} pos {pos}")

+ 24
- 0
examples/aloha/benchmark/python/teleoperate_real_robot.py View File

@@ -0,0 +1,24 @@
from robot import Robot
from dynamixel import Dynamixel

leader_dynamixel = Dynamixel.Config(
baudrate=1_000_000, device_name="/dev/ttyDXL_master_right"
).instantiate()
follower_dynamixel = Dynamixel.Config(
baudrate=1_000_000, device_name="/dev/ttyDXL_puppet_right"
).instantiate()
follower = Robot(follower_dynamixel, servo_ids=[1, 2, 3, 4, 5, 6, 7, 8])
leader = Robot(leader_dynamixel, servo_ids=[1, 2, 3, 4, 5, 6, 7, 8])
# leader.set_trigger_torque()

import time

times_rec = []
times_send = []

while True:
now = time.time()
pos = leader.read_position()
# print(f"Time to rec pos: {(time.time() - now) * 1000}")
follower.set_goal_pos(pos)
print(f"Time to send pos: {(time.time() - now) * 1000}")

+ 103
- 0
examples/aloha/benchmark/ros2/README.md View File

@@ -0,0 +1,103 @@
# `dora-rs` powered ALOHA

## Getting started

```bash
docker run --privileged --name ros2-aloha --network=host -e DISPLAY=${DISPLAY} -v /dev:/dev -v $(pwd):/dora-aloha -it osrf/ros:humble-desktop


## In the container
./dora-aloha/setup_ros2.sh # Run it once
```

## To activate the controller, just do:

```bash
## In the container
ros2 launch ~/interbotix_ws/src/interbotix_ros_manipulators/interbotix_ros_xsarms/interbotix_xsarm_control/launch/xsarm_control.launch.py robot_name:=robot_model_master robot_model:=aloha_wx250s mode_configs:=/dora-aloha/benchmark/ros2/config/master_modes_right.yaml &
ros2 launch ~/interbotix_ws/src/interbotix_ros_manipulators/interbotix_ros_xsarms/interbotix_xsarm_control/launch/xsarm_control.launch.py robot_name:=robot_model_puppet robot_model:=vx300s mode_configs:=/dora-aloha/benchmark/ros2/config/puppet_modes_right.yaml &

## In case you want to restart the controller, just do:
pkill -f robot
```

## To run dora

Outside of the controller docker:

### Setup

```bash
## Setup
git clone https://github.com/haixuanTao/ament_prefix_path.git $HOME/ros2-ament-prefix-path
export AMENT_PREFIX_PATH=$HOME/ros2-ament-prefix-path # <- contains the ROS 2 message definitions used for deserialization
```

### Start

```bash
## Start
dora up
dora start dataflow.yml --attach
```

## Result

I'm able to get about 100 Hz for teleoperation.

The improvement probably comes from the ros2 read/write written in C++.

## Hardware Installation

- To check if the robot is connected, install dynamixel wizard [here](https://emanual.robotis.com/docs/en/software/dynamixel/dynamixel_wizard2/)
- Dynamixel wizard is a very helpful debugging tool that connects to individual motors of the robot. It allows
things such as rebooting the motor (very useful!), torque on/off, and sending commands.
However, it has no knowledge about the kinematics of the robot, so be careful about collisions.
The robot _will_ collapse if motors are torqued off, i.e. there are no automatically engaged brakes in the joints.
- Open Dynamixel wizard, go into `options` and select:
- Protocol 2.0
- All ports
- 1000000 bps
- ID range from 0-10
- Note: repeat the above every time before you scan.
- Then hit `Scan`. There should be 4 devices showing up, each with 9 motors.
- One issue that arises is that the port each robot binds to can change over time, e.g. a robot that
is initially `ttyUSB0` might suddenly become `ttyUSB5`. To resolve this, we bind each robot to a fixed symlink
port with the following mapping:
- `ttyDXL_master_right`: right master robot (master: the robot that the operator would be holding)
- `ttyDXL_puppet_right`: right puppet robot (puppet: the robot that performs the task)
- `ttyDXL_master_left`: left master robot
- `ttyDXL_puppet_left`: left puppet robot
- Take `ttyDXL_master_right`: right master robot as an example:

1. Find the port that the right master robot is currently bound to, e.g. `ttyUSB0`
2. Run `udevadm info --name=/dev/ttyUSB0 --attribute-walk | grep serial` to obtain the serial number. Use the first one that shows up; the format should look similar to `FT6S4DSP`.
3. `sudo vim /etc/udev/rules.d/99-fixed-interbotix-udev.rules` and add the following line:

SUBSYSTEM=="tty", ATTRS{serial}=="<serial number here>", ENV{ID_MM_DEVICE_IGNORE}="1", ATTR{device/latency_timer}="1", SYMLINK+="ttyDXL_master_right"

4. This will make sure the right master robot is _always_ bound to `ttyDXL_master_right`
5. Repeat for the remaining three arms.

- To apply the changes, run `sudo udevadm control --reload && sudo udevadm trigger`
- If successful, you should be able to find `ttyDXL*` in your `/dev`

## Hardware troubleshoot

- In case `xsarm_control.launch.py` fails to launch with the error `Can't find Dynamixel ID 'X'`: one of the issues I faced in my demo was an Overload Error (OL). The fix was to open Dynamixel Wizard, click on motor ID `X`, and click the reboot button on the right side of the window. This error happens when the torque is too high; I hit it when I set a closing position for the gripper that did not take the fingers into account.

## Support Matrix

| | dora-aloha |
| ----------------------------- | --------------------- |
| **Supported Platforms (x86)** | Windows, macOS, Linux |
| **Supported Platforms (ARM)** | Linux(RPI4) |

## Documentation

https://github.com/Interbotix/interbotix_ros_toolboxes/blob/humble/interbotix_xs_toolbox/interbotix_xs_modules/interbotix_xs_modules/xs_robot/core.py
https://github.com/Interbotix/interbotix_ros_toolboxes/blob/c187bcea89b60391244bb19943ebd78f770aa975/interbotix_xs_toolbox/interbotix_xs_modules/interbotix_xs_modules/xs_robot/core.py#L380-L398

## Acknowledgement

This work is heavily inspired by [tonyzhaozh/aloha](https://github.com/tonyzhaozh/aloha), and we're trying to bring performance improvements.

+ 9
- 0
examples/aloha/benchmark/ros2/config/master_modes_left.yaml View File

@@ -0,0 +1,9 @@
port: /dev/ttyDXL_master_left

groups:
arm:
torque_enable: false

singles:
gripper:
torque_enable: false

+ 9
- 0
examples/aloha/benchmark/ros2/config/master_modes_right.yaml View File

@@ -0,0 +1,9 @@
port: /dev/ttyDXL_master_right

groups:
arm:
torque_enable: false

singles:
gripper:
torque_enable: false

+ 17
- 0
examples/aloha/benchmark/ros2/config/puppet_modes_left.yaml View File

@@ -0,0 +1,17 @@
port: /dev/ttyDXL_puppet_left

groups:
arm:
operating_mode: position
profile_type: velocity
profile_velocity: 0
profile_acceleration: 0
torque_enable: true

singles:
gripper:
operating_mode: linear_position
profile_type: velocity
profile_velocity: 0
profile_acceleration: 0
torque_enable: true

+ 17
- 0
examples/aloha/benchmark/ros2/config/puppet_modes_right.yaml View File

@@ -0,0 +1,17 @@
port: /dev/ttyDXL_puppet_right

groups:
arm:
operating_mode: position
profile_type: velocity
profile_velocity: 0
profile_acceleration: 0
torque_enable: true

singles:
gripper:
operating_mode: linear_position
profile_type: velocity
profile_velocity: 0
profile_acceleration: 0
torque_enable: true

+ 4
- 0
examples/aloha/benchmark/ros2/dataflow.yml View File

@@ -0,0 +1,4 @@
nodes:
- id: control
custom:
source: teleop.py

+ 22
- 0
examples/aloha/benchmark/ros2/setup_ros2.sh View File

@@ -0,0 +1,22 @@
# setup ros2

## Source: https://docs.trossenrobotics.com/interbotix_xsarms_docs/ros_interface/ros2/software_setup.html

sudo apt update

sudo apt install curl vim git -y

curl 'https://raw.githubusercontent.com/Interbotix/interbotix_ros_manipulators/humble/interbotix_ros_xsarms/install/amd64/xsarm_amd64_install.sh' > xsarm_amd64_install.sh

source /opt/ros/$ROS_DISTRO/setup.bash

chmod +x xsarm_amd64_install.sh

./xsarm_amd64_install.sh -d humble -n

source /opt/ros/$ROS_DISTRO/setup.bash

source ~/interbotix_ws/install/setup.bash

ros2 pkg list | grep interbotix


+ 90
- 0
examples/aloha/benchmark/ros2/teleop.py View File

@@ -0,0 +1,90 @@
import dora
from dora import Node
import pyarrow as pa
import numpy as np


ros2_context = dora.experimental.ros2_bridge.Ros2Context()
ros2_node = ros2_context.new_node(
"robot_model_master",
"/dora",
dora.experimental.ros2_bridge.Ros2NodeOptions(rosout=True),
)

# Define a ROS2 QOS
topic_qos = dora.experimental.ros2_bridge.Ros2QosPolicies(
reliable=True, max_blocking_time=0.1
)

# Create a publisher for the puppet arm joint group commands
puppet_arm_command = ros2_node.create_publisher(
ros2_node.create_topic(
"/robot_model_puppet/commands/joint_group",
"interbotix_xs_msgs/JointGroupCommand",
topic_qos,
)
)

gripper_command = ros2_node.create_publisher(
ros2_node.create_topic(
"/robot_model_puppet/commands/joint_single",
"interbotix_xs_msgs/JointSingleCommand",
topic_qos,
)
)
# Create a subscription to the master arm joint states
master_state = ros2_node.create_subscription(
ros2_node.create_topic(
"/robot_model_master/joint_states", "sensor_msgs/JointState", topic_qos
)
)

# Create a dora node
dora_node = Node()

# Listen for both streams on the same loop, as Python does not handle multiprocessing well
dora_node.merge_external_events(master_state)

PUPPET_GRIPPER_MAX = 0.115
PUPPET_GRIPPER_MIN = 0.0965

MASTER_GRIPPER_MAX = 0.8
MASTER_GRIPPER_MIN = -0.1

RATIO = (PUPPET_GRIPPER_MAX - PUPPET_GRIPPER_MIN) / (
MASTER_GRIPPER_MAX - MASTER_GRIPPER_MIN
)

for event in dora_node:
event_kind = event["kind"]

# ROS2 Event
if event_kind == "external":
pose = event.inner()[0]

values = np.array(pose["position"].values, dtype=np.float32)
values[6] = np.clip(
(values[6] - MASTER_GRIPPER_MIN) * RATIO + PUPPET_GRIPPER_MIN,
PUPPET_GRIPPER_MIN,
PUPPET_GRIPPER_MAX,
)
gripper_command.publish(
pa.array(
[
{
"name": "gripper",
"cmd": np.float32(values[6]),
}
]
),
)
puppet_arm_command.publish(
pa.array(
[
{
"name": "arm",
"cmd": values[:6],
}
]
),
)

+ 9
- 0
examples/aloha/benchmark/rust/README.md View File

@@ -0,0 +1,9 @@
# Rust powered ALOHA

## Getting started

Modify the port and servo motor configuration for your device within `aloha-teleop`, then run:

```bash
cargo run -p aloha-teleop --release
```

+ 141
- 0
examples/aloha/graphs/eval.yml View File

@@ -0,0 +1,141 @@
nodes:
- id: aloha-client
custom:
source: cargo
args: run --release -p aloha-client
inputs:
puppet_goal_position: eval/action
tick: dora/timer/millis/5
outputs:
- puppet_position

- id: plot
custom:
source: ../nodes/plot_node.py
inputs:
image: cam_right_wrist/image
action: eval/action
puppet_position: aloha-client/puppet_position
envs:
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480

- id: cam_left_wrist
custom:
source: ../nodes/webcam.py
inputs:
tick: dora/timer/millis/33
outputs:
- image
envs:
CAMERA_ID: 2

- id: cam_right_wrist
custom:
source: ../nodes/webcam.py
inputs:
tick: dora/timer/millis/33
outputs:
- image
envs:
CAMERA_ID: 22

- id: cam_low
custom:
source: ../nodes/webcam.py
inputs:
tick: dora/timer/millis/33
outputs:
- image
envs:
CAMERA_ID: 14

- id: cam_high
custom:
source: ../nodes/webcam.py
inputs:
tick: dora/timer/millis/33
outputs:
- image
envs:
CAMERA_ID: 8

- id: eval
custom:
source: python
args: /home/rcadene/dora_lerobot/dora_lerobot/scripts/eval.py -p cadene/aloha_act_no_state_aloha_v2_static_dora_test_wrist_gripper eval.n_episodes=1 eval.batch_size=1 env.episode_length=20000
inputs:
agent_pos: aloha-client/puppet_position
cam_left_wrist: cam_left_wrist/image
cam_right_wrist: cam_right_wrist/image
cam_low: cam_low/image
cam_high: cam_high/image
outputs:
- action

- id: keyboard
custom:
source: ../nodes/keyboard_node.py
inputs:
heartbeat: dora/timer/millis/20
outputs:
- space
- failed

- id: cam_saver_left_wrist
custom:
source: ../nodes/lerobot_webcam_saver.py
inputs:
image: cam_left_wrist/image
record_episode: keyboard/space
outputs:
- saved_image
envs:
CAMERA_NAME: observation.images.cam_left_wrist

- id: cam_saver_right_wrist
custom:
source: ../nodes/lerobot_webcam_saver.py
inputs:
image: cam_right_wrist/image
record_episode: keyboard/space
outputs:
- saved_image
envs:
CAMERA_NAME: observation.images.cam_right_wrist

- id: cam_saver_low
custom:
source: ../nodes/lerobot_webcam_saver.py
inputs:
image: cam_low/image
record_episode: keyboard/space
outputs:
- saved_image
envs:
CAMERA_NAME: observation.images.cam_low

- id: cam_saver_high
custom:
source: ../nodes/lerobot_webcam_saver.py
inputs:
image: cam_high/image
record_episode: keyboard/space
outputs:
- saved_image
envs:
CAMERA_NAME: observation.images.cam_high

- id: dora-record
custom:
build: cargo install --git https://github.com/dora-rs/dora dora-record
source: dora-record
inputs:
action: eval/action
observation.state: aloha-client/puppet_position
episode_index: keyboard/space
failed_episode_index: keyboard/failed
observation.images.cam_left_wrist: cam_saver_left_wrist/saved_image
observation.images.cam_right_wrist: cam_saver_right_wrist/saved_image
observation.images.cam_low: cam_saver_low/saved_image
observation.images.cam_high: cam_saver_high/saved_image

+ 38
- 0
examples/aloha/graphs/gym.yml View File

@@ -0,0 +1,38 @@
nodes:
- id: aloha-client
custom:
source: cargo
args: run -p aloha-client --release
inputs:
puppet_goal_position: dora-gym/action
tick: dora/timer/millis/33
outputs:
- puppet_position

- id: plot
custom:
source: ../nodes/plot_node.py
inputs:
image: cam_left_wrist/image
envs:
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480

- id: cam_left_wrist
custom:
source: ../nodes/webcam.py
inputs:
tick: dora/timer/millis/33
outputs:
- image
envs:
CAMERA_ID: 2

- id: dora-gym
custom:
source: ../nodes/gym_dora_node.py
inputs:
agent_pos: aloha-client/puppet_position
cam_left_wrist: cam_left_wrist/image
outputs:
- action

+ 162
- 0
examples/aloha/graphs/record_2arms_teleop.yml View File

@@ -0,0 +1,162 @@
nodes:
- id: teleop_right
custom:
source: cargo
args: run --release -p aloha-teleop
inputs:
heartbeat: dora/timer/millis/20
outputs:
- puppet_goal_position
- puppet_position
- puppet_velocity
- puppet_current

- id: dora-record
custom:
build: cargo install --git https://github.com/dora-rs/dora dora-record
source: dora-record
inputs:
action: teleop_right/puppet_goal_position
observation.state: teleop_right/puppet_position
observation.velocity: teleop_right/puppet_velocity
observation.effort: teleop_right/puppet_current
episode_index: keyboard/space
failed_episode_index: keyboard/failed
observation.images.cam_left_wrist: cam_saver_left_wrist/saved_image
observation.images.cam_right_wrist: cam_saver_right_wrist/saved_image
observation.images.cam_low: cam_saver_low/saved_image
observation.images.cam_high: cam_saver_high/saved_image

- id: cam_left_wrist
custom:
source: ../nodes/webcam.py
inputs:
tick: dora/timer/millis/33
outputs:
- image
envs:
CAMERA_ID: 2

- id: cam_right_wrist
custom:
source: ../nodes/webcam.py
inputs:
tick: dora/timer/millis/33
outputs:
- image
envs:
CAMERA_ID: 22

- id: cam_low
custom:
source: ../nodes/webcam.py
inputs:
tick: dora/timer/millis/33
outputs:
- image
envs:
CAMERA_ID: 14

- id: cam_high
custom:
source: ../nodes/webcam.py
inputs:
tick: dora/timer/millis/33
outputs:
- image
envs:
CAMERA_ID: 8

- id: keyboard
custom:
source: ../nodes/keyboard_node.py
inputs:
heartbeat: dora/timer/millis/20
outputs:
- space
- failed

- id: cam_saver_left_wrist
custom:
source: ../nodes/lerobot_webcam_saver.py
inputs:
image: cam_left_wrist/image
record_episode: keyboard/space
outputs:
- saved_image
envs:
CAMERA_NAME: observation.images.cam_left_wrist

- id: cam_saver_right_wrist
custom:
source: ../nodes/lerobot_webcam_saver.py
inputs:
image: cam_right_wrist/image
record_episode: keyboard/space
outputs:
- saved_image
envs:
CAMERA_NAME: observation.images.cam_right_wrist

- id: cam_saver_low
custom:
source: ../nodes/lerobot_webcam_saver.py
inputs:
image: cam_low/image
record_episode: keyboard/space
outputs:
- saved_image
envs:
CAMERA_NAME: observation.images.cam_low

- id: cam_saver_high
custom:
source: ../nodes/lerobot_webcam_saver.py
inputs:
image: cam_high/image
record_episode: keyboard/space
outputs:
- saved_image
envs:
CAMERA_NAME: observation.images.cam_high

# Realsense cameras seem to require specific power, which makes them unreliable in our current setup
# - id: cam_left_wrist
# custom:
# source: ../nodes/realsense_node.py
# inputs:
# tick: dora/timer/millis/2
# outputs:
# - image
# envs:
# CAMERA_ID: 128422271614

# - id: cam_right_wrist
# custom:
# source: ../nodes/realsense_node.py
# inputs:
# tick: dora/timer/millis/2
# outputs:
# - image
# envs:
# CAMERA_ID: 128422270109

# - id: cam_low
# custom:
# source: ../nodes/realsense_node.py
# inputs:
# tick: dora/timer/millis/2
# outputs:
# - image
# envs:
# CAMERA_ID: 128422271393

# - id: cam_high
# custom:
# source: ../nodes/realsense_node.py
# inputs:
# tick: dora/timer/millis/2
# outputs:
# - image
# envs:
# CAMERA_ID: 128422271609

+ 32
- 0
examples/aloha/graphs/record_teleop.yml View File

@@ -0,0 +1,32 @@
nodes:
- id: teleop
custom:
source: cargo
args: run -p aloha-teleop --release
outputs:
- puppet_goal_position

- id: dora-record
custom:
build: cargo install --git https://github.com/dora-rs/dora dora-record
source: dora-record
inputs:
puppet_goal_position: teleop/puppet_goal_position

- id: plot
custom:
source: ../nodes/plot_node.py
inputs:
image: webcam/image
position: teleop/puppet_goal_position
envs:
IMAGE_WIDTH: 1280
IMAGE_HEIGHT: 720
- id: webcam
custom:
source: ../nodes/realsense_node.py
inputs:
tick: dora/timer/millis/20
outputs:
- image

+ 61
- 0
examples/aloha/graphs/replay.yml View File

@@ -0,0 +1,61 @@
nodes:
- id: aloha-client
custom:
source: cargo
args: run -p aloha-client --release
inputs:
puppet_goal_position: replay/puppet_goal_position
tick: dora/timer/millis/20
outputs:
- puppet_position

- id: replay
custom:
source: ../nodes/replay.py
inputs:
action: policy/action
outputs:
- puppet_goal_position

- id: whisper
custom:
source: ../nodes/whisper_node.py
inputs:
tick: dora/timer/millis/20
outputs:
- text_llm
- text_policy

- id: llm
operator:
python: ../nodes/llm_op.py
inputs:
text: whisper/text_llm

- id: policy
operator:
python: ../nodes/policy.py
inputs:
speech: whisper/text_policy
outputs:
- action

- id: plot
custom:
source: ../nodes/plot_node.py
inputs:
image: webcam/image
position: aloha-client/puppet_position
text_policy: whisper/text_policy
text_llm: whisper/text_llm
envs:
IMAGE_WIDTH: 1280
IMAGE_HEIGHT: 720
- id: webcam
custom:
source: ../nodes/realsense_node.py
inputs:
tick: dora/timer/millis/20
outputs:
- image

+ 4
- 0
examples/aloha/hardware_config/99-fixed-interbotix-udev.rules View File

@@ -0,0 +1,4 @@
SUBSYSTEM=="tty", ATTRS{serial}=="FT891KPN", ENV{ID_MM_DEVICE_IGNORE}="1", ATTR{device/latency_timer}="1", SYMLINK+="ttyDXL_master_left"
SUBSYSTEM=="tty", ATTRS{serial}=="FT89FM77", ENV{ID_MM_DEVICE_IGNORE}="1", ATTR{device/latency_timer}="1", SYMLINK+="ttyDXL_master_right"
SUBSYSTEM=="tty", ATTRS{serial}=="FT89FM09", ENV{ID_MM_DEVICE_IGNORE}="1", ATTR{device/latency_timer}="1", SYMLINK+="ttyDXL_puppet_left"
SUBSYSTEM=="tty", ATTRS{serial}=="FT891KBG", ENV{ID_MM_DEVICE_IGNORE}="1", ATTR{device/latency_timer}="1", SYMLINK+="ttyDXL_puppet_right"

+ 24
- 0
examples/aloha/hardware_config/99-interbotix-udev.rules View File

@@ -0,0 +1,24 @@
# Place this file in /etc/udev/rules.d/
# Then reload udev by typing 'udevadm control --reload-rules && udevadm trigger'
# Sets up rules to give permanent names to devices

# Allow serial devices to be read by anyone
KERNEL=="ttyUSB*", MODE:="0666"
KERNEL=="ttyACM*", MODE:="0666"
KERNEL=="js*", MODE:="0666"
KERNEL=="video*", MODE:="0666"

# OpenCM9.04C board
SUBSYSTEM=="tty", ATTRS{idVendor}=="fff1", ATTRS{idProduct}=="ff48", ENV{ID_MM_DEVICE_IGNORE}="1", SYMLINK+="ttyDXL"

# U2D2 board (also sets latency timer to 1ms for faster communication)
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6014", ENV{ID_MM_DEVICE_IGNORE}="1", ATTR{device/latency_timer}="1", SYMLINK+="ttyDXL"

# RPLidar
SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", SYMLINK+="rplidar"

# Kobuki base
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="kobuki*", ATTR{device/latency_timer}="1", MODE:="0666", GROUP:="dialout", SYMLINK+="kobuki"

# LifeCam Cinema
SUBSYSTEM=="video4linux", ATTRS{idVendor}=="045e", ATTRS{idProduct}=="0812", ATTR{index}=="0", SYMLINK+="lifecam"

+ 13
- 0
examples/aloha/nodes/aloha-client/Cargo.toml View File

@@ -0,0 +1,13 @@
[package]
name = "aloha-client"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]

serialport = "4.2.0"
rustypot = { git = "https://github.com/haixuanTao/rustypot", branch = "angular-conversion" }
eyre = "0.6.12"
dora-node-api = "0.3.4"

+ 104
- 0
examples/aloha/nodes/aloha-client/src/main.rs View File

@@ -0,0 +1,104 @@
use dora_node_api::{
arrow::array::Float64Array, dora_core::config::DataId, DoraNode, Event, IntoArrow,
};
use eyre::{Context, Result};
use rustypot::{device::xm, DynamixelSerialIO};
use std::time::Duration;

fn main() -> Result<()> {
let (mut node, mut events) = DoraNode::init_from_env()?;
let mut puppet_left_serial_port = serialport::new("/dev/ttyDXL_puppet_left", 1_000_000)
.timeout(Duration::from_millis(2))
.open()
.expect("Failed to open port");
let mut puppet_right_serial_port = serialport::new("/dev/ttyDXL_puppet_right", 1_000_000)
.timeout(Duration::from_millis(2))
.open()
.expect("Failed to open port");
let io = DynamixelSerialIO::v2();
xm::sync_write_torque_enable(
&io,
puppet_left_serial_port.as_mut(),
&[1, 2, 3, 4, 5, 6, 7, 8, 9],
&[1; 9],
)
.expect("Communication error");
xm::sync_write_torque_enable(
&io,
puppet_right_serial_port.as_mut(),
&[1, 2, 3, 4, 5, 6, 7, 8, 9],
&[1; 9],
)
.expect("Communication error");
xm::sync_write_operating_mode(&io, puppet_left_serial_port.as_mut(), &[9], &[5])
.expect("Communication error");
xm::sync_write_operating_mode(&io, puppet_right_serial_port.as_mut(), &[9], &[5])
.expect("Communication error");

while let Some(Event::Input {
id,
metadata: _,
data,
}) = events.recv()
{
match id.as_str() {
"puppet_goal_position" => {
let buffer: Float64Array = data
.to_data()
.try_into()
.context("Could not parse `puppet_goal_position` as float64")?;
let target: &[f64] = buffer.values();
let mut angular = target
.iter()
.map(|&x| xm::conv::radians_to_pos(x))
.collect::<Vec<_>>();
angular.insert(2, angular[1]);
angular.insert(4, angular[3]);
angular.insert(11, angular[10]);
angular.insert(13, angular[12]);
xm::sync_write_goal_position(
&io,
puppet_left_serial_port.as_mut(),
&[1, 2, 3, 4, 5, 6, 7, 8, 9],
&angular[..9],
)
.expect("Communication error");
xm::sync_write_goal_position(
&io,
puppet_right_serial_port.as_mut(),
&[1, 2, 3, 4, 5, 6, 7, 8, 9],
&angular[9..18],
)
.expect("Communication error");
}
"tick" => {
let mut pos_left = xm::sync_read_present_position(
&io,
puppet_left_serial_port.as_mut(),
&[1, 2, 4, 6, 7, 8, 9],
)
.expect("Communication error");
let pos_right = xm::sync_read_present_position(
&io,
puppet_right_serial_port.as_mut(),
&[1, 2, 4, 6, 7, 8, 9],
)
.expect("Communication error");
pos_left.extend_from_slice(&pos_right);

let pos: Vec<f64> = pos_left
.iter()
.map(|&x| xm::conv::pos_to_radians(x))
.collect();
node.send_output(
DataId::from("puppet_position".to_owned()),
Default::default(),
pos.into_arrow(),
)?;
}
_ => todo!(),
};
}

Ok(())
}

+ 15
- 0
examples/aloha/nodes/aloha-teleop/Cargo.toml View File

@@ -0,0 +1,15 @@
[package]
name = "aloha-teleop"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]

serialport = "4.2.0"
rustypot = { git = "https://github.com/haixuanTao/rustypot", branch = "angular-conversion" }
eyre = "0.6.12"
clap = { version = "4.5.4", features = ["derive"] }
dora-node-api = "0.3.4"
lazy_static = "1.4.0"

+ 38
- 0
examples/aloha/nodes/aloha-teleop/src/_main.rs View File

@@ -0,0 +1,38 @@
use rustypot::{device::xm, DynamixelSerialIO};
use std::time::{Duration, Instant};

fn main() {
let mut master_serial_port = serialport::new("/dev/ttyDXL_master_right", 1_000_000)
.timeout(Duration::from_millis(200))
.open()
.expect("Failed to open port");
let mut puppet_serial_port = serialport::new("/dev/ttyDXL_puppet_right", 1_000_000)
.timeout(Duration::from_millis(200))
.open()
.expect("Failed to open port");

let io = DynamixelSerialIO::v2();
xm::sync_write_torque_enable(
&io,
puppet_serial_port.as_mut(),
&[1, 2, 3, 4, 5, 6, 7, 8],
&[1; 8],
)
.expect("Communication error");

loop {
let pos = xm::sync_read_present_position(
&io,
master_serial_port.as_mut(),
&[1, 2, 3, 4, 5, 6, 7, 8],
)
.expect("Communication error");
xm::sync_write_goal_position(
&io,
puppet_serial_port.as_mut(),
&[1, 2, 3, 4, 5, 6, 7, 8],
&pos,
)
.expect("Communication error");
}
}

+ 242
- 0
examples/aloha/nodes/aloha-teleop/src/main.rs View File

@@ -0,0 +1,242 @@
use dora_node_api::{dora_core::config::DataId, DoraNode, IntoArrow, MetadataParameters};
use eyre::{Context, Result};
use rustypot::{device::xm, DynamixelSerialIO};
use serialport::SerialPort;
use std::{
sync::mpsc,
thread::JoinHandle,
time::{Duration, Instant},
};
const MAX_MASTER_GRIPER: f64 = 0.7761942786701344;
const MAX_PUPPET_GRIPER: f64 = 1.6827769243105486;

const MIN_MASTER_GRIPER: f64 = -0.12732040539450828;
const MIN_PUPPET_GRIPER: f64 = 0.6933593161243099;

use clap::Parser;

/// Simple aloha teleop program for recording data
#[derive(Parser, Debug)]
#[command(version, about, long_about = None)]
struct Args {
#[arg(short, long, default_value = "/dev/ttyDXL_master_left")]
master_left_path: String,

#[arg(short, long, default_value = "/dev/ttyDXL_puppet_left")]
puppet_left_path: String,

#[arg(short, long, default_value = "/dev/ttyDXL_master_right")]
master_right_path: Option<String>,

#[arg(short, long, default_value = "/dev/ttyDXL_puppet_right")]
puppet_right_path: Option<String>,
}

enum State {
Position(Vec<f64>),
Velocity(Vec<f64>),
Current(Vec<u16>),
GoalPosition(Vec<f64>),
}
#[derive(Clone, Copy)]
enum Side {
Left,
Right,
}

fn main_multithreaded(
io: DynamixelSerialIO,
mut master_serial_port: Box<dyn SerialPort>,
mut puppet_serial_port: Box<dyn SerialPort>,
side: Side,
tx_dora: mpsc::Sender<(Side, State)>,
) -> Result<JoinHandle<()>> {
let (tx, rx) = mpsc::channel();
let tx_dora_read = tx_dora.clone();
std::thread::spawn(move || loop {
let now = Instant::now();
let pos = xm::sync_read_present_position(
&io,
master_serial_port.as_mut(),
&[1, 2, 4, 6, 7, 8, 9], // 3 and 5 are skipped as they share the same position as 2 and 4
)
.expect("Read Communication error");
let radians: Vec<f64> = pos.iter().map(|&x| xm::conv::pos_to_radians(x)).collect();
tx.send((now, radians.clone())).unwrap();
tx_dora.send((side, State::Position(radians))).unwrap();
let vel = xm::sync_read_present_velocity(
&io,
master_serial_port.as_mut(),
&[1, 2, 4, 6, 7, 8, 9], // 3 and 5 are skipped as they share the same position as 2 and 4
)
.expect("Read Communication error");
let rad_per_sec: Vec<f64> = vel
.iter()
.map(|&x| xm::conv::abs_speed_to_rad_per_sec(x))
.collect();
tx_dora.send((side, State::Velocity(rad_per_sec))).unwrap();
let load =
xm::sync_read_present_current(&io, master_serial_port.as_mut(), &[1, 2, 4, 6, 7, 8, 9]) // 3 and 5 are skipped as they share the same position as 2 and 4
.expect("Read Communication error");
tx_dora.send((side, State::Current(load))).unwrap();
});

let io = DynamixelSerialIO::v2();
let join = std::thread::spawn(move || {
while let Ok((_now, mut pos)) = rx.recv() {
pos[6] = (pos[6] - MIN_MASTER_GRIPER) * (MAX_PUPPET_GRIPER - MIN_PUPPET_GRIPER)
/ (MAX_MASTER_GRIPER - MIN_MASTER_GRIPER)
+ MIN_PUPPET_GRIPER;
tx_dora_read
.send((side, State::GoalPosition(pos.clone())))
.unwrap();
pos.insert(2, pos[1]);
pos.insert(4, pos[3]);
let pos: Vec<u32> = pos.iter().map(|&x| xm::conv::radians_to_pos(x)).collect();
// The gripper value was linearly interpolated above, as the master and puppet ranges mismatch
xm::sync_write_goal_position(
&io,
puppet_serial_port.as_mut(),
&[1, 2, 3, 4, 5, 6, 7, 8, 9],
&pos,
)
.expect("Write Communication error");
// println!("elapsed time: {:?}", now.elapsed());
}
});

Ok(join)
}

fn main() -> Result<()> {
let args = Args::parse();
let (tx_dora, rx_dora) = mpsc::channel();
// right arm
let join_right = if let (Some(master_right_path), Some(puppet_right_path)) =
(args.master_right_path.clone(), args.puppet_right_path)
{
let master_right_serial_port = serialport::new(master_right_path, 1000000)
.timeout(Duration::from_millis(2))
.open()
.context("Failed to open port")?;
let mut puppet_right_serial_port = serialport::new(puppet_right_path, 1000000)
.timeout(Duration::from_millis(2))
.open()
.context("Failed to open port")?;

let io = DynamixelSerialIO::v2();
xm::sync_write_torque_enable(
&io,
puppet_right_serial_port.as_mut(),
&[1, 2, 3, 4, 5, 6, 7, 8, 9],
&[1; 9],
)
.expect("Communication error");
// Set operating mode to current based position control to not overload the gripper
xm::sync_write_operating_mode(&io, puppet_right_serial_port.as_mut(), &[9], &[5])
.expect("Communication error");
Some(main_multithreaded(
io,
master_right_serial_port,
puppet_right_serial_port,
Side::Right,
tx_dora.clone(),
)?)
} else {
None
};

// Left arm
let master_left_serial_port = serialport::new(args.master_left_path, 1000000)
.timeout(Duration::from_millis(2))
.open()
.context("Failed to open port")?;
let mut puppet_left_serial_port = serialport::new(args.puppet_left_path, 1000000)
.timeout(Duration::from_millis(2))
.open()
.context("Failed to open port")?;
let io = DynamixelSerialIO::v2();
xm::sync_write_torque_enable(
&io,
puppet_left_serial_port.as_mut(),
&[1, 2, 3, 4, 5, 6, 7, 8, 9],
&[1; 9],
)
.expect("Communication error");

// Set operating mode to current based position control to not overload the gripper
xm::sync_write_operating_mode(&io, puppet_left_serial_port.as_mut(), &[9], &[5])
.expect("Communication error");

let join_left = main_multithreaded(
io,
master_left_serial_port,
puppet_left_serial_port,
Side::Left,
tx_dora,
)?;

if std::env::var("DORA_NODE_CONFIG").is_ok() {
let (mut node, mut events) = DoraNode::init_from_env()?;
let mut pos_right = Vec::new();
let mut vel_right = Vec::new();
let mut current_right = Vec::new();
let mut goal_right = Vec::new();
while let Ok((side, target)) = rx_dora.recv() {
match side {
Side::Left => {
let parameters = MetadataParameters::default();

// Skip if we don't have any data from the right arm
if args.master_right_path.is_some() && pos_right.is_empty() {
continue;
}
match target {
State::Position(mut pos) => {
pos.extend_from_slice(&pos_right);
let output = DataId::from("puppet_position".to_owned());
node.send_output(output.clone(), parameters, pos.into_arrow())?;
}
State::Velocity(mut vel) => {
vel.extend_from_slice(&vel_right);
let output = DataId::from("puppet_velocity".to_owned());
node.send_output(output.clone(), parameters, vel.into_arrow())?;
}
State::Current(mut load) => {
load.extend_from_slice(&current_right);
let output = DataId::from("puppet_current".to_owned());
node.send_output(output.clone(), parameters, load.into_arrow())?;
}
State::GoalPosition(mut pos) => {
pos.extend_from_slice(&goal_right);
let output = DataId::from("puppet_goal_position".to_owned());
node.send_output(output.clone(), parameters, pos.into_arrow())?;
}
}
if events.recv_timeout(Duration::from_nanos(100)).is_none() {
println!("Events channel finished");
break;
}
}
Side::Right => match target {
State::Position(pos) => {
pos_right = pos;
}
State::Velocity(vel) => {
vel_right = vel;
}
State::Current(load) => {
current_right = load;
}
State::GoalPosition(pos) => {
goal_right = pos;
}
},
}
}
} else {
join_left.join().unwrap();
join_right.map(|join| join.join().unwrap());
};
Ok(())
}

+ 66
- 0
examples/aloha/nodes/gym_dora_node.py View File

@@ -0,0 +1,66 @@
import gymnasium as gym

import gym_dora # noqa: F401
import pandas as pd
import time
from pathlib import Path

env = gym.make(
"gym_dora/DoraAloha-v0", disable_env_checker=True, max_episode_steps=10000
)
observation = env.reset()


class ReplayPolicy:
def __init__(self, example_path, episode=0):
df_action = pd.read_parquet(example_path / "action.parquet")
df_episode_index = pd.read_parquet(example_path / "episode_index.parquet")
self.df = pd.merge_asof(
df_action[["timestamp_utc", "action"]],
df_episode_index[["timestamp_utc", "episode_index"]],
on="timestamp_utc",
direction="backward",
)
# self.df["episode_index"] = self.df["episode_index"].map(lambda x: x[0])
self.df = self.df[self.df["episode_index"] == episode]
self.current_time = self.df["timestamp_utc"].iloc[0]
self.topic = "action"
self.index = 0
self.finished = False

def select_action(self, obs):
if self.index < len(self.df):
self.index += 1
else:
self.finished = True
row = self.df.iloc[self.index]
delta_time = (row["timestamp_utc"] - self.current_time).microseconds
self.current_time = row["timestamp_utc"]
if delta_time > 0:
time.sleep(delta_time / 1_000_000)
return row[self.topic], self.finished


# policy = ReplayPolicy(
# Path(
# "/home/rcadene/dora-aloha/aloha/graphs/out/018fa076-ad19-7c77-afa4-49f7f072e86f"
# )
# )

policy = ReplayPolicy(
Path(
"/home/rcadene/dora-aloha/aloha/graphs/out/018fa4ad-5942-7235-93d3-3efebe9b8a12"
)
)


done = False
while not done:
actions, finished = policy.select_action(observation)

observation, reward, terminated, truncated, info = env.step(actions)
if terminated:
print(observation, reward, terminated, truncated, info, flush=True)
done = terminated | truncated | done | finished

env.close()

+ 31
- 0
examples/aloha/nodes/keyboard_node.py View File

@@ -0,0 +1,31 @@
from pynput import keyboard
from pynput.keyboard import Key, Events
import pyarrow as pa
from dora import Node


node = Node()
buffer_text = ""
space = False
submitted_text = []
cursor = -1
with keyboard.Events() as events:
while True:
event = events.get(0.1)
if event is not None and isinstance(event, Events.Press):
if event.key == Key.space and not space:
cursor += 1
node.send_output("space", pa.array([cursor]))
space = True

elif event is not None and isinstance(event, Events.Release):
if event.key == Key.space:
node.send_output("space", pa.array([-1]))
space = False
elif event.key == Key.backspace:
node.send_output("failed", pa.array([cursor]))

if node.next(0.001) is None:
break

+ 78
- 0
examples/aloha/nodes/lerobot_webcam_saver.py View File

@@ -0,0 +1,78 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import os
import time
import numpy as np
import cv2
import pyarrow as pa
from pathlib import Path
from dora import Node
import subprocess

node = Node()

CAMERA_NAME = os.getenv("CAMERA_NAME", "camera")
CAMERA_WIDTH = 640
CAMERA_HEIGHT = 480
FPS = 30

i = 0
episode = -1
dataflow_id = node.dataflow_id()

BASE = Path("out") / dataflow_id / "videos"
out_dir = BASE / f"{CAMERA_NAME}_episode_{episode:06d}"

for event in node:
event_type = event["type"]
if event_type == "INPUT":
if event["id"] == "record_episode":
record_episode = event["value"].to_numpy()[0]
print(f"Recording episode {record_episode}", flush=True)
# Save Episode Video
if episode != -1 and record_episode == -1:
out_dir = BASE / f"{CAMERA_NAME}_episode_{episode:06d}"
fname = f"{CAMERA_NAME}_episode_{episode:06d}.mp4"
video_path = BASE / fname
# Save video
ffmpeg_cmd = (
f"ffmpeg -r {FPS} "
"-f image2 "
"-loglevel error "
f"-i {str(out_dir / 'frame_%06d.png')} "
"-vcodec libx264 "
"-g 2 "
"-pix_fmt yuv444p "
f"{str(video_path)} &&"
f"rm -r {str(out_dir)}"
)
print(ffmpeg_cmd, flush=True)
subprocess.Popen([ffmpeg_cmd], start_new_session=True, shell=True)
episode = record_episode

# Make new directory and start saving images
elif episode == -1 and record_episode != -1:
episode = record_episode
out_dir = BASE / f"{CAMERA_NAME}_episode_{episode:06d}"
out_dir.mkdir(parents=True, exist_ok=True)
i = 0
else:
continue

elif event["id"] == "image":
# Only record images while inside an episode.
# episode == -1 marks the periods when we are not recording.
if episode == -1:
continue

fname = f"{CAMERA_NAME}_episode_{episode:06d}.mp4"
node.send_output(
"saved_image",
pa.array([{"path": f"videos/{fname}", "timestamp": i / FPS}]),
event["metadata"],
)
image = event["value"].to_numpy().reshape((CAMERA_HEIGHT, CAMERA_WIDTH, 3))
path = str(out_dir / f"frame_{i:06d}.png")
cv2.imwrite(path, image)
i += 1

+ 234
- 0
examples/aloha/nodes/llm_op.py View File

@@ -0,0 +1,234 @@
from dora import DoraStatus
import pylcs
import os
import pyarrow as pa
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

import gc # garbage collect library
import re
import time

CHATGPT = False
MODEL_NAME_OR_PATH = "TheBloke/deepseek-coder-6.7B-instruct-GPTQ"

CODE_MODIFIER_TEMPLATE = """
### Instruction
Respond with one block of modified code only in ```python block. No explanation.

```python
{code}
```

{user_message}

### Response:
"""


model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME_OR_PATH,
device_map="auto",
trust_remote_code=True,
revision="main",
max_length=1024,
).to("cuda:0")


tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME_OR_PATH, use_fast=True)


def extract_python_code_blocks(text):
"""
Extracts Python code blocks from the given text that are enclosed in triple backticks with a python language identifier.

Parameters:
- text: A string that may contain one or more Python code blocks.

Returns:
- A list of strings, where each string is a block of Python code extracted from the text.
"""
pattern = r"```python\n(.*?)\n```"
matches = re.findall(pattern, text, re.DOTALL)
if len(matches) == 0:
pattern = r"```python\n(.*?)(?:\n```|$)"
matches = re.findall(pattern, text, re.DOTALL)
if len(matches) == 0:
return [text]
else:
matches = [remove_last_line(matches[0])]

return matches


def remove_last_line(python_code):
"""
Removes the last line from a given string of Python code.

Parameters:
- python_code: A string representing Python source code.

Returns:
- A string with the last line removed.
"""
lines = python_code.split("\n") # Split the string into lines
if lines: # Check if there are any lines to remove
lines.pop() # Remove the last line
return "\n".join(lines) # Join the remaining lines back into a string


def calculate_similarity(source, target):
"""
Calculate a similarity score between the source and target strings.
This uses the edit distance relative to the length of the strings.
"""
edit_distance = pylcs.edit_distance(source, target)
max_length = max(len(source), len(target))
# Normalize the score by the maximum possible edit distance (the length of the longer string)
similarity = 1 - (edit_distance / max_length)
return similarity


def find_best_match_location(source_code, target_block):
"""
Find the best match for the target_block within the source_code by searching line by line,
considering blocks of varying lengths.
"""
source_lines = source_code.split("\n")
target_lines = target_block.split("\n")

best_similarity = 0
best_start_index = 0
best_end_index = -1

# Iterate over the source lines to find the best matching range for all lines in target_block
for start_index in range(len(source_lines) - len(target_lines) + 1):
for end_index in range(start_index + len(target_lines), len(source_lines) + 1):
current_window = "\n".join(source_lines[start_index:end_index])
current_similarity = calculate_similarity(current_window, target_block)
if current_similarity > best_similarity:
best_similarity = current_similarity
best_start_index = start_index
best_end_index = end_index

    if best_end_index == -1:
        # No window was evaluated (target_block is longer than the source): signal no match.
        return -1, -1

    # Convert line indices back to character indices for replacement
    char_start_index = len("\n".join(source_lines[:best_start_index])) + (
        1 if best_start_index > 0 else 0
    )
    char_end_index = len("\n".join(source_lines[:best_end_index]))

    return char_start_index, char_end_index


def replace_code_in_source(source_code, replacement_block: str):
"""
Replace the best matching block in the source_code with the replacement_block, considering variable block lengths.
"""
replacement_block = extract_python_code_blocks(replacement_block)[0]
start_index, end_index = find_best_match_location(source_code, replacement_block)
if start_index != -1 and end_index != -1:
# Replace the best matching part with the replacement block
new_source = (
source_code[:start_index] + replacement_block + source_code[end_index:]
)
return new_source
else:
return source_code


class Operator:
def __init__(self) -> None:
self.policy_init = False

def on_event(
self,
dora_event,
send_output,
) -> DoraStatus:
global model, tokenizer
if dora_event["type"] == "INPUT" and dora_event["id"] == "text":
input = dora_event["value"][0].as_py()
# Path to the current file
current_file_path = __file__

# Directory of the current file
current_directory = os.path.dirname(current_file_path)
path = current_directory + "/policy.py"

with open(path, "r", encoding="utf8") as f:
code = f.read()

user_message = input
start_llm = time.time()

output = self.ask_llm(
CODE_MODIFIER_TEMPLATE.format(code=code, user_message=user_message)
)

source_code = replace_code_in_source(code, output)
print("response time:", time.time() - start_llm, flush=True)

print("response: ", output, flush=True)
with open(path, "w") as file:
file.write(source_code)

gc.collect()
torch.cuda.empty_cache()

return DoraStatus.CONTINUE

def ask_llm(self, prompt):

# Generate output
# prompt = PROMPT_TEMPLATE.format(system_message=system_message, prompt=prompt))
input = tokenizer(prompt, return_tensors="pt")
input_ids = input.input_ids.cuda()

# add attention mask here
attention_mask = input.attention_mask.cuda()

output = model.generate(
inputs=input_ids,
temperature=0.7,
do_sample=True,
top_p=0.95,
top_k=40,
max_new_tokens=512,
attention_mask=attention_mask,
eos_token_id=tokenizer.eos_token_id,
)
# Get the tokens from the output, decode them, print them

# Get text between im_start and im_end
return tokenizer.decode(output[0], skip_special_tokens=True)[len(prompt) :]


if __name__ == "__main__":
op = Operator()

# Path to the current file
current_file_path = __file__

# Directory of the current file
current_directory = os.path.dirname(current_file_path)

path = current_directory + "/policy.py"
with open(path, "r", encoding="utf8") as f:
raw = f.read()

op.on_event(
{
"type": "INPUT",
"id": "text",
"value": pa.array(
[
{
"path": path,
"user_message": "When I say suit up, get the hat and the get the food.",
},
]
),
"metadata": [],
},
print,
)

+ 53
- 0
examples/aloha/nodes/plot_node.py

@@ -0,0 +1,53 @@
import os
import cv2
from dora import Node


IMAGE_WIDTH = int(os.getenv("IMAGE_WIDTH", "1280"))
IMAGE_HEIGHT = int(os.getenv("IMAGE_HEIGHT", "720"))
FONT = cv2.FONT_HERSHEY_SIMPLEX


node = Node()

joint = None
text = None

for event in node:
if event["type"] == "INPUT":
dora_id = event["id"]

if dora_id == "position":
joint = event["value"].to_numpy()
if "text" in dora_id:
text = event["value"][0].as_py()

if dora_id == "image":
image = (
event["value"].to_numpy().reshape((IMAGE_HEIGHT, IMAGE_WIDTH, 3)).copy()
)
if text is not None:
cv2.putText(
image,
f"Speech: {text}",
(20, 40),
FONT,
0.5,
(190, 250, 0),
2,
)

if joint is not None:
cv2.putText(
image,
f"pos: {joint}",
(20, 20),
FONT,
0.5,
(190, 250, 100),
2,
)

cv2.imshow("frame", image)
if cv2.waitKey(1) & 0xFF == ord("q"):
break

+ 17
- 0
examples/aloha/nodes/policy.py

@@ -0,0 +1,17 @@
import pyarrow as pa
from dora import DoraStatus


class Operator:
def __init__(self):
self.actions = ["get_food", "get_hat"]

def on_event(self, event: dict, send_output) -> DoraStatus:
if event["type"] == "INPUT":
id = event["id"]
# On initialization
if id == "speech":
text: str = event["value"][0].as_py().lower()
# send_output("action", pa.array([""]))

return DoraStatus.CONTINUE

+ 28
- 0
examples/aloha/nodes/realsense_node.py

@@ -0,0 +1,28 @@
import pyrealsense2 as rs
import numpy as np
from dora import Node
import pyarrow as pa
import os
import cv2

IMAGE_WIDTH = int(os.getenv("IMAGE_WIDTH", "640"))
IMAGE_HEIGHT = int(os.getenv("IMAGE_HEIGHT", "480"))
FPS = 30
CAMERA_ID = os.getenv("CAMERA_ID")
print("camera ID:", CAMERA_ID)
pipe = rs.pipeline()
config = rs.config()
config.enable_device(CAMERA_ID)
config.enable_stream(rs.stream.color, IMAGE_WIDTH, IMAGE_HEIGHT, rs.format.bgr8, FPS)
profile = pipe.start(config)

node = Node()

for event in node:
frames = pipe.wait_for_frames()
color_frame = frames.get_color_frame()
color_images = np.asanyarray(color_frame.get_data())
node.send_output("image", pa.array(color_images.ravel()))
cv2.imshow(CAMERA_ID, color_images)
if cv2.waitKey(1) & 0xFF == ord("q"):
break

+ 27
- 0
examples/aloha/nodes/replay.py

@@ -0,0 +1,27 @@
from dora import Node
import pandas as pd
import pyarrow as pa
import time


TOPIC = "puppet_goal_position"


if __name__ == "__main__":
node = Node()

for event in node:
if event["type"] == "INPUT":
for action in event["value"]:
print(action, flush=True)
action = action.as_py()

df = pd.read_parquet(action + ".parquet")

initial_time = df["timestamp_utc"].iloc[0]
current_time = initial_time
for index, row in df.iterrows():
                    delta_time = (row["timestamp_utc"] - current_time).total_seconds()
                    current_time = row["timestamp_utc"]
                    # .total_seconds() covers the full gap; .microseconds alone drops whole seconds
                    time.sleep(max(delta_time, 0))
node.send_output(TOPIC, pa.array(row[TOPIC], type=pa.uint32()))
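
The pacing loop sleeps for the gap between consecutive timestamps; a minimal stdlib sketch of that delta computation (the datetimes are illustrative stand-ins for the parquet timestamps):

```python
from datetime import datetime, timedelta

t0 = datetime(2024, 1, 1, 12, 0, 0)
t1 = t0 + timedelta(seconds=1, microseconds=500)

# .total_seconds() covers the full gap; .microseconds alone would drop the whole seconds.
delta = (t1 - t0).total_seconds()
print(delta)  # 1.0005
```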

+ 45
- 0
examples/aloha/nodes/webcam.py

@@ -0,0 +1,45 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import os
import numpy as np
import cv2
import pyarrow as pa

from dora import Node

node = Node()

CAMERA_ID = int(os.getenv("CAMERA_ID", 0))
CAMERA_WIDTH = 640
CAMERA_HEIGHT = 480
video_capture = cv2.VideoCapture(CAMERA_ID)
font = cv2.FONT_HERSHEY_SIMPLEX


for event in node:
event_type = event["type"]
if event_type == "INPUT":
ret, frame = video_capture.read()
if not ret:
frame = np.zeros((CAMERA_HEIGHT, CAMERA_WIDTH, 3), dtype=np.uint8)
        cv2.putText(
            frame,
            f"No Webcam was found at index {CAMERA_ID}",
            (30, 30),
            font,
            0.75,
            (255, 255, 255),
            2,
            1,
        )
node.send_output(
"image",
pa.array(frame.ravel()),
event["metadata"],
)
cv2.imshow(str(CAMERA_ID), frame)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
video_capture.release()

+ 59
- 0
examples/aloha/nodes/whisper_node.py

@@ -0,0 +1,59 @@
import pyarrow as pa
import whisper
from pynput import keyboard
from pynput.keyboard import Key, Events
from dora import Node

import torch
import numpy as np
import sounddevice as sd
import gc  # garbage collect library

model = whisper.load_model("base")

SAMPLE_RATE = 16000

node = Node()


def get_text(duration) -> dict:

## Microphone
audio_data = sd.rec(
int(SAMPLE_RATE * duration),
samplerate=SAMPLE_RATE,
channels=1,
dtype=np.int16,
blocking=True,
)

audio = audio_data.ravel().astype(np.float32) / 32768.0

## Speech to text
audio = whisper.pad_or_trim(audio)
return model.transcribe(audio, language="en")


## Check for keyboard event
with keyboard.Events() as events:
for dora_event in node:
if dora_event["type"] == "INPUT":
event = events.get(0.1)
if (
event is not None
and (event.key == Key.alt_r or event.key == Key.ctrl_r)
and isinstance(event, Events.Press)
):
if event.key == Key.alt_r:
result = get_text(5)
node.send_output(
"text_llm", pa.array([result["text"]]), dora_event["metadata"]
)
elif event.key == Key.ctrl_r:
result = get_text(3)
node.send_output(
"text_policy",
pa.array([result["text"]]),
dora_event["metadata"],
)

+ 24
- 0
examples/aloha/tests/test_realsense.py

@@ -0,0 +1,24 @@
import pyrealsense2 as rs
import cv2
import numpy as np
import os

CAMERA_ID = os.getenv("CAMERA_ID")
pipe = rs.pipeline()
config = rs.config()
config.enable_device(CAMERA_ID)
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 60)
profile = pipe.start(config)


try:
for i in range(0, 1000):
frames = pipe.wait_for_frames()
color_frame = frames.get_color_frame()
color_images = np.asanyarray(color_frame.get_data())
color_images = cv2.cvtColor(color_images, cv2.COLOR_BGR2RGB)
cv2.imshow("frame", color_images)
if cv2.waitKey(1) & 0xFF == ord("q"):
break
finally:
pipe.stop()

+ 63
- 0
examples/lebai/graphs/dataflow.yml

@@ -0,0 +1,63 @@
nodes:
- id: lebai-client
build: pip install -e ../../../node-hub/lebai-client
path: lebai-client
inputs:
movec:
source: interpolation/movec
queue_size: 1
movej:
source: interpolation/movej
queue_size: 1
claw: interpolation/claw
stop: interpolation/stop
save: interpolation/save
go_to: interpolation/go_to
record: interpolation/record
cut: interpolation/cut
play: interpolation/play
teach: interpolation/teach
end_teach: interpolation/end_teach

- id: keyboard-listener
build: pip install -e ../../../../dora/node-hub/keyboard-listener
path: keyboard-listener
outputs:
- char

- id: dora-microphone
build: pip install -e ../../../../dora/node-hub/dora-microphone
path: dora-microphone
outputs:
- audio

- id: dora-distil-whisper
build: pip install -e ../../../../dora/node-hub/dora-distil-whisper
path: dora-distil-whisper
inputs:
input: dora-microphone/audio
outputs:
- text

- id: terminal-print
path: dynamic
inputs:
text: dora-distil-whisper/text

- id: interpolation
path: ../nodes/interpolation.py
inputs:
keyboard: keyboard-listener/char
text: dora-distil-whisper/text
outputs:
- movec
- movej
- claw
- stop
- save
- record
- play
- go_to
- cut
- teach
- end_teach

+ 128
- 0
examples/lebai/graphs/dataflow_full.yml

@@ -0,0 +1,128 @@
nodes:
- id: lebai-client
build: pip install -e ../../../node-hub/lebai-client
path: lebai-client
inputs:
movec:
source: interpolation/movec
queue_size: 1
movej:
source: interpolation/movej
queue_size: 1
claw: interpolation/claw
stop: interpolation/stop
save: interpolation/save
go_to: interpolation/go_to
record: interpolation/record
cut: interpolation/cut
play: interpolation/play
teach: interpolation/teach
end_teach: interpolation/end_teach

- id: keyboard-listener
build: pip install dora-keyboard
path: dora-keyboard
outputs:
- char

- id: dora-microphone
build: pip install dora-microphone
path: dora-microphone
outputs:
- audio

- id: dora-distil-whisper
build: pip install dora-distil-whisper
path: dora-distil-whisper
inputs:
input: dora-microphone/audio
outputs:
- text

- id: key-interpolation
path: ../nodes/key_interpolation.py
inputs:
keyboard: keyboard-listener/char
outputs:
- text

- id: interpolation
path: ../nodes/interpolation.py
inputs:
# keyboard: keyboard-listener/char
# text: dora-distil-whisper/text
vlm: dora-qwenvl/text
text: dora-qwenvl-recorder/text
outputs:
- movec
- movej
- claw
- stop
- save
- record
- play
- go_to
- cut
- teach
- end_teach

- id: camera
build: pip install opencv-video-capture
path: opencv-video-capture
inputs:
tick: dora/timer/millis/16
outputs:
- image
env:
CAPTURE_PATH: 8
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480

- id: plot
build: cargo install dora-rerun --locked
path: dora-rerun
inputs:
image:
source: camera/image
queue_size: 1
text_vlm: dora-qwenvl/text
env:
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480

- id: dora-qwenvl-recorder
build: pip install -e ../../../dora/node-hub/llama-factory-recorder
path: llama-factory-recorder
inputs:
image:
source: camera/image
queue_size: 1
ground_truth: key-interpolation/text
outputs:
- text
env:
DEFAULT_QUESTION: Respond with left, right, forward, back, up or go home in order for the robotic arm to get the cup.
LLAMA_FACTORY_ROOT_PATH: /home/peter/Documents/work/LLaMA-Factory/data

- id: dora-qwenvl
build: pip install -e ../../../dora/node-hub/dora-qwenvl
path: dora-qwenvl
inputs:
image:
source: camera/image
queue_size: 1
tick:
source: key-interpolation/text
queue_size: 1
text: terminal-input/data
outputs:
- text
env:
CUSTOM_MODEL_PATH: /home/peter/Documents/work/LLaMA-Factory/saves/qwen2_vl-2b/lora-dora-demo/sft
DEFAULT_QUESTION: Respond with left, right, forward, back, up or go home in order for the robotic arm to get the cup.

- id: terminal-input
build: pip install ../../node-hub/terminal-input
path: dynamic
outputs:
- data

+ 88
- 0
examples/lebai/graphs/keyboard_teleop.yml

@@ -0,0 +1,88 @@
nodes:
- id: lebai-client
build: pip install -e ../../../node-hub/lebai-client
path: lebai-client
inputs:
movec:
source: interpolation/movec
queue_size: 1
movej:
source: interpolation/movej
queue_size: 1
claw: interpolation/claw
stop: interpolation/stop
save: interpolation/save
go_to: interpolation/go_to
record: interpolation/record
cut: interpolation/cut
play: interpolation/play
teach: interpolation/teach
end_teach: interpolation/end_teach

- id: keyboard-listener
build: pip install dora-keyboard
path: dora-keyboard
outputs:
- char

- id: key-interpolation
path: ../nodes/key_interpolation.py
inputs:
keyboard: keyboard-listener/char
outputs:
- text

- id: interpolation
path: ../nodes/interpolation.py
inputs:
text: dora-qwenvl-recorder/text
outputs:
- movec
- movej
- claw
- stop
- save
- record
- play
- go_to
- cut
- teach
- end_teach

- id: camera
build: pip install opencv-video-capture
path: opencv-video-capture
inputs:
tick: dora/timer/millis/16
outputs:
- image
env:
CAPTURE_PATH: 8
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480

- id: plot
build: cargo install dora-rerun --locked
path: dora-rerun
inputs:
image:
source: camera/image
queue_size: 1
text_vlm: dora-qwenvl-recorder/text
env:
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480

- id: dora-qwenvl-recorder
build: pip install -e ../../../dora/node-hub/llama-factory-recorder
path: llama-factory-recorder
inputs:
image:
source: camera/image
queue_size: 1
ground_truth: key-interpolation/text
outputs:
- text
env:
DEFAULT_QUESTION: Respond with left, right, forward, back, up or go home in order for the robotic arm to get the cup.
LLAMA_FACTORY_ROOT_PATH: /home/peter/Documents/work/LLaMA-Factory

+ 115
- 0
examples/lebai/graphs/qwenvl2.yml

@@ -0,0 +1,115 @@
nodes:
- id: lebai-client
build: pip install -e ../../../node-hub/lebai-client
path: lebai-client
inputs:
movec:
source: interpolation/movec
queue_size: 1
movej:
source: interpolation/movej
queue_size: 1
claw: interpolation/claw
stop: interpolation/stop
save: interpolation/save
go_to: interpolation/go_to
record: interpolation/record
cut: interpolation/cut
play: interpolation/play
teach: interpolation/teach
end_teach: interpolation/end_teach

- id: keyboard-listener
build: pip install dora-keyboard
path: dora-keyboard
outputs:
- char

- id: dora-microphone
build: pip install dora-microphone
path: dora-microphone
outputs:
- audio

- id: dora-distil-whisper
build: pip install dora-distil-whisper
path: dora-distil-whisper
inputs:
input: dora-microphone/audio
outputs:
- text

- id: key-interpolation
path: ../nodes/key_interpolation.py
inputs:
keyboard: keyboard-listener/char
outputs:
- text

- id: prompt-interpolation
path: ../nodes/prompt_interpolation.py
inputs:
text: dora-distil-whisper/text
outputs:
- text

- id: interpolation
path: ../nodes/interpolation.py
inputs:
keyboard: key-interpolation/text
vlm: dora-qwenvl/tick
outputs:
- movec
- movej
- claw
- stop
- save
- record
- play
- go_to
- cut
- teach
- end_teach

- id: camera
build: pip install opencv-video-capture
path: opencv-video-capture
inputs:
tick: dora/timer/millis/16
outputs:
- image
env:
CAPTURE_PATH: 8
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480

- id: plot
build: cargo install dora-rerun --locked
path: dora-rerun
inputs:
image:
source: camera/image
queue_size: 1
text_vlm: dora-qwenvl/tick
text_whisper: dora-distil-whisper/text
env:
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480

- id: dora-qwenvl
build: pip install -e ../../../dora/node-hub/dora-qwenvl
path: dora-qwenvl
inputs:
image:
source: camera/image
queue_size: 1
tick:
source: key-interpolation/text
queue_size: 1
text: prompt-interpolation/text
outputs:
- text
- tick
env:
CUSTOM_MODEL_PATH: /home/peter/Documents/work/LLaMA-Factory/saves/qwen2_vl-2b/lora-dora-demo/sft
DEFAULT_QUESTION: Respond with left, right, forward, back, up, down or go home in order for the robotic arm to get the cup.

+ 1
- 0
examples/lebai/graphs/train.sh

@@ -0,0 +1 @@
D

+ 96
- 0
examples/lebai/graphs/voice_teleop.yml

@@ -0,0 +1,96 @@
nodes:
- id: lebai-client
build: pip install -e ../../../node-hub/lebai-client
path: lebai-client
inputs:
movec:
source: interpolation/movec
queue_size: 1
movej:
source: interpolation/movej
queue_size: 1
claw: interpolation/claw
stop: interpolation/stop
save: interpolation/save
go_to: interpolation/go_to
record: interpolation/record
cut: interpolation/cut
play: interpolation/play
teach: interpolation/teach
end_teach: interpolation/end_teach

- id: dora-microphone
build: pip install dora-microphone
path: dora-microphone
outputs:
- audio

- id: dora-distil-whisper
build: pip install dora-distil-whisper
path: dora-distil-whisper
inputs:
input: dora-microphone/audio
outputs:
- text

- id: voice-interpolation
path: ../nodes/voice_interpolation.py
inputs:
text: dora-distil-whisper/text
outputs:
- text

- id: interpolation
path: ../nodes/interpolation.py
inputs:
text: dora-qwenvl-recorder/text
outputs:
- movec
- movej
- claw
- stop
- save
- record
- play
- go_to
- cut
- teach
- end_teach

- id: camera
build: pip install opencv-video-capture
path: opencv-video-capture
inputs:
tick: dora/timer/millis/16
outputs:
- image
env:
CAPTURE_PATH: 8
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480

- id: plot
build: cargo install dora-rerun --locked
path: dora-rerun
inputs:
image:
source: camera/image
queue_size: 1
text_vlm: dora-qwenvl-recorder/text
env:
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480

- id: dora-qwenvl-recorder
build: pip install -e ../../../dora/node-hub/llama-factory-recorder
path: llama-factory-recorder
inputs:
image:
source: camera/image
queue_size: 1
ground_truth: voice-interpolation/text
outputs:
- text
env:
DEFAULT_QUESTION: Respond with left, right, forward, back, up or go home in order for the robotic arm to get the cup.
LLAMA_FACTORY_ROOT_PATH: /home/peter/Documents/work/LLaMA-Factory

+ 60
- 0
examples/lebai/nodes/interpolation.py

@@ -0,0 +1,60 @@
from dora import Node
import pyarrow as pa
from enum import Enum

node = Node()


class Action(Enum):
"""
Action abstraction
"""

YAW_RIGHT = ("yaw right", "movej", [0, 0, 0, 0, -0.1, 0, 0.1])
YAW_LEFT = ("yaw left", "movej", [0, 0, 0, 0, 0.1, 0, 0.1])
PITCH_UP = ("pitch up", "movej", [0, 0, 0, -0.1, 0, 0, 0.1])
    PITCH_DOWN = ("pitch down", "movej", [0, 0, 0, 0.1, 0, 0, 0.1])
ROLL_LEFT = ("roll left", "movej", [0, 0, 0, 0, 0, 0.1, 0.1])
ROLL_RIGHT = ("roll right", "movej", [0, 0, 0, 0, 0, -0.1, 0.1])
YAW_SHOULDER_RIGHT = (
"yaw shoulder right",
"movej",
[-0.1, 0, 0, 0, 0, 0, 0.1],
)
YAW_SHOULDER_LEFT = (
"yaw shoulder left",
"movej",
[0.1, 0, 0, 0, 0, 0, 0.1],
)
FORWARD = ("forward", "movec", [-0.02, 0, 0, 0, 0, 0, 0.1])
BACK = ("back", "movec", [0.02, 0, 0, 0, 0, 0, 0.1])
LEFT = ("left", "movec", [0, -0.02, 0, 0, 0, 0, 0.1])
RIGHT = ("right", "movec", [0, 0.02, 0, 0, 0, 0, 0.1])
UP = ("up", "movec", [0, 0, 0.02, 0, 0, 0, 0.1])
DOWN = ("down", "movec", [0, 0, -0.02, 0, 0, 0, 0.1])
CLOSE = ("close", "claw", [0])
OPEN = ("open", "claw", [100])
STOP = ("stop", "stop", [])
SAVE = ("save", "save", [])
GO_TO = ("go", "go_to", [])
END_TEACH = ("end of teach", "end_teach", [])
TEACH = ("teach", "teach", [])


for event in node:
if event["type"] == "INPUT":
        text = event["value"][0].as_py().lower()
        text = text.replace(".", "")

        # Match the more specific "go to " command before the bare "go" prefix.
        if "save" in text:
            node.send_output("save", pa.array([text.replace("save ", "")]))
        elif "go to " in text:
            node.send_output("go_to", pa.array([text.replace("go to ", "")]))
        elif "go" in text:
            node.send_output("go_to", pa.array([text.replace("go ", "")]))
        else:
            for action in Action:
                if action.value[0] in text:
                    node.send_output(action.value[1], pa.array(action.value[2]))
                    break
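
The text-to-command dispatch above can be factored into a small testable helper; a hedged sketch with a trimmed stand-in for the `Action` table (the `dispatch` helper is hypothetical, payload values are copied from the enum above):

```python
ACTIONS = {
    # name fragment -> (output topic, payload); a trimmed stand-in for the Action enum
    "forward": ("movec", [-0.02, 0, 0, 0, 0, 0, 0.1]),
    "up": ("movec", [0, 0, 0.02, 0, 0, 0, 0.1]),
    "open": ("claw", [100]),
    "close": ("claw", [0]),
}


def dispatch(text: str):
    text = text.lower().replace(".", "")
    # More specific prefixes must be checked before shorter ones ("go to" before "go").
    if "go to " in text:
        return ("go_to", text.replace("go to ", ""))
    if "save" in text:
        return ("save", text.replace("save ", ""))
    for fragment, (topic, payload) in ACTIONS.items():
        if fragment in text:
            return (topic, payload)
    return None  # unmapped utterance: the caller can simply skip it


print(dispatch("Move forward."))  # ('movec', [-0.02, 0, 0, 0, 0, 0, 0.1])
```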

+ 49
- 0
examples/lebai/nodes/key_interpolation.py

@@ -0,0 +1,49 @@
from dora import Node
import pyarrow as pa

node = Node()


for event in node:
if event["type"] == "INPUT":
if event["id"] == "keyboard":
char = event["value"][0].as_py()

if char == "w":
node.send_output("text", pa.array(["forward"]))
elif char == "s":
node.send_output("text", pa.array(["back"]))
elif char == "c":
node.send_output("text", pa.array([" go home"]))
elif char == "d":
node.send_output("text", pa.array(["right"]))
elif char == "a":
node.send_output("text", pa.array(["left"]))
elif char == "e":
node.send_output("text", pa.array(["up"]))
elif char == "q":
node.send_output("text", pa.array(["down"]))
elif char == "t":
node.send_output("text", pa.array(["close"]))
elif char == "r":
node.send_output("text", pa.array(["open"]))
elif char == "6":
node.send_output("text", pa.array(["yaw right"]))
elif char == "4":
node.send_output("text", pa.array(["yaw left"]))
elif char == "3":
node.send_output("text", pa.array(["yaw shoulder right"]))
elif char == "1":
node.send_output("text", pa.array(["yaw shoulder left"]))
elif char == "8":
node.send_output("text", pa.array(["pitch up"]))
elif char == "2":
node.send_output("text", pa.array(["pitch down"]))
elif char == "7":
node.send_output("text", pa.array(["roll left"]))
elif char == "9":
node.send_output("text", pa.array(["roll right"]))
elif char == "x":
node.send_output("text", pa.array(["stop"]))
elif char == "j":
node.send_output("text", pa.array([""]))

+ 21
- 0
examples/lebai/nodes/prompt_interpolation.py

@@ -0,0 +1,21 @@
from dora import Node
import pyarrow as pa

node = Node()


for event in node:
if event["type"] == "INPUT":
if event["id"] == "text":
text = event["value"][0].as_py()
text = text.lower()
if "get" in text:
node.send_output(
"text",
pa.array(
[
"Respond with left, right, forward, back, up, down or go home in order for the robotic arm to "
+ text
]
),
)

+ 1
- 0
examples/lebai/nodes/vlm_prompt.py

@@ -0,0 +1 @@


+ 22
- 0
examples/lebai/nodes/voice_interpolation.py

@@ -0,0 +1,22 @@
from dora import Node
import pyarrow as pa

node = Node()


for event in node:
if event["type"] == "INPUT":
if event["id"] == "text":
text = event["value"][0].as_py()
text = text.lower()
text = text.replace("saving", "save")
text = text.replace("teaching", "teach")
text = text.replace("turning left", "yaw left")
text = text.replace("turning right", "yaw right")
text = text.replace("turning up", "pitch up")
text = text.replace("turning down", "pitch down")
text = text.replace("rolling right", "roll right")
text = text.replace("rolling left", "roll left")
text = text.replace("turning shoulder right", "yaw shoulder right")
text = text.replace("turning shoulder left", "yaw shoulder left")
node.send_output("text", pa.array([text]))

+ 103
- 0
examples/reachy/README.md

@@ -0,0 +1,103 @@
## Reachy 2

### Installation

#### Installation SDK

```bash
### Install the sdk
git clone https://github.com/pollen-robotics/reachy2-sdk
cd reachy2-sdk
pip install -e .
cd ..

### Connect Camera USB
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"' | sudo tee /etc/udev/rules.d/80-movidius.rules
sudo udevadm control --reload-rules && sudo udevadm trigger

### Install Pollen vision
git clone https://github.com/pollen-robotics/pollen-vision.git
cd pollen-vision
git checkout lerobot_only_left_camera
pip install ".[depthai_wrapper]"
cd ..

### Teleoperation Collector
git clone https://github.com/pollen-robotics/reachy2_hdf5_recorder/
```

#### Installation dora-lerobot

```bash
## Create new python environment

git clone git@github.com:huggingface/dora_lerobot.git
pip install -e dora_lerobot
git clone git@github.com:dora-rs/dora-dora_lerobot.git --branch WORKING-REACHY
pip install -e dora-dora_lerobot/gym_dora

cargo install dora-rs --locked
pip install dora-rs
```

### AI Pipeline

### Data Collection

```bash
cd reachy2_hdf5_recorder
python3 record_episodes_hdf5.py -n <recording_session_name>_raw -l <episodes_duration in s> -r <framerate> --robot_ip <robot_ip>
```

```bash
huggingface-cli upload \
<hf-organisation>/<dataset_name> \
data/<recording_session_name>_raw/ \
--repo-type dataset (--private)
```

> ### 06/07/2021
>
> As of today, we need to use several branches:
>
> - mobile_base : branch 21 # server side, install manually
> - reachy-sdk-api : branch 116 # server and client side, install manually
> - mobile-base-sdk : branch 25 # client side, install manually
> - reachy2-sdk-server : branch 135 # server side, install manually
>
> Then push to HF hub!

### Training

```bash
python dora_lerobot/scripts/train.py \
policy=act_real \
env=aloha_real \
env.task=Reachy-v0 \
  dataset_repo_id=<org-id>/<data-id>
```

### Evaluation

```bash
dora start reachy/graphs/eval.yml --attach
```

### Reachy Initialization

```bash
ssh bedrock@192.168.1.51
```

```bash
cd dev_docker
sudo service stop


docker compose -f mode/dev.yaml up -d core

docker exec -it core bash

# In the docker container

ros2 launch reachy_bringup reachy.launch.py start_sdk_server:=true
```

+ 86
- 0
examples/reachy/nodes/gym_dora_node.py

@@ -0,0 +1,86 @@
import time
from pathlib import Path

import gymnasium as gym
import pandas as pd
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

import gym_dora # noqa: F401

env = gym.make(
"gym_dora/DoraReachy2-v0", disable_env_checker=True, max_episode_steps=10000
)
observation = env.reset()


class ReplayPolicy:
    def __init__(self, example_path, episode=0):
        df_action = pd.read_parquet(example_path / "action.parquet")
        df_episode_index = pd.read_parquet(example_path / "episode_index.parquet")
        self.df = pd.merge_asof(
            df_action[["timestamp_utc", "action"]],
            df_episode_index[["timestamp_utc", "episode_index"]],
            on="timestamp_utc",
            direction="backward",
        )
        # self.df["episode_index"] = self.df["episode_index"].map(lambda x: x[0])
        self.df = self.df[self.df["episode_index"] == episode]
self.current_time = self.df["timestamp_utc"].iloc[0]
self.topic = "action"
self.index = 0
self.finished = False

    def select_action(self, obs):
        # Stop incrementing at the last row so iloc cannot go out of bounds.
        if self.index < len(self.df) - 1:
            self.index += 1
        else:
            self.finished = True
        row = self.df.iloc[self.index]
        delta_time = (row["timestamp_utc"] - self.current_time).total_seconds()
        self.current_time = row["timestamp_utc"]
        if delta_time > 0:
            time.sleep(delta_time)
        return row[self.topic], self.finished


class ReplayLeRobotPolicy:
def __init__(self, episode=21):
self.index = 0
self.finished = False
# episode = 1
dataset = LeRobotDataset("cadene/reachy2_mobile_base")
from_index = dataset.episode_data_index["from"][episode]
to_index = dataset.episode_data_index["to"][episode]
self.states = dataset.hf_dataset["observation.state"][from_index:to_index]
self.actions = dataset.hf_dataset["action"][from_index:to_index]

    def select_action(self, obs):
        # Stop incrementing at the last action so indexing cannot go out of bounds.
        if self.index < len(self.actions) - 1:
            self.index += 1
        else:
            self.finished = True
        # time.sleep(1 / 30)
        return self.actions[self.index].numpy(), self.finished


# policy = ReplayPolicy(
# Path(
# "/home/rcadene/dora-aloha/aloha/graphs/out/018fa076-ad19-7c77-afa4-49f7f072e86f"
# )
# )

policy = ReplayLeRobotPolicy()

done = False
while not done:
actions, finished = policy.select_action(observation)

observation, reward, terminated, truncated, info = env.step(actions)
# cv2.imshow("frame", observation["pixels"]["cam_trunk"])
# if cv2.waitKey(1) & 0xFF == ord("q"):
# break
if terminated:
print(observation, reward, terminated, truncated, info, flush=True)
done = terminated | truncated | done | finished

env.close()

+ 31
- 0
examples/reachy/nodes/keyboard_node.py

@@ -0,0 +1,31 @@
from pynput import keyboard
from pynput.keyboard import Key, Events
import pyarrow as pa
from dora import Node


node = Node()
buffer_text = ""
space = False
submitted_text = []
cursor = -1
with keyboard.Events() as events:
while True:
event = events.get(0.1)
if event is not None and isinstance(event, Events.Press):
            if event.key == Key.space and not space:
cursor += 1
node.send_output("space", pa.array([cursor]))
space = True

elif event is not None and isinstance(event, Events.Release):
if event.key == Key.space:
node.send_output("space", pa.array([-1]))
space = False
elif event.key == Key.backspace:
node.send_output("failed", pa.array([cursor]))

if node.next(0.001) is None:
break

+ 78
- 0
examples/reachy/nodes/lerobot_webcam_saver.py

@@ -0,0 +1,78 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import os
import cv2
import pyarrow as pa
from pathlib import Path
from dora import Node
import subprocess

node = Node()

CAMERA_NAME = os.getenv("CAMERA_NAME", "camera")
CAMERA_WIDTH = 1280
CAMERA_HEIGHT = 800
FPS = 30

i = 0
episode = -1
dataflow_id = node.dataflow_id()

BASE = Path("out") / dataflow_id / "videos"
out_dir = BASE / f"{CAMERA_NAME}_episode_{episode:06d}"

for event in node:
event_type = event["type"]
if event_type == "INPUT":
if event["id"] == "record_episode":
record_episode = event["value"].to_numpy()[0]
print(f"Recording episode {record_episode}", flush=True)
# Save Episode Video
if episode != -1 and record_episode == -1:
out_dir = BASE / f"{CAMERA_NAME}_episode_{episode:06d}"
fname = f"{CAMERA_NAME}_episode_{episode:06d}.mp4"
video_path = BASE / fname
# Save video
ffmpeg_cmd = (
f"ffmpeg -r {FPS} "
"-f image2 "
"-loglevel error "
f"-i {str(out_dir / 'frame_%06d.png')} "
"-vcodec libx264 "
"-g 2 "
"-pix_fmt yuv444p "
f"{str(video_path)} &&"
f"rm -r {str(out_dir)}"
)
print(ffmpeg_cmd, flush=True)
subprocess.Popen([ffmpeg_cmd], start_new_session=True, shell=True)
episode = record_episode

# Make new directory and start saving images
elif episode == -1 and record_episode != -1:
episode = record_episode
out_dir = BASE / f"{CAMERA_NAME}_episode_{episode:06d}"
out_dir.mkdir(parents=True, exist_ok=True)
i = 0
else:
continue

elif event["id"] == "image":
            # Only record images while inside an episode;
            # episode == -1 marks periods that are not being recorded.
if episode == -1:
continue

fname = f"{CAMERA_NAME}_episode_{episode:06d}.mp4"
node.send_output(
"saved_image",
pa.array([{"path": f"videos/{fname}", "timestamp": i / FPS}]),
event["metadata"],
)
image = event["value"].to_numpy().reshape((CAMERA_HEIGHT, CAMERA_WIDTH, 3))
path = str(out_dir / f"frame_{i:06d}.png")
cv2.imwrite(path, image)
i += 1
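
The ffmpeg invocation above is assembled by string concatenation and run through a shell; a small sketch of the same command built as an argument list (the paths are illustrative), which sidesteps shell-quoting issues when passed to `subprocess` without `shell=True`:

```python
from pathlib import Path

FPS = 30
out_dir = Path("out/videos/camera_episode_000001")  # illustrative frame directory
video_path = Path("out/videos/camera_episode_000001.mp4")  # illustrative output

ffmpeg_cmd = [
    "ffmpeg", "-r", str(FPS),
    "-f", "image2",
    "-loglevel", "error",
    "-i", str(out_dir / "frame_%06d.png"),
    "-vcodec", "libx264",
    "-g", "2",
    "-pix_fmt", "yuv444p",
    str(video_path),
]
print(ffmpeg_cmd)
```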

+ 56
- 0
examples/reachy/nodes/plot_node.py

@@ -0,0 +1,56 @@
import os
import cv2
from dora import Node


IMAGE_WIDTH = int(os.getenv("IMAGE_WIDTH", "1280"))
IMAGE_HEIGHT = int(os.getenv("IMAGE_HEIGHT", "720"))
FONT = cv2.FONT_HERSHEY_SIMPLEX


node = Node()

joint = None
text = None
action = None

for event in node:
if event["type"] == "INPUT":
dora_id = event["id"]

if dora_id == "position":
joint = event["value"].to_numpy()
if "text" in dora_id:
text = event["value"][0].as_py()
if "action" in dora_id:
action = event["value"].to_numpy()

if dora_id == "image":
image = (
event["value"].to_numpy().reshape((IMAGE_HEIGHT, IMAGE_WIDTH, 3)).copy()
)
if action is not None:
cv2.putText(
image,
f"action: {action}",
(20, 40),
FONT,
0.5,
(190, 250, 0),
2,
)

if joint is not None:
cv2.putText(
image,
f"pos: {joint}",
(20, 20),
FONT,
0.5,
(190, 250, 100),
2,
)

cv2.imshow("frame", image)
if cv2.waitKey(1) & 0xFF == ord("q"):
break

+ 156
- 0
examples/reachy/nodes/reachy_client.py

@@ -0,0 +1,156 @@
import argparse
import os
import time
from pathlib import Path

# import h5py
import numpy as np
import pyarrow as pa
from dora import Node
from reachy2_sdk import ReachySDK

freq = 30

ROBOT_IP = "192.168.1.51"
# ROBOT_IP = "localhost"

reachy = ReachySDK(ROBOT_IP)

SIMULATION = False

reachy.turn_on()

time.sleep(1)
action = [
-0.11903145498601328,
0.11292280260403312,
0.48048914307403895,
-1.4491468779308918,
0.1895427567665842,
0.009599310885968814,
-0.20141099568014562,
2.2656896114349365,
-0.13212142437597074,
-0.07731808586334879,
-0.5141739976375295,
-1.512502329778286,
0.00034906585039886593,
0.3193952531149623,
0.40474185353748504,
2.2610876560211,
]


# reachy.l_arm.shoulder.pitch.goal_position = np.rad2deg(action[0])
# reachy.l_arm.shoulder.roll.goal_position = np.rad2deg(action[1])
# reachy.l_arm.elbow.yaw.goal_position = np.rad2deg(action[2])
# reachy.l_arm.elbow.pitch.goal_position = np.rad2deg(action[3])
# reachy.l_arm.wrist.roll.goal_position = np.rad2deg(action[4])
# reachy.l_arm.wrist.pitch.goal_position = np.rad2deg(action[5])
# reachy.l_arm.wrist.yaw.goal_position = np.rad2deg(action[6])
# reachy.l_arm.gripper.set_opening(min(100, max(0, action[7] * 40)))

# reachy.r_arm.shoulder.pitch.goal_position = np.rad2deg(action[8])
# reachy.r_arm.shoulder.roll.goal_position = np.rad2deg(action[9])
# reachy.r_arm.elbow.yaw.goal_position = np.rad2deg(action[10])
# reachy.r_arm.elbow.pitch.goal_position = np.rad2deg(action[11])
# reachy.r_arm.wrist.roll.goal_position = np.rad2deg(action[12])
# reachy.r_arm.wrist.pitch.goal_position = np.rad2deg(action[13])
# reachy.r_arm.wrist.yaw.goal_position = np.rad2deg(action[14])
# reachy.r_arm.gripper.set_opening(min(100, max(0, action[15] / 2.26 * 100)))


reachy.l_arm.goto_joints(action[0:7], duration=2.0, degrees=False)
reachy.r_arm.goto_joints(action[8:15], duration=2.0, degrees=False)

time.sleep(5)

node = Node()
for event in node:
id = event["id"]
match id:
case "action":
action = event["value"].to_numpy()

reachy.l_arm.shoulder.pitch.goal_position = np.rad2deg(action[0])
reachy.l_arm.shoulder.roll.goal_position = np.rad2deg(action[1])
reachy.l_arm.elbow.yaw.goal_position = np.rad2deg(action[2])
reachy.l_arm.elbow.pitch.goal_position = np.rad2deg(action[3])
reachy.l_arm.wrist.roll.goal_position = np.rad2deg(action[4])
reachy.l_arm.wrist.pitch.goal_position = np.rad2deg(action[5])
reachy.l_arm.wrist.yaw.goal_position = np.rad2deg(action[6])
# reachy.l_arm.gripper.set_opening(
# min(100, max(0, action[7] / 2.26 * 100))
# ) # replay true action value
reachy.l_arm.gripper.set_opening(
0 if action[7] < 2.0 else 100
) # trick to force the gripper to close fully

reachy.r_arm.shoulder.pitch.goal_position = np.rad2deg(action[8])
reachy.r_arm.shoulder.roll.goal_position = np.rad2deg(action[9])
reachy.r_arm.elbow.yaw.goal_position = np.rad2deg(action[10])
reachy.r_arm.elbow.pitch.goal_position = np.rad2deg(action[11])
reachy.r_arm.wrist.roll.goal_position = np.rad2deg(action[12])
reachy.r_arm.wrist.pitch.goal_position = np.rad2deg(action[13])
reachy.r_arm.wrist.yaw.goal_position = np.rad2deg(action[14])
# reachy.r_arm.gripper.set_opening(
# min(100, max(0, action[15] / 2.26 * 100))
# ) # replay true action value
reachy.r_arm.gripper.set_opening(
0 if action[15] < 2.0 else 100
) # trick to force the gripper to close fully
reachy.mobile_base.set_speed(action[16], action[17], np.rad2deg(action[18]))
reachy.head.neck.roll.goal_position = np.rad2deg(action[19])
reachy.head.neck.pitch.goal_position = np.rad2deg(action[20])
reachy.head.neck.yaw.goal_position = np.rad2deg(action[21])
case "tick":
if not SIMULATION:
mobile_base_pos = reachy.mobile_base.odometry
else:
mobile_base_pos = {"vx": 0, "vy": 0, "vtheta": 0}
qpos = {
"l_arm_shoulder_pitch": np.deg2rad(
reachy.l_arm.shoulder.pitch.present_position
),
"l_arm_shoulder_roll": np.deg2rad(
reachy.l_arm.shoulder.roll.present_position
),
"l_arm_elbow_yaw": np.deg2rad(reachy.l_arm.elbow.yaw.present_position),
"l_arm_elbow_pitch": np.deg2rad(
reachy.l_arm.elbow.pitch.present_position
),
"l_arm_wrist_roll": np.deg2rad(
reachy.l_arm.wrist.roll.present_position
),
"l_arm_wrist_pitch": np.deg2rad(
reachy.l_arm.wrist.pitch.present_position
),
"l_arm_wrist_yaw": np.deg2rad(reachy.l_arm.wrist.yaw.present_position),
"l_gripper": reachy.l_arm.gripper._present_position,
"r_arm_shoulder_pitch": np.deg2rad(
reachy.r_arm.shoulder.pitch.present_position
),
"r_arm_shoulder_roll": np.deg2rad(
reachy.r_arm.shoulder.roll.present_position
),
"r_arm_elbow_yaw": np.deg2rad(reachy.r_arm.elbow.yaw.present_position),
"r_arm_elbow_pitch": np.deg2rad(
reachy.r_arm.elbow.pitch.present_position
),
"r_arm_wrist_roll": np.deg2rad(
reachy.r_arm.wrist.roll.present_position
),
"r_arm_wrist_pitch": np.deg2rad(
reachy.r_arm.wrist.pitch.present_position
),
"r_arm_wrist_yaw": np.deg2rad(reachy.r_arm.wrist.yaw.present_position),
"r_gripper": reachy.r_arm.gripper._present_position,
"mobile_base_vx": np.deg2rad(mobile_base_pos["vx"]),
"mobile_base_vy": np.deg2rad(mobile_base_pos["vy"]),
"mobile_base_vtheta": np.deg2rad(mobile_base_pos["vtheta"]),
"head_roll": np.deg2rad(reachy.head.neck.roll.present_position),
"head_pitch": np.deg2rad(reachy.head.neck.pitch.present_position),
"head_yaw": np.deg2rad(reachy.head.neck.yaw.present_position),
}

node.send_output("agent_pos", pa.array(qpos.values()))
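Both grippers use a threshold trick instead of replaying the true recorded opening: any value below 2.0 rad closes fully, anything above opens fully. Isolated as plain functions (names hypothetical, constants copied from the code above):

```python
def gripper_opening_trick(raw, threshold=2.0):
    """Binarized opening in percent: force a full close below the threshold."""
    return 0 if raw < threshold else 100


def gripper_opening_true(raw, max_raw=2.26):
    """Commented-out alternative: replay the true opening, clamped to 0-100."""
    return min(100, max(0, raw / max_raw * 100))
```

The binarized version trades fidelity for a reliable grasp: small replay errors around a half-open gripper otherwise leave objects slipping.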

+ 73
- 0
examples/reachy/nodes/reachy_vision_client.py View File

@@ -0,0 +1,73 @@
import argparse
import os
import time
from pathlib import Path

import numpy as np
import pyarrow as pa
from dora import Node
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# import h5py
from pollen_vision.camera_wrappers.depthai import SDKWrapper
from pollen_vision.camera_wrappers.depthai.utils import get_config_file_path

freq = 30

cam_name = "cam_trunk"

time.sleep(5)
cam = SDKWrapper(get_config_file_path("CONFIG_SR"), fps=freq)
# ret, image = cap.read()

import cv2

episode = 1
dataset = LeRobotDataset("cadene/reachy2_teleop_remi")
from_index = dataset.episode_data_index["from"][episode]
to_index = dataset.episode_data_index["to"][episode]
actions = dataset.hf_dataset["action"][from_index:to_index]
states = dataset.hf_dataset["observation.state"][from_index:to_index]

images = dataset[0]["observation.images.cam_trunk"]


## Convert image from chw to hwc

# cv2.imwrite("test.jpg", (images.permute((1, 2, 0)).numpy() * 255).astype(np.uint8))
images = images.permute((1, 2, 0)).numpy() * 255
images = images.astype(np.uint8)
images = cv2.cvtColor(images, cv2.COLOR_RGB2BGR)

# start = time.time()
import PIL
import torch

# frame_hwc = (images.permute((1, 2, 0)) * 255).type(torch.uint8).cpu().numpy()

# PIL.Image.fromarray(frame_hwc).save( f"frame_.png")

node = Node()
index = 0


for event in node:
# import cv2

# while True:
id = event["id"]
cam_data, _, _ = cam.get_data()

left_rgb = cam_data["left"]
# cv2.imshow("frame", left_rgb)
# if cv2.waitKey(1) & 0xFF == ord("q"):
# images = dataset[from_index.item() + index]["observation.images.cam_trunk"]
# images = images.numpy().transpose() * 255
# images = images.astype(np.uint8)
# break
# index += 1

    # Convert image from BGR to RGB
    left_rgb = cv2.cvtColor(left_rgb, cv2.COLOR_BGR2RGB)
node.send_output("cam_trunk", pa.array(left_rgb.ravel()))

+ 21
- 0
examples/reachy/nodes/replay_node.py View File

@@ -0,0 +1,21 @@
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
import time
from dora import Node
import pyarrow as pa

episode = 1
dataset = LeRobotDataset("cadene/reachy2_teleop_remi")
from_index = dataset.episode_data_index["from"][episode]
to_index = dataset.episode_data_index["to"][episode]
actions = dataset.hf_dataset["action"][from_index:to_index]
states = dataset.hf_dataset["observation.state"][from_index:to_index]

images = dataset[from_index.item()]["observation.images.cam_trunk"]


node = Node()

time.sleep(1)
for state in states:
node.send_output("agent_pos", pa.array(state.numpy()))
time.sleep(1 / 30)

+ 103
- 0
examples/reachy1/README.md View File

@@ -0,0 +1,103 @@
## Reachy 2

### Installation

#### Installing the SDK

```bash
### Install the sdk
git clone https://github.com/pollen-robotics/reachy2-sdk
cd reachy2-sdk
pip install -e .
cd ..

### Connect Camera USB
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"' | sudo tee /etc/udev/rules.d/80-movidius.rules
sudo udevadm control --reload-rules && sudo udevadm trigger

### Install Pollen vision
git clone https://github.com/pollen-robotics/pollen-vision.git
cd pollen-vision
git checkout lerobot_only_left_camera
pip install ".[depthai_wrapper]"
cd ..

### Teleoperation Collector
git clone https://github.com/pollen-robotics/reachy2_hdf5_recorder/
```

#### Installing dora-lerobot

```bash
## Create new python environment

git clone git@github.com:huggingface/dora_lerobot.git
pip install -e dora_lerobot
git clone git@github.com:dora-rs/dora-dora_lerobot.git --branch WORKING-REACHY
pip install -e dora-dora_lerobot/gym_dora

cargo install dora-rs --locked
pip install dora-rs
```

### AI Pipeline

### Data Collection

```bash
cd reachy2_hdf5_recorder
python3 record_episodes_hdf5.py -n <recording_session_name>_raw -l <episodes_duration in s> -r <framerate> --robot_ip <robot_ip>
```

```bash
huggingface-cli upload \
<hf-organisation>/<dataset_name> \
data/<recording_session_name>_raw/ \
--repo-type dataset (--private)
```

> ### 06/07/2021
>
> As of today, we need to use several branches:
>
> - mobile_base : branch 21 # server side, install manually
> - reachy-sdk-api : branch 116 # server and client side, install manually
> - mobile-base-sdk : branch 25 # client side, install manually
> - reachy2-sdk-server : branch 135 # server side, install manually
> Then push to HF hub!

### Training

```bash
python dora_lerobot/scripts/train.py \
policy=act_real \
env=aloha_real \
env.task=Reachy-v0 \
  dataset_repo_id=<org-id>/<data-id>
```

### Evaluation

```bash
dora start reachy/graphs/eval.yml --attach
```

### Reachy Initialization

```bash
ssh bedrock@192.168.1.51
```

```bash
cd dev_docker
sudo service stop


docker compose -f mode/dev.yaml up -d core

docker exec -it core bash

# In the docker container

ros2 launch reachy_bringup reachy.launch.py start_sdk_server:=true
```

+ 80
- 0
examples/reachy1/graphs/eval.yml View File

@@ -0,0 +1,80 @@
nodes:
- id: reachy-client
custom:
source: ../nodes/reachy_client.py
inputs:
action: eval/action
tick: dora/timer/millis/33
outputs:
- agent_pos

- id: reachy-vision-client
custom:
source: ../nodes/reachy_vision_client.py
args: ../nodes/reachy_vision_client.py
inputs:
tick: dora/timer/millis/33
outputs:
- cam_trunk

- id: eval
custom:
source: /home/antoine/Pollen/dora_lerobot/dora_lerobot/scripts/eval.py
args: -p cadene/2024_06_06_act_reachy2_mobile_base_3_060000 eval.n_episodes=1 eval.batch_size=1 env.episode_length=20000 policy.n_action_steps=100
# source: ../nodes/gym_dora_node.py
inputs:
agent_pos: reachy-client/agent_pos
cam_trunk: reachy-vision-client/cam_trunk
outputs:
- action

- id: plot
custom:
source: ../nodes/plot_node.py
inputs:
image: reachy-vision-client/cam_trunk
position: reachy-client/agent_pos
text_action: eval/action
envs:
IMAGE_WIDTH: 1280
IMAGE_HEIGHT: 800

- id: lerobot-record
custom:
build: cargo install --git https://github.com/dora-rs/dora dora-record
source: dora-record
inputs:
observation.state: reachy-client/agent_pos
action: eval/action
observation.images.cam_trunk: cam_saver_trunk/saved_image
episode_index: keyboard/space

- id: cam_saver_trunk
custom:
source: ../nodes/lerobot_webcam_saver.py
inputs:
image: reachy-vision-client/cam_trunk
record_episode: keyboard/space
outputs:
- saved_image
envs:
CAMERA_NAME: observation.images.cam_trunk

- id: keyboard
custom:
source: ../nodes/keyboard_node.py
inputs:
heartbeat: dora/timer/millis/20
outputs:
- space
- failed



---

- [ ] Add support for EoF action space recording and action
- [ ] Add support for tokenized action
- [ ] Add support for multiple images, videos, ...
- [ ] Add support for multiple actions
- [ ] Add support for multiple observations

+ 74
- 0
examples/reachy1/graphs/qwenvl2.yml View File

@@ -0,0 +1,74 @@
nodes:
- id: dora-reachy1
build: pip install -e ../../../node-hub/dora-reachy1
path: dora-reachy1
inputs:
head_action: text-interpolation/head_action
antenna_action: text-interpolation/antenna_action
gripper_action: text-interpolation/gripper_action
r_arm_action: text-interpolation/r_arm_action

- id: keyboard-listener
build: pip install dora-keyboard
path: dora-keyboard
outputs:
- char

- id: key-interpolation
path: ../nodes/key_interpolation.py
inputs:
keyboard: keyboard-listener/char
outputs:
- r_arm_action
- head_action
- text

- id: text-interpolation
path: ../nodes/text_interpolation.py
inputs:
text: dora-qwenvl/tick
text_2: key-interpolation/text
outputs:
- head_action
- antenna_action
- gripper_action
- r_arm_action

- id: camera
build: pip install -e ../../../node-hub/dora-reachy1
path: dora-reachy1-vision
inputs:
tick: dora/timer/millis/16
outputs:
- image
env:
CAPTURE_PATH: 8
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480

- id: plot
build: cargo install dora-rerun --locked
path: dora-rerun
inputs:
image:
source: camera/image
queue_size: 1
textlog_vlm: dora-qwenvl/tick
env:
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480

- id: dora-qwenvl
build: pip install -e ../../../../dora/node-hub/dora-qwenvl
path: dora-qwenvl
inputs:
image:
source: camera/image
queue_size: 1
tick: dora/timer/millis/2000
outputs:
- text
- tick
env:
DEFAULT_QUESTION: Respond with right, left, forward, backward, open, or close to grab the trash.
CUSTOM_MODEL_PATH: /home/peter/Documents/work/LLaMA-Factory/saves/qwen2_vl-2b/lora-dora-demo-trash-2/sft

+ 73
- 0
examples/reachy1/graphs/qwenvl2_recorder.yml View File

@@ -0,0 +1,73 @@
nodes:
- id: dora-reachy1
build: pip install -e ../../../node-hub/dora-reachy1
path: dora-reachy1
inputs:
head_action: text-interpolation/head_action
antenna_action: text-interpolation/antenna_action
gripper_action: text-interpolation/gripper_action
r_arm_action: text-interpolation/r_arm_action

- id: keyboard-listener
build: pip install dora-keyboard
path: dora-keyboard
outputs:
- char

- id: key-interpolation
path: ../nodes/key_interpolation.py
inputs:
keyboard: keyboard-listener/char
outputs:
- text

- id: text-interpolation
path: ../nodes/text_interpolation.py
inputs:
text: dora-qwenvl-recorder/text
outputs:
- head_action
- antenna_action
- gripper_action
- r_arm_action
- question

- id: camera
build: pip install -e ../../../node-hub/dora-reachy1
path: dora-reachy1-vision
inputs:
tick: dora/timer/millis/16
outputs:
- image
env:
CAPTURE_PATH: 8
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480

- id: plot
build: cargo install dora-rerun --locked
path: dora-rerun
inputs:
image:
source: camera/image
queue_size: 1
textlog_vlm: dora-qwenvl-recorder/text
env:
IMAGE_WIDTH: 640
IMAGE_HEIGHT: 480

- id: dora-qwenvl-recorder
build: pip install -e ../../../../dora/node-hub/llama-factory-recorder
path: llama-factory-recorder
inputs:
image:
source: camera/image
queue_size: 1
ground_truth: key-interpolation/text
text: text-interpolation/question
outputs:
- text
env:
# DEFAULT_QUESTION: Respond with look left, look right, look up, look down, wait, smile, or cry to look straight at the person and mimic his facial expression.
DEFAULT_QUESTION: Respond with right, left, forward, backward, open, or close to grab the trash.
LLAMA_FACTORY_ROOT_PATH: /home/peter/Documents/work/LLaMA-Factory

+ 86
- 0
examples/reachy1/nodes/gym_dora_node.py View File

@@ -0,0 +1,86 @@
import time
from pathlib import Path

import gymnasium as gym
import pandas as pd
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

import gym_dora # noqa: F401

env = gym.make(
"gym_dora/DoraReachy2-v0", disable_env_checker=True, max_episode_steps=10000
)
observation = env.reset()


class ReplayPolicy:
    def __init__(self, example_path, episode=0):
        df_action = pd.read_parquet(example_path / "action.parquet")
        df_episode_index = pd.read_parquet(example_path / "episode_index.parquet")
        self.df = pd.merge_asof(
            df_action[["timestamp_utc", "action"]],
            df_episode_index[["timestamp_utc", "episode_index"]],
            on="timestamp_utc",
            direction="backward",
        )
        # self.df["episode_index"] = self.df["episode_index"].map(lambda x: x[0])
        self.df = self.df[self.df["episode_index"] == episode]
self.current_time = self.df["timestamp_utc"].iloc[0]
self.topic = "action"
self.index = 0
self.finished = False

    def select_action(self, obs):
        # Serve the current row, then advance; flag finished on the last row.
        row = self.df.iloc[self.index]
        if self.index < len(self.df) - 1:
            self.index += 1
        else:
            self.finished = True
        delta_time = (row["timestamp_utc"] - self.current_time).microseconds
        self.current_time = row["timestamp_utc"]
        if delta_time > 0:
            time.sleep(delta_time / 1_000_000)
        return row[self.topic], self.finished


class ReplayLeRobotPolicy:
def __init__(self, episode=21):
self.index = 0
self.finished = False
# episode = 1
dataset = LeRobotDataset("cadene/reachy2_mobile_base")
from_index = dataset.episode_data_index["from"][episode]
to_index = dataset.episode_data_index["to"][episode]
self.states = dataset.hf_dataset["observation.state"][from_index:to_index]
self.actions = dataset.hf_dataset["action"][from_index:to_index]

    def select_action(self, obs):
        # Serve the current action, then advance; flag finished on the last one.
        action = self.actions[self.index].numpy()
        if self.index < len(self.actions) - 1:
            self.index += 1
        else:
            self.finished = True
        # time.sleep(1 / 30)
        return action, self.finished


# policy = ReplayPolicy(
# Path(
# "/home/rcadene/dora-aloha/aloha/graphs/out/018fa076-ad19-7c77-afa4-49f7f072e86f"
# )
# )

policy = ReplayLeRobotPolicy()

done = False
while not done:
actions, finished = policy.select_action(observation)

observation, reward, terminated, truncated, info = env.step(actions)
# cv2.imshow("frame", observation["pixels"]["cam_trunk"])
# if cv2.waitKey(1) & 0xFF == ord("q"):
# break
if terminated:
print(observation, reward, terminated, truncated, info, flush=True)
    done = terminated or truncated or done or finished

env.close()
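The loop above drives the environment until the replay policy reports `finished`; the index bookkeeping can be isolated and unit-tested on its own (a sketch, not the repo's API — the class name is hypothetical):

```python
class ReplayBuffer:
    """Serve recorded actions one at a time, flagging the last one."""

    def __init__(self, actions):
        self.actions = list(actions)
        self.index = 0

    def select_action(self, obs=None):
        action = self.actions[self.index]
        finished = self.index == len(self.actions) - 1
        if not finished:
            self.index += 1
        return action, finished
```

Reading the current entry before advancing guarantees that the first action is served and that the index never runs past the end of the buffer.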

+ 41
- 0
examples/reachy1/nodes/key_interpolation.py View File

@@ -0,0 +1,41 @@
from dora import Node
import pyarrow as pa

node = Node()


for event in node:
if event["type"] == "INPUT":
if event["id"] == "keyboard":
            char = event["value"][0].as_py()
if char == "w":
node.send_output("text", pa.array(["arm forward"]))
elif char == "s":
node.send_output("text", pa.array(["arm backward"]))
elif char == "d":
node.send_output("text", pa.array(["arm right"]))
elif char == "a":
node.send_output("text", pa.array(["arm left"]))
elif char == "e":
node.send_output("text", pa.array(["arm up"]))
elif char == "q":
node.send_output("text", pa.array(["arm down"]))
elif char == "t":
node.send_output("text", pa.array(["gripper close"]))
elif char == "r":
node.send_output("text", pa.array(["gripper open"]))
elif char == "6":
node.send_output("text", pa.array(["look right"]))
elif char == "4":
node.send_output("text", pa.array(["look left"]))
elif char == "8":
node.send_output("text", pa.array(["look up"]))
elif char == "2":
node.send_output("text", pa.array(["look down"]))
elif char == "7":
node.send_output("text", pa.array(["cry"]))
elif char == "9":
node.send_output("text", pa.array(["smile"]))
elif char == "5":
node.send_output("text", pa.array(["wait"]))

+ 31
- 0
examples/reachy1/nodes/keyboard_node.py View File

@@ -0,0 +1,31 @@
from pynput import keyboard
from pynput.keyboard import Key, Events
import pyarrow as pa
from dora import Node


node = Node()
buffer_text = ""
space = False
submitted_text = []
cursor = -1
with keyboard.Events() as events:
while True:
event = events.get(0.1)
if event is not None and isinstance(event, Events.Press):
            if event.key == Key.space and not space:
cursor += 1
node.send_output("space", pa.array([cursor]))
space = True

elif event is not None and isinstance(event, Events.Release):
if event.key == Key.space:
node.send_output("space", pa.array([-1]))
space = False
elif event.key == Key.backspace:
node.send_output("failed", pa.array([cursor]))

if node.next(0.001) is None:
break
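The space-bar handling above debounces OS key repeats: the episode counter advances only on a fresh press, and a release emits `-1`. The same logic as a standalone class (the name is hypothetical):

```python
class SpaceCounter:
    """Debounced space-bar episode counter (press -> index, release -> -1)."""

    def __init__(self):
        self.pressed = False
        self.cursor = -1

    def press(self):
        if self.pressed:
            return None  # OS key repeat while held: ignore
        self.pressed = True
        self.cursor += 1
        return self.cursor

    def release(self):
        self.pressed = False
        return -1
```

Without the `pressed` flag, holding space would fire one episode index per auto-repeat event.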

+ 78
- 0
examples/reachy1/nodes/lerobot_webcam_saver.py View File

@@ -0,0 +1,78 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import os
import time
import numpy as np
import cv2
import pyarrow as pa
from pathlib import Path
from dora import Node
import subprocess

node = Node()

CAMERA_NAME = os.getenv("CAMERA_NAME", "camera")
CAMERA_WIDTH = 1280
CAMERA_HEIGHT = 800
FPS = 30

i = 0
episode = -1
dataflow_id = node.dataflow_id()

BASE = Path("out") / dataflow_id / "videos"
out_dir = BASE / f"{CAMERA_NAME}_episode_{episode:06d}"

for event in node:
event_type = event["type"]
if event_type == "INPUT":
if event["id"] == "record_episode":
record_episode = event["value"].to_numpy()[0]
print(f"Recording episode {record_episode}", flush=True)
# Save Episode Video
if episode != -1 and record_episode == -1:
out_dir = BASE / f"{CAMERA_NAME}_episode_{episode:06d}"
fname = f"{CAMERA_NAME}_episode_{episode:06d}.mp4"
video_path = BASE / fname
# Save video
ffmpeg_cmd = (
f"ffmpeg -r {FPS} "
"-f image2 "
"-loglevel error "
f"-i {str(out_dir / 'frame_%06d.png')} "
"-vcodec libx264 "
"-g 2 "
"-pix_fmt yuv444p "
f"{str(video_path)} &&"
f"rm -r {str(out_dir)}"
)
print(ffmpeg_cmd, flush=True)
subprocess.Popen([ffmpeg_cmd], start_new_session=True, shell=True)
episode = record_episode

# Make new directory and start saving images
elif episode == -1 and record_episode != -1:
episode = record_episode
out_dir = BASE / f"{CAMERA_NAME}_episode_{episode:06d}"
out_dir.mkdir(parents=True, exist_ok=True)
i = 0
else:
continue

elif event["id"] == "image":
            # Only record frames while inside an episode;
            # episode == -1 marks periods when we are not recording.
if episode == -1:
continue

fname = f"{CAMERA_NAME}_episode_{episode:06d}.mp4"
node.send_output(
"saved_image",
pa.array([{"path": f"videos/{fname}", "timestamp": i / FPS}]),
event["metadata"],
)
image = event["value"].to_numpy().reshape((CAMERA_HEIGHT, CAMERA_WIDTH, 3))
path = str(out_dir / f"frame_{i:06d}.png")
cv2.imwrite(path, image)
i += 1

+ 56
- 0
examples/reachy1/nodes/plot_node.py View File

@@ -0,0 +1,56 @@
import os
import cv2
from dora import Node


IMAGE_WIDTH = int(os.getenv("IMAGE_WIDTH", "1280"))
IMAGE_HEIGHT = int(os.getenv("IMAGE_HEIGHT", "720"))
FONT = cv2.FONT_HERSHEY_SIMPLEX


node = Node()

joint = None
text = None
action = None

for event in node:
if event["type"] == "INPUT":
dora_id = event["id"]

if dora_id == "position":
joint = event["value"].to_numpy()
if "text" in dora_id:
text = event["value"][0].as_py()
if "action" in dora_id:
action = event["value"].to_numpy()

if dora_id == "image":
image = (
event["value"].to_numpy().reshape((IMAGE_HEIGHT, IMAGE_WIDTH, 3)).copy()
)
if action is not None:
cv2.putText(
image,
f"action: {action}",
(20, 40),
FONT,
0.5,
(190, 250, 0),
2,
)

if joint is not None:
cv2.putText(
image,
f"pos: {joint}",
(20, 20),
FONT,
0.5,
(190, 250, 100),
2,
)

cv2.imshow("frame", image)
if cv2.waitKey(1) & 0xFF == ord("q"):
break

+ 156
- 0
examples/reachy1/nodes/reachy_client.py View File

@@ -0,0 +1,156 @@
import argparse
import os
import time
from pathlib import Path

# import h5py
import numpy as np
import pyarrow as pa
from dora import Node
from reachy2_sdk import ReachySDK

freq = 30

ROBOT_IP = "192.168.1.51"
# ROBOT_IP = "localhost"

reachy = ReachySDK(ROBOT_IP)

SIMULATION = False

reachy.turn_on()

time.sleep(1)
action = [
-0.11903145498601328,
0.11292280260403312,
0.48048914307403895,
-1.4491468779308918,
0.1895427567665842,
0.009599310885968814,
-0.20141099568014562,
2.2656896114349365,
-0.13212142437597074,
-0.07731808586334879,
-0.5141739976375295,
-1.512502329778286,
0.00034906585039886593,
0.3193952531149623,
0.40474185353748504,
2.2610876560211,
]


# reachy.l_arm.shoulder.pitch.goal_position = np.rad2deg(action[0])
# reachy.l_arm.shoulder.roll.goal_position = np.rad2deg(action[1])
# reachy.l_arm.elbow.yaw.goal_position = np.rad2deg(action[2])
# reachy.l_arm.elbow.pitch.goal_position = np.rad2deg(action[3])
# reachy.l_arm.wrist.roll.goal_position = np.rad2deg(action[4])
# reachy.l_arm.wrist.pitch.goal_position = np.rad2deg(action[5])
# reachy.l_arm.wrist.yaw.goal_position = np.rad2deg(action[6])
# reachy.l_arm.gripper.set_opening(min(100, max(0, action[7] * 40)))

# reachy.r_arm.shoulder.pitch.goal_position = np.rad2deg(action[8])
# reachy.r_arm.shoulder.roll.goal_position = np.rad2deg(action[9])
# reachy.r_arm.elbow.yaw.goal_position = np.rad2deg(action[10])
# reachy.r_arm.elbow.pitch.goal_position = np.rad2deg(action[11])
# reachy.r_arm.wrist.roll.goal_position = np.rad2deg(action[12])
# reachy.r_arm.wrist.pitch.goal_position = np.rad2deg(action[13])
# reachy.r_arm.wrist.yaw.goal_position = np.rad2deg(action[14])
# reachy.r_arm.gripper.set_opening(min(100, max(0, action[15] / 2.26 * 100)))


reachy.l_arm.goto_joints(action[0:7], duration=2.0, degrees=False)
reachy.r_arm.goto_joints(action[8:15], duration=2.0, degrees=False)

time.sleep(5)

node = Node()
for event in node:
id = event["id"]
match id:
case "action":
action = event["value"].to_numpy()

reachy.l_arm.shoulder.pitch.goal_position = np.rad2deg(action[0])
reachy.l_arm.shoulder.roll.goal_position = np.rad2deg(action[1])
reachy.l_arm.elbow.yaw.goal_position = np.rad2deg(action[2])
reachy.l_arm.elbow.pitch.goal_position = np.rad2deg(action[3])
reachy.l_arm.wrist.roll.goal_position = np.rad2deg(action[4])
reachy.l_arm.wrist.pitch.goal_position = np.rad2deg(action[5])
reachy.l_arm.wrist.yaw.goal_position = np.rad2deg(action[6])
# reachy.l_arm.gripper.set_opening(
# min(100, max(0, action[7] / 2.26 * 100))
# ) # replay true action value
reachy.l_arm.gripper.set_opening(
0 if action[7] < 2.0 else 100
) # trick to force the gripper to close fully

reachy.r_arm.shoulder.pitch.goal_position = np.rad2deg(action[8])
reachy.r_arm.shoulder.roll.goal_position = np.rad2deg(action[9])
reachy.r_arm.elbow.yaw.goal_position = np.rad2deg(action[10])
reachy.r_arm.elbow.pitch.goal_position = np.rad2deg(action[11])
reachy.r_arm.wrist.roll.goal_position = np.rad2deg(action[12])
reachy.r_arm.wrist.pitch.goal_position = np.rad2deg(action[13])
reachy.r_arm.wrist.yaw.goal_position = np.rad2deg(action[14])
# reachy.r_arm.gripper.set_opening(
# min(100, max(0, action[15] / 2.26 * 100))
# ) # replay true action value
reachy.r_arm.gripper.set_opening(
0 if action[15] < 2.0 else 100
) # trick to force the gripper to close fully
reachy.mobile_base.set_speed(action[16], action[17], np.rad2deg(action[18]))
reachy.head.neck.roll.goal_position = np.rad2deg(action[19])
reachy.head.neck.pitch.goal_position = np.rad2deg(action[20])
reachy.head.neck.yaw.goal_position = np.rad2deg(action[21])
case "tick":
if not SIMULATION:
mobile_base_pos = reachy.mobile_base.odometry
else:
mobile_base_pos = {"vx": 0, "vy": 0, "vtheta": 0}
qpos = {
"l_arm_shoulder_pitch": np.deg2rad(
reachy.l_arm.shoulder.pitch.present_position
),
"l_arm_shoulder_roll": np.deg2rad(
reachy.l_arm.shoulder.roll.present_position
),
"l_arm_elbow_yaw": np.deg2rad(reachy.l_arm.elbow.yaw.present_position),
"l_arm_elbow_pitch": np.deg2rad(
reachy.l_arm.elbow.pitch.present_position
),
"l_arm_wrist_roll": np.deg2rad(
reachy.l_arm.wrist.roll.present_position
),
"l_arm_wrist_pitch": np.deg2rad(
reachy.l_arm.wrist.pitch.present_position
),
"l_arm_wrist_yaw": np.deg2rad(reachy.l_arm.wrist.yaw.present_position),
"l_gripper": reachy.l_arm.gripper._present_position,
"r_arm_shoulder_pitch": np.deg2rad(
reachy.r_arm.shoulder.pitch.present_position
),
"r_arm_shoulder_roll": np.deg2rad(
reachy.r_arm.shoulder.roll.present_position
),
"r_arm_elbow_yaw": np.deg2rad(reachy.r_arm.elbow.yaw.present_position),
"r_arm_elbow_pitch": np.deg2rad(
reachy.r_arm.elbow.pitch.present_position
),
"r_arm_wrist_roll": np.deg2rad(
reachy.r_arm.wrist.roll.present_position
),
"r_arm_wrist_pitch": np.deg2rad(
reachy.r_arm.wrist.pitch.present_position
),
"r_arm_wrist_yaw": np.deg2rad(reachy.r_arm.wrist.yaw.present_position),
"r_gripper": reachy.r_arm.gripper._present_position,
"mobile_base_vx": np.deg2rad(mobile_base_pos["vx"]),
"mobile_base_vy": np.deg2rad(mobile_base_pos["vy"]),
"mobile_base_vtheta": np.deg2rad(mobile_base_pos["vtheta"]),
"head_roll": np.deg2rad(reachy.head.neck.roll.present_position),
"head_pitch": np.deg2rad(reachy.head.neck.pitch.present_position),
"head_yaw": np.deg2rad(reachy.head.neck.yaw.present_position),
}

node.send_output("agent_pos", pa.array(qpos.values()))

+ 73
- 0
examples/reachy1/nodes/reachy_vision_client.py View File

@@ -0,0 +1,73 @@
import argparse
import os
import time
from pathlib import Path

import numpy as np
import pyarrow as pa
from dora import Node
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# import h5py
from pollen_vision.camera_wrappers.depthai import SDKWrapper
from pollen_vision.camera_wrappers.depthai.utils import get_config_file_path

freq = 30

cam_name = "cam_trunk"

time.sleep(5)
cam = SDKWrapper(get_config_file_path("CONFIG_SR"), fps=freq)
# ret, image = cap.read()

import cv2

episode = 1
dataset = LeRobotDataset("cadene/reachy2_teleop_remi")
from_index = dataset.episode_data_index["from"][episode]
to_index = dataset.episode_data_index["to"][episode]
actions = dataset.hf_dataset["action"][from_index:to_index]
states = dataset.hf_dataset["observation.state"][from_index:to_index]

images = dataset[0]["observation.images.cam_trunk"]


## Convert image from chw to hwc

# cv2.imwrite("test.jpg", (images.permute((1, 2, 0)).numpy() * 255).astype(np.uint8))
images = images.permute((1, 2, 0)).numpy() * 255
images = images.astype(np.uint8)
images = cv2.cvtColor(images, cv2.COLOR_RGB2BGR)

# start = time.time()
import PIL
import torch

# frame_hwc = (images.permute((1, 2, 0)) * 255).type(torch.uint8).cpu().numpy()

# PIL.Image.fromarray(frame_hwc).save( f"frame_.png")

node = Node()
index = 0


for event in node:
# import cv2

# while True:
id = event["id"]
cam_data, _, _ = cam.get_data()

left_rgb = cam_data["left"]
# cv2.imshow("frame", left_rgb)
# if cv2.waitKey(1) & 0xFF == ord("q"):
# images = dataset[from_index.item() + index]["observation.images.cam_trunk"]
# images = images.numpy().transpose() * 255
# images = images.astype(np.uint8)
# break
# index += 1

    # Convert image from BGR to RGB
    left_rgb = cv2.cvtColor(left_rgb, cv2.COLOR_BGR2RGB)
node.send_output("cam_trunk", pa.array(left_rgb.ravel()))

+ 21
- 0
examples/reachy1/nodes/replay_node.py View File

@@ -0,0 +1,21 @@
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
import time
from dora import Node
import pyarrow as pa

episode = 1
dataset = LeRobotDataset("cadene/reachy2_teleop_remi")
from_index = dataset.episode_data_index["from"][episode]
to_index = dataset.episode_data_index["to"][episode]
actions = dataset.hf_dataset["action"][from_index:to_index]
states = dataset.hf_dataset["observation.state"][from_index:to_index]

images = dataset[from_index.item()]["observation.images.cam_trunk"]


node = Node()

time.sleep(1)
for state in states:
node.send_output("agent_pos", pa.array(state.numpy()))
time.sleep(1 / 30)

+ 72
- 0
examples/reachy1/nodes/text_interpolation.py View File

@@ -0,0 +1,72 @@
from dora import Node
import pyarrow as pa
import numpy as np

node = Node()


for event in node:
if event["type"] == "INPUT":
text = event["value"][0].as_py()
text = text.lower()
text = text.replace(".", "")
head_step = 5
step = 0.02

if text == "look right":
node.send_output("head_action", pa.array([0, -head_step, 0]))
elif text == "look left":
node.send_output("head_action", pa.array([0, head_step, 0]))
elif text == "look up":
node.send_output(
"head_action", pa.array([head_step / 2, 0, -head_step / 2])
)
elif text == "look down":
node.send_output(
"head_action", pa.array([-head_step / 2, 0, head_step / 2])
)
elif text == "smile":
node.send_output("antenna_action", pa.array(["smile"]))
elif text == "cry":
node.send_output("antenna_action", pa.array(["cry"]))
elif text == "forward":
node.send_output("r_arm_action", pa.array([step, 0, 0]))
elif text == "backward":
node.send_output("r_arm_action", pa.array([-step, 0, 0]))
elif text == "right":
node.send_output("r_arm_action", pa.array([0, -step, 0]))
elif text == "left":
node.send_output("r_arm_action", pa.array([0, step, 0]))
elif text == "up":
node.send_output("r_arm_action", pa.array([0, 0, step]))
elif text == "down":
node.send_output("r_arm_action", pa.array([0, 0, -step]))
elif text == "open":
node.send_output(
"question",
pa.array(
[
"Respond with right, left, forward, backward, open, or close to grab the trash"
]
),
)

node.send_output("gripper_action", pa.array([-100]))
elif text == "close":
node.send_output(
"question",
pa.array(
[
"Respond with right, left, forward, backward, open, or close to put the trash in your hand in the right bin"
]
),
)
node.send_output("gripper_action", pa.array([100]))
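An `elif` chain like the one above is easy to extend but also easy to duplicate by accident. A table-driven sketch (hypothetical, not the repository's code) keeps each command next to its action vector:

```python
HEAD_STEP = 5
STEP = 0.02

# Map a normalized spoken command to (output_id, action vector).
COMMANDS = {
    "look right": ("head_action", [0, -HEAD_STEP, 0]),
    "look left": ("head_action", [0, HEAD_STEP, 0]),
    "look up": ("head_action", [HEAD_STEP / 2, 0, -HEAD_STEP / 2]),
    "look down": ("head_action", [-HEAD_STEP / 2, 0, HEAD_STEP / 2]),
    "smile": ("antenna_action", ["smile"]),
    "cry": ("antenna_action", ["cry"]),
    "forward": ("r_arm_action", [STEP, 0, 0]),
    "backward": ("r_arm_action", [-STEP, 0, 0]),
    "right": ("r_arm_action", [0, -STEP, 0]),
    "left": ("r_arm_action", [0, STEP, 0]),
    "up": ("r_arm_action", [0, 0, STEP]),
    "down": ("r_arm_action", [0, 0, -STEP]),
}

def dispatch(text):
    """Normalize the transcript and look up the matching action, if any."""
    return COMMANDS.get(text.lower().replace(".", "").strip())
```

The `open`/`close` branches also emit a follow-up question, so they would stay as explicit cases outside the table.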

+ 12
- 0
examples/so100/ASSEMBLING.md View File

@@ -0,0 +1,12 @@
# Dora pipeline for teleoperated low-cost arm and episode recording for LeRobot

SO-ARM100 is a low-cost robotic arm that can be teleoperated using a similar arm. This repository contains
the Dora pipeline to drive the arms and the camera, and to record/replay episodes with LeRobot.

## Assembling

Follow the instructions in the [SO-ARM100 repository](https://github.com/TheRobotStudio/SO-ARM100).

## License

This library is licensed under the [Apache License 2.0](../../LICENSE).

+ 75
- 0
examples/so100/CONFIGURING.md View File

@@ -0,0 +1,75 @@
# Dora pipeline for teleoperated low-cost arm and episode recording for LeRobot

SO-ARM100 is a low-cost robotic arm that can be teleoperated using a similar arm. This repository contains
the Dora pipeline to drive the arms and the camera, and to record/replay episodes with LeRobot.

## Configuring

Once you have assembled the robot and installed the required software, you can configure it. Correct
configuration is essential for the robot to work as expected, for two reasons:

- You may have made mistakes during assembly, such as inverting motors or changing the offsets by rotating
  the motors before mounting them, so your configuration will differ from the one we used to record the
  dataset.
- The recording pipeline needs to know the position of the motors to record the dataset correctly. If the
  motors are calibrated differently, the dataset will be incorrect.
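The role of the homing offset and drive mode can be sketched as a simple normalization of the raw motor reading (a simplified model for illustration, not the exact code used by the configuration tool):

```python
def normalized_position(raw, homing_offset, drive_mode):
    """Map a raw servo reading into a calibrated position.

    homing_offset shifts the zero point chosen during configuration;
    drive_mode (+1 or -1) flips motors that were mounted inverted.
    """
    return (raw - homing_offset) * drive_mode

# Two arms assembled differently report the same calibrated position:
# normalized_position(2048, 1024, +1) == normalized_position(0, 1024, -1) == 1024
```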

**Please read the instructions carefully before configuring the robot.**

The first thing to do is to configure the servo bus:

- Set all the servos to the same baud rate (1M).
- Set the servo IDs from the base (1) to the gripper (6), for both the Follower and Leader arms.

Those steps can be done using the official wizard provided by the
manufacturer [Feetech](https://gitee.com/ftservo/fddebug/blob/master/FD1.9.6(200107)-EN-U.7z).

After that, you need to configure the homing offsets and drive mode to have the same behavior for every user. We
recommend using our on-board tool to set all of that automatically:

- Connect the Follower arm to your computer.
- Retrieve the device port from the official wizard.
- Run the configuration tool with the following command and follow the instructions:

```bash
cd dora-lerobot/

# If you are using a custom environment, you will have to activate it before running the command
source [your_custom_env_bin]/activate

# If you followed the installation instructions, you can run the following command
source venv/bin/activate # On Linux
source venv/Scripts/activate # On Windows bash
venv\Scripts\activate.bat # On Windows cmd
venv\Scripts\activate.ps1 # On Windows PowerShell

python ./robots/so100/configure.py --port /dev/ttyUSB0 --follower --left
```

**Note:** Change `/dev/ttyUSB0` to the device port you retrieved from the official wizard (e.g. `COM3` on Windows).

**Note:** The configuration tool will disable all torque so you can move the arm freely to Position 1.

**Note:** You will be asked to set the arm in two different positions. The two positions are:

![image](https://github.com/Hennzau/Hennzau/blob/main/assets/Koch_arm_positions.png)

**Note:** You will be asked for the path of the configuration file; press Enter to use the default one.

This configuration has to be exported into environment variables inside the graph file. Here is an example of the
configuration:

```YAML
nodes:
- id: so100-follower
env:
PORT: /dev/ttyUSB0
CONFIG: ../configs/follower.left.json
- id: lcr-to-so100
env:
FOLLOWER_CONTROL: ../configs/follower.left.json
```
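Inside a node, these variables are then read from the environment. A minimal sketch (the variable names match the example above; the JSON layout is assumed for illustration) could look like:

```python
import json
import os

def load_follower_config():
    """Read the serial port and the calibration file path from the environment."""
    port = os.environ.get("PORT", "/dev/ttyUSB0")
    config_path = os.environ["CONFIG"]  # fail loudly if the graph forgot to set it
    with open(config_path) as f:
        calibration = json.load(f)
    return port, calibration
```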

## License

This library is licensed under the [Apache License 2.0](../../LICENSE).

+ 82
- 0
examples/so100/INSTALLATION.md View File

@@ -0,0 +1,82 @@
# Dora pipeline for teleoperated low-cost arm and episode recording for LeRobot

SO-ARM100 is a low-cost robotic arm that can be teleoperated using a similar arm. This repository contains
the Dora pipeline to drive the arms and the camera, and to record/replay episodes with LeRobot.

## Installation

Dataflow-oriented robotic application (Dora) is a framework that makes creation of robotic applications fast and simple.
See [Dora repository](https://github.com/dora-rs/dora).

**Please read the instructions carefully before installing the required software and environment to run the robot.**

You must install Dora before attempting to run the SO-ARM100 pipeline. Here are the steps to install Dora:

- Install Rust by following the instructions at [Rustup](https://rustup.rs/). (You may need to install Visual Studio C++
build tools on Windows.)
- Install Dora by running the following command:

```bash
cargo install dora-cli
```

Now you're ready to run Rust dataflow applications! We decided to provide only Python dataflows for SO-ARM100, so
you also need to set up your Python environment:

- Install Python 3.10 or later by following the instructions at [Python](https://www.python.org/downloads/).
- Clone this repository by running the following command:

```bash
git clone https://github.com/dora-rs/dora-lerobot
```

- Open a bash terminal and navigate to the repository by running the following command:

```bash
cd dora-lerobot
```

- Create a virtual environment by running the following command (you can find your Python executables with
  the command `whereis python3` on Linux; on Windows the default location is
  `C:\Users\<User>\AppData\Local\Programs\Python\Python3.12\python.exe`):

```bash
path_to_your_python3_executable -m venv venv
```

- Activate the virtual environment and install the required Python packages by running the following command:

```bash
# If you are using a custom environment, you will have to activate it before running the command
source [your_custom_env_bin]/activate

# If you followed the installation instructions, you can run the following command
source venv/bin/activate # On Linux
source venv/Scripts/activate # On Windows bash
venv\Scripts\activate.bat # On Windows cmd
venv\Scripts\activate.ps1 # On Windows PowerShell

pip install -r robots/so100/requirements.txt
```

If you want to install the required Python packages in development mode, you can run the following command, but you will
then have to avoid using `dora build` during the execution procedure:

```bash
pip install -r robots/so100/development.txt # You **MUST** be inside dora-lerobot to run this command, not robots/so100
```

**Note**: You're free to use your own Python environment, a Conda environment, or whatever you prefer; just
activate your custom Python environment before running `dora up && dora start [graph].yml`.

In order to record episodes, you need ffmpeg installed on your system. You can download it from
the [official website](https://ffmpeg.org/download.html). If you're on Windows, you can download the latest build
from [here](https://www.gyan.dev/ffmpeg/builds/), extract the zip file, and add the `bin` folder to your PATH.
If you're on Linux, install ffmpeg with your distribution's package manager (e.g. `sudo apt install ffmpeg` on
Ubuntu); on macOS, use `brew install ffmpeg`.
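Before recording, you can verify from Python that ffmpeg is reachable; `shutil.which` returns the resolved executable path or `None` (a small helper for illustration, not part of the pipeline):

```python
import shutil

def ffmpeg_available():
    """Return True if an `ffmpeg` executable is found on PATH."""
    return shutil.which("ffmpeg") is not None
```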

## License

This library is licensed under the [Apache License 2.0](../../LICENSE).

+ 68
- 0
examples/so100/README.md View File

@@ -0,0 +1,68 @@
# Dora pipeline for teleoperated low-cost arm and episode recording for LeRobot

SO-ARM100 is a low-cost robotic arm that can be teleoperated using a similar arm. This repository contains
the Dora pipeline to record episodes for LeRobot.

## Assembling

Check the [ASSEMBLING.md](ASSEMBLING.md) file for instructions on how to assemble the robot from scratch.

## Installation

Check the [INSTALLATION.md](INSTALLATION.md) file for instructions on how to install the required software and
environment to run the robot.

## Configuring

Check the [CONFIGURING.md](CONFIGURING.md) file for instructions on how to configure the robot to record episodes for
LeRobot and teleoperate the robot.

## Recording

It's probably better to check the [examples](#examples) below before trying to record episodes. It will give you a
better
understanding of how Dora works.

Check the [RECORDING.md](RECORDING.md) file for instructions on how to record episodes for LeRobot.

## Examples

There are also some other example applications in the `graphs` folder. Have fun!

Here is a list of the available examples:


- `mono_teleop_real_with_alexk_lcr.yml`: A simple real teleoperation pipeline that allows you to control a follower arm
  using a leader arm. It does not record the episodes, so you don't need to have a camera.

You must configure the arms, retrieve the device port, and modify the file `mono_teleop_real_with_alexk_lcr.yml` to set
the correct environment variables (e.g. `PORT`, `CONFIG`, `LEADER_CONTROL`, and `FOLLOWER_CONTROL`).

```bash
cd dora-lerobot/

# If you are using a custom environment, you will have to activate it before running the command
source [your_custom_env_bin]/activate

# If you followed the installation instructions, you can run the following command
source venv/bin/activate # On Linux
source venv/Scripts/activate # On Windows bash
venv\Scripts\activate.bat # On Windows cmd
venv\Scripts\activate.ps1 # On Windows PowerShell

dora build ./robots/so100/graphs/mono_teleop_real_with_alexk_lcr.yml # Only the first time, it will install all the requirements if needed

dora up
dora start ./robots/so100/graphs/mono_teleop_real_with_alexk_lcr.yml
```

## License

This library is licensed under the [Apache License 2.0](../../LICENSE).

Some files were not shown because too many files changed in this diff
