This example shows how to create and connect dora operators and custom nodes in Python.

The `dataflow.yml` defines a simple dataflow graph with the following three nodes:

- a webcam node, which captures camera frames and publishes them on its `image` output,
- an object detection node, which consumes the webcam image and publishes the detected bounding boxes on its `bbox` output,
- a plot node, which draws the bounding boxes on top of the webcam image and displays the result in a window.
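For reference, a dora operator is a Python class exposing an `on_event` method. The snippet below is a minimal, hypothetical operator that simply forwards its `image` input; it assumes the dora v0.3 Python operator API (`DoraStatus`, `send_output`) and is not one of this example's source files (`webcam.py`, `object_detection.py`, `plot.py`), which follow the same structure.

```python
from dora import DoraStatus


class Operator:
    """Minimal operator sketch: re-emits every `image` input unchanged."""

    def on_event(self, dora_event, send_output) -> DoraStatus:
        if dora_event["type"] == "INPUT" and dora_event["id"] == "image":
            # Forward the received pyarrow array on an output named `image`,
            # keeping the original metadata.
            send_output("image", dora_event["value"], dora_event["metadata"])
        return DoraStatus.CONTINUE
```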
From within a clone of the dora repository, you can run the example with:

```bash
cargo run --example python-operator-dataflow
```

To install the Python dependencies:

```bash
conda create -n example_env python=3.12
conda activate example_env
pip install -r requirements.txt
```

Then start either the object detection dataflow on its own:

```bash
dora start dataflow.yml
```

or the LLM dataflow:

```bash
dora start dataflow_llm.yml
```
Within the window, you can ask questions such as:

```bash
ask how are you
change bounding box plot to red
change confidence value to percentage
change object detection to only detect person
send 200 200 200 400 to topic line
record
```
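Each of these prompts is routed by the keyboard node to a different output topic based on its first word (`ask`, `change`, `send`, `record`), as the node's outputs in the configuration below suggest. The snippet is a hypothetical simplification of that routing logic, not the actual `keyboard_op.py`; it assumes the dora custom-node API (`Node`, `send_output`) and pyarrow payloads.

```python
import pyarrow as pa
from dora import Node

# A custom node only works inside a running dora dataflow.
node = Node()

COMMANDS = {"ask", "change", "send", "record"}


def route(text: str) -> None:
    """Send `text` to the output topic matching its first word."""
    first, _, rest = text.partition(" ")
    if first in COMMANDS:
        # e.g. "change bounding box plot to red" -> output `change`
        node.send_output(first, pa.array([rest if rest else text]), {})
    else:
        # Anything else is submitted as a plain user message.
        node.send_output("submitted", pa.array([text]), {})


route("change bounding box plot to red")
```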
The LLM dataflow is built on top of the object detection dataflow by downloading the additional operators:

```bash
wget https://raw.githubusercontent.com/dora-rs/dora/v0.3.2/examples/python-operator-dataflow/keyboard_op.py
wget https://raw.githubusercontent.com/dora-rs/dora/v0.3.2/examples/python-operator-dataflow/microphone_op.py
wget https://raw.githubusercontent.com/dora-rs/dora/v0.3.2/examples/python-operator-dataflow/whisper_op.py
wget https://raw.githubusercontent.com/dora-rs/dora/v0.3.2/examples/python-operator-dataflow/sentence_transformers_op.py
wget https://raw.githubusercontent.com/dora-rs/dora/v0.3.2/examples/python-operator-dataflow/llm_op.py
wget https://raw.githubusercontent.com/dora-rs/dora/v0.3.2/examples/python-operator-dataflow/file_saver_op.py
```

and adding the following to the dataflow configuration:
```yaml
nodes:
  - id: webcam
    operator:
      python: webcam.py
      inputs:
        tick: dora/timer/millis/50
      outputs:
        - image

  - id: object_detection
    operator:
      python: object_detection.py
      inputs:
        image: webcam/image
      outputs:
        - bbox

  - id: plot
    operator:
      python: plot.py
      inputs:
        image: webcam/image
        bbox: object_detection/bbox
        line: llm/line
        keyboard_buffer: keyboard/buffer
        user_message: keyboard/submitted
        assistant_message: llm/assistant_message

  ## Speech to text
  - id: keyboard
    custom:
      source: keyboard_op.py
      outputs:
        - buffer
        - submitted
        - record
        - ask
        - send
        - change
      inputs:
        recording: whisper/text

  - id: microphone
    operator:
      python: microphone_op.py
      inputs:
        record: keyboard/record
      outputs:
        - audio

  - id: whisper
    operator:
      python: whisper_op.py
      inputs:
        audio: microphone/audio
      outputs:
        - text

  ## Code Modifier
  - id: vectordb
    operator:
      python: sentence_transformers_op.py
      inputs:
        query: keyboard/change
        saved_file: file_saver/saved_file
      outputs:
        - raw_file

  - id: llm
    operator:
      python: llm_op.py
      inputs:
        code_modifier: vectordb/raw_file
        assistant: keyboard/ask
        message_sender: keyboard/send
      outputs:
        - modified_file
        - line
        - assistant_message

  - id: file_saver
    operator:
      python: file_saver_op.py
      inputs:
        file: llm/modified_file
      outputs:
        - saved_file
```
The keyboard, microphone, and whisper nodes work in much the same fashion as the object detection dataflow, so I'll let you explore them on your own.

The code modification flow works by first comparing an instruction against a vector database of the operators' source code, then feeding the most similar operator together with the instruction to an LLM for code modification. The end result is then saved using the file saver node, whose `saved_file` output feeds back into the vectordb node.
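For intuition, here is a rough sketch of the retrieval step, i.e. how an instruction could be matched against the operators' source code. It assumes a sentence-transformers model (`all-MiniLM-L6-v2` is an arbitrary choice here) and is not a copy of `sentence_transformers_op.py`.

```python
from pathlib import Path

from sentence_transformers import SentenceTransformer, util

# Embed the source code of the operators in the dataflow.
model = SentenceTransformer("all-MiniLM-L6-v2")
operator_files = ["webcam.py", "object_detection.py", "plot.py"]
sources = [Path(f).read_text() for f in operator_files]
source_embeddings = model.encode(sources, convert_to_tensor=True)

# Embed the instruction and retrieve the most similar operator.
instruction = "change bounding box plot to red"
query_embedding = model.encode(instruction, convert_to_tensor=True)
best = util.semantic_search(query_embedding, source_embeddings, top_k=1)[0][0]

print(f"most similar operator: {operator_files[best['corpus_id']]}")
# The raw source of this file, plus the instruction, is what the llm node receives.
```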