The code has been validated with the dora-rerun node using the following dataflow:
```yaml
nodes:
  - id: dora-pyorbbecksdk
    path: dora-pyorbbecksdk
    inputs:
      tick: dora/timer/millis/30
    outputs:
      - image
      - depth_image
      - depth_data

  - id: view
    path: dora-rerun
    inputs:
      image_1:
        source: dora-pyorbbecksdk/image
        queue_size: 1
      depth_image:
        source: dora-pyorbbecksdk/depth_image
        queue_size: 1
```

In addition to depth images, I also added a raw `depth_data` output to make
it easier to process depth information downstream, and to make the
visualization easier to interpret.
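As a sketch of how the raw `depth_data` output could be consumed, assuming it arrives as a flat little-endian `uint16` buffer of depth in millimeters (a common convention for depth cameras; the actual Orbbec format should be checked against the SDK):

```python
import numpy as np

def depth_to_meters(depth_data: bytes, width: int, height: int) -> np.ndarray:
    """Decode a raw depth buffer into a (height, width) float32 array in meters.

    Assumes uint16 little-endian millimeter values; this is an illustrative
    assumption, not the confirmed dora-pyorbbecksdk wire format.
    """
    depth_mm = np.frombuffer(depth_data, dtype=np.uint16).reshape(height, width)
    return depth_mm.astype(np.float32) / 1000.0

# Example: a 2x2 frame with depths of 500 mm and 1500 mm
frame = np.array([[500, 1500], [500, 1500]], dtype=np.uint16).tobytes()
print(depth_to_meters(frame, width=2, height=2))
```

From there the array can be thresholded, averaged over a region of interest, or re-projected to a point cloud without re-decoding the visualized depth image.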
## Before
```mermaid
flowchart TB
  dora-microphone
  dora-vad
  dora-distil-whisper
  dora-rerun[/dora-rerun\]
  subgraph ___dora___ [dora]
    subgraph ___timer_timer___ [timer]
      dora/timer/secs/2[\secs/2/]
    end
  end
  dora/timer/secs/2 -- tick --> dora-microphone
  dora-microphone -- audio --> dora-vad
  dora-vad -- audio as input --> dora-distil-whisper
  dora-distil-whisper -- text as original_text --> dora-rerun
```
## After
```mermaid
flowchart TB
  dora-microphone[**dora-microphone**<hr/>*Microphone sending samples of audio data*]
  dora-vad[**dora-vad**<hr/>*Voice Activity Detection using the Silero model, filtering audio samples down to those containing voice.*]
  dora-distil-whisper[**dora-distil-whisper**<hr/>*Whisper model able to transcribe audio.*]
  dora-rerun[/**dora-rerun**<hr/>*Real time visualisation using Rerun.*\]
  subgraph ___dora___ [dora]
    subgraph ___timer_timer___ [timer]
      dora/timer/secs/2[\secs/2/]
    end
  end
  dora/timer/secs/2 -- tick --> dora-microphone
  dora-microphone -- audio --> dora-vad
  dora-vad -- audio as input --> dora-distil-whisper
  dora-distil-whisper -- text as original_text --> dora-rerun
```
Installing the CLI from pip now also installs the API from pip. If a version
of the API is already installed, the install is skipped; otherwise the latest
version is installed.
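The skip-if-installed behavior can be sketched like this (a minimal illustration of the logic, not the actual CLI implementation; the package name passed in is up to the caller):

```python
import importlib.metadata
import subprocess
import sys

def ensure_installed(package: str) -> str:
    """Install `package` via pip only if it is not already present.

    Returns "skipped" when some version is already installed, otherwise
    installs the latest version from pip and returns "installed".
    """
    try:
        # Raises PackageNotFoundError when the distribution is absent.
        importlib.metadata.version(package)
        return "skipped"
    except importlib.metadata.PackageNotFoundError:
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])
        return "installed"
```

Checking installed metadata first keeps repeated CLI installs idempotent and avoids pinning users to a new API version when they already have one they rely on.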