This commit makes the `opentelemetry` feature configurable at
runtime via the `DORA_TELEMETRY` environment variable. Using an environment
variable reduces the need for argument drilling between our
different processes and makes it configurable in the YAML at no cost.
Usage:
```bash
docker run -d -p6831:6831/udp -p6832:6832/udp -p16686:16686 jaegertracing/all-in-one:latest
DORA_TELEMETRY=true dora-daemon --run-dataflow dataflow.yml
```
Current limitation:
- Telemetry currently only works with Jaeger on the local network with
the default ports 6831 and 6832.
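For illustration, here is a minimal sketch of the runtime gate; the helper name and the accepted values are assumptions, not the actual dora code:
```rust
use std::env;

// Hypothetical helper illustrating the env-variable gate described above;
// the actual dora implementation may parse the value differently.
fn telemetry_enabled() -> bool {
    env::var("DORA_TELEMETRY")
        .map(|value| value == "true" || value == "1")
        .unwrap_or(false)
}

fn main() {
    if telemetry_enabled() {
        // Here the OpenTelemetry/Jaeger exporter would be initialised,
        // targeting the default local agent ports (6831/6832).
        println!("telemetry enabled");
    }
}
```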
The intent of this commit is to reduce the quantity of logs pushed to the user.
This commit removes the redeclared tracing set-up methods and centralises
the tokio-tracing subscriber within the extension crate. It also adds the
ability to filter information based on an environment variable.
This makes it possible to define the log level for tokio tracing like
this:
```bash
RUST_LOG=debug dora-daemon --run-dataflow dataflow.yml
```
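Conceptually, the centralised set-up looks like the following sketch, assuming `tracing-subscriber` with its `env-filter` feature; the function name and default level are placeholders, not the actual extension-crate API:
```rust
use tracing_subscriber::EnvFilter;

// Sketch of a centralised subscriber set-up honouring RUST_LOG.
pub fn set_up_tracing() {
    tracing_subscriber::fmt()
        // Use the filter from RUST_LOG (e.g. RUST_LOG=debug),
        // falling back to `warn` if the variable is not set.
        .with_env_filter(
            EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("warn")),
        )
        .init();
}
```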
I have also unified the feature flags to make tracing features easier to manage across the workspace.
I did not change the default tracing behaviour of our crates, so
running the command above should produce the same tracing logs as before.
Fix generated merge conflicts.
This commit is an initial draft at fixing #147. The error occurs because
pyo3 links against the libpython found at compile time instead of
using the libpython available in `LD_LIBRARY_PATH`.
Currently, the only way to disable this linking is to use the `extension-module` flag.
This requires packaging the Python `runtime-node` as a Python library.
The Python `runtime-node` should also remain fully compatible with the other operators in case we want a hybrid runtime.
The issue I'm facing is that, due to the packaging, I have to deal with the GIL, which is held from the start of the
`dora-runtime` node. This makes the process single-threaded, which is unworkable.
So I have to release it, but when I do, I get a race condition:
```bash
Exception ignored in: <module 'threading' from '/usr/lib/python3.8/threading.py'>
Traceback (most recent call last):
File "/usr/lib/python3.8/threading.py", line 1373, in _shutdown
assert tlock.locked()
AssertionError:
```
The issue is the same as https://github.com/PyO3/pyo3/issues/1274
To fix this issue, I'm going to look again at the different steps required to make this work.
But this is my current investigation.
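For context, this is roughly the shape of the code involved; a minimal sketch assuming pyo3's `allow_threads`, with a placeholder entry point rather than the actual dora-runtime one:
```rust
use pyo3::prelude::*;

// Once the runtime node is packaged as a Python extension module, the
// entry point is called with the GIL held. Releasing it around the
// long-running runtime is what then surfaces the threading shutdown
// assertion shown above.
#[pyfunction]
fn run_runtime(py: Python<'_>) -> PyResult<()> {
    // allow_threads releases the GIL so the Rust runtime can use multiple
    // threads; the closure must not touch any Python objects.
    py.allow_threads(|| {
        // `start_runtime()` would stand in for the actual dora-runtime entry point.
    });
    Ok(())
}
```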
* Add a `dora-python-operator` crate to hold utility functions for dora Python
* Remove python serialisation and deserialisation from `dora-runtime`
* Update `python` documentation
* Add `pip` release to the release workflow
* Add blank space
* Reduce Python version to 3.7 for default conda support
* Split CI workflows to avoid name collisions
There is a name collision issue within cargo. See: https://github.com/rust-lang/cargo/issues/6313
It makes it impossible to build the `dora` CLI binary
and the `dora` Python shared library at the same time.
To avoid the collision, this commit splits the workflows in two:
- the `main` workflow covers the default-members and is tested in `ci.yml`
- the `python` workflow lives in `ci-python.yaml` and only runs when Python files change.
* Add root example package
* Add Python version
* Fix minor PyPI release issues
- The name is `dora-rs`, not `dora`
- maturin 0.13 does not have the `extension-module` feature built in
* Use `--all` to check the build of `dora`
Since the CLI binary was renamed to `dora-cli`, there is no name collision anymore and we can use the `--all` flag for building and testing `dora`.
Since the merge of https://github.com/PyO3/maturin/pull/605,
we can pass feature flags when maturin builds the Python project, removing
the need to always enable the `extension-module` flag when building.
The `extension-module` flag causes crashes when building and testing outside
of maturin.
The previous fix required disabling the default feature flags when building and testing, which is not ideal.
Adding `next` and `send_output` requires an async thread pool, as the
communication layer defined by the middleware layer returns an async future
stream.
I solve this by running a tokio runtime on a separate thread connected through two channels:
one for sending data and one for receiving data.
Those channels are then exposed synchronously to Python. This should not be a cause for
concern, as channels are really fast.
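A minimal sketch of that bridge; the names, payload type, and channel capacity are assumptions, not the actual dora bindings:
```rust
use std::thread;
use tokio::runtime::Runtime;
use tokio::sync::mpsc;

// Spawn a tokio runtime on its own thread and bridge it to synchronous
// Python-facing calls through two channels.
fn spawn_bridge() -> (mpsc::Sender<Vec<u8>>, mpsc::Receiver<Vec<u8>>) {
    // Data the Python side sends out (`send_output`).
    let (output_tx, mut output_rx) = mpsc::channel::<Vec<u8>>(16);
    // Data delivered to the Python side (`next`).
    let (input_tx, input_rx) = mpsc::channel::<Vec<u8>>(16);

    thread::spawn(move || {
        let rt = Runtime::new().expect("failed to start tokio runtime");
        rt.block_on(async move {
            // The real loop would drive the async stream returned by the
            // middleware layer; here we simply echo outputs back as inputs.
            while let Some(data) = output_rx.recv().await {
                if input_tx.send(data).await.is_err() {
                    break;
                }
            }
        });
    });

    // The Python-facing side uses these synchronously:
    // `output_tx.blocking_send(..)` and `input_rx.blocking_recv()`.
    (output_tx, input_rx)
}
```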
Looking at the Zenoh Python client, it heavily uses the `pyo3-asyncio` implementation
of futures to pass Rust futures into Python.
This could be a solution as well, but from previous experiments I'm concerned about the performance of such a
solution: I have found that putting futures from Rust into the `asyncio` queue can be slow.
I'm also concerned about mixing `async` and `sync` code in Python, as it might be blocking. This might require two thread pools in Python,
which could be heavy overhead for some operations.
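For reference, a rough sketch of what the `pyo3-asyncio` route would look like, assuming its tokio integration; the function name and the returned value are placeholders:
```rust
use pyo3::prelude::*;

// Converts a Rust future into a Python awaitable scheduled on asyncio.
#[pyfunction]
fn next_async(py: Python<'_>) -> PyResult<&PyAny> {
    pyo3_asyncio::tokio::future_into_py(py, async {
        // The real future would await the next message from the
        // middleware stream instead of returning None.
        Ok(Python::with_gil(|py| py.None()))
    })
}
```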