To prepare for a future feature, this commit initiates an `__init__.py` file
that turns the dora Python node API into a mixed Python/Rust project.
Any Python function or variable included in `__init__.py` will be
distributed alongside the Rust functions declared in `src/lib.rs`.
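Such an `__init__.py` might look like the following minimal sketch. The `.dora` submodule name (the compiled extension built by maturin) and the `python_helper` function are hypothetical, for illustration only:

```python
# Hypothetical sketch of the mixed-project `__init__.py`: pure-Python code
# shipped alongside the Rust functions declared in `src/lib.rs`.
try:
    # Re-export the compiled Rust extension built by maturin; the `.dora`
    # submodule name is an assumption for illustration.
    from .dora import *
except ImportError:
    # Extension not built yet (e.g. a pure-Python checkout).
    pass


def python_helper():
    """A pure-Python function distributed with the package."""
    return "hello from python"
```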
https://github.com/dora-rs/dora/issues/163 has shown the limitations of
depending too heavily on `pyo3`.
This commit is an initial draft at fixing #147. The error is due to the
fact that pyo3 links against the libpython found at compilation time instead
of the libpython available in `LD_LIBRARY_PATH`.
Currently, the only way to disable this linking is the `extension-module` flag.
This requires packaging the Python `runtime-node` as a Python library.
The Python `runtime-node` should also remain fully compatible with the other operators in case we want a hybrid runtime.
The issue I'm facing is that, due to the packaging, I have to deal with the GIL, which is held from the start of the
`dora-runtime` node. This makes the process single-threaded, which is not viable.
So I have to release it, but when I do, I hit a race condition:
```bash
Exception ignored in: <module 'threading' from '/usr/lib/python3.8/threading.py'>
Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 1373, in _shutdown
    assert tlock.locked()
AssertionError:
```
The issue is the same as https://github.com/PyO3/pyo3/issues/1274.
To fix it, I'm going to revisit the different steps required to make this work.
But this is my current investigation.
* Add a `dora-python-operator` crate to hold utility functions for dora Python
* Remove Python serialization and deserialization from `dora-runtime`
* Update `python` documentation
* Add `pip` release to the release workflow
* add blank space
* Reduce Python version to 3.7 for default conda support
* Split CI workflows to avoid name collisions
There is a name collision issue within cargo. See: https://github.com/rust-lang/cargo/issues/6313
It makes it impossible to build the `dora` CLI binary and the `dora` Python
shared library at the same time.
To avoid the collision, this commit splits the two workflows:
- `main` workflow consists of the default-members and is tested in `ci.yml`
- `python` workflow lives in `ci-python.yaml` and only runs when Python files change.
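The trigger for the Python workflow might look like the following sketch (the path globs are assumptions, not the actual repository layout):

```yaml
# ci-python.yaml (sketch): run only when Python-related files change.
# The exact path globs below are illustrative assumptions.
name: python
on:
  push:
    paths:
      - "**.py"
  pull_request:
    paths:
      - "**.py"
```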
* add root example package
* add python version
* Fix minor PyPI release issues
- The name is `dora-rs`, not `dora`
- maturin 0.13 does not have the `extension-module` feature built in.
* Use `--all` to check the build of `dora`
As the CLI name was changed to `dora-cli`, we no longer have a name collision and can use the `--all` flag for building and testing `dora`.
Changes the message format from raw data bytes to a higher-level `Message` struct serialized with capnproto. In addition to the raw data, which is still sent as a byte array as before, the `Message` struct features a `metadata` field. This metadata field can be used to pass OpenTelemetry contexts, deadlines, etc.
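A minimal Cap'n Proto schema for such a message might look like this sketch. The struct and field names (beyond the raw data and `metadata`) are illustrative assumptions, not the actual dora schema, and the file ID line required by the capnp compiler is omitted:

```capnp
# Sketch of a Message schema: raw payload bytes plus a metadata field
# for telemetry contexts, deadlines, etc. Names are illustrative.
struct Metadata {
  otelContext @0 :Text;   # serialized OpenTelemetry context
  deadline @1 :UInt64;    # e.g. a timestamp in microseconds
}

struct Message {
  metadata @0 :Metadata;
  data @1 :Data;          # raw payload bytes, as before
}
```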
Since the merge of https://github.com/PyO3/maturin/pull/605,
we can pass feature flags when maturin builds the Python project, removing
the need to always enable the `extension-module` flag when building.
The `extension-module` flag causes crashes when building and testing outside
of maturin.
The previous fix required disabling the default feature flag when building and testing, which was not ideal.
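This setup can be sketched in `Cargo.toml` as an optional feature that only the maturin build turns on, so plain `cargo build`/`cargo test` link libpython normally. The pyo3 version below is a placeholder:

```toml
# Cargo.toml sketch (version numbers are placeholders): expose
# `extension-module` as an opt-in feature rather than a default,
# so it is only enabled when maturin builds the wheel.
[dependencies]
pyo3 = { version = "0.17", default-features = false }

[features]
extension-module = ["pyo3/extension-module"]
```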
Adding `next` and `send_output` requires an async threadpool, as the
communication layer defined by the middleware layer returns an async future
stream.
I solve this by adding a tokio runtime on a separate thread that is connected with two channels:
one for sending data and one for receiving data.
Those channels are then exposed synchronously to Python. This should not be cause for
concern, as channels are really fast.
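The same pattern can be sketched in pure Python, with an `asyncio` loop standing in for the tokio runtime and two thread-safe queues standing in for the channels; the `SyncBridge` class and its echoing "middleware" are hypothetical, for illustration only:

```python
import asyncio
import queue
import threading


class SyncBridge:
    """Run an async event loop on a separate thread (standing in for the
    tokio runtime) and bridge it with two queues, one per direction,
    exposed synchronously via next()/send_output()."""

    def __init__(self):
        self._inputs = queue.Queue()    # async side -> sync callers
        self._outputs = queue.Queue()   # sync callers -> async side
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        asyncio.run(self._pump())

    async def _pump(self):
        # Hypothetical stand-in for the middleware's async future stream:
        # it simply echoes every output back as an input.
        loop = asyncio.get_running_loop()
        while True:
            item = await loop.run_in_executor(None, self._outputs.get)
            if item is None:  # shutdown sentinel
                return
            self._inputs.put(item)

    def send_output(self, data):
        """Synchronous send: just a channel push, so it is cheap."""
        self._outputs.put(data)

    def next(self, timeout=5):
        """Synchronous receive from the async side."""
        return self._inputs.get(timeout=timeout)

    def close(self):
        self._outputs.put(None)
        self._thread.join()


bridge = SyncBridge()
bridge.send_output(b"hello")
print(bridge.next())
bridge.close()
```

The sync-facing methods never touch the event loop directly; they only push to and pop from the queues, which mirrors why the channel hop is not a performance concern.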
Looking at the Zenoh Python client, it heavily uses the `pyo3-asyncio` implementation
of futures to pass Rust futures into Python.
This could be a solution as well, but from previous experiments I'm concerned about the performance of such a
solution: I have found that putting futures from Rust into the `asyncio` queue is slow.
I'm also concerned about mixing `async` and `sync` code in Python, as it might block. This might require two threadpools in Python,
which seems like heavy overhead for some operations.