A Zeebe question answering worker based on the Hugging Face NLP pipeline
Set up a Python 3.7.10 virtual environment and install the requirements:
pip install -r requirements.txt
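For example, the environment can be created with Python's built-in venv module before running the install above (a sketch; the .venv folder name is only a convention chosen here, and pyenv or conda work just as well):
python3.7 -m venv .venv
source .venv/bin/activate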
Specify either a local question answering model (after downloading it under the models folder) or a Hugging Face question answering model in the .env file.
Due to the high resource consumption of some models, this worker is configurable in terms of task name and associated model. For example, it is possible to separate tasks and models across multiple workers for language handling (see the .env sketch after this list):
- task question-answering-en and model (the default will be downloaded at worker startup from Hugging Face's website)
- task question-answering-fr and a local model under models/
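A minimal .env sketch for the French worker above; the key names ZEEBE_TASK_NAME and QA_MODEL are assumptions for illustration, not necessarily the worker's actual configuration keys:
# hypothetical keys, for illustration only
ZEEBE_TASK_NAME=question-answering-fr
QA_MODEL=models/my-local-french-model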
If you have a local or docker-compose Zeebe running, you can run/debug with:
python index.py
and run the unit tests with:
python -m unittest
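For orientation, here is a minimal sketch of what such a worker can look like, assuming the pyzeebe client library and the transformers question answering pipeline; the environment variable names are hypothetical and the whole snippet is illustrative, not the repository's actual index.py:

import asyncio
import os

from pyzeebe import ZeebeWorker, create_insecure_channel
from transformers import pipeline

# Hypothetical env var names, for illustration only
TASK_NAME = os.getenv("ZEEBE_TASK_NAME", "question-answering-en")
MODEL = os.getenv("QA_MODEL", "distilbert-base-cased-distilled-squad")

# Load the Hugging Face question answering pipeline once at startup
qa_pipeline = pipeline("question-answering", model=MODEL)

async def main():
    channel = create_insecure_channel()  # defaults to localhost:26500
    worker = ZeebeWorker(channel)

    @worker.task(task_type=TASK_NAME)
    async def answer_question(question: str, context: str) -> dict:
        # The pipeline returns a dict with "answer", "score", "start" and "end"
        result = qa_pipeline(question=question, context=context)
        return {"answer": result["answer"], "score": result["score"]}

    await worker.work()

if __name__ == "__main__":
    asyncio.run(main())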
Build the Docker image with:
docker build -t teode/question-answering-french-worker:v1.0.0 -f Dockerfile.fr .
Or download the image from the release section.
You must have a local or a port-forwarded Zeebe gateway for the worker to connect to, then:
docker run --name zb-qa-fr-wkr teode/question-answering-french-worker:v1.0.0
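If Zeebe runs in Kubernetes, the gateway can typically be port-forwarded like this (the service name zeebe-gateway is an assumption based on common Helm chart defaults):
kubectl port-forward svc/zeebe-gateway 26500:26500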
Example BPMN with service task:
<bpmn:serviceTask id="my-question-answering" name="My Question Answering">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="my-env-var-task-name" />
  </bpmn:extensionElements>
</bpmn:serviceTask>
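The worker reads the variables question and context described below; if your process variables use different names, a zeebe:ioMapping can map them, as in this sketch (myQuestion and myText are hypothetical process variables):
<bpmn:serviceTask id="my-question-answering" name="My Question Answering">
  <bpmn:extensionElements>
    <zeebe:taskDefinition type="my-env-var-task-name" />
    <zeebe:ioMapping>
      <zeebe:input source="=myQuestion" target="question" />
      <zeebe:input source="=myText" target="context" />
    </zeebe:ioMapping>
  </bpmn:extensionElements>
</bpmn:serviceTask>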
- the worker is registered for the type of your choice (set as an env var)
- required variables:
  - question - the question to ask about the context
  - context - the text to infer from
- jobs are completed with variables:
  - answer - the answer inferred from the text
  - score - the confidence (between 0 and 1) in the answer
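These output variables map directly onto the fields returned by the Hugging Face question answering pipeline, as this short sketch shows (the pipeline's default SQuAD-tuned model is assumed and the printed values are illustrative):

from transformers import pipeline

qa = pipeline("question-answering")  # downloads the pipeline's default model
result = qa(
    question="Who wrote Les Misérables?",
    context="Les Misérables is a French novel by Victor Hugo.",
)
# result is a dict like {'score': ..., 'start': ..., 'end': ..., 'answer': 'Victor Hugo'}
print(result["answer"], result["score"])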