
Properly support batched/non-batched with vllm/llama.cpp #200


Re-run triggered July 3, 2024 22:43
Status Failure
Total duration 2m 48s

lint.yml

on: pull_request

Annotations

2 errors and 3 warnings
lint
Process completed with exit code 3.
lint
Process completed with exit code 12.
lint: src/instructlab/sdg/llmblock.py#L135
W4902: Using deprecated method warn() (deprecated-method)
lint: src/instructlab/sdg/llmblock.py#L135
W4902: Using deprecated method warn() (deprecated-method)
lint: src/instructlab/sdg/llmblock.py#L156
R1728: Consider using a generator instead 'max(len(value) for value in parsed_outputs.values())' (consider-using-generator)
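Both pylint warnings have straightforward fixes. A minimal sketch, assuming the deprecated `warn()` flagged by W4902 is `logging.Logger.warn` (the `parsed_outputs` data below is hypothetical, for illustration only):

```python
import logging

logger = logging.getLogger(__name__)

# W4902: Logger.warn() is a deprecated alias; use warning() instead.
# Before: logger.warn("falling back to non-batched mode")
logger.warning("falling back to non-batched mode")

# R1728: pass a generator expression to max() rather than building
# an intermediate list with a list comprehension.
parsed_outputs = {"a": [1, 2], "b": [1, 2, 3]}  # hypothetical data
# Before: max([len(value) for value in parsed_outputs.values()])
max_length = max(len(value) for value in parsed_outputs.values())
```

The generator form avoids materializing a temporary list, which is why pylint suggests it.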