Wrong API Key Spawns Phantom Inference Pipeline Process #907
Comments
Hi, is anyone working on this? I'd love to make a contribution if this is still unassigned.
Hi @farhan0167, I have not started working on this issue yet, so please feel free to give it a try! Thanks!
hi @yeldarby, so I tried running the minimal reproducible example as is:

```python
from inference_sdk import InferenceHTTPClient

max_fps = 4

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # use local inference server
    api_key="invalid"  # optional to access your private data and models
)

result = client.start_inference_pipeline_with_workflow(
    video_reference=[0],
    workspace_name="roboflow-docs",
    workflow_id="clip-frames",
    max_fps=max_fps,
    workflows_parameters={
        "prompt": "blurry",
        "threshold": 0.16
    }
)
```

and I'm getting the following error, which I believe is what you are expecting:
Is there anything that I'm missing that I could try out? Is there another public workflow that I could try that gives you the error?
Hi @farhan0167, do you get more output? @yeldarby suggests we should see an infinite chain of lines like the one below:
@grzegorz-roboflow yeah, I actually tried some different examples, but I wasn't able to reproduce the error with an infinite chain of logs. If I don't provide an API key for the following, for example:

Then I get a normal response back, but when I uncomment the line and re-run it, I get only:
Hi @farhan0167, I fixed the infinite loop in #983 (please have a look if you'd like to understand how I reproduced the error). I think we still have good room for improvement here when it comes to the returned error; would you like to look into this?
I'd be interested in taking a look at this. And I see why I wasn't seeing the infinite loop: I was actually not looking at the logs for the server. Before I get started, could you explain how you got your webcam running? I'm having a hard time running my webcam using the example here:

```python
from inference_sdk import InferenceHTTPClient
import atexit
import time

max_fps = 4

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # use local inference server
    # api_key=""  # optional to access your private data and models
)

# Start a stream on an rtsp stream
result = client.start_inference_pipeline_with_workflow(
    video_reference=0,
    workspace_name="test-coqbp",
    workflow_id="detect-and-classify",
    max_fps=max_fps,
    workflows_parameters={
        "prompt": "blurry",  # change to look for something else
        "threshold": 0.16
    }
)
```

I get something like this:

before it stops. Not sure if there's an issue that already addresses this or if this should be its own issue. But as soon as I understand how everything's working, I can look into the error thing.
Hi @farhan0167, I start the inference pipeline container:

```shell
docker build \
  -t roboflow/roboflow-inference-server-cpu:test \
  -f docker/dockerfiles/Dockerfile.onnx.cpu \
  .

docker run \
  --rm \
  -p 9001:9001 \
  -p 9002:9002 \
  -e NOTEBOOK_ENABLED=true \
  -e ENABLE_STREAM_API=true \
  -e LOG_LEVEL=DEBUG \
  -e LMM_ENABLED=True \
  --name=hosted_inference \
  -v ./inference:/app/inference \
  -v ./docker/config/cpu_http.py:/app/cpu_http.py \
  roboflow/roboflow-inference-server-cpu:test
```

Then, in my browser, I visit

Hope this helps!
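A small aside: before calling `start_inference_pipeline_with_workflow` against this container, a quick reachability check on the two published ports can rule out Docker networking problems. The helper below is my own sketch, not part of `inference_sdk`:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check both ports published by the docker run command above.
for p in (9001, 9002):
    print(p, "reachable" if port_open("localhost", p) else "not reachable")
```

If either port is not reachable, the container (or its port mapping) is the problem, not the SDK call.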
The original issue of the pipeline not being terminated was resolved. I'm reopening this since we can add an improvement related to this issue: when the pipeline fails, we can send a more descriptive error message back to the UI.
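On the more-descriptive-error idea, one possible shape, sketched with entirely hypothetical names (the real `/inference_pipelines/initialise` handler and its response format may differ), is to map common failure modes to actionable messages before they reach the UI:

```python
def describe_init_failure(status_code: int, detail: str = "") -> str:
    """Map a failed pipeline initialisation to an actionable UI message.

    Hypothetical helper: the status codes and wording here are assumptions,
    not the actual inference server behaviour.
    """
    if status_code == 401:
        return ("Pipeline not started: the provided API key was rejected. "
                "Check the key, or omit it for public workflows.")
    if status_code == 404:
        return f"Pipeline not started: workflow not found. {detail}".strip()
    return f"Pipeline not started (HTTP {status_code}). {detail}".strip()

print(describe_init_failure(401))
```

The key point is that an invalid API key produces an immediate, specific message instead of a spawned pipeline process that fails later.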
Search before asking
Bug
If you pass the wrong API key to /inference_pipelines/initialise, it will create a zombie process that spits out all sorts of errors indefinitely.

Logs:
Environment
Minimal Reproducible Example
Using a public Workflow that doesn't require an API Key but passing an invalid one:
Additional
No response
Are you willing to submit a PR?