The app gets stuck on 'Loading...'. I have tried running it both with and without the fine-tuned LLM model, and I'm not sure what the cause is. Here is the output printed before I interrupted it; I have waited up to 5 minutes several times.
teleprompter % python main.py llm
Using LLM for suggestions
Starting up...
Loading model...
* Serving Flask app "main" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5001/ (Press CTRL+C to quit)
127.0.0.1 - - [26/Dec/2022 18:36:41] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [26/Dec/2022 18:36:41] "GET /static/style.css HTTP/1.1" 404 -
Starting 'hear' command
Starting 'hear' command
127.0.0.1 - - [26/Dec/2022 18:36:42] "POST /api/v0/swarm/peers?timeout=2500ms HTTP/1.1" 404 -
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
127.0.0.1 - - [26/Dec/2022 18:36:45] "POST /api/v0/swarm/peers?timeout=2500ms HTTP/1.1" 404 -
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
Starting 'hear' command
^C%
I also got this issue, and the following warning appeared after every "Starting 'hear' command":
"huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using tokenizers before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)"
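The warning itself is generally harmless, but it can be silenced as the message suggests by setting the TOKENIZERS_PARALLELISM environment variable before the tokenizer is first used. A minimal sketch, assuming main.py imports a Hugging Face tokenizer somewhere (the exact import is a guess), would be:

import os

# Must be set before the tokenizers library is imported/used,
# otherwise the fork-after-parallelism warning still fires.
os.environ["TOKENIZERS_PARALLELISM"] = "false"

# ... the app's existing imports and startup code follow, e.g.
# from transformers import AutoTokenizer  # hypothetical import, for illustration only

Alternatively, export it in the shell before launching the app:

export TOKENIZERS_PARALLELISM=false
python main.py llm

Note this only suppresses the warning; it probably isn't what causes the app to hang on 'Loading...'.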