Where's James? #10
Exactly! It’s very confusing on this page: “This experience showcases James, our interactive digital human that has the knowledge of NVIDIA’s products by having direct access to our product knowledge base. James and the RAG-powered backend application use a collection of NVIDIA NIM inference microservices, NVIDIA ACE technologies, and ElevenLabs text-to-speech to provide natural and immersive responses. Using James as an inspiration, users would be able to download and customize the Digital Human for Customer Service blueprint for their industry, with document ingestion from RAG and customizing the avatar look and voice for their application.” It says a lot about James, but there is no James.
Hi @pythonllm. I assume you managed to deploy this on your on-prem GPU server.
Hello :) https://docs.nvidia.com/ace/latest/workflows/tokkio/text/Tokkio_GCP_CSP_Setup_Guide_automated.html — or what do you mean?
Hi @pythonllm. Thanks for the update. I am trying to install it on my on-prem GPU server. I thought you were referring to an on-prem GPU server when you mentioned "my server".
Good afternoon.
I ran the project on my server, and it works.
I played around and customized the other scenes, except for Ben.
Can you please tell me where I can get James?
You wrote so much about him that I would like to run him on my server.
Should I create a 3D head model myself, or use one of the ones you offer in Audio2Face?
Where can I find James?