Firstly, I would like to express my sincere gratitude to all the developers for creating and maintaining Sedna, an outstanding AI cloud-edge collaboration project.
I am currently using the Sedna cloud-native AI framework on top of KubeEdge, and I have several GPU nodes in the cloud. I am looking for best practices for integrating these nodes into my KubeEdge cluster. Specifically, I have the following questions:
1. Should the cloud GPU nodes be joined to the cluster using standard Kubernetes (K8s) methods or through KubeEdge? What are the pros and cons of each approach in the context of the Sedna framework?
2. If the nodes are joined via standard K8s, would the inability to install the EdgeMesh component cause problems for communication with edge nodes?
3. Conversely, if the nodes are joined via KubeEdge, are there limitations because KubeEdge trims some Kubernetes functionality? For instance, would there be issues supporting cloud workloads, such as scheduling GPU resources?
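For context, here is a minimal sketch of the two join paths I am comparing. All addresses, tokens, and hashes below are placeholders, not values from a real cluster, and the exact flags may differ slightly between kubeadm/keadm versions:

```shell
# Option A: join the cloud GPU node as a regular Kubernetes worker node.
# <master-ip>, <token>, and <hash> are placeholders.
kubeadm join <master-ip>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
# The node would then also need the NVIDIA device plugin installed
# so that Pods can request GPUs via the nvidia.com/gpu resource.

# Option B: join the same node through KubeEdge, so it is managed by
# CloudCore/EdgeCore like an edge node.
keadm join \
  --cloudcore-ipport=<cloudcore-ip>:10000 \
  --token=<kubeedge-token>
```

My understanding is that the choice affects both how Sedna's GlobalManager schedules workloads to the node and whether EdgeMesh can be used for cross-node service discovery, which is why I am asking about the trade-offs.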
Understanding the best approach to integrate cloud GPUs with KubeEdge in a Sedna framework is crucial for my project. Any guidance or recommendations you can provide would be greatly appreciated.
Thank you for your time and assistance.