```shell
dotnet add package KubernetesClient
```

```shell
dotnet msbuild /t:slngen
```
You should be able to use a standard kubeconfig file with this library; see the `BuildConfigFromConfigFile` function below. Most authentication methods are currently supported, but a few are not; see the known issues.
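For instance, here is a sketch of loading a kubeconfig from a non-default path and selecting a named context. The path and context name are placeholders; `kubeconfigPath` and `currentContext` are optional parameters of `BuildConfigFromConfigFile`.

```csharp
using k8s;

// Load a kubeconfig from an explicit path and select a specific context.
// BuildConfigFromConfigFile falls back to the default kubeconfig location
// when no path is given.
var config = KubernetesClientConfiguration.BuildConfigFromConfigFile(
    kubeconfigPath: "/home/user/.kube/config",  // hypothetical path
    currentContext: "my-context");              // hypothetical context name
var client = new Kubernetes(config);
```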
You should also be able to authenticate with the in-cluster service account using the `InClusterConfig` function shown below.
There is optional built-in metric generation for Prometheus client metrics. The exported metrics are:

- `k8s_dotnet_request_total` - Counter of requests, broken down by HTTP method
- `k8s_dotnet_response_code_total` - Counter of responses, broken down by HTTP method and response code
- `k8s_request_latency_seconds` - Latency histograms, broken down by method, API group, API version, and resource kind

There is an example integrating these monitors in the examples/prometheus directory.
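A minimal sketch of wiring this up, assuming the `PrometheusHandler` shipped with this package and the prometheus-net `MetricServer` for exposing the scrape endpoint (the port is arbitrary; see examples/prometheus for the authoritative version):

```csharp
using k8s;
using k8s.Monitoring;
using Prometheus;

// Route client requests through the metrics-emitting handler.
var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();
var handler = new PrometheusHandler();
var client = new Kubernetes(config, handler);

// Expose the collected metrics for Prometheus to scrape.
var server = new MetricServer(port: 9091);
server.Start();
```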
```csharp
// Load from the default kubeconfig on the machine.
var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();

// Or load from a specific file:
// var config = KubernetesClientConfiguration.BuildConfigFromConfigFile(Environment.GetEnvironmentVariable("KUBECONFIG"));

// Or load from in-cluster configuration:
// var config = KubernetesClientConfiguration.InClusterConfig();

// Use the config object to create a client.
var client = new Kubernetes(config);

var namespaces = client.CoreV1.ListNamespace();
foreach (var ns in namespaces.Items)
{
    Console.WriteLine(ns.Metadata.Name);
    var list = client.CoreV1.ListNamespacedPod(ns.Metadata.Name);
    foreach (var item in list.Items)
    {
        Console.WriteLine(item.Metadata.Name);
    }
}
```
```csharp
var ns = new V1Namespace
{
    Metadata = new V1ObjectMeta
    {
        Name = "test"
    }
};

var result = client.CoreV1.CreateNamespace(ns);
Console.WriteLine(result);

var status = client.CoreV1.DeleteNamespace(ns.Metadata.Name, new V1DeleteOptions());
```
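The generated operations also have async counterparts; a sketch using the `*Async` variants (assuming a reachable cluster and a "default" namespace):

```csharp
using k8s;

var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();
var client = new Kubernetes(config);

// Async variants exist alongside the synchronous operations shown above.
var pods = await client.CoreV1.ListNamespacedPodAsync("default");
foreach (var pod in pods.Items)
{
    Console.WriteLine(pod.Metadata.Name);
}
```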
There is extensive example code in the examples directory.
```shell
git clone git@github.com:kubernetes-client/csharp.git
cd csharp\examples\simple
dotnet run
```
The preferred way of connecting to a remote cluster from a local machine is:

```csharp
var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();
var client = new Kubernetes(config);
```

Not all auth providers are supported at the moment (#91). You can still connect to a cluster by starting the proxy command:

```shell
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
```

and changing the config:

```csharp
var config = new KubernetesClientConfiguration { Host = "http://127.0.0.1:8001" };
```

Note that this is a workaround and is not recommended for production use.
The project uses xUnit as its unit testing framework.

To run the tests:

```shell
cd csharp\tests
dotnet restore
dotnet test
```
You'll need a Linux machine with Docker.

Check out the generator project into some other directory (henceforth `$GEN_DIR`):

```shell
cd $GEN_DIR/..
git clone https://github.com/kubernetes-client/gen
```

Then run the generator:

```shell
# Where REPO_DIR points to the root of the csharp repository
cd
${GEN_DIR}/openapi/csharp.sh ${REPO_DIR}/src/KubernetesClient ${REPO_DIR}/csharp.settings
```
| SDK Version | Kubernetes Version | .NET Targeting |
|---|---|---|
| 13.0 | 1.29 | net6.0;net7.0;net8.0;net48*;netstandard2.0* |
| 12.0 | 1.28 | net6.0;net7.0;net48*;netstandard2.0* |
| 11.0 | 1.27 | net6.0;net7.0;net48*;netstandard2.0* |
| 10.0 | 1.26 | net6.0;net7.0;net48*;netstandard2.0* |
| 9.1 | 1.25 | netstandard2.1;net6.0;net7.0;net48*;netstandard2.0* |
| 9.0 | 1.25 | netstandard2.1;net5.0;net6.0;net48*;netstandard2.0* |
| 8.0 | 1.24 | netstandard2.1;net5.0;net6.0;net48*;netstandard2.0* |
| 7.2 | 1.23 | netstandard2.1;net5.0;net6.0;net48*;netstandard2.0* |
| 7.0 | 1.23 | netstandard2.1;net5.0;net6.0 |
| 6.0 | 1.22 | netstandard2.1;net5.0 |
| 5.0 | 1.21 | netstandard2.1;net5 |
| 4.0 | 1.20 | netstandard2.0;netstandard2.1 |
| 3.0 | 1.19 | netstandard2.0;net452 |
| 2.0 | 1.18 | netstandard2.0;net452 |
| 1.6 | 1.16 | netstandard1.4;netstandard2.0;net452 |
| 1.4 | 1.13 | netstandard1.4;net451 |
| 1.3 | 1.12 | netstandard1.4;net452 |
- Starting from `2.0`, .NET SDK versioning was adopted.
- `Kubernetes Version` here means the version the SDK models and APIs were generated from.
- The Kubernetes API server guarantees compatibility with `n-2` (`n-3` after 1.28) versions. For example:
  - A 1.19-based SDK should work with a 1.21 cluster, but is not guaranteed to work with a 1.22 cluster.
  - And vice versa: a 1.21-based SDK should work with a 1.19 cluster, but is not guaranteed to work with a 1.18 cluster.
  - Note: in practice, the SDK might work with much older clusters, at least for the more stable functionality. However, it is not guaranteed past the `n-2` (or `n-3` after 1.28) version. See #1511 for additional details.
  - See also https://kubernetes.io/releases/version-skew-policy/
- Fixes (including security fixes) are not back-ported automatically to older SDK versions. However, contributions from the community are welcome 😊; see Contributing for instructions on how to contribute.
- \* `KubernetesClient.Classic`: netstandard2.0 and net48 are supported with limited features.
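To see which server version an SDK is actually talking to at runtime, a sketch using the generated version operation group (method name assumed from the generated `Version` API):

```csharp
using k8s;

var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();
var client = new Kubernetes(config);

// Query the API server's version to compare against the table above.
var info = client.Version.GetCode();
Console.WriteLine($"Server: {info.Major}.{info.Minor} ({info.GitVersion})");
```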
Please see CONTRIBUTING.md for instructions on how to contribute.