25. Adding Kubernetes Support
Now that we have the application running nicely in containers, we can leverage the power of an orchestrator such as Kubernetes. If you have never used Kubernetes before, we recommend that you get a high-level overview of what it is before diving in (things will make a whole lot more sense).
Follow the link to read more about the benefits of an orchestrator, which for the purposes of this demo is Kubernetes.
With that base knowledge to stand on, let's integrate Kubernetes support into this project.
At the root of our project, let's create a directory to store the files which Kubernetes requires:
mkdir eShopWCFK8s
Inside that directory, we'll create two yml files that describe the Kubernetes deployments for the SQL Server and WCF service containers.
touch eshop-sql-container-deployment.yml
touch eshop-wcf-container-deployment.yml
Let's walk through, chunk by chunk, what needs to go into the SQL yml file. We start off by specifying the API schema and kind of this object. What we define in this file is a Deployment, which creates a ReplicaSet to orchestrate the creation, update, and deletion of the SQL Pods.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sql-data
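Note that the extensions/v1beta1 API group was current when this walkthrough was written but has since been removed (Deployments moved to apps/v1 as of Kubernetes 1.16). On a recent cluster, the same object would start like this sketch, where the now-mandatory spec.selector must match the Pod template labels we define below:

# Sketch only: equivalent header for Kubernetes 1.16 and later
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sql-data
spec:
  selector:
    matchLabels:
      app: sql-data   # must match spec.template.metadata.labels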
What follows under 'spec' is what defines the Deployment. Inside the Pod template we give the Pods a label, which the Service defined later will use to select them.
spec:
  template:
    metadata:
      labels:
        app: sql-data
The next chunk of the manifest will look very similar to the Dockerfile we created earlier. We tell Kubernetes which image to use for this container, give it a name, and set some environment variables in the container.
At the very bottom we indicate that this Pod should be scheduled onto a Kubernetes node running Windows as its OS. The '---' separator marks the end of this object definition and the beginning of another.
    spec:
      containers:
      - name: sql-data
        image: microsoft/mssql-server-windows-developer
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          value: Pass@word
      nodeSelector:
        beta.kubernetes.io/os: windows
---
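Since that nodeSelector will leave the Pod stuck in Pending if no matching node exists, it's worth confirming the cluster actually has a Windows node carrying that label. A quick check, assuming kubectl is already configured against your cluster:

kubectl get nodes -l beta.kubernetes.io/os=windows

If this returns no nodes, the SQL Pod will never be scheduled.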
With our Deployment successfully defined, we want to specify a Service for it. Our Service type is declared to be LoadBalancer, which provides an externally accessible IP address and distributes incoming traffic among the Pods under its control.
By declaring both port and targetPort to be 1433, we route traffic so that when a client hits the external IP on the declared port, the request is forwarded to the port our SQL Server container is listening on (SQL Server listens on port 1433 by default).
apiVersion: v1
kind: Service
metadata:
  labels:
    app: sql-data
  name: sql-data
spec:
  type: LoadBalancer
  #loadBalancerIP: 52.187.173.125
  ports:
  - port: 1433
    targetPort: 1433
  selector:
    app: sql-data
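Once the Service is up and has been assigned an external IP (shown in the EXTERNAL-IP column of kubectl get service), you can verify the 1433-to-1433 routing end to end. A minimal check, assuming the sqlcmd client is installed on your machine:

kubectl get service sql-data
# substitute the EXTERNAL-IP value reported above
sqlcmd -S <EXTERNAL-IP>,1433 -U sa -P Pass@word -Q "SELECT @@VERSION"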
Putting it all together, we get a file that looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sql-data
spec:
  template:
    metadata:
      labels:
        app: sql-data
    spec:
      containers:
      - name: sql-data
        image: microsoft/mssql-server-windows-developer
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          value: Pass@word
      nodeSelector:
        beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: sql-data
  name: sql-data
spec:
  type: LoadBalancer
  #loadBalancerIP: 52.187.173.125
  ports:
  - port: 1433
    targetPort: 1433
  selector:
    app: sql-data
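Actually deploying these files is covered later in the walkthrough, but as a preview: once kubectl is pointed at a cluster, the whole file (Deployment and Service alike) is applied with a single command (the path assumes you run it from the project root):

kubectl apply -f eShopWCFK8s/eshop-sql-container-deployment.yml
kubectl get deployments,services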
The yml file for the WCF container looks fairly similar to the SQL container's yml file. Let's look at its contents, and we'll call out anything that was not present in the SQL file.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: eshop-modernized-wcf
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: eshop-modernized-wcf
    spec:
      containers:
      - name: eshop-modernized-wcf
        image: eshop/wcfservice
        ports:
        - containerPort: 80
        imagePullPolicy: Always
        env:
        - name: ConnectionString
          value: "Server=sql-data-for-wcf;Database=eShopDatabase;User Id=sa;Password=Testing11@@"
      nodeSelector:
        beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: eshop-modernized-wcf
  labels:
    app: eshop-modernized-wcf
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: eshop-modernized-wcf
The unique thing to call out in this file is the "strategy" we specify for this Deployment, starting at line 7. By defining our strategy to be a rolling update, we tell Kubernetes how we want it to replace old Pods with new ones: new Pods are brought online before old Pods are taken offline.
The two settings that follow the strategy declaration, maxSurge and maxUnavailable, tell Kubernetes 1) the maximum number of Pods that can be created beyond the desired replica count during the update and 2) the maximum number of desired Pods that may be unavailable while the update happens.
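To see the strategy in action later, you could trigger a rolling update by pointing the Deployment at a new image and watching the rollout (the deployment and container names below match this manifest; the :v2 tag is just a hypothetical newer build of the WCF image):

kubectl set image deployment/eshop-modernized-wcf eshop-modernized-wcf=eshop/wcfservice:v2
kubectl rollout status deployment/eshop-modernized-wcf
# if the new Pods misbehave, roll back to the previous ReplicaSet
kubectl rollout undo deployment/eshop-modernized-wcf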
Continue reading to learn how we can deploy to Kubernetes in Azure.