
Kubernetes guide for Swift on the server



As you learn more about developing server-side apps, you will encounter situations where you need tools to handle processes that live outside of your source code. You have to handle things like:

  • Deployment
  • Dependency management
  • Logging
  • Performance monitoring

While Swift on the server continues to mature, you can draw on a collection of tools that are considered "Cloud Native" to achieve these things with a Swift API!

In this tutorial you will:

  • Use Kubernetes to deploy your app and keep it running.
  • Use Kubernetes to recover your app after a crash and keep it highly available.
  • Use Helm to combine all the previous work you did with Kubernetes into a single command.

This tutorial uses Kitura to build the API you will be working on, called RazeKube. Behold the KUBE!

 Swift Bird in a Cube

While the Kube is many things (all-seeing, all-knowing), there is one thing the Kube is not: resilient! You will use Cloud Native tools to change that!

In this tutorial, you will use the following:

  • Kitura 2.7, or higher
  • macOS 10.14, or higher
  • Swift 5.0, or higher
  • Xcode 10.2, or higher
  • Terminal

Cloud Native Development and Swift

Here is a brief introduction to what the Swift Server Working Group (SSWG) is working on and what Cloud Native development entails.

Swift on the server draws inspiration from many different ecosystems. Vapor, Kitura and Perfect all draw their architecture from projects in different programming languages. The concept of Futures and Promises in SwiftNIO is not native to Swift, although SwiftNIO is designed with the aim of being a standard-setting library that stands on its own.

The people who make up the Swift Server Working Group meet weekly to discuss the progress of the ecosystem. They have a few goals, but this one stands out as relevant to this tutorial:

The Swift Server Working Group will … define and run an incubation process for these efforts to reduce duplication of effort, increase compatibility, and promote best practices.

No matter how you may feel about using third-party libraries, the concept of reducing repetitive code is important and worth pursuing.

Cloud Native Computing Foundation

Another collective working toward this goal is the Cloud Native Computing Foundation (CNCF). The primary charter of the CNCF is as follows:

The mission of the foundation is to make cloud native computing ubiquitous. The CNCF Cloud Native Definition v1.0 says: Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure and declarative APIs exemplify this approach.

Swift and Kubernetes

Swift developers who have focused their efforts on mobile devices have not had to worry too much about standardization with other platforms. Apple has a reputation for forming iOS-centric design guidelines, and there are a number of tools that achieve similar goals, but all within one ecosystem – iOS.

In the world of server computing, multiple languages can solve almost any problem, and there are many times when one programming language makes more sense than another. Instead of engaging in a "holy war" about when Swift makes more sense than other languages, this tutorial focuses on using Swift as a means to an end. You will get a taste of the tools that can help you solve problems along the way!

Kubernetes is the first tool you will use. Although Kubernetes is an important deployment tool in today's server development, it serves a variety of other purposes as well, and you will explore them in this tutorial! You will dive right in after making sure your app works the way it needs to.

Getting Started

Click the Download Materials button at the top or bottom of this tutorial to get the project files you will use to build the example Kitura app.

Then you need to install Docker for Desktop and Kitura CLI to continue with this tutorial.

Note: There are two things I want to point out before diving into this tutorial:

  1. Audrey Tam has written an absolutely brilliant guide on how to use Docker here. Docker will be discussed in this tutorial as a building block for other components, and I recommend reading Audrey's tutorial before continuing. She has also written a tutorial on how to deploy a Kitura app with Kubernetes here, which is also worth reading to get acquainted with some of the basic concepts used here too!
  2. Docker for Desktop seems to be the most streamlined way to use Kubernetes on your Mac lately, but you have options! You can try Minikube or a cloud provider to set up a hosted Kubernetes service, but Docker for Desktop includes support for Kubernetes as well as other things that will be useful throughout this tutorial.

Installing Docker and Kubernetes

If you've already installed Docker, start it, and then jump down to the next section: Enabling Kubernetes in Docker.

In a browser, open https://www.docker.com/products/docker-desktop and click the Download Desktop for Mac and Windows button:

 Docker Desktop Installation

On the next page, log in or create a free Docker Hub account, if you do not already have one. Then continue to download the installer by clicking the Download Docker Desktop for Mac button:

 Docker Desktop Download

You will download a file called Docker.dmg. Once downloaded, double-click the file. You will see a dialog that prompts you to drag the Docker whale into your Applications folder.

 Drag Docker to application folder

When the installer runs, you must grant it privileged access on your Mac. Make sure that you install the Stable version – previous releases of Docker for Desktop included Kubernetes only in the Edge version.

Enabling Kubernetes in Docker

When the installation is complete, click the Docker whale menu in the Mac menu bar and select Preferences. In the Kubernetes tab, check Enable Kubernetes and then click Apply:

 Enables Kubernetes within Docker preferences

Restart Docker for this change to take effect. Once you do, open Preferences again and make sure the bottom of the window says that both Docker and Kubernetes are running.

 Confirmation of Kubernetes running within Docker

To double-check that Docker is installed, open Terminal and enter docker version – you should see output like this:

  Client: Docker Engine - Community
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        6247962
 Built:             Sun Feb 10 04:12:39 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 04:13:06 2019
  OS/Arch:          linux/amd64
  Experimental:     false

To ensure that Kubernetes is running, enter kubectl get all and you should see that a service is running:

  NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   16h

Creating RazeKube

You've set the stage – now it's time to create your RazeKube API. First, install Kitura CLI!

Installing Kitura CLI

Note: If you have already done this in another tutorial, verify that Kitura is installed by entering kitura --version in Terminal. If you see a version number, skip to the next section: Running RazeKube.

The easiest way to install the Kitura CLI (command-line interface) is via Homebrew. Follow the instructions to install Homebrew, then enter the following commands, one at a time, to install the Kitura CLI:

  brew tap ibm-swift/kitura
  brew install kitura

By installing the Kitura CLI, you get the ability to generate starter projects from the command line, and you also get the built-in ability to build and run your app in a Docker container with kitura build and kitura run! This will come in handy later.

Running RazeKube

Now, build and run the starter app before diving into Kubernetes.

Navigate to the starter project's root directory in Terminal. To check, type the command ls and make sure you see Package.swift in the resulting output.

Enter swift build to make sure everything is OK, then enter swift run. The output should be similar to:

  [2019-07-10T15:26:56.591-05:00] [WARNING] [ConfigurationManager.swift:394 load(url:deserializerName:)] Unable to load data from URL /Users/davidokunibm/RayWenderlich/rw-cloud-native/final/RazeKube/config/mappings.json
[Wed Jul 10 15:26:56 2019] com.ibm.diagnostics.healthcenter.loader INFO: Swift Application Metrics
[2019-07-10T15:26:56.642-05:00] [INFO] [Metrics.swift:52 initializeMetrics(router:)] Initialized metrics.
[2019-07-10T15:26:56.648-05:00] [INFO] [HTTPServer.swift:237 listen(_:)] Listening on port 8080

Click Allow if you see this dialog asking if you want your app to accept incoming network connections:

 Allow inbound connection dialog

Now, in a browser, open localhost:8080 – you should see this website:

 Kitura HomePage up and running on your localhost

Finally, check to make sure the all-knowing Kube is still … omniscient: Navigate to localhost:8080/kubed?number=5 in your browser. You should see the following result:

 Showing the result of 5 cubes in the browser

If you see this, great job! Your starter project works as expected. Now you must deliberately sabotage the Kube. Don't worry – the mighty Kube will forgive you and show you the light in the end.

Crashing Your RazeKube

Note: You are going to create an .xcodeproj file for the starter project, so you can open it in Xcode. If you are using the Xcode 11 beta, I cannot guarantee that the entire tutorial will work, but you should be able to open the project in the Xcode 11 beta by double-clicking RazeKube.xcodeproj.

Alternatively, to ensure that the xed command opens Xcode 10, type this command:

  sudo xcode-select -s /Applications/Xcode.app/Contents/Developer

In Terminal press Control-C to stop the server, and then enter these commands:

  swift package generate-xcodeproj
  xed .

In Xcode, open Sources/Application/Routes/KubeRoutes.swift. This is a good time to take in the Kube's sheer power by examining the kubeHandler function!

After exhaling, add the following code at the end of initializeKubedRoutes(app:):

  app.router.get("/uhoh", handler: fatalHandler)

Here you declare that any GET request to the /uhoh path will be handled by the fatalHandler function.

To handle that route, add the following function at the bottom of this file:

  func fatalHandler(request: RouterRequest, response: RouterResponse,
                    next: () -> Void) {
    fatalError()
  }

Save your file and close Xcode. Although you could build and run this in Xcode if you wanted to, for the rest of this tutorial you will work almost exclusively in Terminal and a browser!

In Terminal type these two commands:

  swift build
  swift run

Open a web browser and confirm that localhost:8080 loads your homepage. Now for the fun part – navigate to localhost:8080/uhoh in your browser. Yikes! The Terminal process should freak out and tell you something like the following:

  Fatal error: file /Users/davidokunibm/RayWenderlich/rw-cloud-native/final/RazeKube/Sources/Application/Routes/KubeRoutes.swift, line 52
[1]    42560 illegal hardware instruction  swift run

And your browser doesn't look better:

 The browser couldn't display the UHOH route since the application crashed

For all the work Apple has done to make Swift a safe language that doesn't crash often, it is important to remember that crashes still happen, and as a developer you need to guard against them. This is where Kubernetes can help by restarting your app!

Kubernetes and the State Machine

At the heart of Kubernetes is the concept of managing state and how that state is defined. It's OK to think of the core of Kubernetes as a large database – you wouldn't be wrong!

This database is managed by something called etcd, which is itself a tool supported by the Cloud Native Computing Foundation. Operating Kubernetes is largely a matter of dictating the desired state to etcd using a command-line interface called kubectl. You can use .yaml or .json files to dictate the state of an app, or you can enter specific instructions into a command via kubectl. You're going to do a bit of both.

Note : Your RazeKube app uses something called Helm charts to manage your app in a Kubernetes cluster. You will learn what this does in a bit!

Here's what a YAML file describing the deployment of RazeKube looks like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: razekube
  labels:
    app: razekube
    version: "1.0.0"
spec:
  selector:
    matchLabels:
      app: razekube
  template:
    metadata:
      labels:
        app: razekube
    spec:
      containers:
      - name: razekube-swift-run
        image: razekube-swift-run
        ports:
        - name: http-server
          containerPort: 8080

Note the specification for containers towards the bottom – this means you must first create a container image for your app!
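
For reference, a manifest like this would typically be handed to the cluster with kubectl apply. The filename below is hypothetical – in this tutorial you will use kubectl commands and a Helm chart instead of applying this file directly:

  kubectl apply -f razekube-deployment.yaml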

Building and Running the RazeKube Docker Image

In Terminal make sure you are in the root directory of your app. Type the command kitura build and pour a cup of coffee – this may take a few minutes. You should see output like this:

You may receive an error saying "Failed to run IBM Cloud Developer Tools". If you receive this error, follow the instructions and run “kitura idt” to install IBM Cloud Developer Tools. When finished, enter kitura build to continue.

  Validating Docker image name
OK
Checking if Docker container razekube-swift-tools is running
OK
Deleting the container named 'razekube-swift-tools' ...
OK
Checking Docker image history to see if an image already exists
OK
Creating image razekube-swift-tools based on Dockerfile-tools ...
The image will have user davidokunibm with id 501 added

Executing docker image build --file Dockerfile-tools --tag razekube-swift-tools --rm --pull
--build-arg bx_dev_userid=501 --build-arg bx_dev_user=davidokunibm .

OK
Creating a container named 'razekube-swift-tools' from that image ...
OK
Starting container 'razekube-swift-tools' ...
OK
OK
Stopping container 'razekube-swift-tools' ...
OK

Kitura CLI makes your life easier while displaying the Docker commands it runs to build this image.

Then type the command kitura run – after about 30 seconds, you should see this output:

  The run-cmd option was not specified
Stopping the 'razekube-swift-run' container ...
OK
The container 'razekube-swift-run' is already stopped
Validating Docker image name
Binding IP and ports for Docker image.
OK
Checking if Docker container razekube-swift-run is running
OK
Deleting the container named 'razekube-swift-run' ...
OK
Checking Docker image history to see if an image already exists
OK
Creating image razekube-swift-run based on Dockerfile ...

Executing docker image build --file Dockerfile --tag razekube-swift-run --rm --pull .
OK
Creating a container named 'razekube-swift-run' from that image ...
OK
Starting container 'razekube-swift-run' ...
OK
Logs for the razekube-swift-run container:
[2019-07-10T21:06:23.250Z] [WARNING] [ConfigurationManager.swift:394 load(url:deserializerName:)] Unable to load data from URL /swift-project/config/mappings.json
[Wed Jul 10 21:06:23 2019] com.ibm.diagnostics.healthcenter.loader INFO: Swift Application Metrics
[2019-07-10T21:06:23.450Z] [INFO] [Metrics.swift:52 initializeMetrics(router:)] Initialized metrics.
[2019-07-10T21:06:23.456Z] [INFO] [HTTPServer.swift:237 listen(_:)] Listening on port 8080

These logs should look familiar – your API is now running in a Linux container via Docker!

Tagging Your RazeKube Docker Image

Open a web browser and navigate to localhost:8080 to make sure you can see the website. Then press Control-C in Terminal to stop the container.

Enter the command docker image ls – your output should look like this:

  REPOSITORY             TAG      IMAGE ID       CREATED         SIZE
razekube-swift-run     latest   eb85ef44e45f   2 minutes ago   598MB
razekube-swift-tools   latest   2008ae41e316   3 minutes ago   1.97GB

The Kitura CLI configures your app to use a different container – razekube-swift-tools – to compile your app than the one that eventually runs it – razekube-swift-run – all in the name of saving space on your deployment.

Finally, tag your image like this:

  docker tag razekube-swift-run razekube-swift-run:1.0.0

Type docker image ls again to make sure your razekube-swift-run:1.0.0 tag was created:

  REPOSITORY             TAG      IMAGE ID       CREATED         SIZE
razekube-swift-run     1.0.0    eb85ef44e45f   3 minutes ago   598MB
razekube-swift-run     latest   eb85ef44e45f   3 minutes ago   598MB
razekube-swift-tools   latest   2008ae41e316   4 minutes ago   1.97GB

OK, now it's time to put this in your Kubernetes cluster!

Deploying RazeKube to Kubernetes

First, type kubectl get all and kubectl get pods, and check that the output looks like this:

  ➜ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   19h
  ➜ kubectl get pods
No resources found.

In Kubernetes, a pod is the smallest deployable unit – a set of co-located containers. Observing a pod is similar to observing an app you deploy.

Create a RazeKube pod by entering the following command in Terminal :

  kubectl create deployment razekube --image=razekube-swift-run:1.0.0

Confirm that your app was deployed by running kubectl get pods, and check that your output looks like this:

  NAME                        READY   STATUS    RESTARTS   AGE
razekube-6dfd6844f7-74j7f   1/1     Running   0          26s

Kubernetes creates a unique identifier for each pod it runs, unless you specify otherwise. While it is great to see that your app is running, you still haven't configured a way to access it!
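
If you want to dig into a single pod in more detail – its image, ports, restart history and recent events – kubectl describe is a handy companion to kubectl get. Substitute the pod name from your own kubectl get pods output for the placeholder below:

  kubectl describe pod <your-pod-name>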

Creating a RazeKube Service

This is where Kubernetes starts to shine. Rather than taking control away from you, it gives you complete control over how end users access each deployment, through a service.

Add an access point for your app by creating a service like this:

  kubectl expose deployment razekube --type="NodePort" --port=8080

Now type kubectl get svc to get a list of the services currently exposed on Kubernetes, and you should see output like this:

  NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          20h
razekube     NodePort    10.105.98.111   <none>        8080:32612/TCP   1m

Note the PORT(S) column – Kubernetes has mapped port 8080 on your app to a randomly assigned port. This port will be different each time, so make sure you note which port Kubernetes opened for you. Open a web browser and navigate to that address, which would be localhost:32612 in my case. If you see the homepage, ask the Almighty Kube to demonstrate its power by navigating to localhost:32612/kubed?number=4 – you should see this:

 4 cubes running in Kubernetes

Nice! You are now running a Swift app on Kubernetes !!!

 The sun with some sunglasses on
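
As a small convenience, you can also ask kubectl for just the assigned NodePort instead of reading it out of the table, using a JSONPath query. This is optional, and nothing later in the tutorial depends on it:

  kubectl get service razekube -o jsonpath='{.spec.ports[0].nodePort}'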

Recovering From a Crash

Now you are going to test how Kubernetes makes things work for you. First, type kubectl get all in Terminal and you should see the following output:

  NAME                            READY   STATUS    RESTARTS   AGE
pod/razekube-6dfd6844f7-74j7f   1/1     Running   0          11m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          20h
service/razekube     NodePort    10.105.98.111   <none>        8080:32612/TCP   8m

NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/razekube   1         1         1            1           11m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/razekube-6dfd6844f7   1         1         1       11m

Notice how each component of your state is enumerated for you.

Then type the command kubectl get pods, but do not press Return yet. In a moment, here's what you will do:

  • Navigate to localhost:32612/uhoh in your browser, which deliberately crashes your app.
  • Press Return in Terminal and run the same kubectl get pods command repeatedly until you see that your STATUS is Running. Tip: Press the up arrow to display the previous command again.
  • Navigate to localhost:32612 in your browser.

As you continue to enter the command in Terminal, you will see your pod's state evolve like this:

  NAME                        READY   STATUS              RESTARTS   AGE
razekube-6dfd6844f7-74j7f   0/1     Error               0          17m

NAME                        READY   STATUS              RESTARTS   AGE
razekube-6dfd6844f7-74j7f   0/1     CrashLoopBackOff    0          17m

NAME                        READY   STATUS              RESTARTS   AGE
razekube-6dfd6844f7-74j7f   0/1     ContainerCreating   1          17m

NAME                        READY   STATUS              RESTARTS   AGE
razekube-6dfd6844f7-74j7f   1/1     Running             1          17m

As Kubernetes scans the state of everything in your cluster, it compares how things are – crashed – with how etcd says they should be. If there is a discrepancy, Kubernetes works to resolve the difference!

You dictated that there should be a working deployment called razekube, but by triggering the /uhoh route, that deployment no longer works. When Kubernetes notices that the non-functional state does not match the desired functional state in etcd, it redeploys the container to bring it back to a functional state. After the deployment has recovered, you can access your app to see that you are back in business!
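
Rather than re-running kubectl get pods by hand, you can also ask kubectl to stream state changes as they happen, which makes this reconciliation loop easy to observe. Press Control-C to stop watching:

  kubectl get pods --watch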

Deploying Replicas

Running/not running is not the only state that Kubernetes can manage. Imagine a scenario where a bunch of people have heard about the almighty Kube and want to check out its power. You need more than one instance of your app running at the same time to handle all that traffic!

In Terminal type the following command:

  kubectl scale --replicas=5 deployment razekube

Typically, with heavier apps, you could enter kubectl get pods to watch this scaling happen in real time. But this is a pretty lightweight app, so the change will happen almost immediately.

Enter kubectl get pods and kubectl get deployments to check out the new app state:

  ➜ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
razekube-6dfd6844f7-74j7f   1/1     Running   4          32m
razekube-6dfd6844f7-88wr7   1/1     Running   0          1m
razekube-6dfd6844f7-b4snx   1/1     Running   0          1m
razekube-6dfd6844f7-tn6mr   1/1     Running   0          1m
razekube-6dfd6844f7-vnr7w   1/1     Running   0          1m
  ➜ kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
razekube   5         5         5            5           33m

In this case, you have told etcd that the desired state of your cluster is to have five replicas of your razekube deployment.

Hit your /uhoh route a couple of times and type kubectl get pods over and over in Terminal to observe the state of your pods as Kubernetes works to maintain the dictated state!

Kubernetes can do so much more than just these two examples. You can do things like:

  • Manage TLS certificate secrets for encrypted traffic (see the sketch after this list).
  • Create an Ingress controller to control where certain traffic enters your cluster.
  • Handle load balancing so that deployments in the cluster receive equal amounts of traffic.
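
As a quick taste of the first item, here is a minimal sketch of creating a TLS secret with kubectl. The certificate and key file paths below are placeholders – generating them is outside the scope of this tutorial:

  kubectl create secret tls razekube-tls --cert=path/to/tls.crt --key=path/to/tls.key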

And because you have been working with a Docker container all along, this tooling is not specific to Swift – it works for any app you can put in Docker. :]

Clean Up

Rather than delving deeper into those capabilities, you will now learn how to consolidate all the steps you just worked through by using Helm! Before continuing, use kubectl to clean up the cluster like this:

  kubectl delete service razekube
  kubectl delete deployment razekube

When this is done, type kubectl get pods to make sure you have no resources running.

Helm: The Kubernetes Package Manager

Helm is a package manager designed to simplify the deployment of simple or complex apps to Kubernetes. Helm has two components you need to know about:

  • The client, referred to as helm, which runs on your command line and dictates deployment commands to your Kubernetes cluster.
  • The server, referred to as Tiller, which takes commands from helm and forwards them to Kubernetes.

Helm uses YAML and JSON files, called charts, to manage deployments to Kubernetes. An advantage of using the Kitura CLI is that the app generator creates these chart files for you!
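
If you are curious what the generated chart looks like on disk, you can list it from the project root. The exact contents can vary between Kitura CLI versions, but a Helm chart generally includes a Chart.yaml, a values.yaml and a templates directory:

  ls chart/razekube/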

What Is in a Chart?

In Terminal make sure you are in the root directory of the RazeKube app and type the following command:

  cat chart/razekube/values.yaml

Notice the format of this document, especially the top component:

replicaCount: 1
revisionHistoryLimit: 1
image:
  repository: razekube-swift-run
  tag: 1.0.0
  pullPolicy: Always
resources:
  requests:
    cpu: 200m
    memory: 300Mi
livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 10
service:
  name: swift
  type: NodePort
  servicePort: 8080

In this one file you define:

  • The number of replicas you want for your deployment.
  • The Docker image for the deployment you want to make.
  • The service and port you want to create to expose the deployment.

Remember how you had to configure each of these things individually with kubectl commands? This file allows you to do all of them with a single command!

Now, configure Helm to work with your Kubernetes cluster so it can quickly carry out your deployment commands.

Setting Up Helm and Tiller

Good news – helm is technically already installed, thanks to the Kitura CLI! However, the Kubernetes cluster is not yet configured to receive commands from Helm, which means you need to set up Tiller.

In Terminal, type the following command:

  helm init

If you see output that ends with "Happy Helming!", you're ready to go. Type helm version and make sure your client and server versions match, as follows:

  Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

Deploying RazeKube With Helm

Next, you are going to make two changes to your RazeKube chart. Navigate to chart/razekube and open values.yaml in a text editor of your choice.

Note: It is critical that you make sure your spacing for text in these YAML documents is perfectly aligned. YAML can be frustrating to work with due to this need, but the hierarchy of components in a Helm chart is easy to see this way.

Update lines 3 and 8 of this file so that they look like this:

replicaCount: 5
revisionHistoryLimit: 1
image:
  repository: razekube-swift-run
  tag: 1.0.0
  pullPolicy: IfNotPresent

Here’s what you just updated:

  • Rather than deploying one replica of your app first and then scaling to five, you are declaring that your desired state should contain five replicas of your deployment.
  • Also, rather than always pulling the image from a remote container registry, you now only look for a remote version of the container image if it is not already present in your local Docker image cache. You could point this at any remote image you have access to, but since this image is available locally, you are choosing to use what is present.

Save this file, and navigate back to the root directory of your app in Terminal. Enter the following command to do everything at once:

helm install -n razekube-app chart/razekube/

Behold Your Charted RazeKube!

After you run this command, Helm will give you output that should look very very similar to what you get when using kubectl to check your app status:

NAME:   razekube-app
LAST DEPLOYED: Wed Jul 10 17:29:15 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                          TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)         AGE
razekube-application-service  NodePort  10.105.48.55  <none>       8080:32086/TCP  1s

==> v1beta1/Deployment
NAME                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
razekube-deployment  5        0        0           0          1s

==> v1/Pod(related)
NAME                                  READY  STATUS             RESTARTS  AGE
razekube-deployment-7f5694f847-9qnzc  0/1    Pending            0         0s
razekube-deployment-7f5694f847-9zfb8  0/1    Pending            0         0s
razekube-deployment-7f5694f847-dfp9v  0/1    ContainerCreating  0         0s
razekube-deployment-7f5694f847-pxn67  0/1    Pending            0         0s
razekube-deployment-7f5694f847-v5bq2  0/1    Pending            0         0s

Look at you! That was quite a bit easier than all those kubectl commands, wasn’t it? It’s important to know how kubectl works, but it’s equally as important to know that you can combine all of the work that those commands do into a Helm chart.

In my example, look at the port that was assigned to the service: 32086. This means that my app should be available at localhost:32086. Open a web browser and navigate to the app at the port open on your service:

Nice work! Now, just like before, access the /uhoh route for your port, and notice how the app crashes. Then access your homepage or the /kubed?number=4 route again, and notice that your app is back up and running!

In Terminal, enter the command helm list – your output should look like this:

NAME          REVISION  UPDATED                   STATUS    CHART           APP VERSION  NAMESPACE
razekube-app  1         Wed Jul 10 17:29:15 2019  DEPLOYED  razekube-1.0.0               default

This shows you the status of your deployments with Helm.

Now, run kubectl get all to look at your output:

NAME                                       READY     STATUS    RESTARTS   AGE
pod/razekube-deployment-7f5694f847-9qnzc   1/1       Running   3          7m
pod/razekube-deployment-7f5694f847-9zfb8   1/1       Running   2          7m
pod/razekube-deployment-7f5694f847-dfp9v   1/1       Running   2          7m
pod/razekube-deployment-7f5694f847-pxn67   1/1       Running   2          7m
pod/razekube-deployment-7f5694f847-v5bq2   1/1       Running   3          7m

NAME                                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/kubernetes                     ClusterIP   10.96.0.1      <none>        443/TCP          21h
service/razekube-application-service   NodePort    10.105.48.55   <none>        8080:32086/TCP   7m

NAME                                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/razekube-deployment   5         5         5            5           7m

NAME                                             DESIRED   CURRENT   READY     AGE
replicaset.apps/razekube-deployment-7f5694f847   5         5         5         7m

Helm gives you a powerful tool to make deploying and managing your apps much easier than if you only had access to kubectl. Again, it’s still important to have access to kubectl and to have a working understanding of it, so you can configure individual components of your app. More importantly, you can use these commands to learn how to automate your deployments with Helm too!
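
For example, if you later change values.yaml – say, to a different replica count – you can roll that change out to the existing release with helm upgrade instead of deleting and reinstalling it. This is a sketch using the Helm 2 syntax that matches the helm version shown earlier:

  helm upgrade razekube-app chart/razekube/
  helm list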

To clean up, type helm delete razekube-app and use either helm list or kubectl get all to check the status of everything after it’s been cleaned up.

Where to Go From Here?

You can download the final project using the Download Materials button at the top or bottom of this tutorial.

Thankfully, both inside and outside of the Swift community, you have a plethora of resources at your fingertips to learn more about how you can manage these tools with your Swift REST APIs.

You’ve probably heard about these books by now, but both our Vapor and Kitura books talk about using industry standard tools like Docker. The Kitura book specifically touches on using Nginx as an Ingress controller, and Prometheus and Grafana for performance monitoring. Also, tools like Appsody exist to make the integration of these tools easy! Additionally, you can try another Kitura tutorial on Github to learn how to deploy your own PostgreSQL database into Kubernetes, as well as an API that works with it.

Please write to us in the forums below if you have more questions, or want to ask about other tools that exist in this space!

