Open source project: eon01/kubernetes-workshop
Repository URL: https://github.com/eon01/kubernetes-workshop
Primary language: Python (83.5%)
Introduction

In this workshop, we're going to develop, containerize, and deploy a small Python application to Kubernetes.
We will start by developing and then deploying a simple Python application (a Flask API that returns the list of trending repositories by programming language).

Development Environment

We are going to use Python 3.6.7. We are using Ubuntu 18.04, which comes with Python 3.6 by default; you should be able to invoke it with the command python3. (Ubuntu 17.10 and above also come with Python 3.6.7.) If you use Ubuntu 16.10 or 17.04, you should be able to install it with the following commands:

sudo apt-get update
sudo apt-get install python3.6

If you are using Ubuntu 14.04 or 16.04, you need to get Python 3 from a Personal Package Archive (PPA):

sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install python3.6

For other operating systems, visit this guide, follow the instructions, and install Python 3. Now install pip, the package manager:

sudo apt-get install python3-pip

Follow this with the installation of virtualenvwrapper, a virtual environment manager:

sudo pip3 install virtualenvwrapper

Create a folder for your virtualenvs (I use ~/dev/PYTHON_ENVS) and set it as WORKON_HOME:

mkdir ~/dev/PYTHON_ENVS
export WORKON_HOME=~/dev/PYTHON_ENVS

In order to source the environment details when the user logs in, add the following lines to ~/.bashrc:

source "/usr/local/bin/virtualenvwrapper.sh"
export WORKON_HOME="~/dev/PYTHON_ENVS"

Make sure to adapt WORKON_HOME to your real WORKON_HOME. Now we need to create and then activate the new environment:

mkvirtualenv --python=/usr/bin/python3 trendinggitrepositories
workon trendinggitrepositories

Let's create the application directories:

mkdir trendinggitrepositories
cd trendinggitrepositories
mkdir api
cd api

Once the virtual environment is activated, we can install Flask:

pip install flask

Developing a Trending Git Repositories API (Flask)

Inside the api folder, create the application file with the following code:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return "Hello, World!"

if __name__ == '__main__':
    app.run(debug=True)

This will return a hello world message when a user requests the "/" route. Now run it using:
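The command itself is not shown in the text; assuming the code above was saved as app.py (the file name is an assumption, not taken from the original), the development server can be launched with:

python app.py

By default, Flask's development server listens on http://127.0.0.1:5000.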
We now need to install PyGithub, since we need it to communicate with the Github API v3:

pip install PyGithub

Go to Github and create a new app. We will need the application "Client ID" and "Client Secret":

from github import Github
g = Github("xxxxxxxxxxxxx", "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

This is what the mini API looks like:

from flask import Flask, jsonify, abort
import urllib.request, json
from flask import request

app = Flask(__name__)

from github import Github
g = Github("xxxxxx", "xxxxxxxxxxxxx")

@app.route('/')
def get_repos():
    r = []
    try:
        args = request.args
        n = int(args['n'])
    except (ValueError, LookupError) as e:
        abort(jsonify(error="No integer provided for argument 'n' in the URL"))
    repositories = g.search_repositories(query='language:python')[:n]
    for repo in repositories:
        with urllib.request.urlopen(repo.url) as url:
            data = json.loads(url.read().decode())
            r.append(data)
    return jsonify({'repos': r})

if __name__ == '__main__':
    app.run(debug=True)

Let's hide the Github token and secret, as well as other variables, in the environment:

from flask import Flask, jsonify, abort, request
import urllib.request, json, os
from github import Github

app = Flask(__name__)

CLIENT_ID = os.environ['CLIENT_ID']
CLIENT_SECRET = os.environ['CLIENT_SECRET']
DEBUG = os.environ['DEBUG']

g = Github(CLIENT_ID, CLIENT_SECRET)

@app.route('/')
def get_repos():
    r = []
    try:
        args = request.args
        n = int(args['n'])
    except (ValueError, LookupError) as e:
        abort(jsonify(error="No integer provided for argument 'n' in the URL"))
    repositories = g.search_repositories(query='language:python')[:n]
    for repo in repositories:
        with urllib.request.urlopen(repo.url) as url:
            data = json.loads(url.read().decode())
            r.append(data)
    return jsonify({'repos': r})

if __name__ == '__main__':
    app.run(debug=DEBUG)

The code above will return the top "n" repositories using Python as a programming language. We can use other languages too:

from flask import Flask, jsonify, abort, request
import urllib.request, json, os
from github import Github

app = Flask(__name__)

CLIENT_ID = os.environ['CLIENT_ID']
CLIENT_SECRET = os.environ['CLIENT_SECRET']
DEBUG = os.environ['DEBUG']

g = Github(CLIENT_ID, CLIENT_SECRET)

@app.route('/')
def get_repos():
    r = []
    try:
        args = request.args
        n = int(args['n'])
        l = args['l']
    except (ValueError, LookupError) as e:
        abort(jsonify(error="Please provide 'n' and 'l' parameters"))
    repositories = g.search_repositories(query='language:' + l)[:n]
    try:
        for repo in repositories:
            with urllib.request.urlopen(repo.url) as url:
                data = json.loads(url.read().decode())
                r.append(data)
        return jsonify({
            'repos': r,
            'status': 'ok'
        })
    except IndexError as e:
        return jsonify({
            'repos': r,
            'status': 'ko'
        })

if __name__ == '__main__':
    app.run(debug=DEBUG)

In a .env file, add the variables you want to use:
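The file contents are not reproduced here; a minimal sketch, assuming the variables used above (the values are placeholders, and the export keyword is needed so that sourcing the file makes them visible to the Flask process):

export CLIENT_ID="xxxxxxxxxxxxx"
export CLIENT_SECRET="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export DEBUG="True"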
Before running the Flask application, you need to source these variables:

source .env

Now you can call the API from your browser or with curl.
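For example, with the development server listening on its default address (127.0.0.1:5000), a request for the top 10 Python repositories would look like this (the parameter values are only an illustration):

curl "http://127.0.0.1:5000/?n=10&l=python"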
The list is long, but our mini API is working fine. Now, let's freeze the dependencies:

pip freeze > requirements.txt

Before running the API on Kubernetes, let's create a Dockerfile. This is a typical Dockerfile for a Python app:
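The original Dockerfile is not reproduced here; a minimal sketch along the same lines might look like the following (the base image, the /app layout, and the app.py entrypoint are assumptions):

FROM python:3.6
# Copy the application code into the image (see the security notice below about .dockerignore)
COPY . /app
WORKDIR /app
# Install the frozen dependencies
RUN pip install -r requirements.txt
# Run the Flask development server (replaced by Gunicorn later in this workshop)
CMD ["python", "app.py"]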
Now you can build it:

docker build --no-cache -t tgr .

Then run it:

docker rm -f tgr
docker run -it --name tgr -p 5000:5000 -e CLIENT_ID="xxxxxxx" -e CLIENT_SECRET="xxxxxxxxxxxxxxx" -e DEBUG="True" tgr

Let's include some other variables as environment variables:

from flask import Flask, jsonify, abort, request
import urllib.request, json, os
from github import Github

app = Flask(__name__)

CLIENT_ID = os.environ['CLIENT_ID']
CLIENT_SECRET = os.environ['CLIENT_SECRET']
DEBUG = os.environ['DEBUG']
HOST = os.environ['HOST']
# Cast to int, since environment variables are strings
PORT = int(os.environ['PORT'])

g = Github(CLIENT_ID, CLIENT_SECRET)

@app.route('/')
def get_repos():
    r = []
    try:
        args = request.args
        n = int(args['n'])
        l = args['l']
    except (ValueError, LookupError) as e:
        abort(jsonify(error="Please provide 'n' and 'l' parameters"))
    repositories = g.search_repositories(query='language:' + l)[:n]
    try:
        for repo in repositories:
            with urllib.request.urlopen(repo.url) as url:
                data = json.loads(url.read().decode())
                r.append(data)
        return jsonify({
            'repos': r,
            'status': 'ok'
        })
    except IndexError as e:
        return jsonify({
            'repos': r,
            'status': 'ko'
        })

if __name__ == '__main__':
    app.run(debug=DEBUG, host=HOST, port=PORT)

For security reasons, let's change the user inside the container from root to a less-privileged user that we create:
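The exact Dockerfile change is not shown in the text; the usual pattern is to create a dedicated user and switch to it, along these lines (the user name and UID are arbitrary choices):

# Create an unprivileged user and give it ownership of the application folder
RUN useradd --create-home --uid 1001 appuser && chown -R appuser /app
# All following instructions, and the running container, use this user
USER appuser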
Now, if we want to run the container, we need to add many environment variables to the docker run command. An easier solution is to use:

docker run -it --env-file .env my_container

Our .env file looks like the following one:
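Along these lines; note that docker's --env-file expects plain KEY=VALUE lines, without export and without quotes (quotes would become part of the value):

CLIENT_ID=xxxxxxx
CLIENT_SECRET=xxxxxxxxxxxxxxx
DEBUG=True
HOST=0.0.0.0
PORT=5000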
After this modification, rebuild the image and run it again:

docker rm -f tgr
docker run -it --name tgr -p 5000:5000 --env-file .env tgr

Our application currently runs using Flask's built-in development server. A production server typically receives abuse from spammers and script kiddies, and should be able to handle high traffic; in our case, a good solution is to use a WSGI HTTP server like Gunicorn (or uWSGI). First, let's install it:

pip install gunicorn

This is why we are going to change our Dockerfile:
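The modified Dockerfile is not reproduced here; a sketch of the idea, reusing the assumed layout from the earlier Dockerfile sketch, would be:

FROM python:3.6
COPY . /app
WORKDIR /app
# requirements.txt must now include gunicorn (re-run pip freeze after installing it)
RUN pip install -r requirements.txt
RUN useradd --create-home --uid 1001 appuser && chown -R appuser /app
USER appuser
# Start Gunicorn instead of the Flask development server; app:app means "the app object in app.py"
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]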
In order to optimize the WSGI server, we need to set the number of its workers and threads based on the number of CPU cores. This is why we are going to create another Python configuration file for Gunicorn:

import multiprocessing

workers = multiprocessing.cpu_count() * 2 + 1
threads = 2 * multiprocessing.cpu_count()

In the same file, we are going to include other configurations of Gunicorn:

from os import environ as env

# The default port must be a string, since it is concatenated with the host
bind = env.get("HOST", "0.0.0.0") + ":" + env.get("PORT", "5000")

This is the final configuration file:

import multiprocessing
from os import environ as env

workers = multiprocessing.cpu_count() * 2 + 1
threads = 2 * multiprocessing.cpu_count()
bind = env.get("HOST", "0.0.0.0") + ":" + env.get("PORT", "5000")

In consequence, we should adapt the Dockerfile to the new Gunicorn configuration by changing the last line to:
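The exact line is not shown in the text; assuming the configuration file is saved as config.py next to the application (the file name is an assumption), the CMD would become something like:

CMD ["gunicorn", "--config", "config.py", "app:app"]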
Now, build the image again with the new configuration.

Pushing the Image to a Remote Registry

A Docker registry is a storage and distribution system for named Docker images. The images we built are stored in our local environment and can only be used if you deploy locally. However, if you choose to deploy a Kubernetes cluster in the cloud or in any other environment, these images will not be found. This is why we need to push the built images to a remote registry. Think of container registries as a Git-like system for Docker images. There are plenty of container registries to choose from (Docker Hub, Google Container Registry, Amazon ECR, Quay.io, and others).
You can also host your own private container registry, with OAuth, LDAP, and Active Directory authentication support, using the registry provided by Docker:

docker run -d -p 5000:5000 --restart=always --name registry registry:2

More about self-hosting a registry can be found in the official Docker documentation. We are going to use Docker Hub, so you need to create an account on hub.docker.com. Now, using the Docker CLI, log in:

docker login

Now rebuild the image using the new tag. Example:

docker build -t eon01/tgr:1 .

Finally, push the image:

docker push eon01/tgr:1

A Security Notice

Many public (and even private) Docker images seem to be secure, but that is not always the case. When we built our image, we told Docker to copy all the files from the application folder into the image, and we pushed it to an external public registry:

COPY . .

or

ADD . .

The above commands will even copy sensitive files such as the .env file that contains our Github token and secret. A good solution is to tell Docker to ignore these files during the build using a .dockerignore file:
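A small .dockerignore covering the obvious candidates might look like this (the exact entries depend on your project):

.env
.git
__pycache__/
*.pyc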
At this stage, you should remove any image that you pushed to a remote registry, reset the Github tokens, and build the new image without any cache:

docker build -t eon01/tgr:1 . --no-cache

Push it again:

docker push eon01/tgr:1

Installing Minikube

One of the fastest ways to try Kubernetes is using Minikube, which will create a virtual machine for you and deploy a ready-to-use Kubernetes cluster. Before you begin the installation, you need to make sure that your laptop supports virtualization. If you're using Linux, run the following command and make sure that the output is not empty:

grep -E --color 'vmx|svm' /proc/cpuinfo

Mac users should execute:

sysctl -a | grep -E --color 'machdep.cpu.features|VMX'

If you see VMX in the output, virtualization is supported. Windows users can check with the systeminfo command.
If everything is okay, you need to install a hypervisor; there are several possibilities (VirtualBox, VMware, KVM, Hyper-V, and so on). Some of these hypervisors are only compatible with certain operating systems, like Hyper-V (formerly known as Windows Server Virtualization) for Windows. VirtualBox, however, is cross-platform, and this is why we are going to use it here. Make sure to follow the instructions to install it. Now, install Minikube.

Linux systems:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
sudo install minikube /usr/local/bin

macOS:

brew cask install minikube

or:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 && chmod +x minikube
sudo mv minikube /usr/local/bin

Windows: use Chocolatey as an administrator:

choco install minikube

Or use the installer binary. Minikube does not support all Kubernetes features (load balancing, for example); however, it supports the most important ones, such as DNS, NodePorts, ConfigMaps and Secrets, dashboards, container runtimes, CNI, and Ingress.
You can also enable different addons, such as the dashboard, ingress, metrics-server, or registry; addons are managed with the minikube addons command, as shown below.
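For example (a quick illustration of the addon commands; the ingress addon is just one possible choice):

minikube addons list
minikube addons enable ingress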
If you run:

minikube start -p workshop --extra-config=apiserver.enable-swagger-ui=true --alsologtostderr

a cluster profile named "workshop" will be created. You have plenty of other options when starting a Minikube cluster; you can, for instance, choose the Kubernetes version and the VM driver:

minikube start --kubernetes-version="v1.12.0" --vm-driver="virtualbox"

Start the new cluster:

minikube start -p workshop --extra-config=apiserver.enable-swagger-ui=true --alsologtostderr

You can get detailed information about the cluster using:

kubectl cluster-info

If you didn't install kubectl, follow the official instructions. You can open the dashboard using the minikube dashboard command (minikube dashboard -p workshop).

Deploying to Kubernetes

We have three main ways to deploy our container to Kubernetes and scale it to N replicas. The first one is the original form of replication in Kubernetes, called a Replication Controller. Even though Replica Sets have replaced it, it is still used in some code bases. This is a typical example:

apiVersion: v1
kind: ReplicationController
metadata:
  name: app
spec:
  replicas: 3
  selector:
    app: app
  template:
    metadata:
      name: app
      labels:
        app: app
    spec:
      containers:
      - name: tgr
        image: reg/app:v1
        ports:
        - containerPort: 80

We can also use Replica Sets, another way to deploy an app and replicate it:

apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
        environment: dev
    spec:
      containers:
      - name: app
        image: reg/app:v1
        ports:
        - containerPort: 80

Replica Sets and Replication Controllers do almost the same thing: they ensure that you have a specified number of pod replicas running at any given time in your cluster. There are, however, some differences. As you may notice, Replica Sets use set-based selectors while Replication Controllers use equality-based selectors. Selectors match Kubernetes objects (like pods) using the constraints of the specified label, and we are going to see an example in a Deployment specification file. Label selectors with equality-based requirements use three operators: =, ==, and !=.
In the last example, we used the matchLabels notation (app: app); a sketch contrasting the two selector styles follows.
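This sketch is not taken from the original text; it simply illustrates the two selector styles side by side, with hypothetical label values:

# Equality-based (Replication Controller style): plain key/value pairs
selector:
  app: app
  environment: dev

# Set-based (Replica Set / Deployment style): matchLabels and matchExpressions
selector:
  matchLabels:
    app: app
  matchExpressions:
  - {key: environment, operator: In, values: [dev, staging]}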