intel/intel-device-plugins-for-kubernetes: Collection of Intel device plugins for Kubernetes


Open-source project name:

intel/intel-device-plugins-for-kubernetes

Open-source project URL:

https://github.com/intel/intel-device-plugins-for-kubernetes

Primary language:

Go 89.2%

Project introduction:

Overview


This repository contains a framework for developing plugins for the Kubernetes device plugins framework, along with a number of device plugin implementations utilizing that framework.

The v0.24 release is the latest feature release with its documentation available here.


Prerequisites

Prerequisites for building and running these device plugins include appropriate hardware, a fully configured Kubernetes cluster, and a working Go environment.

Plugins

The sections below detail the existing plugins developed using the framework.

GPU Device Plugin

The GPU device plugin provides access to discrete (Intel® Iris® Xe MAX) and integrated GPU hardware device files.

The demo subdirectory contains both a GPU plugin demo video and an OpenCL sample deployment (intelgpu-job.yaml).
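As a sketch of what such a deployment looks like (the image name and registry below are placeholders, not the demo's actual image), a Pod can request the i915 resource advertised by the GPU plugin:

apiVersion: v1
kind: Pod
metadata:
  name: intelgpu-demo
spec:
  containers:
    - name: opencl-demo
      image: <registry>/opencl-demo:<version>  # placeholder; see intelgpu-job.yaml for the real sample
      resources:
        limits:
          gpu.intel.com/i915: 1  # request one GPU device file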

FPGA Device Plugin

The FPGA device plugin supports FPGA passthrough for the following hardware:

  • Intel® Arria® 10 devices
  • Intel® Stratix® 10 devices

The FPGA plugin comes in three parts.

Refer to each sub-component's documentation for more details. Brief overviews of the sub-components are given below.

The demo subdirectory contains a video showing deployment and use of the FPGA plugin. Sources relating to the demo can be found in the opae-nlb-demo subdirectory.

Device Plugin

The FPGA device plugin is responsible for discovering and reporting FPGA devices to kubelet.

Admission Controller

The FPGA admission controller webhook is responsible for performing mapping from user-friendly function IDs to the Interface ID and Bitstream ID that are required for FPGA programming. It also implements access control by namespacing FPGA configuration information.

CRI-O Prestart Hook

The FPGA prestart CRI-O hook performs discovery of the requested FPGA function bitstream and programs FPGA devices based on the environment variables in the workload description.
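As an illustrative sketch only (the resource name below is a hypothetical user-friendly function ID; real IDs come from the admission controller's mappings), a workload requests an FPGA function like any other extended resource:

apiVersion: v1
kind: Pod
metadata:
  name: fpga-demo
spec:
  containers:
    - name: fpga-demo
      image: <registry>/opae-nlb-demo:<version>  # placeholder image
      resources:
        limits:
          # Hypothetical function ID; the admission controller maps it
          # to the Interface ID and Bitstream ID used for programming.
          fpga.intel.com/arria10-nlb0: 1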

QAT Device Plugin

The QAT device plugin supports Intel QuickAssist Technology (QAT) adapters and includes code showing deployment via DPDK.

The demo subdirectory includes details of both a QAT DPDK demo and a QAT OpenSSL demo. Source for the OpenSSL demo can be found in the relevant subdirectory.

Details for integrating the QAT device plugin into Kata Containers can be found in the Kata Containers documentation repository.
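As a sketch (the image is a placeholder; crypto-perf-dpdk-pod-requesting-qat.yaml in the demo subdirectory is the real example), a Pod requesting a generic QAT resource looks like:

apiVersion: v1
kind: Pod
metadata:
  name: qat-dpdk-demo
spec:
  containers:
    - name: crypto-perf
      image: <registry>/crypto-perf:<version>  # placeholder image
      resources:
        limits:
          qat.intel.com/generic: 1  # one QAT device/VF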

VPU Device Plugin

The VPU device plugin supports the Intel VCAC-A card (https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/media-analytics-vcac-a-accelerator-card-by-celestica-datasheet.pdf). The card has:

  • 1 Intel Core i3-7100U processor
  • 12 MyriadX VPUs
  • 8GB DDR4 memory

The demo subdirectory includes details of an OpenVINO deployment and use of the VPU plugin. Sources can be found in openvino-demo.
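A minimal sketch of a Pod requesting the hddl resource advertised by the VPU plugin (the image is a placeholder; see openvino-demo for the real sources):

apiVersion: v1
kind: Pod
metadata:
  name: intelvpu-demo
spec:
  containers:
    - name: openvino-demo
      image: <registry>/openvino-demo:<version>  # placeholder image
      resources:
        limits:
          vpu.intel.com/hddl: 1  # request one HDDL device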

SGX Device Plugin

The SGX device plugin allows workloads to use Intel® Software Guard Extensions (Intel® SGX) on platforms with SGX Flexible Launch Control enabled, for example:

  • 3rd Generation Intel® Xeon® Scalable processor family, code-named “Ice Lake”
  • Intel® Xeon® E3 processor
  • Intel® NUC Kit NUC7CJYH

The Intel SGX plugin comes in three parts.

The demo subdirectory contains a video showing the deployment and use of the Intel SGX device plugin. Sources relating to the demo can be found in the sgx-sdk-demo and sgx-aesmd-demo subdirectories.

Brief overviews of the Intel SGX sub-components are given below.

Device Plugin

The SGX device plugin is responsible for discovering and reporting Intel SGX device nodes to kubelet.

Containers requesting Intel SGX resources in the cluster should not use the device plugin's resources directly.

Intel SGX Admission Webhook

The Intel SGX admission webhook is responsible for performing Pod mutations based on the sgx.intel.com/quote-provider pod annotation set by the user. The purpose of the webhook is to hide the details of setting the necessary device resources and volume mounts for using Intel SGX remote attestation in the cluster. Furthermore, the Intel SGX admission webhook is responsible for writing a pod/sandbox sgx.intel.com/epc annotation that is used by Kata Containers to dynamically adjust its virtualized Intel SGX encrypted page cache (EPC) bank(s) size.
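As a sketch of how a workload opts in (the annotation value below is illustrative; it names the container acting as the quote provider), the annotation is set in the Pod metadata:

apiVersion: v1
kind: Pod
metadata:
  name: sgx-attestation-demo
  annotations:
    # Illustrative value; the webhook mutates the Pod based on this
    # annotation to wire up the SGX remote-attestation resources.
    sgx.intel.com/quote-provider: aesmd
spec:
  containers:
    - name: sgx-demo
      image: <registry>/sgx-sdk-demo:<version>  # placeholder image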

The Intel SGX admission webhook is available as part of the Intel Device Plugin Operator or as a standalone SGX admission webhook image.

Intel SGX EPC memory registration

The Intel SGX EPC memory available on each node is registered as a Kubernetes extended resource using node-feature-discovery (NFD). A custom NFD source hook is installed as part of SGX device plugin operator deployment and NFD is configured to register the Intel SGX EPC memory extended resource reported by the hook.

Containers requesting Intel SGX EPC resources in the cluster use the sgx.intel.com/epc resource, which is of type memory.
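For example, a container can request EPC memory with ordinary memory units (the 512Ki figure below is illustrative):

resources:
  limits:
    sgx.intel.com/epc: "512Ki"  # Intel SGX EPC request, of type memory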

DSA Device Plugin

The DSA device plugin supports acceleration using the Intel Data Streaming Accelerator (DSA).

DLB Device Plugin

The DLB device plugin supports the Intel Dynamic Load Balancer (DLB) accelerator.

IAA Device Plugin

The IAA device plugin supports acceleration using the Intel Analytics Accelerator (IAA).

Device Plugins Operator

To simplify the deployment of the device plugins, a unified device plugins operator is implemented.

Currently the operator has support for the DSA, DLB, FPGA, GPU, IAA, QAT, and Intel SGX device plugins. Each device plugin has its own custom resource definition (CRD) and a corresponding controller that watches CRUD operations on those custom resources.
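As a sketch of the pattern (the spec fields shown are indicative only; consult the operator README and the CRDs for the full schema), deploying a plugin through the operator amounts to creating a custom resource such as:

apiVersion: deviceplugin.intel.com/v1
kind: GpuDevicePlugin
metadata:
  name: gpudeviceplugin-sample
spec:
  image: intel/intel-gpu-plugin:0.24.0  # assumed release-tagged plugin image
  sharedDevNum: 1  # indicative field: how many pods may share one device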

The Device plugins operator README gives the installation and usage details. The operator is also available via operatorhub.io and on Red Hat OpenShift Container Platform.

Demos

The demo subdirectory contains a number of demonstrations for a variety of the available plugins.

Workload Authors

For workloads to get access to devices managed by the plugins, the Pod spec must specify the hardware resources needed:

spec:
  containers:
    - name: demo-container
      image: <registry>/<image>:<version>
      resources:
        limits:
          <device namespace>/<resource>: X

The summary of resources available via plugins in this repository is given in the table below.

Device Namespace   Registered Resource(s)          Example(s)
dlb.intel.com      pf or vf                        dlb-libdlb-demo-pod.yaml
dsa.intel.com      wq-user-[shared or dedicated]   dsa-accel-config-demo-pod.yaml
fpga.intel.com     custom, see mappings            intelfpga-job.yaml
gpu.intel.com      i915                            intelgpu-job.yaml
iaa.intel.com      wq-user-[shared or dedicated]   iaa-qpl-demo-pod.yaml
qat.intel.com      generic or cy/dc/sym/asym       crypto-perf-dpdk-pod-requesting-qat.yaml
sgx.intel.com      epc                             intelsgx-job.yaml
vpu.intel.com      hddl                            intelvpu-job.yaml
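For instance, filling in the template above for a DSA shared work queue from the table (the image is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: dsa-demo
spec:
  containers:
    - name: dsa-demo
      image: <registry>/accel-config-demo:<version>  # placeholder image
      resources:
        limits:
          dsa.intel.com/wq-user-shared: 1  # one shared user work queue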

Developers

For information on how to develop a new plugin using the framework, see the Developers Guide and the code in the device plugins pkg directory.

Running E2E Tests

Currently, the E2E tests require a Kubernetes cluster that is already configured, with nodes that have the hardware required by the device plugins. All the container images with the executables under test must also be available in the cluster. If these two conditions are satisfied, run the tests with:

$ go test -v ./test/e2e/...

In case you want to run only certain tests, e.g., QAT ones, run:

$ go test -v ./test/e2e/... -args -ginkgo.focus "QAT"

If you need to specify the path to a custom kubeconfig containing embedded authentication info, add the -kubeconfig argument:

$ go test -v ./test/e2e/... -args -kubeconfig /path/to/kubeconfig

The full list of available options can be obtained with:

$ go test ./test/e2e/... -args -help

It is possible to run the tests which don't depend on hardware without a pre-configured Kubernetes cluster. Just make sure you have Kind installed on your host and run:

$ make test-with-kind

Running Controller Tests with a Local Control Plane

The controller-runtime library provides a package for integration testing by starting a local control plane. The package is called envtest, and the operator uses it for its integration testing. See envtest's documentation for how to set it up properly; essentially, you need the etcd and kube-apiserver binaries available on your host. By default they are expected to be located at /usr/local/kubebuilder/bin, but you can store them anywhere by setting the KUBEBUILDER_ASSETS environment variable. If you have the binaries copied to ${HOME}/work/kubebuilder-assets, run the tests with:

$ KUBEBUILDER_ASSETS=${HOME}/work/kubebuilder-assets make envtest

Supported Kubernetes Versions

Releases are made under the GitHub releases area. Supported releases and matching Kubernetes versions are listed below:

Branch         Kubernetes branch/version         Status
release-0.24   Kubernetes 1.24 branch v1.24.x    supported
release-0.23   Kubernetes 1.23 branch v1.23.x    supported
release-0.22   Kubernetes 1.22 branch v1.22.x    supported
release-0.21   Kubernetes 1.21 branch v1.21.x    unsupported
release-0.20   Kubernetes 1.20 branch v1.20.x    unsupported
release-0.19   Kubernetes 1.19 branch v1.19.x    unsupported
release-0.18   Kubernetes 1.18 branch v1.18.x    unsupported
release-0.17   Kubernetes 1.17 branch v1.17.x    unsupported
release-0.15   Kubernetes 1.15 branch v1.15.x    unsupported
release-0.11   Kubernetes 1.11 branch v1.11.x    unsupported

Pre-built plugin images

Pre-built images of the plugins are available on Docker Hub. These images are automatically built and uploaded to the hub from the latest main branch of this repository.

Release-tagged images of the components are also available on Docker Hub, tagged with their release version numbers in the format x.y.z, corresponding to the branches and releases in this repository.

Note: the default deployment files and operators are configured with imagePullPolicy IfNotPresent and can be changed with scripts/set-image-pull-policy.sh.

License

All of the source code required to build intel-device-plugins-for-kubernetes is available under open-source licenses. The source code files identify the external Go modules used. Binaries are distributed as container images on Docker Hub; those images contain license texts and source code under /licenses.

Security

Reporting a Potential Security Vulnerability: If you have discovered a potential security vulnerability in this project, please send an e-mail to [email protected]. Encrypt sensitive information using our PGP public key.

Please provide as much information as possible, including:

  • The projects and versions affected
  • Detailed description of the vulnerability
  • Information on known exploits

A member of the Intel Product Security Team will review your e-mail and contact you to collaborate on resolving the issue. For more information on how Intel works to resolve security issues, see Vulnerability Handling Guidelines.

Related Code

A related Intel SRIOV network device plugin can be found in this repository.

Helm Charts

Helm charts are one way of distributing the Kubernetes resources of the device plugins framework.



