Open-source project: open-telemetry/opentelemetry-operator
Repository: https://github.com/open-telemetry/opentelemetry-operator
Primary language: Go (97.1%)

OpenTelemetry Operator for Kubernetes

The OpenTelemetry Operator is an implementation of a Kubernetes Operator. The operator manages:

- the OpenTelemetry Collector
- auto-instrumentation of workloads using OpenTelemetry instrumentation libraries
Getting started

To install the operator in an existing cluster, make sure you have cert-manager installed, then run:

kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
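If cert-manager is not present in the cluster yet, it can be installed from its release manifest first. This is a sketch; pinning a specific cert-manager version is recommended in practice:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml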
Once the opentelemetry-operator deployment is ready, create an OpenTelemetry Collector instance, like:

kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: simplest
spec:
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [logging]
EOF

WARNING: Until the OpenTelemetry Collector format is stable, changes may be required in the above example to remain compatible with the latest version of the OpenTelemetry Collector image being referenced.

This will create an OpenTelemetry Collector instance named simplest, exposing the ports of the configured receivers for the traces it receives.

The config node holds the YAML that is passed down as-is to the underlying OpenTelemetry Collector instances; refer to the OpenTelemetry Collector documentation for a reference of the possible entries.

At this point, the Operator does not validate the contents of the configuration file: if the configuration is invalid, the instance will still be created, but the underlying OpenTelemetry Collector might crash.

The Operator does examine the configuration file to discover configured receivers and their ports. If it finds receivers with ports, it creates a pair of kubernetes services, one of them headless, exposing those ports within the cluster. The headless service contains a service.beta.openshift.io/serving-cert-secret-name annotation that will cause OpenShift to create a secret containing a certificate and key; this secret can be mounted as a volume and the certificate and key used in those receivers' TLS configurations.

Upgrades

As noted above, the OpenTelemetry Collector format is continuing to evolve. However, a best-effort attempt is made to upgrade all managed OpenTelemetryCollector resources.

In certain scenarios, it may be desirable to prevent the operator from upgrading certain OpenTelemetryCollector resources, for instance when a resource is configured with a custom image that the operator should not touch. This can be configured on a per-resource basis via the upgradeStrategy property.

By configuring a resource's upgradeStrategy to none, the operator will skip the given instance during the upgrade routine. The default, and the only other acceptable value, is automatic.

Deployment modes

The OpenTelemetryCollector custom resource exposes a property named mode, which specifies whether the collector should run as a deployment (the default), daemonset, statefulset, or sidecar, as shown in the sketch below.
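For example, a minimal sketch combining the two fields just described (the instance name here is hypothetical): a collector that runs one pod per node and that the operator will not auto-upgrade.

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: agent-per-node
spec:
  mode: daemonset        # one collector pod per node instead of a Deployment
  upgradeStrategy: none  # the operator skips this instance during upgrades
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging]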
Sidecar injection

A sidecar with the OpenTelemetry Collector can be injected into pod-based workloads by setting the pod annotation sidecar.opentelemetry.io/inject either to "true", or to the name of a concrete OpenTelemetryCollector instance from the same namespace, like in the following example:

kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: sidecar-for-my-app
spec:
  mode: sidecar
  config: |
    receivers:
      jaeger:
        protocols:
          thrift_compact:
    processors:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [jaeger]
          processors: []
          exporters: [logging]
EOF
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    sidecar.opentelemetry.io/inject: "true"
spec:
  containers:
  - name: myapp
    image: jaegertracing/vertx-create-span:operator-e2e-tests
    ports:
    - containerPort: 8080
      protocol: TCP
EOF

When there are multiple OpenTelemetryCollector resources with the sidecar mode in the same namespace, a concrete instance name should be used in the annotation instead of "true".

The annotation value can come either from the namespace, or from the pod. The most specific annotation wins, in this order:

- the pod annotation is used when it is set to a concrete instance name or to "false"
- otherwise, the namespace annotation is used when it is set to a concrete instance name or to "false"
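For example, to opt a whole namespace into the sidecar-for-my-app collector defined above (the namespace name here is hypothetical):

kubectl annotate namespace my-namespace sidecar.opentelemetry.io/inject=sidecar-for-my-app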
When using a pod-based workload, such as a Deployment or StatefulSet, make sure to add the annotation to the pod template rather than to the workload's own metadata:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
  annotations:
    sidecar.opentelemetry.io/inject: "true" # WRONG
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 1
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        sidecar.opentelemetry.io/inject: "true" # CORRECT
    spec:
      containers:
      - name: myapp
        image: jaegertracing/vertx-create-span:operator-e2e-tests
        ports:
        - containerPort: 8080
          protocol: TCP
EOF

When using sidecar mode, the OpenTelemetry Collector container will have the environment variable OTEL_RESOURCE_ATTRIBUTES set with Kubernetes resource attributes, ready to be consumed by the resource detection processor.
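A minimal sketch of consuming those attributes, assuming the contrib collector distribution, whose resourcedetection processor has an env detector that reads OTEL_RESOURCE_ATTRIBUTES:

processors:
  resourcedetection:
    detectors: [env]   # turn OTEL_RESOURCE_ATTRIBUTES into resource attributes
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resourcedetection]
      exporters: [logging]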
OpenTelemetry auto-instrumentation injection

The operator can inject and configure OpenTelemetry auto-instrumentation libraries. Currently Java, NodeJS and Python are supported. To use auto-instrumentation, configure an Instrumentation resource with the configuration for the SDK and instrumentation:

kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  exporter:
    endpoint: http://otel-collector:4317
  propagators:
    - tracecontext
    - baggage
    - b3
  sampler:
    type: parentbased_traceidratio
    argument: "0.25"
EOF

The above CR can be queried by kubectl get otelinst.

Then add an annotation to a pod to enable injection. The annotation can be added to a namespace, so that all pods within that namespace will get instrumentation, or by adding the annotation to individual PodSpec objects, available as part of Deployment, StatefulSet, and other resources.

Java: instrumentation.opentelemetry.io/inject-java: "true"
NodeJS: instrumentation.opentelemetry.io/inject-nodejs: "true"
Python: instrumentation.opentelemetry.io/inject-python: "true"

The possible values for the annotation are:

- "true" - inject, using the Instrumentation resource from the namespace
- "my-instrumentation" - the name of an Instrumentation CR instance in the current namespace
- "false" - do not inject
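For example, to enable Java auto-instrumentation for every pod in a namespace (the namespace name is hypothetical, and this relies on the namespace holding a matching Instrumentation resource):

kubectl annotate namespace my-namespace instrumentation.opentelemetry.io/inject-java="true"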
Multi-container pods

If nothing else is specified, instrumentation is performed on the first container available in the pod spec. In some cases (for example, when an Istio sidecar is injected as well) it becomes necessary to specify which container(s) the injection must be performed on. For this, use the instrumentation.opentelemetry.io/container-names annotation, listing one or more container names (.spec.containers.name) into which the instrumentation should be injected:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-with-multiple-containers
spec:
  selector:
    matchLabels:
      app: my-pod-with-multiple-containers
  replicas: 1
  template:
    metadata:
      labels:
        app: my-pod-with-multiple-containers
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "myapp,myapp2"
    spec:
      containers:
      - name: myapp
        image: myImage1
      - name: myapp2
        image: myImage2
      - name: myapp3
        image: myImage3

In the above case, the myapp and myapp2 containers will be instrumented, while myapp3 will not.

Use customized or vendor instrumentation

By default, the operator uses upstream auto-instrumentation libraries. Custom auto-instrumentation can be configured by overriding the image fields in the CR:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  java:
    image: your-customized-auto-instrumentation-image:java
  nodejs:
    image: your-customized-auto-instrumentation-image:nodejs
  python:
    image: your-customized-auto-instrumentation-image:python

The Dockerfiles for auto-instrumentation can be found in the autoinstrumentation directory. Follow the instructions in the Dockerfiles to build a custom container image.

Compatibility matrix

OpenTelemetry Operator vs. OpenTelemetry Collector

The OpenTelemetry Operator follows the same versioning as the operand (OpenTelemetry Collector) up to the minor part of the version. For example, the OpenTelemetry Operator v0.18.1 tracks OpenTelemetry Collector 0.18.0. The patch part of the version indicates the patch level of the operator itself, not that of OpenTelemetry Collector. Whenever a new patch version is released for OpenTelemetry Collector, we'll release a new patch version of the operator.

By default, the OpenTelemetry Operator ensures consistent versioning between itself and the managed OpenTelemetryCollector resources. When a custom image is used with an OpenTelemetryCollector resource, the operator will not manage this versioning and upgrading. In that scenario, it is best practice to keep the operator version matched to the version of the underlying collector image (see the sketch at the end of this section).

OpenTelemetry Operator vs. Kubernetes vs. Cert Manager

We strive to be compatible with the widest range of Kubernetes versions possible, but some changes to Kubernetes itself require us to break compatibility with older Kubernetes versions, be it because of code incompatibilities or in the name of maintainability. Every released operator supports a specific range of Kubernetes versions, determined at the latest during the release. We also use cert-manager for some features of this operator, and only certain cert-manager versions are known to work with a given operator release.

The OpenTelemetry Operator might work on versions outside of the given range, but when opening new issues, please make sure to test your scenario on a supported version.
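A sketch of the custom-image scenario described above (the instance name and image tag are illustrative, not recommendations):

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: pinned-collector
spec:
  image: otel/opentelemetry-collector:0.56.0 # pinned; the operator will not manage upgrades for a custom image
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging]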
Contributing and Developing

Please see CONTRIBUTING.md.

Approvers (@open-telemetry/operator-approvers):
Emeritus Approvers:
Maintainers (@open-telemetry/operator-maintainers):
Emeritus Maintainers:
Learn more about roles in the community repository. Thanks to all the people who already contributed!

License: Apache 2.0