telepresenceio/telepresence: Local development against a remote Kubernetes or Op ...


Project name:

telepresenceio/telepresence

Repository URL:

https://github.com/telepresenceio/telepresence

Primary language:

Go 95.5%

Project introduction:

Telepresence 2: fast, efficient local development for Kubernetes microservices

Telepresence gives developers infinite scale development environments for Kubernetes.

Website: https://www.getambassador.io/products/telepresence/
Slack: Discuss

With Telepresence:

  • You run one service locally, using your favorite IDE and other tools
  • You run the rest of your application in the cloud, where there is unlimited memory and compute

This gives developers:

  • A fast local dev loop, with no waiting for a container build / push / deploy
  • Ability to use their favorite local tools (IDE, debugger, etc.)
  • Ability to run large-scale applications that can't run locally

Quick Start

A few quick ways to start using Telepresence

  • Telepresence Quick Start: Quick Start
  • Install Telepresence: Install
  • Contributor's Guide: Guide
  • Meetings: Check out our community meeting schedule for opportunities to interact with Telepresence developers

Documentation

Telepresence documentation is available on the Ambassador Labs website:
Documentation

Telepresence 2

Telepresence 2 is based on learnings from the original Telepresence architecture. Rewritten in Go, Telepresence 2 provides a simpler and more powerful user experience, improved performance, and better reliability than Telepresence 1. More details on Telepresence 2 are below.

Walkthrough

Install an interceptable service:

Start with an empty cluster:

$ kubectl create deploy hello --image=k8s.gcr.io/echoserver:1.4
deployment.apps/hello created
$ kubectl expose deploy hello --port 80 --target-port 8080
service/hello exposed
$ kubectl get ns,svc,deploy,po
NAME                        STATUS   AGE
namespace/kube-system       Active   53m
namespace/default           Active   53m
namespace/kube-public       Active   53m
namespace/kube-node-lease   Active   53m

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP   53m
service/hello        ClusterIP   10.43.73.112   <none>        80/TCP    2m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello   1/1     1            1           2m

NAME                        READY   STATUS    RESTARTS   AGE
pod/hello-9954f98bf-6p2k9   1/1     Running   0          2m15s

Check telepresence version

$ telepresence version
Client: v2.6.7 (api v3)
Root Daemon: v2.6.7 (api v3)
User Daemon: v2.6.7 (api v3)

Establish a connection to the cluster (outbound traffic)

Let telepresence connect:

$ telepresence connect
Launching Telepresence Root Daemon
Launching Telepresence User Daemon
Connected to context default (https://35.232.104.64)

A session is now active and outbound connections will be routed to the cluster. I.e. your laptop is "inside" the cluster.
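
To inspect the session, telepresence status summarizes the daemons and the cluster connection, and telepresence list shows which workloads in the connected namespace can be intercepted. Both are standard Telepresence 2 commands; their output is omitted here because it varies between versions.

$ telepresence status   # daemon and connection state
$ telepresence list     # workloads available for intercept

Cluster-internal names now resolve from the laptop: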

$ curl hello.default
CLIENT VALUES:
client_address=10.42.0.189
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://hello.default:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=hello.default
user-agent=curl/7.79.1
BODY:
-no body in request-

Intercept the service, i.e. redirect traffic intended for it to our laptop (inbound traffic)

Add an intercept for the hello deployment on port 9000. Here, we also start a service listening on that port:

$ telepresence intercept hello --port 9000 -- python3 -m http.server 9000
Using Deployment hello
intercepted
    Intercept name         : hello
    State                  : ACTIVE
    Workload kind          : Deployment
    Destination            : 127.0.0.1:9000
    Service Port Identifier: 80
    Volume Mount Point     : /tmp/telfs-524630891
    Intercepting           : all TCP requests
Serving HTTP on 0.0.0.0 port 9000 (http://0.0.0.0:9000/) ...

The python3 -m http.server process is now running on port 9000 and will run until terminated with <ctrl>-C. Access it from a browser using http://hello/ or use curl from another terminal. With curl, it presents an HTML listing of the directory where the server was started. Something like:

$ curl hello
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Directory listing for /</title>
</head>
<body>
<h1>Directory listing for /</h1>
<hr>
<ul>
<li><a href="file1.txt">file1.txt</a></li>
<li><a href="file2.txt">file2.txt</a></li>
</ul>
<hr>
</body>
</html>

Observe that the python service reports that it's being accessed:

127.0.0.1 - - [16/Jun/2022 11:39:20] "GET / HTTP/1.1" 200 -

Since telepresence is now intercepting services in the default namespace, all services in that namespace can now be reached directly by their name. You can of course still use the namespaced name too, e.g. curl hello.default.

Clean-up and close daemon processes

End the service with <ctrl>-C and then try curl hello.default or http://hello.default again. The intercept is gone, and the echo service responds as normal. Using just curl hello will no longer succeed. This is because telepresence stopped mapping the default namespace when there were no more intercepts using it.
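
An intercept can also be removed explicitly, without stopping the local process; telepresence leave takes the intercept name shown earlier:

$ telepresence leave hello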

Now end the session too. Your desktop no longer has access to the cluster internals.

$ telepresence quit
Telepresence Network disconnecting...done
Telepresence Traffic Manager disconnecting...done
$ curl hello.default
curl: (6) Could not resolve host: hello.default

The telepresence daemons are still running in the background, which is harmless. You'll need to stop them before you upgrade telepresence. That's done by passing the options -u (stop user daemon) and -r (stop root daemon) to the quit command.

$ telepresence quit -ur
Telepresence Network quitting...done
Telepresence Traffic Manager quitting...done

What got installed in the cluster?

Telepresence installs the Traffic Manager in your cluster if it is not already present. This deployment remains unless you uninstall it.

Telepresence injects the Traffic Agent as an additional container into the pods of the workload you intercept, and will optionally install an init-container to route traffic through the agent (the init-container is only injected when the service is headless or uses a numerical targetPort). The modifications persist unless you uninstall them.
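
A compact way to confirm the injected pieces is to query the intercepted pod for its init-container and container names. The app=hello label comes from the kubectl create deploy command above, and the names match the pod description further below:

$ kubectl get pods -l app=hello -o jsonpath='{.items[0].spec.initContainers[*].name}{"\n"}'
tel-agent-init
$ kubectl get pods -l app=hello -o jsonpath='{.items[0].spec.containers[*].name}{"\n"}'
echoserver traffic-agent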

At first glance, we can see that the deployment is installed ...

$ kubectl get svc,deploy,pod
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP   7d22h
service/hello        ClusterIP   10.43.145.57   <none>        80/TCP    13m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello   1/1     1            1           13m

NAME                         READY   STATUS    RESTARTS        AGE
pod/hello-774455b6f5-6x6vs   2/2     Running   0               10m

... and that the traffic-manager is installed in the "ambassador" namespace.

$ kubectl -n ambassador get svc,deploy,pod
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/traffic-manager   ClusterIP   None           <none>        8081/TCP   17m
service/agent-injector    ClusterIP   10.43.72.154   <none>        443/TCP    17m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traffic-manager   1/1     1            1           17m

NAME                                  READY   STATUS    RESTARTS   AGE
pod/traffic-manager-dcd4cc64f-6v5bp   1/1     Running   0          17m

The traffic-agent is installed too, in the hello pod, together with an init-container because the service uses a numerical targetPort.

$ kubectl describe pod hello-774455b6f5-6x6vs 
Name:         hello-774455b6f5-6x6vs
Namespace:    default
Priority:     0
Node:         multi/192.168.1.110
Start Time:   Thu, 16 Jun 2022 11:38:22 +0200
Labels:       app=hello
              pod-template-hash=774455b6f5
Annotations:  telepresence.getambassador.io/inject-traffic-agent: enabled
              telepresence.getambassador.io/restartedAt: 2022-06-16T09:38:21Z
Status:       Running
IP:           10.42.0.191
IPs:
  IP:           10.42.0.191
Controlled By:  ReplicaSet/hello-774455b6f5
Init Containers:
  tel-agent-init:
    Container ID:  containerd://e968352b3d85d6f966ac55f02da2401f93935f6df1f087b06bbe1cfc8854d5fb
    Image:         docker.io/datawire/ambassador-telepresence-agent:1.12.6
    Image ID:      docker.io/datawire/ambassador-telepresence-agent@sha256:2652d2767d1e8968be3fb22f365747315e25ac95e12c3d39f1206080a1e66af3
    Port:          <none>
    Host Port:     <none>
    Args:
      agent-init
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 16 Jun 2022 11:38:39 +0200
      Finished:     Thu, 16 Jun 2022 11:38:39 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/traffic-agent from traffic-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wzhhs (ro)
Containers:
  echoserver:
    Container ID:   containerd://80d4645769a06b8671b5a4ce29d28abfa72ce5659ba96916c231bb9629593a29
    Image:          k8s.gcr.io/echoserver:1.4
    Image ID:       sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 16 Jun 2022 11:38:40 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wzhhs (ro)
  traffic-agent:
    Container ID:  containerd://ef3605a60f7c02229f156e3dc0e99f9b055fba1037587513871e64180670d0a4
    Image:         docker.io/datawire/ambassador-telepresence-agent:1.12.6
    Image ID:      docker.io/datawire/ambassador-telepresence-agent@sha256:2652d2767d1e8968be3fb22f365747315e25ac95e12c3d39f1206080a1e66af3
    Port:          9900/TCP
    Host Port:     0/TCP
    Args:
      agent
    State:          Running
      Started:      Thu, 16 Jun 2022 11:38:41 +0200
    Ready:          True
    Restart Count:  0
    Readiness:      exec [/bin/stat /tmp/agent/ready] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      _TEL_AGENT_POD_IP:   (v1:status.podIP)
      _TEL_AGENT_NAME:    hello-774455b6f5-6x6vs (v1:metadata.name)
    Mounts:
      /etc/traffic-agent from traffic-config (rw)
      /tel_app_exports from export-volume (rw)
      /tel_pod_info from traffic-annotations (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wzhhs (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-wzhhs:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  traffic-annotations:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.annotations -> annotations
  traffic-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      telepresence-agents
    Optional:  false
  export-volume:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:      
    SizeLimit:   <unset>
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned default/hello-774455b6f5-6x6vs to multi
  Normal  Pulling    13m   kubelet            Pulling image "docker.io/datawire/ambassador-telepresence-agent:1.12.6"
  Normal  Pulled     13m   kubelet            Successfully pulled image "docker.io/datawire/ambassador-telepresence-agent:1.12.6" in 17.043659509s
  Normal  Created    13m   kubelet            Created container tel-agent-init
  Normal  Started    13m   kubelet            Started container tel-agent-init
  Normal  Pulled     13m   kubelet            Container image "k8s.gcr.io/echoserver:1.4" already present on machine
  Normal  Created    13m   kubelet            Created container echoserver
  Normal  Started    13m   kubelet            Started container echoserver
  Normal  Pulled     13m   kubelet            Container image "docker.io/datawire/ambassador-telepresence-agent:1.12.6" already present on machine
  Normal  Created    13m   kubelet            Created container traffic-agent
  Normal  Started    13m   kubelet            Started container traffic-agent

Uninstalling

You can uninstall the traffic-agent from specific deployments or from all deployments (per-workload options are sketched below). You can also choose to uninstall everything, in which case the traffic-manager and all traffic-agents will be uninstalled.

telepresence uninstall --everything

will remove everything that was automatically installed by telepresence from the cluster.
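
For removing only agents, the uninstall command also takes per-workload options. The flag names below are an assumption based on the Telepresence 2.x CLI and should be verified with telepresence uninstall --help:

# assumed flags; keep the traffic-manager, remove agents only
$ telepresence uninstall --agent hello
$ telepresence uninstall --all-agents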

Troubleshooting

The telepresence background processes, the daemon and the connector, both produce log files that can be very helpful when problems are encountered. The files are named daemon.log and connector.log. Their location differs depending on the platform (an example of following them on Linux is shown after this list):

  • macOS ~/Library/Logs/telepresence
  • Linux ~/.cache/telepresence/logs
  • Windows "%USERPROFILE%\AppData\Local\logs"
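
For example, on Linux the two files can be followed while reproducing a problem (same file names as listed above; adjust the directory for macOS or Windows):

$ tail -f ~/.cache/telepresence/logs/daemon.log \
          ~/.cache/telepresence/logs/connector.log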

Visit the troubleshooting section in the Telepresence documentation for more advice:
Troubleshooting



