I'm trying to run a CronJob inside a Kubernetes (k8s) cluster. Below is my manifest for the CronJob resource:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
spec:
  concurrencyPolicy: Forbid
  schedule: '0 8 * * *' # cron spec of time, here, 8 o'clock
  jobTemplate:
    spec:
      backoffLimit: 2 # this has a very low chance of failing, as all this does
                      # is prompt kubernetes to schedule a new replica set for
                      # the deployment
      activeDeadlineSeconds: 600 # timeout, makes most sense with the
                                 # "waiting for rollout" variant specified below
      template:
        spec:
          serviceAccountName: deployment-restart # name of the service
                                                 # account configured above
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl # probably any kubectl image will do,
                                     # optionally specify a version, but this
                                     # should not be necessary, as long as the
                                     # version of kubectl is new enough to
                                     # have `rollout restart`
              command:
                - 'kubectl'
                - 'rollout'
                - 'restart'
                - 'deployment/<YOUR DEPLOYMENT NAME>'
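The manifest's comment mentions a service account "configured above" that isn't shown in this snippet. As a sketch of what that configuration typically looks like (the names, namespace placeholder, and verbs here are assumptions, not taken from the snippet): the service account needs RBAC permission to `get` and `patch` the deployment, since `kubectl rollout restart` works by patching the deployment's pod template.

```yaml
# Hypothetical RBAC for the CronJob's service account.
# Names and namespace are assumptions; adjust to your setup.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "patch"] # rollout restart patches the deployment
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
subjects:
  - kind: ServiceAccount
    name: deployment-restart
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-restart
```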
The above template was copied from How to schedule pods restart.
My deployment doesn't have kubectl installed, so I'm specifying a different image that includes kubectl under containers (bitnami/kubectl).
However, when my deployment restarts, its pods come up with the kubectl image, which results in an error. I don't know what's happening here. When I restart my own deployment, shouldn't it fetch the image that the deployment refers to?
question from:
https://stackoverflow.com/questions/65839768/kubernets-cronjob-containers