
kubernetes - How to schedule a cronjob which executes a kubectl command?

I would like to run the following kubectl command every 5 minutes:

kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
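
The single quotes around the JSON payload are closed just before $(date +%s) and reopened right after it, so the shell substitutes the current epoch time into the patch before kubectl sends it. For example, using 1551273000 (one of the timestamps that appears in the job names below) as the moment of execution, the patch effectively sent to the API server is:

kubectl patch deployment runners -n jp-test -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"1551273000"}]}]}}}}'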

For this, I have created a cronjob as below:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
          restartPolicy: OnFailure

But it is failing to start the container, showing the message:

Back-off restarting failed container

The container terminates with exit code 127:

State:          Terminated
      Reason:       Error
      Exit Code:    127

From what I checked, exit code 127 means that the command does not exist. How can I run the kubectl command as a cron job, then? Am I missing something?
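
For reference, it is easy to confirm that 127 is what a POSIX shell returns when it cannot find a command (illustrative only; the exact error wording differs between shells):

$ sh -c 'no-such-command'
sh: no-such-command: not found
$ echo $?
127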

Note: I had posted a similar question (Scheduled restart of Kubernetes pod without downtime), but that was more about having the main deployment itself run as a cronjob. Here I'm trying to run a kubectl command (which does the restart) using a CronJob, so I thought it would be better to post separately.

kubectl describe cronjob hello -n jp-test:

Name:                       hello
Namespace:                  jp-test
Labels:                     <none>
Annotations:                kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"batch/v1beta1","kind":"CronJob","metadata":{"annotations":{},"name":"hello","namespace":"jp-test"},"spec":{"jobTemplate":{"spec":{"templ...
Schedule:                   */5 * * * *
Concurrency Policy:         Allow
Suspend:                    False
Starting Deadline Seconds:  <unset>
Selector:                   <unset>
Parallelism:                <unset>
Completions:                <unset>
Pod Template:
  Labels:  <none>
  Containers:
   hello:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Args:
      /bin/sh
      -c
      kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
    Environment:     <none>
    Mounts:          <none>
  Volumes:           <none>
Last Schedule Time:  Wed, 27 Feb 2019 14:10:00 +0100
Active Jobs:         hello-1551273000
Events:
  Type    Reason            Age   From                Message
  ----    ------            ----  ----                -------
  Normal  SuccessfulCreate  6m    cronjob-controller  Created job hello-1551272700
  Normal  SuccessfulCreate  1m    cronjob-controller  Created job hello-1551273000
  Normal  SawCompletedJob   16s   cronjob-controller  Saw completed job: hello-1551272700

kubectl describe job hello -v=5 -n jp-test

Name:           hello-1551276000
Namespace:      jp-test
Selector:       controller-uid=fa009d78-3a97-11e9-ae31-ac1f6b1a0950
Labels:         controller-uid=fa009d78-3a97-11e9-ae31-ac1f6b1a0950
                job-name=hello-1551276000
Annotations:    <none>
Controlled By:  CronJob/hello
Parallelism:    1
Completions:    1
Start Time:     Wed, 27 Feb 2019 15:00:02 +0100
Pods Statuses:  0 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=fa009d78-3a97-11e9-ae31-ac1f6b1a0950
           job-name=hello-1551276000
  Containers:
   hello:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Args:
      /bin/sh
      -c
      kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type     Reason                Age              From            Message
  ----     ------                ----             ----            -------
  Normal   SuccessfulCreate      7m               job-controller  Created pod: hello-1551276000-lz4dp
  Normal   SuccessfulDelete      1m               job-controller  Deleted pod: hello-1551276000-lz4dp
  Warning  BackoffLimitExceeded  1m (x2 over 1m)  job-controller  Job has reached the specified backoff limit

Name:           hello-1551276300
Namespace:      jp-test
Selector:       controller-uid=ad52e87a-3a98-11e9-ae31-ac1f6b1a0950
Labels:         controller-uid=ad52e87a-3a98-11e9-ae31-ac1f6b1a0950
                job-name=hello-1551276300
Annotations:    <none>
Controlled By:  CronJob/hello
Parallelism:    1
Completions:    1
Start Time:     Wed, 27 Feb 2019 15:05:02 +0100
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=ad52e87a-3a98-11e9-ae31-ac1f6b1a0950
           job-name=hello-1551276300
  Containers:
   hello:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Args:
      /bin/sh
      -c
      kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  2m    job-controller  Created pod: hello-1551276300-8d5df
1 Answer

Long story short, BusyBox doesn't have kubectl installed.

You can check it yourself by running kubectl run -i --tty busybox --image=busybox -- sh, which starts a BusyBox pod with an interactive shell.
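
Inside that shell you can see that kubectl simply isn't there, and that a missing command is exactly what produces exit status 127 (the output below is roughly what BusyBox prints; the wording may differ):

/ # which kubectl
/ # kubectl version
sh: kubectl: not found
/ # echo $?
127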

I would recommend using bitnami/kubectl:latest.
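
The bitnami/kubectl image is built with kubectl as its entrypoint, so a quick local sanity check could look like this (assuming Docker is available; you may also want to pin a tag matching your cluster version instead of latest):

docker run --rm bitnami/kubectl:latest version --client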

Also keep in mind that you will need to set up proper RBAC; without it, the patch will fail with something like Error from server (Forbidden): services is forbidden.

You could use something like this:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: jp-test
  name: jp-runner
rules:
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  verbs:
  - 'patch'

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jp-runner
  namespace: jp-test
subjects:
- kind: ServiceAccount
  name: sa-jp-runner
  namespace: jp-test
roleRef:
  kind: Role
  name: jp-runner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-jp-runner
  namespace: jp-test

---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-jp-runner
          containers:
          - name: hello
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
          restartPolicy: OnFailure
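
Note that the container now uses command: rather than args:. The bitnami/kubectl image has kubectl as its entrypoint, and command: overrides that entrypoint with /bin/sh -c, so the $(date +%s) substitution still happens each time the job runs. With everything saved to one file (the file name below is just an example), you can apply it and verify that the service account is actually allowed to patch deployments:

kubectl apply -f runners-restart-cronjob.yaml
kubectl auth can-i patch deployments --as=system:serviceaccount:jp-test:sa-jp-runner -n jp-test
# should print: yes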
