Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share

kubernetes - Restart kubernetes deployment after changing configMap

I have a deployment which includes a configMap, persistentVolumeClaim, and a service.


I have changed the configMap and re-applied the deployment to my cluster.


I understand that this change does not automatically restart the pod in the deployment:


configmap change doesn't reflect automatically on respective pods


Updated configMap.yaml but it's not being applied to Kubernetes pods


I know that I can run kubectl delete -f wiki.yaml && kubectl apply -f wiki.yaml.


But that destroys the persistent volume, which holds data I want to survive the restart.


How can I restart the pod in a way that keeps the existing volume?


Here's what wiki.yaml looks like:


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dot-wiki
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wiki-config
data:
  config.json: |
    {
      "farm": true,
      "security_type": "friends",
      "secure_cookie": false,
      "allowed": "*"
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wiki-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wiki
  template:
    metadata:
      labels:
        app: wiki
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      initContainers:
      - name: wiki-config
        image: dobbs/farm:restrict-new-wiki
        securityContext:
          runAsUser: 0
          runAsGroup: 0
          allowPrivilegeEscalation: false
        volumeMounts:
          - name: dot-wiki
            mountPath: /home/node/.wiki
        command: ["chown", "-R", "1000:1000", "/home/node/.wiki"]
      containers:
      - name: farm
        image: dobbs/farm:restrict-new-wiki
        command: [
          "wiki", "--config", "/etc/config/config.json",
          "--admin", "bad password but memorable",
          "--cookieSecret", "any-random-string-will-do-the-trick"]
        ports:
        - containerPort: 3000
        volumeMounts:
          - name: dot-wiki
            mountPath: /home/node/.wiki
          - name: config-templates
            mountPath: /etc/config
      volumes:
      - name: dot-wiki
        persistentVolumeClaim:
          claimName: dot-wiki
      - name: config-templates
        configMap:
          name: wiki-config
---
apiVersion: v1
kind: Service
metadata:
  name: wiki-service
spec:
  ports:
  - name: http
    targetPort: 3000
    port: 80
  selector:
    app: wiki
Asked by Eric Dobbs; translated from Stack Overflow.


1 Answer


For the specific question about restarting containers after the configuration is changed, as of kubectl v1.15 you can do this:


  1. Apply the change to your configuration: kubectl apply -f wiki.yaml


  2. Restart containers in the deployment: kubectl rollout restart deployment wiki-deployment

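The two steps above can be sketched as a short shell session (the file and deployment names are taken from the question; the final kubectl rollout status command is an optional extra to wait for the restart to finish):

```shell
# 1. Apply the updated manifests (ConfigMap, PVC, Deployment, Service).
#    The PVC spec is unchanged, so the existing volume and its data are kept.
kubectl apply -f wiki.yaml

# 2. Trigger a rolling restart of the pods in the deployment so the
#    containers pick up the updated ConfigMap (requires kubectl >= 1.15).
kubectl rollout restart deployment wiki-deployment

# Optionally block until the restarted pods are ready.
kubectl rollout status deployment wiki-deployment
```

Note that kubectl rollout restart only recreates the pods; it does not touch the PersistentVolumeClaim, so the data on the dot-wiki volume survives the restart.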

