Unable to upgrade chart due to changes in metadata #374

@samip5

Description

It seems that at some point the chart's labels changed, which now makes upgrades fail? I did not find an existing issue about this yet.

zammad                  390d   False     Helm upgrade failed for release default/zammad with chart zammad@15.2.5:
  cannot patch "zammad-nginx" with kind Deployment: Deployment.apps "zammad-nginx" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"zammad-nginx", "app.kubernetes.io/instance":"zammad", "app.kubernetes.io/name":"zammad"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable &&
  cannot patch "zammad-railsserver" with kind Deployment: Deployment.apps "zammad-railsserver" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"zammad-railsserver", "app.kubernetes.io/instance":"zammad", "app.kubernetes.io/name":"zammad"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable &&
  cannot patch "zammad-scheduler" with kind Deployment: Deployment.apps "zammad-scheduler" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"zammad-scheduler", "app.kubernetes.io/instance":"zammad", "app.kubernetes.io/name":"zammad"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable &&
  cannot patch "zammad-websocket" with kind Deployment: Deployment.apps "zammad-websocket" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"zammad-websocket", "app.kubernetes.io/instance":"zammad", "app.kubernetes.io/name":"zammad"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

Helm history:

REVISION	UPDATED                 	STATUS  	CHART        	APP VERSION	DESCRIPTION
1430    	Thu Oct 30 08:30:13 2025	deployed	zammad-14.3.0	6.5.0-101  	Rollback to 1428
1435    	Fri Oct 31 04:19:05 2025	failed  	zammad-15.2.5	6.5.2-22   	Upgrade "zammad" failed: cannot patch "zammad-nginx" with kind Deployment: Deployment.apps "zammad-nginx" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"zammad-nginx", "app.kubernetes.io/instance":"zammad", "app.kubernetes.io/name":"zammad"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && cannot patch "zammad-railsserver" with kind Deployment: Deployment.apps "zammad-railsserver" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"zammad-railsserver", "app.kubernetes.io/instance":"zammad", "app.kubernetes.io/name":"zammad"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && cannot patch "zammad-scheduler" with kind Deployment: Deployment.apps "zammad-scheduler" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"zammad-scheduler", "app.kubernetes.io/instance":"zammad", "app.kubernetes.io/name":"zammad"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && cannot patch "zammad-websocket" with kind Deployment: Deployment.apps "zammad-websocket" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"zammad-websocket", "app.kubernetes.io/instance":"zammad", "app.kubernetes.io/name":"zammad"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
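For context on the failure mode: `spec.selector` on a Deployment is immutable in Kubernetes, so when a new chart version changes the selector labels, `helm upgrade` cannot patch the existing Deployments in place. A commonly used workaround (not from this report; it assumes the label change in 15.x is intentional, and the release/namespace names are taken from the error above) is to delete only the affected Deployment objects with `--cascade=orphan` so their pods keep running, then retry the upgrade so Helm recreates the Deployments with the new selectors:

```shell
# Sketch of the immutable-selector workaround. Deleting with --cascade=orphan
# removes the Deployment objects but leaves their pods running, so the service
# stays up while Helm recreates the Deployments with the new labels.
for d in zammad-nginx zammad-railsserver zammad-scheduler zammad-websocket; do
  kubectl -n default delete deployment "$d" --cascade=orphan
done

# Retry the upgrade; the chart repo alias and values file path below are
# placeholders for your setup. With Flux, reconciling the HelmRelease after
# the deletes achieves the same effect.
helm upgrade zammad zammad/zammad --version 15.2.5 -f values.yaml -n default
```

The orphaned pods are not managed by the new ReplicaSets (their labels no longer match), so they should be cleaned up manually once the recreated Deployments are healthy.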

Values:

image:
  repository: ghcr.io/zammad/zammad
  tag: "6.5@sha256:d863b2687ba79b2e9d2c01956072f29f7d7a209b9955a0626afb14ef26c3b662"

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: true
  className: "external"
  annotations:
    external-dns.alpha.kubernetes.io/target: "external.${SECRET_DOMAIN}"
    nginx.ingress.kubernetes.io/proxy-body-size: 100M
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For"
  hosts:
    - host: &host helpdesk.${SECRET_DOMAIN}
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
    - hosts:
        - *host

elasticsearch:
  # Workaround: switch to bitnami legacy image repository (https://github.com/bitnami/containers/issues/83267)
  image:
    repository: bitnamilegacy/elasticsearch
  global:
    security:
      allowInsecureImages: true
  clusterName: zammad
  coordinating:
    replicaCount: 0
  data:
    replicaCount: 0
  ingest:
    replicaCount: 0
  master:
    heapSize: 4g
    masterOnly: false
    replicaCount: 1
    resources:
      requests:
        cpu: 2
        memory: 6Gi
      limits:
        cpu: 4
        memory: 8Gi

zammadConfig:
  elasticsearch:
    enabled: true
    host: zammad-elasticsearch-master
    initialisation: true
    pass: ""
    port: 9200
    reindex: false
    schema: http
    user: ""
  postgresql:
    enabled: false
    db: zammad
    host: postgres16-rw.database.svc.cluster.local
    user: zammad

  scheduler:
    # Redundancy: Run 2 scheduler replicas for high availability
    # Only one will be active (Zammad handles this internally), but provides failover
    replicas: 2

    # Enhanced liveness probe: Detects database connectivity issues AND job processing problems
    # This prevents silent failures where the scheduler appears healthy but isn't processing jobs
    livenessProbe:
      exec:
        command:
          - /bin/sh
          - -c
          - |
            cd /opt/zammad
            bundle exec rails runner "
            begin
              # Test database connectivity
              ActiveRecord::Base.connection.execute('SELECT 1')

              # Check for jobs stuck for too long (indicates scheduler not processing)
              old_jobs = Delayed::Job.where(failed_at: nil, locked_at: nil)
                                    .where('run_at <= ?', Time.now - 900)  # 15 minutes
                                    .count

              if old_jobs > 20
                STDERR.puts \"LIVENESS FAIL: #{old_jobs} jobs stuck for >15 minutes\"
                exit 1
              end

              # Check for excessive job backlog (indicates scheduler overwhelmed or stuck)
              pending_jobs = Delayed::Job.where(failed_at: nil, locked_at: nil)
                                        .where('run_at <= ?', Time.now)
                                        .count

              if pending_jobs > 200
                STDERR.puts \"LIVENESS FAIL: Excessive job backlog (#{pending_jobs} jobs)\"
                exit 1
              end

              puts 'Scheduler health: OK'
              exit 0
            rescue => e
              STDERR.puts \"Scheduler health check failed: #{e.message}\"
              exit 1
            end
            " 2>/dev/null || exit 1
      initialDelaySeconds: 180
      periodSeconds: 300
      timeoutSeconds: 60
      failureThreshold: 2

# Note: Passwords should not contain special characters requiring URL encoding
secrets:
  autowizard:
    useExisting: false
    secretKey: autowizard
    secretName: autowizard
  elasticsearch:
    useExisting: false
    secretKey: password
    secretName: elastic-credentials
  postgresql:
    useExisting: true
    secretKey: postgresql-pass
    secretName: zammad-postgres-secrets
  redis:
    useExisting: false
    secretKey: redis-password
    secretName: redis-pass
