Description
I had a Contour installation done via YAML, with a bunch of applications using the contour ingress class:
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
    meta.helm.sh/release-name: contour
    meta.helm.sh/release-namespace: projectcontour
  labels:
    app.kubernetes.io/component: contour
    app.kubernetes.io/instance: contour
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: contour
    io.portainer.kubernetes.application.name: contour
    io.portainer.kubernetes.application.owner: admin
  name: contour
spec:
  controller: projectcontour.io/projectcontour/contour-contour
EOF
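For an Ingress to be served, the controller string on its IngressClass has to match the controller name the running Contour instance watches for. One way to inspect both sides (a diagnostic sketch, assuming kubectl access and that the deployment is named contour in the projectcontour namespace, as in the quickstart) is:

```shell
# Which controller string the IngressClass points at
kubectl get ingressclass contour -o jsonpath='{.spec.controller}{"\n"}'

# Which arguments (including any ingress-class flags) Contour was started with
kubectl -n projectcontour get deployment contour \
  -o jsonpath='{.spec.template.spec.containers[0].args}{"\n"}'
```

If the two disagree, Contour silently ignores the Ingresses that reference that class.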
I wanted to switch to the Helm installation, so I deleted the namespace, the ingress class, and all the other resources:
kubectl delete -f https://projectcontour.io/quickstart/contour.yaml
kubectl delete ingressclass contour
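Deleting the quickstart manifest does not necessarily remove everything; cluster-scoped objects can survive and then conflict with a Helm install. A quick leftover check before reinstalling (a sketch, assuming cluster-admin access) could be:

```shell
# Contour's CRDs are cluster-scoped and are not removed by a namespace delete
kubectl get crds | grep projectcontour.io

# Any remaining ingress classes and their controller strings
kubectl get ingressclass \
  -o custom-columns=NAME:.metadata.name,CONTROLLER:.spec.controller
```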
Then I reinstalled Contour via Helm following the official documentation, and I also checked that everything was up and running, ingress class included.
But I can no longer reach the application hosts, while reverting to the YAML installation fixes the error.
I am using version contour-20.0.3. In another cluster I did the first installation via Helm with version contour-17.0.9 and it works perfectly fine, but there are some differences in the pods deployed:
NAME                           READY   STATUS    RESTARTS       AGE
pod/contour-55676fcf76-6rwxq   1/1     Running   0              74d
pod/contour-55676fcf76-lgdwg   1/1     Running   0              49d
pod/contour-envoy-9npd7        2/2     Running   2 (106d ago)   111d
pod/contour-envoy-dmpzq        2/2     Running   2 (50d ago)    67d
pod/contour-envoy-k6w74        2/2     Running   4 (105d ago)   112d
pod/contour-envoy-qwpzg        2/2     Running   2 (106d ago)   111d
pod/envoy-cl2zg                2/2     Running   2 (106d ago)   111d
pod/envoy-fsftf                2/2     Running   2 (50d ago)    67d
pod/envoy-qhckt                2/2     Running   2 (106d ago)   111d
pod/envoy-qtrmt                2/2     Running   4 (105d ago)   112d

NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/contour         ClusterIP      10.43.181.185   <none>        8001/TCP                     418d
service/contour-envoy   LoadBalancer   10.43.59.74     <pending>     80:31691/TCP,443:30669/TCP   418d
service/envoy           LoadBalancer   10.43.124.25    <pending>     80:30997/TCP,443:30167/TCP   408d
In the new version I only have the contour and contour-envoy pods and services, while the envoy ones are missing and are not mentioned in the documentation either. Could that be the problem?
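Since the applications stopped resolving only after the reinstall, one plausible cause is that the Helm chart created an IngressClass whose name or controller string no longer matches what the application Ingresses reference. A quick cross-check (a sketch; the column paths assume networking.k8s.io/v1 Ingress objects) would be:

```shell
# Which class each application Ingress asks for
kubectl get ingress -A \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,CLASS:.spec.ingressClassName

# Which classes actually exist, and which controller serves each
kubectl get ingressclass \
  -o custom-columns=NAME:.metadata.name,CONTROLLER:.spec.controller
```

Any Ingress whose CLASS value does not appear in the second list (or whose class points at a controller that is not running) will be ignored by the new installation.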