Kubernetes deployment issues
Hi,
(this isn’t a follow-up to #74 but quite a different question/issue)
So I’ve been trying to deploy verdaccio-gitlab in a Kubernetes cluster that I’m setting up for our next dev environment (we’re migrating from Docker Swarm). I got most things working, but I’m stumbling a bit on verdaccio-gitlab and was hoping to get some insights, and maybe, if I’m lucky, some input from people who have a Kubernetes deployment working properly.
I can get verdaccio to load and the login works, but once I’m logged in I’m unable to fetch packages: every request for a package fails with ERR_CONNECTION_CLOSED/ERR_CONNECTION_RESET.
I first tried the Helm chart for verdaccio and simply changed the image to verdaccio-gitlab. This deployed fine, but fetching packages after login fails as described above.
Suspecting this might be an issue with the Helm chart, I made my own deployment and also fetched the latest copy from the repo to build a new image on top of verdaccio 4.0.0-beta10, instead of the beta3 the currently tagged Docker image is based on. The symptoms are exactly the same.
Finally, I deployed a plain verdaccio beta10 with just htpasswd and without the gitlab plugin, and that works as it should. I might be missing something obvious, but unfortunately I can’t see any errors in the nginx ingress, GitLab or verdaccio logs.
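(For reference, the htpasswd-only control test uses what is essentially the stock verdaccio auth block, something along these lines — the path and option below are illustrative, not copied from the actual test config:)

auth:
  htpasswd:
    file: /verdaccio/storage/htpasswd  # example path inside the storage volume
    max_users: -1                      # example: disable self-registration for the test

The verdaccio-gitlab manifests I’m deploying are the following: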
apiVersion: v1
kind: ConfigMap
metadata:
  name: verdaccio
  labels:
    app: verdaccio
data:
  config.yaml: |-
    storage: /verdaccio/storage/data
    plugins: /verdaccio/plugins
    listen:
      - 0.0.0.0:4873
    auth:
      gitlab:
        url: https://gitlab.my-gitlab-server.com
        authCache:
          enabled: true
          ttl: 300
    uplinks:
      npmjs:
        url: https://registry.npmjs.org/
    packages:
      '@*/*':
        # scoped packages
        access: $authenticated
        publish: $authenticated
        proxy: npmjs
        gitlab: true
      '**':
        access: $authenticated
        publish: $authenticated
        proxy: npmjs
        gitlab: true
    # Log level can be changed to info, http etc. for less verbose output
    logs:
      - {type: stdout, format: pretty-timestamped, level: debug}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: verdaccio
  name: verdaccio
spec:
  selector:
    matchLabels:
      app: verdaccio
  replicas: 1
  strategy:
    type: Recreate
    rollingUpdate: null
  template:
    metadata:
      labels:
        app: verdaccio
    spec:
      containers:
        - name: verdaccio
          image: simon-jouet/verdaccio-gitlab:latest # that's just a docker build of master
          imagePullPolicy: Never
          ports:
            - containerPort: 4873
              name: http
          livenessProbe:
            httpGet:
              path: /-/ping
              port: http
            initialDelaySeconds: 5
          readinessProbe:
            httpGet:
              path: /-/ping
              port: http
            initialDelaySeconds: 5
          volumeMounts:
            - mountPath: /verdaccio/storage
              name: storage
              readOnly: false
            - mountPath: /verdaccio/conf
              name: config
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: verdaccio
        - name: storage
          persistentVolumeClaim:
            claimName: verdaccio
---
apiVersion: v1
kind: Service
metadata:
  name: verdaccio
  labels:
    app: verdaccio
spec:
  ports:
    - port: 4873
  selector:
    app: verdaccio
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: verdaccio
  labels:
    app: verdaccio
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts:
        - gitlab.my-gitlab-server.com
      secretName: tls-verdaccio
  rules:
    - host: gitlab.my-gitlab-server.com
      http:
        paths:
          - backend:
              serviceName: verdaccio
              servicePort: 4873
Top GitHub Comments
Please bear in mind that while reducing the JWT size might resolve the issue for some/most users, this won’t be a permanent fix. In large(r) GitLab instances a user just needs to have access to / be a maintainer of enough groups to trigger this issue again.
Could there be any other solution besides dropping everything into the JWT? E.g. configure a separate storage (it currently defaults to the JWT) for that information.
Finally got it working!
By default ingress-nginx has HTTP/2 enabled, and the default configuration cannot cope with headers this large. I’ve changed my ingress-nginx config with http2-max-field-size: 8k and it’s working. The issue was that with a smaller limit nginx fails and closes the connection, and because it’s HTTP/2 it doesn’t send or log any errors (I was expecting a 414).
I think the JWT token should be stripped down to contain less information. Is there any reason for the duplicates in real_groups? And what is the purpose of having both groups and real_groups? Storing this kind of info in the JWT will always run into this limit eventually; it just depends on the number of groups/repos in GitLab.
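For anyone hitting the same thing, the change amounts to a single key in the ingress-nginx controller ConfigMap. A minimal sketch, assuming the default ConfigMap name and namespace of a stock ingress-nginx install (yours may differ):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller  # assumed name; use whatever your ingress-nginx release created
  namespace: ingress-nginx        # assumed namespace
data:
  # Raise the HTTP/2 per-header-field size limit so the large JWT in the
  # Authorization header is no longer silently rejected by nginx.
  http2-max-field-size: "8k"

The controller watches its ConfigMap, so the change should be picked up and nginx reloaded without redeploying verdaccio itself.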