Question regarding storage on multi-node install
I set up a multi-node install of microk8s 1.19 and enabled storage.
However, when I try to deploy a statefulset with a volume template, I run into issues.
The server pod can be brought up on a different node from the one the PV is actually created on, causing the pod to fail to start, as the directory for the volume does not exist. Is this expected? How can I ensure the volume is provisioned on the correct node?
Here’s the relevant bit from my helm template with the volume claim:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ $namePrefix }}-postgres
  labels:
    {{ include "baseline-study.selectorLabels" . | nindent 4 }}
spec:
  replicas: 1
  serviceName: {{ $namePrefix }}-studydb
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: postgres
      tier: database
      {{- include "baseline-study.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app: postgres
        tier: database
        {{- include "baseline-study.selectorLabels" . | nindent 8 }}
    spec:
      {{- if .Values.postgres.configFile }}
      {{ $pgconf := .Values.postgres.configFile }}
      {{- if (len (.Files.Get $pgconf)) }}
      volumes:
        - name: {{ .Release.Name }}-pgconfig
          configMap:
            name: {{ .Release.Name }}-pgconfig
      {{ end -}}
      {{ end -}}
      terminationGracePeriodSeconds: 10
      containers:
        - name: postgres
          image: postgres:12.4
          {{- if .Values.postgres.configFile }}
          {{ $pgconf := .Values.postgres.configFile }}
          {{- if (len (.Files.Get $pgconf)) }}
          args:
            - -c
            - config_file=/etc/postgres.conf
          {{ end -}}
          {{ end -}}
          imagePullPolicy: IfNotPresent
          ports:
            - name: postgres
              containerPort: 5432
              protocol: TCP
          resources:
            requests:
              cpu: 100m      # TODO: is this right???
              memory: 512Mi  # TODO: is this right???
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  key: pg.username
                  name: {{ $pgCredSecretName }}
            - name: PGUSER
              valueFrom:
                secretKeyRef:
                  key: pg.username
                  name: {{ $pgCredSecretName }}
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: pg.password
                  name: {{ $pgCredSecretName }}
            - name: POSTGRES_DB
              value: studydb
            # keep PGDATA on the persistent volume mounted below
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
            - name: POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
          livenessProbe:
            exec:
              command: ["sh", "-c", "exec pg_isready --host $POD_IP"]
            failureThreshold: 6
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command: ["sh", "-c", "exec pg_isready --host $POD_IP"]
            failureThreshold: 3
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 3
          volumeMounts:
            - mountPath: /var/lib/postgresql/data/pgdata
              name: {{ $namePrefix }}-data-vol
              subPath: postgres-db
            {{- if .Values.postgres.configFile }}
            {{ $pgconf := .Values.postgres.configFile }}
            {{- if (len (.Files.Get $pgconf)) }}
            - name: {{ .Release.Name }}-pgconfig
              mountPath: /etc/postgres.conf
              subPath: postgres.conf
            {{ end -}}
            {{ end -}}
            {{ $files := .Files.initdb }}
            {{ range $files }}
            - name: {{ $namePrefix }}-initdb
              mountPath: /docker-entrypoint-initdb.d/{{ . }}
              subPath: {{ . }}
            {{ end }}
  volumeClaimTemplates:
    - metadata:
        name: {{ $namePrefix }}-data-vol
        labels: {{ include "baseline-study.selectorLabels" . | nindent 10 }}
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10G
```
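The behaviour described above matches how the microk8s-hostpath provisioner works: it creates the backing directory on one node, and the resulting PV carries no node affinity, so the scheduler is free to place the pod on a node where that directory does not exist. A common workaround (not from the issue itself, just a standard approach) is to pin the StatefulSet to the node that actually holds the data; a minimal sketch, where `node-1` is a hypothetical node name:

```yaml
# Sketch of a workaround: pin the pod to the node that holds the
# hostpath volume. "node-1" is a placeholder; substitute the node where
# the PV's directory was created (visible in the hostPath field of
# `kubectl get pv <pv-name> -o yaml`).
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: node-1
```

This trades scheduling flexibility for correctness; a provisioner whose StorageClass uses `volumeBindingMode: WaitForFirstConsumer` (as the OpenEBS option discussed in the comments below does) avoids the problem without pinning.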
Top GitHub Comments
Hi @Namyts. I used cStor as well as Ceph via Rook for a few months. Both were fairly straightforward to set up from what I remember, but needed a bit of configuration tweaking. Now I only use OpenEBS Local PV.
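For anyone wanting to follow the same route, a minimal PVC sketch for OpenEBS Local PV, assuming the `openebs-hostpath` StorageClass name that a default install creates (the claim name is hypothetical; adjust both if your install differs):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-hostpath-pvc            # hypothetical name
spec:
  storageClassName: openebs-hostpath  # assumed default Local PV class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Because that class uses `volumeBindingMode: WaitForFirstConsumer`, the volume is only created once the pod has been scheduled, on the same node, which sidesteps the node mismatch from the original question.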
For anything that needs clustered storage I now use the MinIO operator with their DirectCSI driver. Not because of issues with OpenEBS or Rook Ceph, but because I like a lot of the things MinIO does. They are busy reworking the CSI driver to handle different storage classes; once that's done, and if it works well, I will probably use it for everything, not just MinIO (https://github.com/minio/direct-csi), as I'm trying not to have too many different projects doing the same thing.
I didn't try cStor with loop devices instead of raw block devices; that could potentially be a problem. OpenEBS has a great Slack channel, so perhaps ask there.
We have included the OpenEBS addon as of MicroK8s 1.21, hence closing the issue. Thanks!
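To tie that back to the original template: after `microk8s enable openebs`, pointing the volumeClaimTemplates at a Local PV class should be all that is needed. A sketch, assuming the addon ships the `openebs-hostpath` class:

```yaml
volumeClaimTemplates:
  - metadata:
      name: {{ $namePrefix }}-data-vol
    spec:
      storageClassName: openebs-hostpath  # assumed class name from the addon
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10G
```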