
Unable to get this working.

See original GitHub issue

I have followed the excellent article https://jonathangazeley.com/2021/01/05/using-truenas-to-provide-persistent-storage-for-kubernetes/ to set this up. For testing purposes I used both the iscsi and nfs implementations. The whole issue seems to be related to permissions of some sort. In both cases:

  • I use an API key, and that seems to be fine.
  • For SSH, if I use the root account, all the pods start, but when I try to bind a PVC I receive the error "Error: Sorry, user root is not allowed to execute '/usr/local/sbin/zfs create -p … as root on nas.domain.name".
  • For SSH, if I use an account I created, the pods do not start correctly; the democratic-csi-controller remains in a CrashLoopBackOff status.

I have spent several days trying to change permissions on the datasets, with no difference. I have also set the option for root to log in with a password; no difference.

Honestly, I would prefer to get this going without using root, but I cannot see how. Maybe I have missed a step?
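For what it's worth, democratic-csi's example configs suggest a non-root setup is possible by giving a dedicated service account passwordless sudo on the TrueNAS side and enabling sudo in the driver config. A minimal sketch, assuming a user named `csi` and the `freenas-nfs` driver — the exact key names and values here are assumptions, so check the example config shipped with your driver version:

```yaml
# Fragment of the driver config (freenas-nfs shown); illustrative values only.
driver: freenas-nfs
sshConnection:
  host: nas.domain.name
  port: 22
  username: csi            # dedicated non-root user, not root
  privateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
zfs:
  cli:
    sudoEnabled: true      # prefix zfs commands with sudo for the non-root user
```

For this to work, the `csi` user would also need a NOPASSWD sudoers entry on the TrueNAS host so the driver's `zfs` commands can run unattended.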

When I describe the failing pod, I get

```
Name:           zfs-nfs-democratic-csi-controller-f7f74948d-thcbl
Namespace:      csi
Priority:       0
Node:           k8snode1/192.168.1.32
Start Time:     Sun, 18 Apr 2021 04:05:58 +0000
Labels:         app.kubernetes.io/csi-role=controller
                app.kubernetes.io/instance=zfs-nfs
                app.kubernetes.io/name=democratic-csi
                pod-template-hash=f7f74948d
Annotations:    checksum/secret: 8d169505d98469a310c7df1023ef59e5b929723bc6735958c045d9a06ce0dc64
                cni.projectcalico.org/podIP: 172.16.249.27/32
                cni.projectcalico.org/podIPs: 172.16.249.27/32
Status:         Running
IP:             172.16.249.27
IPs:
  IP:           172.16.249.27
Controlled By:  ReplicaSet/zfs-nfs-democratic-csi-controller-f7f74948d
Containers:
  external-provisioner:
    Container ID:  docker://1e461dc1ed22c35e800e70f15a5a04aa1bc1f00bfd5bc593da5a4fa8adf28db6
    Image:         k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
    Image ID:      docker-pullable://k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --leader-election
      --leader-election-namespace=csi
      --timeout=90s
      --worker-threads=10
      --extra-create-metadata
      --csi-address=/csi-data/csi.sock
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 18 Apr 2021 04:07:03 +0000
      Finished:     Sun, 18 Apr 2021 04:07:03 +0000
    Ready:          False
    Restart Count:  3
    Environment:    <none>
    Mounts:
      /csi-data from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from zfs-nfs-democratic-csi-controller-sa-token-97h7r (ro)
  external-resizer:
    Container ID:  docker://3c36309683781bacb7fba23e9f8a936aacd7eb897d637e1cf95b67df3346f98b
    Image:         k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
    Image ID:      docker-pullable://k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --leader-election
      --leader-election-namespace=csi
      --timeout=90s
      --workers=10
      --csi-address=/csi-data/csi.sock
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Sun, 18 Apr 2021 04:07:04 +0000
      Finished:     Sun, 18 Apr 2021 04:07:04 +0000
    Ready:          False
    Restart Count:  3
    Environment:    <none>
    Mounts:
      /csi-data from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from zfs-nfs-democratic-csi-controller-sa-token-97h7r (ro)
  external-snapshotter:
    Container ID:  docker://a050c8a089af88f52b7bd4561c1165e6caff5f67125bc1a0bf437117e0945595
    Image:         k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3
    Image ID:      docker-pullable://k8s.gcr.io/sig-storage/csi-snapshotter@sha256:9af9bf28430b00a0cedeb2ec29acadce45e6afcecd8bdf31c793c624cfa75fa7
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --leader-election
      --leader-election-namespace=csi
      --timeout=90s
      --worker-threads=10
      --csi-address=/csi-data/csi.sock
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 18 Apr 2021 04:07:05 +0000
      Finished:     Sun, 18 Apr 2021 04:07:05 +0000
    Ready:          False
    Restart Count:  3
    Environment:    <none>
    Mounts:
      /csi-data from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from zfs-nfs-democratic-csi-controller-sa-token-97h7r (ro)
  csi-driver:
    Container ID:  docker://3ccfe3315a9bdf5dc060b5015a44aa100e6c43770802d5f57331acc5fec2fe98
    Image:         democraticcsi/democratic-csi:latest
    Image ID:      docker-pullable://democraticcsi/democratic-csi@sha256:8cd5f03aa2a8344a8652e6fbb8f626780013e8f2e45b015c20914a3788f2ac06
    Port:          <none>
    Host Port:     <none>
    Args:
      --csi-version=1.2.0
      --csi-name=org.democratic-csi.nfs
      --driver-config-file=/config/driver-config-file.yaml
      --log-level=info
      --csi-mode=controller
      --server-socket=/csi-data/csi.sock
    State:          Running
      Started:      Sun, 18 Apr 2021 04:06:07 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       exec [bin/liveness-probe --csi-version=1.2.0 --csi-address=/csi-data/csi.sock] delay=10s timeout=3s period=60s #success=1 #failure=5
    Environment:    <none>
    Mounts:
      /config from config (rw)
      /csi-data from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from zfs-nfs-democratic-csi-controller-sa-token-97h7r (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  socket-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  zfs-nfs-democratic-csi-driver-config
    Optional:    false
  zfs-nfs-democratic-csi-controller-sa-token-97h7r:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  zfs-nfs-democratic-csi-controller-sa-token-97h7r
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  Normal   Scheduled  68s                default-scheduler  Successfully assigned csi/zfs-nfs-democratic-csi-controller-f7f74948d-thcbl to k8snode1
  Normal   Pulling    61s                kubelet            Pulling image "democraticcsi/democratic-csi:latest"
  Normal   Pulled     61s                kubelet            Successfully pulled image "democraticcsi/democratic-csi:latest" in 574.289416ms
  Normal   Created    60s                kubelet            Created container csi-driver
  Normal   Started    60s                kubelet            Started container csi-driver
  Normal   Pulled     57s (x2 over 66s)  kubelet            Container image "k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0" already present on machine
  Normal   Created    57s (x2 over 64s)  kubelet            Created container external-provisioner
  Normal   Started    56s (x2 over 63s)  kubelet            Started container external-resizer
  Normal   Pulled     56s (x2 over 63s)  kubelet            Container image "k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3" already present on machine
  Normal   Started    56s (x2 over 64s)  kubelet            Started container external-provisioner
  Normal   Pulled     56s (x2 over 64s)  kubelet            Container image "k8s.gcr.io/sig-storage/csi-resizer:v1.1.0" already present on machine
  Normal   Created    56s (x2 over 63s)  kubelet            Created container external-resizer
  Normal   Created    55s (x2 over 62s)  kubelet            Created container external-snapshotter
  Normal   Started    55s (x2 over 61s)  kubelet            Started container external-snapshotter
  Warning  BackOff    54s                kubelet            Back-off restarting failed container
  Warning  BackOff    54s                kubelet            Back-off restarting failed container
  Warning  BackOff    54s                kubelet            Back-off restarting failed container
```

Any help anyone can offer will be greatly appreciated.

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 27 (13 by maintainers)

Top GitHub Comments

1 reaction
MarkLFT commented, Apr 24, 2021

Really, something so simple. I have been going through everything checking all the GUID names, account names, passwords etc. but missed the port number. I am really sorry for wasting your time.
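For context, the root cause here was an incorrect port in the driver's connection settings, and both connection blocks in a democratic-csi driver config carry an explicit port that is easy to overlook. A sketch of the relevant fields — the values below are illustrative assumptions, not taken from this issue:

```yaml
# Fragment of the driver config; double-check both ports against your NAS.
httpConnection:
  protocol: http
  host: nas.domain.name
  port: 80            # TrueNAS web UI / API port (often 443 with https)
  apiKey: "<api key>"
sshConnection:
  host: nas.domain.name
  port: 22            # SSH port on the NAS
  username: root
```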

1 reaction
MarkLFT commented, Apr 20, 2021

Using what I learned previously, iscsi now works. That is great, many thanks for your help.

Now to test them with some pods.
