TrueNAS Scale 21.08 - Could not log into all portals
Hello there,
Firstly, thank you for making the driver API-only; I can sleep better without a root SSH key floating around.
I’m testing democratic-csi v1.3.0 (zfs-api-iscsi) on TrueNAS SCALE 21.08; however, I’m getting the error:
{"code":19,"stdout":"Logging in to [iface: default, target: iqn.2005-10.org.freenas.ctl:csi-pvc-9e4c598a-ee71-4bec-8c36-bd0dfef99340-cluster, portal: 10.80.0.2,3260] (multiple)\\n","stderr":"iscsiadm: Could not login to [iface: default, target: iqn.2005-10.org.freenas.ctl:csi-pvc-9e4c598a-ee71-4bec-8c36-bd0dfef99340-cluster, portal: 10.80.0.2,3260].\\niscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)\\niscsiadm: Could not log into all portals\\n"}'
Cluster design:
- TrueNAS SCALE 21.08 (2 x NICs)
- Rancher K3s running on 4 x Raspberry Pi 4 (3 manager, 1 worker)
My configuration is the following:
csiDriver:
  name: "org.democratic-csi.iscsi"
storageClasses:
  - name: freenas-iscsi-csi
    defaultClass: true
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    parameters:
      fsType: xfs
    mountOptions: []
    secrets:
      provisioner-secret:
      controller-publish-secret:
      node-stage-secret:
      node-publish-secret:
      controller-expand-secret:
driver:
  config:
    driver: freenas-api-iscsi
    instance_id: aquila
    httpConnection:
      protocol: https
      host: 192.168.50.10
      port: 443
      apiKey: <key>
      allowInsecure: true
      apiVersion: 2
    zfs:
      datasetParentName: cold/k8s/iscsi/v
      detachedSnapshotsDatasetParentName: cold/k8s/iscsi/s
      zvolCompression:
      zvolDedup:
      zvolEnableReservation: false
      zvolBlocksize:
    iscsi:
      targetPortal: "10.80.0.2:3260"
      interface: eth0
      namePrefix: csi-
      nameSuffix: "-cluster"
      targetGroups:
        - targetGroupPortalGroup: 1
          targetGroupInitiatorGroup: 3
          targetGroupAuthType: None
          targetGroupAuthGroup: null
      extentInsecureTpc: true
      extentXenCompat: false
      extentDisablePhysicalBlocksize: true
      extentBlocksize: 4096
      extentRpm: "7200"
      extentAvailThreshold: 0
Everything on the TrueNAS side seems to provision fine; the error appears to be on the Kubernetes node side of things.
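A quick way to take the CSI driver out of the picture is to attempt the discovery and login manually with iscsiadm from one of the nodes. This is a debugging sketch, not a confirmed fix; the portal and IQN below are copied from the error above, and it assumes open-iscsi and the iscsi_tcp kernel module are available on the Pi nodes:

# check the initiator prerequisites on the node
systemctl status iscsid
lsmod | grep iscsi_tcp || sudo modprobe iscsi_tcp

# discover targets on the portal, then attempt the same login the driver performs
sudo iscsiadm -m discovery -t sendtargets -p 10.80.0.2:3260
sudo iscsiadm -m node \
  -T iqn.2005-10.org.freenas.ctl:csi-pvc-9e4c598a-ee71-4bec-8c36-bd0dfef99340-cluster \
  -p 10.80.0.2:3260 --login

If the manual login fails with the same error 19, the problem is between the initiator and the portal (network path, portal group, initiator group) rather than in democratic-csi itself.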
Note: this is an issue if the system boots without any targets/LUNs (unlikely with ongoing usage). If you do find yourself in this situation, the following is a work-around (run directly on the SCALE CLI); it must be done after a target/LUN has been added:
systemctl restart scst
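After the restart you can verify that SCST came back up and that the target is actually being exposed. The sysfs path below is where SCST normally publishes its iSCSI targets; treat it as an assumption to adapt to your system:

systemctl status scst
ls /sys/kernel/scst_tgt/targets/iscsi/   # assumed SCST sysfs location for iSCSI targets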
Great! The API key should already be in a secret (the whole config is a giant secret). At least if you used the helm chart…
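For reference, a sketch of how you might confirm the rendered config landed in a secret. The namespace and secret/key names here are assumptions based on a default helm install; check the chart templates and adjust to your release name:

kubectl -n democratic-csi get secrets
# dump the rendered driver config (secret/key names are assumptions)
kubectl -n democratic-csi get secret <release>-driver-config \
  -o jsonpath='{.data.driver-config-file\.yaml}' | base64 -d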