Setting up on Google Cloud
Hello! We are interested in setting up a REANA cluster on Google Cloud Platform (GCP).
We followed the zero-to-jupyterhub documentation (https://zero-to-jupyterhub.readthedocs.io/en/stable/) to set up a Kubernetes cluster, and then followed the instructions at https://reana-cluster.readthedocs.io/en/latest/gettingstarted.html#deploy-locally, but instead of using minikube we pointed it to our cluster-in-the-clouds. Pretty quickly we discovered that we can't write to /reana on these cloud machines (see https://cloud.google.com/container-optimized-os/docs/concepts/security): all the pods come crashing down as soon as they try writing into this directory. So we edited the provided default configuration (https://reana-cluster.readthedocs.io/en/latest/userguide.html#configure-reana-cluster) to point to /etc/reana, which is writeable. This solved most of the problems. The one remaining issue is that the database pod is still crashing. The logs in this pod are:
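For anyone reproducing this, the edit is roughly as follows. This is a sketch only: the key names mirror the example reana-cluster.yaml from the docs and may differ between reana-cluster versions, so check your own configuration file for the exact schema.

```yaml
# reana-cluster.yaml (sketch -- key names follow the example configuration
# in the reana-cluster user guide and may differ between versions)
cluster:
  type: kubernetes
  # Originally /reana, which is not writeable on Container-Optimized OS nodes:
  root_path: /etc/reana
  db_persistence_path: /etc/reana/db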
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
FATAL: could not write to file "pg_xlog/xlogtemp.29": No space left on device
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
running bootstrap script ...
This suggests that maybe it's still trying to write to a disallowed location.
We’re not necessarily expecting you to fix this, if it’s not currently on your road-map, but we thought it would be good to raise it, and at least document our experiments for future experimenters seeking guidance.
But of course: your thoughts would be appreciated. Thanks!
Issue Analytics
- State:
- Created: 5 years ago
- Reactions: 1
- Comments: 8 (5 by maintainers)
Top GitHub Comments
Indeed, REANA needs a shared filesystem at this stage. Support for distributed file systems, say S3, is planned for later on.
We have not yet tried installing on GCP, but it would definitely be interesting to provide runnable configurations out of the box!
I’ve not been involved lately in the development, so I might not be of much help, but I’m pretty sure you need to have distributed storage available. At CERN we use volumes provided by CephFS, which support the ReadWriteMany access mode (see the table at https://kubernetes.io/docs/concepts/storage/persistent-volumes/). I think on GCP the only option available is Cloud Filestore (https://cloud.google.com/filestore/docs/accessing-fileshares), but I haven’t tried this yet. Maybe @diegodelemos or @tiborsimko can comment whether a shared fs (or even Ceph) is still a hard requirement.

In any case: happy to see people interested in deploying REANA, we’ll try to help as much as we can!
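For future readers: Cloud Filestore exports a plain NFS share, so one way to get a ReadWriteMany volume on GKE is a manually defined NFS PersistentVolume and matching claim. This is a sketch under assumptions: the server IP (10.0.0.2), export path (/vol1), and sizes are placeholders for whatever your Filestore instance reports, and the volume/claim names are illustrative.

```yaml
# Sketch: ReadWriteMany volume backed by Cloud Filestore (NFS).
# 10.0.0.2 and /vol1 are placeholders -- use the IP address and file
# share name shown for your Filestore instance.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: reana-shared-volume
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2
    path: /vol1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: reana-shared-volume
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""   # bind to the pre-provisioned PV above, not a dynamic class
  resources:
    requests:
      storage: 1Ti
```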