Testing strategy for the Helm Chart
Is your feature request related to a problem? Please describe.
I would like to discuss the testing strategy for the Apache Superset Helm Chart, in order to improve the developer experience and avoid issues like #17920.
I think DevEx tweaks in this area could improve development efficiency, which would allow us to address the numerous issues that exist for the Helm Chart. Then we would have the confidence needed to publish it on an artifact repository as an official artifact.
Describe the solution you’d like
I see numerous solutions in this area. I am afraid I do not have enough experience with the project to see all the required aspects, so I would highly appreciate comments from experienced people, both users and developers. Comments and additional requirements are welcome!
First of all, it would be great to provide a values.schema.json file (https://helm.sh/docs/topics/charts/#schema-files), i.e. a JSON Schema for the values.yaml file. I already have a draft in this area. This would improve the developer experience, but most of all it would prevent end users from using a wrongly formatted or mis-indented values.yaml file.
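To make this concrete, here is a minimal sketch of what such a values.schema.json could look like; the key names below (image, supersetNode.replicaCount) are only illustrative and would need to match the chart's actual values.yaml. Helm validates the supplied values against this schema on helm install, helm upgrade, helm lint and helm template:

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "image": {
      "type": "object",
      "properties": {
        "repository": { "type": "string" },
        "tag": { "type": "string" }
      },
      "required": ["repository"]
    },
    "supersetNode": {
      "type": "object",
      "properties": {
        "replicaCount": { "type": "integer", "minimum": 1 }
      }
    }
  }
}
```

With such a schema in place, a mis-indented or wrongly typed value (e.g. a string where an integer is expected) fails fast at install/upgrade time instead of producing a broken manifest.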
Secondly, I am thinking about providing unit tests that render the Helm chart in order to:
- verify the generated manifests against the expected Kubernetes schema (I think this would avoid situations like #17920)
- use pytest to render the Helm manifests and as the test framework, e.g. Apache Airflow uses that approach (see the sketch after this list)
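As a rough sketch (not an actual test from the repository), such a pytest suite could shell out to helm template, parse the rendered YAML and assert on the generated manifests, similar to the approach in the Apache Airflow chart. The chart path, release name and assertions below are assumptions for illustration, and the helm binary plus PyYAML are required on the test runner:

```python
import subprocess

import yaml


def render_chart(chart_dir="helm/superset", values_file=None):
    """Render the chart with `helm template` and return the parsed manifests."""
    cmd = ["helm", "template", "superset", chart_dir]
    if values_file:
        cmd += ["--values", values_file]
    rendered = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    return [doc for doc in yaml.safe_load_all(rendered) if doc]


def test_deployments_use_expected_api_version():
    deployments = [d for d in render_chart() if d.get("kind") == "Deployment"]
    assert deployments, "expected at least one Deployment to be rendered"
    for deployment in deployments:
        # Guards against rendering deprecated or invalid apiVersions.
        assert deployment["apiVersion"] == "apps/v1"
```

Such tests run in seconds on plain CI runners, since they only render templates and never talk to a real cluster.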
Describe alternatives you’ve considered
We might also use kubeval for testing the schema format, but this is yet another tool when we already have a few testing frameworks in place. A large number of different frameworks raises the entry threshold and lowers DevEx through constant context switching.
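For reference, using kubeval would roughly look like this (assuming the chart lives under helm/superset and kubeval is installed on the runner):

```bash
# Render the chart to plain manifests, then validate them against the
# Kubernetes schemas with kubeval.
helm template superset ./helm/superset > rendered-manifests.yaml
kubeval rendered-manifests.yaml
```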
We might also use conftest, but again, this is one more testing framework that does not bring much more value than pytest.
We might also start integration tests in CI, e.g. in minikube or, as a test environment, on AWS. This could be a great topic, but such tests are long and slow, and thus cover a limited number of scenarios (although they provide very realistic validation of correctness). I think we should start with something smaller.
Additional context
@wiktor2200 @nytai @mvoitko might be interested in this area, as they have been involved in the development of our Helm Chart. @mik-laj @potiuk may want to share their thoughts based on the experience of Apache Airflow (another Apache Software Foundation project).
Issue Analytics
- Created 2 years ago
- Reactions: 4
- Comments: 14 (14 by maintainers)
One more comment on that: in Airflow we have > 350 unit tests, but we also have “kubernetes integration tests”. Those are also run regularly in our CI and they consist of:
We also run separate “upgrade” tests: we install the chart and then upgrade it, just to make sure that the upgrade scenario also works (we had some problems with hooks and migration jobs in the past). That is a really small number of tests, but it gives us confidence that we have not done something disastrous by accident.
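A minimal sketch of such an upgrade smoke test (the release name and chart path are assumptions):

```bash
# Install the chart, then upgrade the same release to verify that hooks
# and migration jobs also work in the upgrade scenario.
helm install superset ./helm/superset --wait
helm upgrade superset ./helm/superset --wait
```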
I can heartily recommend Kind for this kind of testing; it is also great as a development environment when you would like to reproduce those kinds of tests locally.
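For example, a throwaway local cluster for reproducing these tests could be created and torn down like this (the cluster name is made up):

```bash
# Spin up a disposable local Kubernetes cluster with kind, run the chart
# tests against it, then delete it.
kind create cluster --name superset-chart-tests
# ... install/upgrade the chart and run the tests here ...
kind delete cluster --name superset-chart-tests
```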
Fully agree.