[Question] How is HTTPS/SSL termination handled?
Hi all 👋
I’m trying to figure out how HTTPS/SSL is handled with respect to the Gateway APIs/BFFs. What technology handles this? Sure, each Gateway API can be ASP.NET Core or ASP.NET (classic) or Java or Ruby, etc… so I’m more interested in the higher-level concepts.
Let’s review the existing architecture using a simple scenario: the mobile app (only because it’s the first client app in the picture).
So the mobile app communicates with the Mobile-Shopping BFF. This happens to be an ASP.NET Core 2.1 web app. The Dockerfile only exposes port 80, and there’s no mention of forcing HTTPS in the Startup.cs file.
So to me, this means there’s no strict requirement for HTTPS between the client app and the BFF. In fact, HTTPS communication isn’t even possible.
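(For what it’s worth, if we *did* want the app itself to enforce HTTPS, my understanding is that in ASP.NET Core 2.1 the wiring would look roughly like this — a sketch of the standard middleware, not what eShopOnContainers actually does, and it assumes a certificate is available to the container:)

```csharp
// Startup.cs (ASP.NET Core 2.1) -- a hedged sketch, not eShopOnContainers code.
public void ConfigureServices(IServiceCollection services)
{
    // Redirect HTTP -> HTTPS, and configure HSTS for non-development environments.
    services.AddHttpsRedirection(options => options.HttpsPort = 443);
    services.AddHsts(options => options.MaxAge = TimeSpan.FromDays(60));
    services.AddMvc();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (!env.IsDevelopment())
    {
        app.UseHsts();               // send Strict-Transport-Security headers
    }
    app.UseHttpsRedirection();       // 307 redirect plain HTTP to HTTPS
    app.UseMvc();
}
```

But as the discussion below suggests, terminating TLS inside each container may not be the right place for it anyway.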
So, questions:
- Shouldn’t we be enforcing HTTPS between Client Apps <–> Gateway APIs/BFF?
- If yes (to above), then where should this termination be handled?
- Should we be adding a reverse proxy in front of Kestrel (for these ASP.NET Core 2.1 Web API apps)? Yes, Kestrel has had some hardening, but is it still good practice to put something in front of it?
- If yes again (to the reverse-proxy question), then should this be inside the Docker container/instance?
This all started when I built my own Gateway API/BFF and, thinking it was my public-facing web server, figured I needed to make sure the communication was over HTTPS… and then totally struggled/failed to get my Docker instance to work with a development cert.
When asking some other internet-friends for help, a suggestion I got was: “Don’t handle the SSL termination in your container/instance, but have something in front of it, called an Ingress.”
So I’m now all confused 😦
- What would this Ingress be?
- How would this Ingress be a part of the eShopOnContainers project/architecture?
And finally… just to add one more item to all of this: we heavily use/leverage Cloudflare for our existing projects and would love to keep using them, so I would also be adding them between the Client <–> BFF. I’m not sure if this impacts any of the questions above (usually it doesn’t). Cloudflare does have the ability to terminate SSL and then communicate with our public BFFs over HTTP instead.
- Is this the Ingress?
- Can we then IP-whitelist the BFFs to Cloudflare (and possibly a few other IPs, like our office static IP or an Azure VNET IP which we VPN into)?
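(For context on the whitelisting idea: I believe nginx-based ingress controllers can restrict traffic by source IP with a single annotation. A sketch of what I have in mind — the service name and IP ranges here are made up by me, not taken from the project:)

```yaml
# Hypothetical sketch: only allow Cloudflare + office IP ranges to reach the ingress.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mobileshoppingapigw-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # Requests from any other source range are rejected with 403.
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24, 198.51.100.7/32"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: mobileshoppingapigw
          servicePort: 80
```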
Anyways, I sincerely hope this question makes sense.
Keep up the awesome work 😃
Issue Analytics
- Created 5 years ago
- Reactions: 5
- Comments: 7 (5 by maintainers)
Top GitHub Comments
👋 @eiximenis thank you heaps for your reply! It really, really helps. I’m new to all of this (hence why I’m reading up on eShop, trying to learn it), so I still have some questions about your answers. Apologies if they are really noobish 😊
and
OK, so this means that all traffic will pass through the ingress controller, and the only way to access anything is via the Ingress? (My assumption from reading the answers above is: yep.)
Hold on… all services? Or is this just all the Gateways/BFFs? I thought the main idea with these microservices was that access is controlled via the Gateway APIs/BFFs?
Awesome 😃 I was wondering about this too (internal comms). I usually don’t do any super-secret govt-classified stuff, so my assumption here is that if the internal network has been compromised (and the NCA or whatever is now snooping all of this traffic), then I have bigger problems than encrypting comms. 😃
Oh nice 😃
Another side question: would information about this topic be useful in the eShop book? Or is this all considered out of scope, etc?
Hi @PureKrome. An ingress is a service, exposed to the outside, that acts as an endpoint for other services. Conceptually it is similar to a reverse proxy (and technically it is usually implemented using one).
In eShopOnContainers we are using a layer 7 ingress (which is currently the only kind provided out of the box by Kubernetes). A layer 7 ingress means it understands OSI layer 7, and can therefore make decisions based on HTTP messages instead of relying only on TCP/IP packets (as a layer 4 ingress does).
So we have only one service accessible from the outside, which is our ingress controller. It is implemented using nginx. We don’t use HTTPS, but if you did, this is the place where you would terminate it. All traffic goes through the ingress, and the ingress then re-routes the traffic to the specific service (i.e. an API gateway) using internal cluster communication (which can be plain HTTP).
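A minimal sketch of what TLS termination at the ingress would look like (host, secret, and service names here are illustrative, not the project’s actual manifests; `eshop-tls` would be a Kubernetes secret holding the cert and key):

```yaml
# Sketch: TLS terminates at the nginx ingress; the hop to backends stays plain HTTP.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: eshop-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - eshop.example.com
    secretName: eshop-tls            # certificate + private key, stored as a secret
  rules:
  - host: eshop.example.com
    http:
      paths:
      - path: /mobileshoppingapigw
        backend:
          serviceName: mobileshoppingapigw
          servicePort: 80            # internal cluster traffic can be HTTP
```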
BTW, adding SSL support to k8s is not hard. The hardest part is having a certificate that is valid for the DNS name used by your cluster. Using a cert provider that supports the ACME protocol (like Let’s Encrypt), it is possible to use cert-manager to issue an SSL certificate at runtime, in an automated manner, from within the cluster. It is basically configuration 😃
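As a sketch, a cert-manager issuer backed by Let’s Encrypt looks roughly like this. The exact API group/version and solver syntax depend on the cert-manager release you install, and the email is made up, so treat this as illustrative only:

```yaml
# Sketch: Let's Encrypt issuer for cert-manager (schema varies by release).
apiVersion: certmanager.k8s.io/v1alpha1   # group used by early cert-manager releases
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com              # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key  # where the ACME account key is stored
    http01: {}                            # solve ACME challenges via HTTP-01
```

With an issuer like this in place, certificates can be requested per-host and wired into the ingress `tls` section automatically.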
So, the key points are:
Hope this helps 😃