
Rancher Desktop issues when connected to company VPN

See original GitHub issue

Hi,

When I am connected to my company VPN (Check Point Endpoint Security), Rancher Desktop 0.5.0 (Kubernetes) no longer starts up. When not on the VPN, everything works.


I also have a second WSL distribution running (Ubuntu 20.04) with the Docker daemon. That distribution also sets the MTU of eth0 to 1350, which is needed so WSL can connect via the VPN.
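For reference, one way to make an MTU change like that survive restarts is a boot command in the distribution's /etc/wsl.conf. This is a sketch, not from the thread: the `[boot]` section requires a WSL version that supports it, and 1350 is simply the value used above.

```ini
# /etc/wsl.conf inside the WSL distribution (sketch; requires a WSL
# version with [boot] support). Lowers eth0's MTU so packets still fit
# inside the VPN tunnel's encapsulation overhead.
[boot]
command = "ip link set dev eth0 mtu 1350"
```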

Attached you can find the k3s log file: k3s.log

Here are my network adapters:

On the host:

Ethernet adapter Ethernet 3:

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::e126:5894:846d:5657%10
   IPv4 Address. . . . . . . . . . . : 192.168.99.95
   Subnet Mask . . . . . . . . . . . : 255.255.255.128
   Default Gateway . . . . . . . . . :

Ethernet adapter Ethernet 2:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . : home

Ethernet adapter vEthernet (Default Switch):

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::618a:99e8:a539:75e5%37
   IPv4 Address. . . . . . . . . . . : 172.20.176.1
   Subnet Mask . . . . . . . . . . . : 255.255.240.0
   Default Gateway . . . . . . . . . :

Wireless LAN adapter WLAN:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . :

Wireless LAN adapter LAN-Verbindung* 1:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . :

Wireless LAN adapter LAN-Verbindung* 2:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . :

Ethernet adapter Ethernet:

   Connection-specific DNS Suffix  . : home
   IPv6 Address. . . . . . . . . . . : 2001:871:223:34d1:8df8:873e:4ca3:3fe
   Temporary IPv6 Address. . . . . . : 2001:871:223:34d1:c0a5:7eab:dde1:85b7
   Link-local IPv6 Address . . . . . : fe80::8df8:873e:4ca3:3fe%20
   IPv4 Address. . . . . . . . . . . : 10.0.0.12
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : fe80::8e59:c3ff:fe43:971%20
                                       10.0.0.138

Ethernet adapter Bluetooth Network Connection:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . :

Ethernet adapter vEthernet (WSL):

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::ed2f:841d:d369:7a8b%68
   IPv4 Address. . . . . . . . . . . : 172.23.208.1
   Subnet Mask . . . . . . . . . . . : 255.255.240.0
   Default Gateway . . . . . . . . . :

In the Rancher WSL:

/etc # ifconfig
cni0      Link encap:Ethernet  HWaddr 7A:19:8F:7B:E2:DE
          inet addr:10.42.0.1  Bcast:10.42.0.255  Mask:255.255.255.0
          inet6 addr: fe80::7819:8fff:fe7b:e2de/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1300  Metric:1
          RX packets:10678 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12099 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3219760 (3.0 MiB)  TX bytes:3712184 (3.5 MiB)

docker0   Link encap:Ethernet  HWaddr 02:42:C8:D6:DA:C1
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 00:15:5D:1A:55:01
          inet addr:172.23.220.104  Bcast:172.23.223.255  Mask:255.255.240.0
          inet6 addr: fe80::215:5dff:fe1a:5501/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1350  Metric:1
          RX packets:513 errors:0 dropped:0 overruns:0 frame:0
          TX packets:362 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:137255 (134.0 KiB)  TX bytes:30071 (29.3 KiB)

flannel.1 Link encap:Ethernet  HWaddr 62:E2:0A:7B:96:C0
          inet addr:10.42.0.0  Bcast:0.0.0.0  Mask:255.255.255.255
          inet6 addr: fe80::60e2:aff:fe7b:96c0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1300  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:5 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:30229 errors:0 dropped:0 overruns:0 frame:0
          TX packets:30229 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11765245 (11.2 MiB)  TX bytes:11765245 (11.2 MiB)

veth12cbf751 Link encap:Ethernet  HWaddr 22:30:EA:3B:27:B3
          inet6 addr: fe80::2030:eaff:fe3b:27b3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1300  Metric:1
          RX packets:3542 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3606 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:243608 (237.8 KiB)  TX bytes:248328 (242.5 KiB)

veth144de2e1 Link encap:Ethernet  HWaddr 6A:AA:85:FB:02:EA
          inet6 addr: fe80::68aa:85ff:fefb:2ea/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1300  Metric:1
          RX packets:1187 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1351 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:328481 (320.7 KiB)  TX bytes:509824 (497.8 KiB)

veth2807426d Link encap:Ethernet  HWaddr 6A:3E:6D:F3:46:EE
          inet6 addr: fe80::683e:6dff:fef3:46ee/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1300  Metric:1
          RX packets:4740 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5986 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:995009 (971.6 KiB)  TX bytes:1819226 (1.7 MiB)

veth2ca4cd5d Link encap:Ethernet  HWaddr AE:D9:61:43:63:D2
          inet6 addr: fe80::acd9:61ff:fe43:63d2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1300  Metric:1
          RX packets:1948 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2016 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1766258 (1.6 MiB)  TX bytes:1103435 (1.0 MiB)

vethe9b37eac Link encap:Ethernet  HWaddr 36:B0:54:30:89:2D
          inet6 addr: fe80::34b0:54ff:fe30:892d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1300  Metric:1
          RX packets:2851 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3101 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:287332 (280.5 KiB)  TX bytes:309737 (302.4 KiB)

Any clue what's going on in my instance? The logs contain several connection-failure entries, so I assume it's something networking-related. Any pointers on how I could debug further?

Issue Analytics

  • State:open
  • Created 2 years ago
  • Reactions:3
  • Comments:16 (3 by maintainers)

Top GitHub Comments

4 reactions
erniedavisAA commented, Feb 4, 2022

I’m having the same issue: without the VPN it works fine, but with the VPN it gets stuck on the “Starting Kubernetes…” message.

As far as I can tell, it has nothing to do with IP collisions, but probably with SSL certificates.

I’m using Zscaler as the VPN, but the issue is the same as the ones reported with other VPNs.

Here is my scenario :

1. Without the VPN, start Rancher Desktop.

2. On a cmd console, run `kubectl get pods` -- I get the expected message `No resources found in default namespace`.

3. Open a WSL instance; in my case I have an Ubuntu distribution -> `wsl -d Ubuntu-20.04`.

4. Inside the WSL instance, run `kubectl get pods` -- I get the same expected message `No resources found in default namespace`.

5. Start the VPN.

6. Inside the WSL instance, run `kubectl get pods` again -- I still get the expected message `No resources found in default namespace`.

7. On the cmd console, run `kubectl get pods` again -- I get the message `Unable to connect to the server: net/http: TLS handshake timeout`.

8. I'm able to ping the Rancher Desktop instance from cmd without problems.

9. This makes me think it has nothing to do with IP collisions.
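The `TLS handshake timeout` symptom in step 7 can be reproduced outside kubectl with a minimal probe. This is an illustrative sketch, not something from the thread: `check_tls` is a made-up name, and the host/port you pass in (e.g. the VM's address and 6443) are assumptions for your own setup. Running it from a Windows console while on the VPN, and again from inside the WSL distribution, would show whether the handshake stalls only on the Windows side.

```python
import socket
import ssl


def check_tls(host: str, port: int, timeout: float = 3.0) -> str:
    """Attempt a TLS handshake and classify the failure mode, mimicking
    kubectl's 'net/http: TLS handshake timeout' symptom."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # the k3s API cert is self-signed
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as raw:
            raw.settimeout(timeout)
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                return f"ok: {tls.version()}"
    except socket.timeout:
        return "timeout: connect or TLS handshake stalled"
    except OSError as exc:
        return f"error: {exc}"
```

An `ok` result from WSL but a `timeout` from Windows would match the behavior described above, pointing at the Windows-side path through the VPN adapter rather than at the cluster itself.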

Also, in my case, even when it gets stuck on the “Starting Kubernetes…” message, the k8s cluster is actually up and running. Here is how I'm able to confirm that:

1. **WITH** the VPN, start Rancher Desktop.

2. It gets stuck on the "Starting Kubernetes…" message.

3. Open a WSL instance; in my case I have an Ubuntu distribution -> `wsl -d Ubuntu-20.04`.

4. Inside the WSL instance, run `kubectl get pods` -- I get the same expected message `No resources found in default namespace`. I can run any kubectl command from there without issues, but the "Starting Kubernetes…" message never goes away.

@psaenz - try the following:

1. Change your ~/.kube/config entry for Rancher Desktop to ‘localhost’ instead of the assigned IP.
2. Download and install wsl-vpnkit, following the directions here.
3. Change your VPN adapter's routing metric by following the instructions here.

1 reaction
slonghei commented, Aug 3, 2022

1.4.1 obviously still works. If wsl-vpnkit is ultimately the answer, it needs to be folded into a more permanent solution, because a lot of large clients (and small ones as well) force users to be on a VPN, especially to work with internal Docker registries.

The second issue is the problem of having to force users to reset the .kube/config file to localhost every time before starting Rancher Desktop. [This is the cluster server address I am referring to in the .kube/config file: `server: https://localhost:6443`.] This reset is annoying at best, but I assume it is only needed because wsl-vpnkit requires it to be set this way in order to discover the appropriate cluster server IP address.

It would be great to automagically reset this every time before Rancher Desktop starts up. Presumably this could be done with custom scripting (a startup folder for Rancher Desktop) to change the address back to localhost, though this would probably only apply when the user is running under a VPN. Any thoughts on folding this into newer versions of the product (assuming anything later than 1.4.1 is now broken)?
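The "custom scripting" idea in that comment could look something like the following. This is a hypothetical illustration, not anything shipped with Rancher Desktop: the function name, the kubeconfig path, and the regex (which only matches IPv4 server addresses) are all assumptions.

```python
import re
from pathlib import Path


def reset_server_to_localhost(kubeconfig: Path) -> bool:
    """Rewrite the cluster server address in a kubeconfig to localhost,
    preserving whatever port it already uses. Returns True if the file
    was changed, False if it already pointed at localhost."""
    text = kubeconfig.read_text()
    # Matches e.g. 'server: https://172.23.220.104:6443' and swaps the
    # IPv4 host for localhost while keeping the ':6443' port suffix.
    new_text = re.sub(r"(server:\s*https://)[0-9.]+(:\d+)",
                      r"\1localhost\2", text)
    if new_text != text:
        kubeconfig.write_text(new_text)
    return new_text != text
```

A small wrapper that calls this (e.g. `reset_server_to_localhost(Path.home() / ".kube" / "config")`) before launching Rancher Desktop would remove the manual reset, though as the comment notes it only makes sense when the user is on the VPN.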

Read more comments on GitHub >

Top Results From Across the Web

FAQ - Rancher Desktop Docs
A: An alternative DNS resolver for Windows has been implemented to address some of the VPN issues on Windows. It should support DNS...

Install Docker on WSL 2 with VPN Support to ... - Medium
Step-by-Step. #1 Disconnect from VPN (if connected). #2 Deinstall Docker for Windows, Feature “Subsystem for Windows” should stay active:.

Networking | Rancher Manager
websocket: bad handshake; Failed to connect to proxy; read tcp: i/o timeout. See Google Cloud VPN: MTU Considerations for an ...

There's also Rancher Desktop in the same space, which ...
At work, unfortunately all options but Docker Desktop have issues with either 1) Our Cisco AnyConnect VPN, or 2) Our authenticated http proxy....

oasis-application-development / spike-rancher-desktop · GitLab
0.0, so you can route to rancher hosted images using localhost or 0.0.0.0 while using Cisco VPN. But if you want to do...
