
Port Binding in Container Create Options fails container creation

See original GitHub issue

I have a custom Edge module that needs to expose some ports on the host. This worked fine until recently, but after updating the IoT Edge runtime, module creation fails. The port binding that fails (TCP 50051) is exposed in the Dockerfile of the edge module.
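
For reference, exposing a port like this in a module's Dockerfile is a single directive (a sketch; the actual Dockerfile is not shown in the issue):

EXPOSE 50051

Note that EXPOSE alone only documents the port; the actual host binding comes from the PortBindings in the container create options shown further below.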

Expected Behavior

Ports should be exposed and visible in the output of the netstat command.

Current Behavior

Module start fails

Steps to Reproduce

  1. Set a module with PortBindings in the Container Create options

Context (Environment)

Device (Host) Operating System

Windows 10 Pro

Architecture

64-bit

Container Operating System

Linux Containers

Runtime Versions

iotedged

iotedge 1.0.6~rc1 (8a38db421acf732c5d24f83cb71cb93d0db4ea96); also tried 1.0.5.

Edge Agent

mcr.microsoft.com/azureiotedge-agent:1.0

Edge Hub

mcr.microsoft.com/azureiotedge-hub:1.0

Docker

Client: Docker Engine - Community
 Version:           18.09.1
 API version:       1.39
 Go version:        go1.10.6
 Git commit:        4c52b90
 Built:             Wed Jan  9 19:34:26 2019
 OS/Arch:           windows/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.1
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       4c52b90
  Built:            Wed Jan  9 19:41:49 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Logs

2019-01-24 19:36:33.866 +00:00 [WRN] - Reconcile failed because of the an exception
System.AggregateException: One or more errors occurred. (Error calling start module hiddenmodulename: Could not start module hiddenmodulename
        caused by: Could not start module hiddenmodulename
        caused by: driver failed programming external connectivity on endpoint hiddenmodulename (b9526fd7aeefb585ada4e93f7eb6dd130c0bbba1a2eb9c745a0f9cca5b65ebe1): Error starting userland proxy: Bind for 0.0.0.0:50051: unexpected error Permission denied) ---> Microsoft.Azure.Devices.Edge.Agent.Edgelet.EdgeletCommunicationException: Error calling start module hiddenmodulename: Could not start module hiddenmodulename
        caused by: Could not start module hiddenmodulename
        caused by: driver failed programming external connectivity on endpoint hiddenmodulename (b9526fd7aeefb585ada4e93f7eb6dd130c0bbba1a2eb9c745a0f9cca5b65ebe1): Error starting userland proxy: Bind for 0.0.0.0:50051: unexpected error Permission denied
   at Microsoft.Azure.Devices.Edge.Agent.Edgelet.ModuleManagementHttpClient.Execute[T](Func`1 func, String operation) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Edgelet/ModuleManagementHttpClient.cs:line 194
   at Microsoft.Azure.Devices.Edge.Agent.Edgelet.ModuleManagementHttpClient.StartModuleAsync(String name) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Edgelet/ModuleManagementHttpClient.cs:line 142
   at Microsoft.Azure.Devices.Edge.Agent.Core.LoggingCommandFactory.LoggingCommand.ExecuteAsync(CancellationToken token) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/LoggingCommandFactory.cs:line 61
   at Microsoft.Azure.Devices.Edge.Agent.Core.Commands.GroupCommand.ExecuteAsync(CancellationToken token) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/commands/GroupCommand.cs:line 36
   at Microsoft.Azure.Devices.Edge.Agent.Core.LoggingCommandFactory.LoggingCommand.ExecuteAsync(CancellationToken token) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/LoggingCommandFactory.cs:line 61
   at Microsoft.Azure.Devices.Edge.Agent.Core.PlanRunners.OrderedRetryPlanRunner.ExecuteAsync(Int64 deploymentId, Plan plan, CancellationToken token) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/planrunners/OrdererdRetryPlanRunner.cs:line 86
   --- End of inner exception stack trace ---
   at Microsoft.Azure.Devices.Edge.Agent.Core.PlanRunners.OrderedRetryPlanRunner.<>c.<ExecuteAsync>b__7_0(List`1 f) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/planrunners/OrdererdRetryPlanRunner.cs:line 114
   at Microsoft.Azure.Devices.Edge.Agent.Core.PlanRunners.OrderedRetryPlanRunner.ExecuteAsync(Int64 deploymentId, Plan plan, CancellationToken token) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/planrunners/OrdererdRetryPlanRunner.cs:line 115
   at Microsoft.Azure.Devices.Edge.Agent.Core.Agent.ReconcileAsync(CancellationToken token) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/Agent.cs:line 176
   at Microsoft.Azure.Devices.Edge.Agent.Core.Agent.ReconcileAsync(CancellationToken token) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/Agent.cs:line 187
---> (Inner Exception #0) Microsoft.Azure.Devices.Edge.Agent.Edgelet.EdgeletCommunicationException- Message:Error calling start module hiddenmodulename: Could not start module hiddenmodulename
        caused by: Could not start module hiddenmodulename
        caused by: driver failed programming external connectivity on endpoint hiddenmodulename (b9526fd7aeefb585ada4e93f7eb6dd130c0bbba1a2eb9c745a0f9cca5b65ebe1): Error starting userland proxy: Bind for 0.0.0.0:50051: unexpected error Permission denied, StatusCode:500, at:   at Microsoft.Azure.Devices.Edge.Agent.Edgelet.ModuleManagementHttpClient.Execute[T](Func`1 func, String operation) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Edgelet/ModuleManagementHttpClient.cs:line 194
   at Microsoft.Azure.Devices.Edge.Agent.Edgelet.ModuleManagementHttpClient.StartModuleAsync(String name) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Edgelet/ModuleManagementHttpClient.cs:line 142
   at Microsoft.Azure.Devices.Edge.Agent.Core.LoggingCommandFactory.LoggingCommand.ExecuteAsync(CancellationToken token) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/LoggingCommandFactory.cs:line 61
   at Microsoft.Azure.Devices.Edge.Agent.Core.Commands.GroupCommand.ExecuteAsync(CancellationToken token) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/commands/GroupCommand.cs:line 36
   at Microsoft.Azure.Devices.Edge.Agent.Core.LoggingCommandFactory.LoggingCommand.ExecuteAsync(CancellationToken token) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/LoggingCommandFactory.cs:line 61
   at Microsoft.Azure.Devices.Edge.Agent.Core.PlanRunners.OrderedRetryPlanRunner.ExecuteAsync(Int64 deploymentId, Plan plan, CancellationToken token) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/planrunners/OrdererdRetryPlanRunner.cs:line 86<---
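
On a Windows host, "Permission denied" when binding a host port is often caused by the port falling inside a range Windows has reserved (for example for Hyper-V). That is an assumption about the root cause rather than something confirmed in this issue, but it is easy to check:

netsh interface ipv4 show excludedportrange protocol=tcp

If 50051 falls inside one of the listed excluded ranges, rebinding the module to a host port outside those ranges should let the container start.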

Additional Information

Container Create Options:

{
    "Hostname": "hiddenmodulename",
    "Cmd": [        
    ],
    "HostConfig": {
        "PortBindings": {
            "50051/tcp": [
                {
                    "HostPort": "50051"
                }
            ],
            "51210/tcp": [
                {
                    "HostPort": "51210"
                }
            ]
        },
        "Binds": [
            "d:/iotedge"
        ],
        "ExtraHosts": [
            "localhost:127.0.0.1"
        ]
    }
}
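
Once the container has been created, the bindings the daemon actually recorded can be verified with a standard Docker CLI call (the module name here matches the redacted one above):

docker inspect --format "{{json .HostConfig.PortBindings}}" hiddenmodulename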

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 17 (5 by maintainers)

Top GitHub Comments

1 reaction
skvark commented, Jun 11, 2020

Thanks! I managed to get the 4th module working by changing the auto-generated HostPort value in deployment.debug.template.json from 5678 to 5681 for the 4th module. The other modules were numbered 5678, 5679 and 5680; for some reason the port number for the 4th module was also 5678, which conflicted with the first module.
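
In other words, the fix was to give each module a unique HostPort. A minimal sketch of the corrected binding for the 4th module, per the port numbers described above:

"PortBindings": {
    "5678/tcp": [
        {
            "HostPort": "5681"
        }
    ]
}

The container-side port ("5678/tcp") can stay the same for every module; only the host side must be unique.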

1 reaction
PCSmithy commented, Apr 7, 2020

I am experiencing the same issue: Docker cannot start a container in my iotedge solution. Here is the full error as listed in the IoT Edge Security Manager logs:

Apr 07 09:51:36 tx2-quasar-1 iotedged[8257]: 2020-04-07T14:51:36Z [WARN] - Could not start module cps
Apr 07 09:51:36 tx2-quasar-1 iotedged[8257]: 2020-04-07T14:51:36Z [WARN] -         caused by: driver failed programming external connectivity on endpoint cps (d9ff2d4996f6e9076a527fc61f488bbbb68cebda8438eadc4cba86d23a54c5bb): Bind for 0.0.0.0:5678 failed: port is already allocated
Apr 07 09:51:36 tx2-quasar-1 iotedged[8257]: 2020-04-07T14:51:36Z [INFO] - [mgmt] - - - [2020-04-07 14:51:36.555011358 UTC] "POST /modules/cps/start?api-version=2019-01-30 HTTP/1.1" 500 Internal Server Error 283 "-" "-" auth_id(-)
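
When the error is "port is already allocated", a useful first step is to find out which container currently holds the host port. Standard triage commands (not from the thread, just ordinary Docker/Linux tooling):

sudo netstat -tlnp | grep 5678
docker ps --format "table {{.Names}}\t{{.Ports}}"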

The edge runtime is running on a Linux 18.04 machine (a Jetson TX2).

My iotedge solution was built on a Windows 10 PC with Docker version 2.2.0.5.

This error only occurs when I build and deploy the deployment.debug.template.json. If I use the regular (non debug) deployment template, everything is fine. This is obviously because the non-debug deployment doesn’t open a bunch of ports for debugging.

I don’t open any ports in my custom modules; the only ports specified in deployment.debug.template.json are the default ones to support debugging (which I’m currently not using, so I may just delete them to get past this… I’m using the debug template to support additional debugging features in the modules without maintaining multiple, similar iotedge solutions).

My solution currently has 3 custom iotedge modules, and it was functioning perfectly fine with just those 3, deploying off the debug deployment template, with all 3 custom modules binding to the same port 5678 per the default debug ports. Interestingly, this problem only started after I created a 4th custom module. And I’m fairly certain there isn’t an error in the contents of the module, because after several reboots and redeployments while troubleshooting, I’ve seen that when there are 4 custom modules, the one that fails to start is not always predictable; it seems to be whichever of the 4 boots last.

Does anyone have any idea why adding a 4th custom module causes Docker to fail to start a container on the grounds that the port it wants to bind to is already allocated, when running the first 3 modules all bound to the same port was never an issue?

Here is a snippet of my deployment.debug.template.json file showing the two modules that are failing to start:

          "cps": {
            "version": "1.0",
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "${MODULES.cps.debug}",
              "createOptions": {
                "ExposedPorts": {
                  "5678/tcp": {}
                },
                "HostConfig": {
                  "PortBindings": {
                    "5678/tcp": [
                      {
                        "HostPort": "5678"
                      }
                    ]
                  }
                }
              }
            }
          },
          "telemetry": {
            "version": "1.0",
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "${MODULES.telemetry.debug}",
              "createOptions": {
                "ExposedPorts": {
                  "5678/tcp": {}
                },
                "HostConfig": {
                  "PortBindings": {
                    "5678/tcp": [
                      {
                        "HostPort": "5678"
                      }
                    ]
                  }
                }
              }
            }
          }

Any insight is appreciated, let me know if I need to provide additional logs / configs / etc.
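
skvark's comment above describes the resolution for exactly this situation: assign each module a distinct HostPort while leaving the container port at 5678. For the telemetry module in the snippet, the binding would change to something like this (host port 5679 is illustrative):

"5678/tcp": [
    {
        "HostPort": "5679"
    }
]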

Read more comments on GitHub >

Top Results From Across the Web

Docker port binding not working as expected
This solved my issue! I was trying to create and run my container from Docker Desktop and couldn't figure out the settings. CLI...

Solve the problem of docker on Windows 10 failed to port ...
Completely solves the problem of Docker containers on Windows 10 failing due to port binding.

How to EXPOSE Port on running container
Hello, Port expose and publish has to happen when a container is created. Just stop the existing container and create a new one...

Assigning a Port Mapping to a Running Docker Container
While running a new Docker container, we can assign the port mapping in the docker run command using the -p option.

Troubleshoot common issues - Azure Container Instances
This article shows how to troubleshoot common issues for managing or deploying containers to Azure Container Instances.
