
Build Not Finishing After Docker Containers Removed

See original GitHub issue

I initially got everything running, but it didn’t seem to pick up the environment variables, as I kept seeing the repeated error below.

ERROR - UpdateSwitchState:228 - Could not establish connection to https://CONTROLLER_IP:8443/api/v1/ because 'NoneType' object has no attribute 'get_endpoints'.

Notice that https://CONTROLLER_IP:8443/api/v1/ looks like it’s coming from .plugin_config.yml instead of from the controller_uri parameter I exported…

export controller_uri=143.117.69.165
export controller_type=faucet
export controller_log_file=/var/log/faucet/faucet.log
export controller_config_file=/etc/faucet/faucet.yaml
export controller_mirror_ports='{"sw1":3}'
export collector_nic=enp0s25 
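
As a sanity check, it can help to confirm what .plugin_config.yml actually contains and whether the exported variables are visible inside the running containers. A minimal sketch (the grep patterns and CONTAINER_NAME are illustrative; the config path is wherever the Poseidon checkout keeps it):

# Show which controller value the plugin config actually contains
grep -i controller .plugin_config.yml

# Confirm the variables are set in the shell that launches ./helpers/run
env | grep -E 'controller_|collector_nic'

# Check whether a running container actually received them
# (replace CONTAINER_NAME with a name from the listing below)
docker ps --format '{{.Names}}'
docker exec CONTAINER_NAME env | grep -i controller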

I decided to stop and remove all of the containers (with the following script), double-check that I had definitely exported the correct parameters, and then re-run ./helpers/run.

echo "Stopping and removing all docker instances...."
for LOOP in `docker ps | awk '{print $1}' | sed '/CONTAINER/d'`
do
docker stop $LOOP
docker rm $LOOP
done
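
For what it’s worth, Docker can do the same cleanup without the awk/sed pipeline; a rough equivalent (note that -a also catches containers that have already exited):

echo "Stopping and removing all docker instances..."
# -q prints only container IDs, so no header line needs to be filtered out
docker stop $(docker ps -q)
docker rm $(docker ps -aq)
# or, on newer Docker releases: docker container prune -f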

Ever since I did this, ./helpers/run spins up the containers but seems to get stuck, outputting only a fraction of what it did before, with very little error output to go on.

./helpers/run 
ea8b574ae2a68de19ed5b6fd92796d444c0fe89138507b62b5cbc5c802b3ab4d
waiting for required containers to build and start (this might take a little while)...done.
2018-06-18T14:23:34+00:00 172.17.0.1 core[1371]: 1:C 18 Jun 13:23:34.693 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2018-06-18T14:23:34+00:00 172.17.0.1 core[1371]: 1:C 18 Jun 13:23:34.693 # Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
2018-06-18T14:23:34+00:00 172.17.0.1 core[1371]: 1:C 18 Jun 13:23:34.693 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
2018-06-18T14:23:34+00:00 172.17.0.1 core[1371]: 1:M 18 Jun 13:23:34.694 * Running mode=standalone, port=6379.
2018-06-18T14:23:34+00:00 172.17.0.1 core[1371]: 1:M 18 Jun 13:23:34.694 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
2018-06-18T14:23:34+00:00 172.17.0.1 core[1371]: 1:M 18 Jun 13:23:34.694 # Server initialized
2018-06-18T14:23:34+00:00 172.17.0.1 core[1371]: 1:M 18 Jun 13:23:34.694 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
2018-06-18T14:23:34+00:00 172.17.0.1 core[1371]: 1:M 18 Jun 13:23:34.694 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
2018-06-18T14:23:34+00:00 172.17.0.1 core[1371]: 1:M 18 Jun 13:23:34.694 * Ready to accept connections
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]: + [[ -z redis ]]
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]: + [[ -z  ]]
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]: REMOTE_REDIS_PSWD not set. Please SET
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]: REMOTE_REDIS_HOST=redis REMOTE_REDIS_PORT=6379
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]: + export REMOTE_REDIS_PORT=6379
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]: + [[ -z  ]]
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]: + echo REMOTE_REDIS_PSWD not set. Please SET
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]: + export REMOTE_REDIS_PSWD=
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]: + [[ -z  ]]
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]: + export DASH_PREFIX=/rq
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]: + export RQ_DASHBOARD_SETTINGS=/rq_dash_settings.py
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]: + echo REMOTE_REDIS_HOST=redis REMOTE_REDIS_PORT=6379
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]: + rq-dashboard
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]: RQ Dashboard, version 0.3.3
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]:  * Serving Flask app "rq_dashboard.app" (lazy loading)
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]:  * Environment: production
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]:    WARNING: Do not use the development server in a production environment.
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]:    Use a production WSGI server instead.
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]:  * Debug mode: off
2018-06-18T14:23:38+00:00 172.17.0.1 core[1371]:  * Running on http://0.0.0.0:9181/ (Press CTRL+C to quit)
2018-06-18T14:23:42+00:00 172.17.0.1 core[1371]: 13:23:42 RQ worker 'rq:worker:6f783c34f86e.1' started, version 0.11.0
2018-06-18T14:23:42+00:00 172.17.0.1 core[1371]: 13:23:42 *** Listening on default...
2018-06-18T14:23:42+00:00 172.17.0.1 core[1371]: 13:23:42 Cleaning registries for queue: default
2018-06-18T14:23:46+00:00 172.17.0.1 core[1371]: 13:23:46 RQ worker 'rq:worker:09f35e4b6da6.1' started, version 0.11.0
2018-06-18T14:23:46+00:00 172.17.0.1 core[1371]: 13:23:46 *** Listening on default...
2018-06-18T14:23:46+00:00 172.17.0.1 core[1371]: 13:23:46 Cleaning registries for queue: default
2018-06-18T14:23:50+00:00 172.17.0.1 core[1371]: 13:23:50 RQ worker 'rq:worker:bc4fbfa61eff.1' started, version 0.11.0
2018-06-18T14:23:50+00:00 172.17.0.1 core[1371]: 13:23:50 *** Listening on default...
2018-06-18T14:23:50+00:00 172.17.0.1 core[1371]: 13:23:50 Cleaning registries for queue: default
2018-06-18T14:23:54+00:00 172.17.0.1 core[1371]: 13:23:54 RQ worker 'rq:worker:dc702fc3f5c5.1' started, version 0.11.0
2018-06-18T14:23:54+00:00 172.17.0.1 core[1371]: 13:23:54 *** Listening on default...
2018-06-18T14:23:54+00:00 172.17.0.1 core[1371]: 13:23:54 Cleaning registries for queue: default
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]: [2018-06-18 13:24:02 +0000] [8] [INFO] Starting gunicorn 19.8.1
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]: [2018-06-18 13:24:02 +0000] [8] [INFO] Listening at: http://0.0.0.0:8080 (8)
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]: [2018-06-18 13:24:02 +0000] [8] [INFO] Using worker: gevent
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]: [2018-06-18 13:24:02 +0000] [11] [INFO] Booting worker with pid: 11
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]: [2018-06-18 13:24:02 +0000] [12] [INFO] Booting worker with pid: 12
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]: [2018-06-18 13:24:02 +0000] [13] [INFO] Booting worker with pid: 13
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]: /usr/local/lib/python3.6/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]:   monkey.patch_all(subprocess=True)
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]: /usr/local/lib/python3.6/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]:   monkey.patch_all(subprocess=True)
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]: /usr/local/lib/python3.6/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]:   monkey.patch_all(subprocess=True)
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]: [2018-06-18 13:24:02 +0000] [14] [INFO] Booting worker with pid: 14
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]: /usr/local/lib/python3.6/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016
2018-06-18T14:24:02+00:00 172.17.0.1 core[1371]:   monkey.patch_all(subprocess=True)
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]: [2018-06-18 13:24:02 +0000] [1] [INFO] Starting gunicorn 19.8.1
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]: [2018-06-18 13:24:02 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]: [2018-06-18 13:24:02 +0000] [1] [INFO] Using worker: gevent
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]: [2018-06-18 13:24:02 +0000] [9] [INFO] Booting worker with pid: 9
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]: [2018-06-18 13:24:02 +0000] [10] [INFO] Booting worker with pid: 10
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]: [2018-06-18 13:24:02 +0000] [11] [INFO] Booting worker with pid: 11
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]: /usr/local/lib/python3.6/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]:   monkey.patch_all(subprocess=True)
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]: /usr/local/lib/python3.6/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]:   monkey.patch_all(subprocess=True)
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]: /usr/local/lib/python3.6/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]:   monkey.patch_all(subprocess=True)
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]: [2018-06-18 13:24:02 +0000] [23] [INFO] Booting worker with pid: 23
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]: /usr/local/lib/python3.6/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016
2018-06-18T14:24:02+00:00 172.17.0.1 plugin[1371]:   monkey.patch_all(subprocess=True)
2018-06-18T14:24:06+00:00 172.17.0.1 plugin[1371]: INFO: Accepting connections at http://localhost:5000
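
For anyone debugging a similar hang, it may be quicker to look at each container directly than to rely on the aggregated output above; a minimal sketch (CONTAINER_NAME stands in for whichever names ./helpers/run created):

# List every container, including ones that exited during startup
docker ps -a --format 'table {{.Names}}\t{{.Status}}'

# Tail the logs of anything that looks stuck or has exited
docker logs --tail 100 CONTAINER_NAME

# Inspect the environment a container was actually started with
docker inspect --format '{{json .Config.Env}}' CONTAINER_NAME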

I’m currently running Ubuntu 16.04 to host Faucet locally, plus a Mininet Docker container, all of which I am trying to connect Poseidon to. I’m also seeing the above error on Ubuntu 18.04. Any advice would be much appreciated. Thanks.

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Comments: 34 (23 by maintainers)

Top GitHub Comments

1 reaction
scottkelso commented, Jun 22, 2018

Ah yes, my collector_nic would have been wrong. I was initially seeing the same repeated endpoint output as @toiletduck123, but I have managed to get it working. See the steps below.

2018-06-19T09:36:15+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:15,063 - DEBUG - faucet:92  - get_endpoints found:
2018-06-19T09:36:15+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:15,063 - DEBUG - UpdateSwitchState:235 - MACHINES:[]
2018-06-19T09:36:16+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:16,034 - DEBUG - poseidonMonitor:586 - woke from sleeping
2018-06-19T09:36:16+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:16,034 - DEBUG - poseidonMonitor:584 - ***************CTRL_C:{'STOP': False}
2018-06-19T09:36:16+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:16,070 - DEBUG - poseidonMonitor:233 - scheduler woke st_worker
2018-06-19T09:36:17+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:17,035 - DEBUG - poseidonMonitor:586 - woke from sleeping
2018-06-19T09:36:17+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:17,036 - DEBUG - poseidonMonitor:584 - ***************CTRL_C:{'STOP': False}
2018-06-19T09:36:17+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:17,072 - DEBUG - poseidonMonitor:233 - scheduler woke st_worker
2018-06-19T09:36:18+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:18,037 - DEBUG - poseidonMonitor:586 - woke from sleeping
2018-06-19T09:36:18+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:18,037 - DEBUG - poseidonMonitor:584 - ***************CTRL_C:{'STOP': False}
2018-06-19T09:36:18+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:18,073 - DEBUG - poseidonMonitor:233 - scheduler woke st_worker
2018-06-19T09:36:19+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:19,038 - DEBUG - poseidonMonitor:586 - woke from sleeping
2018-06-19T09:36:19+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:19,038 - DEBUG - poseidonMonitor:584 - ***************CTRL_C:{'STOP': False}
2018-06-19T09:36:19+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:19,074 - DEBUG - poseidonMonitor:233 - scheduler woke st_worker
2018-06-19T09:36:20+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:20,039 - DEBUG - poseidonMonitor:586 - woke from sleeping
2018-06-19T09:36:20+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:20,039 - DEBUG - poseidonMonitor:584 - ***************CTRL_C:{'STOP': False}
2018-06-19T09:36:20+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:20,075 - DEBUG - poseidonMonitor:233 - scheduler woke st_worker
2018-06-19T09:36:20+00:00 172.17.0.1 plugin[1473]: 2018-06-19 08:36:20,076 - DEBUG - poseidonMonitor:68  - kick

I was still getting this output even after doing the steps in the following order (consolidated into a single sketch after the list)…

  • Have Poseidon running (./helpers/run)
  • Restart Faucet (systemctl restart faucet.service)
  • Start Mininet (mn --topo single,3 --mac --controller=remote,ip=143.117.69.165,port=6653 --controller=remote,ip=143.117.69.165,port=6654 --switch ovsk)
  • Wait for L2 to expire in logs
  • Mininet hosts ping (h1 ping h2)
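
For reference, here is the same sequence as a single sketch (the controller IP and ports are the ones from my exports above, with the second controller presumably being Gauge on 6654; the final ping is issued from the Mininet CLI prompt, not the shell):

# 1. Bring up Poseidon
./helpers/run

# 2. Restart the natively installed Faucet controller
sudo systemctl restart faucet.service

# 3. Start Mininet pointing at both controllers (Faucet on 6653, Gauge on 6654)
sudo mn --topo single,3 --mac \
    --controller=remote,ip=143.117.69.165,port=6653 \
    --controller=remote,ip=143.117.69.165,port=6654 \
    --switch ovsk

# 4. Wait for the L2 timeout to show up in the Faucet logs, then from the
#    Mininet prompt: h1 ping h2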

The only difference here is that I haven’t completed all of the RabbitMQ steps, because I already have a natively installed and running instance of Faucet, Gauge, Prometheus and Grafana-server (hence the Faucet restart command systemctl restart faucet.service). I wasn’t sure how to export these Faucet configurations to this native Faucet…

$ export FAUCET_EVENT_SOCK=1
$ export FAUCET_CONFIG_STAT_RELOAD=1
$ export FA_RABBIT_HOST=192.168.0.7 #unnecessary because not running rabbitmq 

so I stopped them and ran the docker-compose command you suggested above instead…

$ systemctl stop faucet.service gauge.service prometheus.service grafana-server.service
$ cd faucet/
$ docker-compose -f docker-compose.yaml -f adapters/vendors/rabbitmq/docker-compose.yaml up --build -d
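
To confirm the stack actually came up before retrying the steps above, something like the following can be used (the faucet service name is an assumption based on the compose files in the faucet repo):

# Check that the faucet, gauge and rabbitmq adapter services are all running
docker-compose -f docker-compose.yaml \
    -f adapters/vendors/rabbitmq/docker-compose.yaml ps

# Follow the controller logs to make sure it picked up the config
docker-compose -f docker-compose.yaml \
    -f adapters/vendors/rabbitmq/docker-compose.yaml logs -f faucet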

This eventually worked, as long as I still did everything in the particular order defined above.

Again thank you for your help and patience @cglewis!

0 reactions
cglewis commented, Jun 22, 2018

Yeah, that’s a good question about putting environment variables into Faucet when it’s running as a service; I suspect you’d have to edit the systemd unit for faucet.service to include the environment variables on service start. Going to go ahead and close this issue - thanks for sticking with it @scottkelso @jaiken06, and feel free to reach out if you run into other issues.
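
For reference, one way to do that (a sketch only, not tested against this setup; the drop-in file name is arbitrary) is a systemd drop-in, so the variables are present whenever faucet.service starts:

# Create a drop-in that adds the variables to faucet.service
sudo mkdir -p /etc/systemd/system/faucet.service.d
sudo tee /etc/systemd/system/faucet.service.d/poseidon.conf > /dev/null <<'EOF'
[Service]
Environment=FAUCET_EVENT_SOCK=1
Environment=FAUCET_CONFIG_STAT_RELOAD=1
EOF

# Reload systemd and restart Faucet so it picks up the new environment
sudo systemctl daemon-reload
sudo systemctl restart faucet.service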

