'Docker compose up' locks up during sharding
Issue Description
Docker Compose was working fine before commit da600ded45cb24ae0c9733b0ae4ca8f379dd2b19. Since that commit, running docker-compose up --build locks up during the sharding phase of DB startup, after the API container exits with: 5e-srd-api_api_1 exited with code 127.
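For context, exit status 127 is the POSIX shell's "command not found" code, so the api container is most likely failing to locate the command its entrypoint tries to exec (npm start, per the Dockerfile's CMD). A minimal sketch demonstrating the status code, plus a hypothetical diagnostic assuming the Compose service is named api as in the log:

```shell
# Exit status 127 is what a POSIX shell returns when the command to run
# cannot be found -- the same code 5e-srd-api_api_1 exited with.
status=0
sh -c 'some-nonexistent-command' 2>/dev/null || status=$?
echo "exit status: $status"

# Hypothetical follow-up: open a shell in the built image and check that
# the command the entrypoint execs actually resolves:
#   docker-compose run --rm api sh -c 'command -v npm'
```

If the second command fails inside the container, the image's PATH or base layer changed between the two commits.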
Log:
Step 1/6 : FROM node:12.4.0-alpine
---> d4edda39fb81
Step 2/6 : WORKDIR /app
---> Using cache
---> b7b3b122f618
Step 3/6 : COPY package.json package-lock.json /app/
---> Using cache
---> a60478496159
Step 4/6 : RUN npm install
---> Using cache
---> 2ef20360480e
Step 5/6 : COPY . /app
---> e19b20dd0f4e
Step 6/6 : CMD ["npm", "start"]
---> Running in ac712dda1de1
Removing intermediate container ac712dda1de1
---> c73fed19d7f2
Successfully built c73fed19d7f2
Successfully tagged 5e-srd-api_api:latest
Attaching to 5e-srd-api_db_1, 5e-srd-api_api_1
db_1  | jq: error (at /tmp/docker-entrypoint-config.json:1): Cannot index string with string "systemLog"
db_1  | jq: error (at /tmp/docker-entrypoint-config.json:1): Cannot index string with string "net"
db_1  | 2020-01-25T18:27:52.797+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
api_1 | /usr/local/bin/docker-entrypoint.sh: exec: line 8: exec: not found
db_1  | 2020-01-25T18:27:52.799+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db2 64-bit host=d52bcf9c1819
db_1  | 2020-01-25T18:27:52.799+0000 I CONTROL [initandlisten] db version v4.2.2
db_1  | 2020-01-25T18:27:52.799+0000 I CONTROL [initandlisten] git version: a0bbbff6ada159e19298d37946ac8dc4b497eadf
db_1  | 2020-01-25T18:27:52.799+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
db_1  | 2020-01-25T18:27:52.799+0000 I CONTROL [initandlisten] allocator: tcmalloc
db_1  | 2020-01-25T18:27:52.799+0000 I CONTROL [initandlisten] modules: none
db_1  | 2020-01-25T18:27:52.799+0000 I CONTROL [initandlisten] build environment:
db_1  | 2020-01-25T18:27:52.799+0000 I CONTROL [initandlisten] distmod: ubuntu1804
db_1  | 2020-01-25T18:27:52.799+0000 I CONTROL [initandlisten] distarch: x86_64
db_1  | 2020-01-25T18:27:52.799+0000 I CONTROL [initandlisten] target_arch: x86_64
db_1  | 2020-01-25T18:27:52.799+0000 I CONTROL [initandlisten] options: { config: "/etc/mongodb.conf", net: { bindIp: "*" }, storage: { dbPath: "/data/db2" } }
db_1  | 2020-01-25T18:27:52.799+0000 I STORAGE [initandlisten] Detected data files in /data/db2 created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
db_1  | 2020-01-25T18:27:52.799+0000 I STORAGE [initandlisten]
db_1  | 2020-01-25T18:27:52.799+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
db_1  | 2020-01-25T18:27:52.799+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
db_1  | 2020-01-25T18:27:52.799+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3344M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
db_1  | 2020-01-25T18:27:53.702+0000 I STORAGE [initandlisten] WiredTiger message [1579976873:702934][1:0x7fd1d74adb00], txn-recover: Recovering log 3 through 4
db_1  | 2020-01-25T18:27:53.761+0000 I STORAGE [initandlisten] WiredTiger message [1579976873:761715][1:0x7fd1d74adb00], txn-recover: Recovering log 4 through 4
db_1  | 2020-01-25T18:27:53.832+0000 I STORAGE [initandlisten] WiredTiger message [1579976873:832741][1:0x7fd1d74adb00], txn-recover: Main recovery loop: starting at 3/4736 to 4/256
db_1  | 2020-01-25T18:27:53.922+0000 I STORAGE [initandlisten] WiredTiger message [1579976873:922488][1:0x7fd1d74adb00], txn-recover: Recovering log 3 through 4
db_1  | 2020-01-25T18:27:54.107+0000 I STORAGE [initandlisten] WiredTiger message [1579976874:107594][1:0x7fd1d74adb00], txn-recover: Recovering log 4 through 4
db_1  | 2020-01-25T18:27:54.164+0000 I STORAGE [initandlisten] WiredTiger message [1579976874:164317][1:0x7fd1d74adb00], txn-recover: Set global recovery timestamp: (0,0)
db_1  | 2020-01-25T18:27:54.714+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
db_1  | 2020-01-25T18:27:54.722+0000 I STORAGE [initandlisten] Timestamp monitor starting
db_1  | 2020-01-25T18:27:54.907+0000 I CONTROL [initandlisten]
db_1  | 2020-01-25T18:27:54.907+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
db_1  | 2020-01-25T18:27:54.907+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
db_1  | 2020-01-25T18:27:54.907+0000 I CONTROL [initandlisten]
db_1  | 2020-01-25T18:27:54.926+0000 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
db_1  | 2020-01-25T18:27:54.928+0000 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
db_1  | 2020-01-25T18:27:54.928+0000 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
db_1  | 2020-01-25T18:27:54.928+0000 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
db_1  | 2020-01-25T18:27:54.929+0000 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
db_1  | 2020-01-25T18:27:54.930+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db2/diagnostic.data'
db_1  | 2020-01-25T18:27:54.931+0000 I SHARDING [LogicalSessionCacheRefresh] Marking collection config.system.sessions as collection version: <unsharded>
db_1  | 2020-01-25T18:27:54.931+0000 I NETWORK [initandlisten] Listening on /tmp/mongodb-27017.sock
db_1  | 2020-01-25T18:27:54.931+0000 I NETWORK [initandlisten] Listening on 0.0.0.0
db_1  | 2020-01-25T18:27:54.931+0000 I SHARDING [LogicalSessionCacheReap] Marking collection config.transactions as collection version: <unsharded>
db_1  | 2020-01-25T18:27:54.931+0000 I NETWORK [initandlisten] waiting for connections on port 27017
db_1  | 2020-01-25T18:27:55.000+0000 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
5e-srd-api_api_1 exited with code 127
Gracefully stopping... (press Ctrl+C again to force)
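The jq errors from the db container are a separate symptom worth noting: the mongo image's entrypoint runs the container's config through jq, and "Cannot index string with string" is what jq prints when the document it is handed is a bare JSON string rather than an object. A sketch reproducing the message locally, assuming jq is installed:

```shell
# jq cannot index into a JSON string with an object key; feeding it a
# quoted string instead of an object reproduces the error from the log
# (jq exits non-zero here, hence the "|| true").
echo '"this is a string, not an object"' | jq '.systemLog' || true

# By contrast, an actual config object indexes fine:
echo '{"systemLog": {"verbosity": 1}}' | jq '.systemLog'
```

So whatever is being written to /tmp/docker-entrypoint-config.json is likely arriving as a quoted string, which would point at how the config is generated or mounted rather than at mongod itself.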
Issue Analytics
- State:
- Created: 3 years ago
- Comments: 10
@bagelbits Thanks for looking into that. Just pulled the latest changes and it builds up fine now for me.
That should be fixed! Feel free to reopen if you’re still having issues!