Some experiments in multi-experiment MPI SAMS runs hang
I'm still seeing an issue where some experiments in multi-experiment MPI SAMS runs will hang and not make progress, or restart and not make progress. I've attached an example logfile where you can see both behaviors. No CRITICAL logs appear in this experiment.
Issue Analytics
- State: closed
- Created 5 years ago
- Comments: 9 (9 by maintainers)
Top Results From Across the Web
CRITICAL logs show unusual exception · Issue #1108 - GitHub
@andrrizzi mentioned this issue on Oct 13, 2018: Some experiments in multi-experiment MPI SAMS runs hang #1109.
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
That set of experiments was generated with
using the YAML file
These simulations were run in 6-hour blocks chained together by LSF.
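For context, chaining fixed-length blocks under LSF is usually done with job dependencies. Here is a minimal sketch of what such a chain can look like with plain bsub submission; the job names, wall-clock flags, and log paths are illustrative assumptions, not taken from the actual submission setup:

    # Hypothetical LSF chain: each 6-hour block starts only after the
    # previous block finishes. Job names and flags are illustrative.
    bsub -J sams_block1 -W 6:00 -o block1.%J.log ./run-lsf-neutral-rmsd-safe.sh
    bsub -J sams_block2 -W 6:00 -o block2.%J.log \
        -w "done(sams_block1)" ./run-lsf-neutral-rmsd-safe.sh

Note that a "done(...)" dependency only fires on a clean exit, so a hang in block 1 leaves block 2 pending, while an "ended(...)" dependency would start block 2 even after a crash; that distinction matters for the restart behavior described above.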
Separately, I tested whether chained 1-hour simulations were likely to show the same behavior more quickly, suspecting that perhaps the second job would start before the first job's NetCDF file was fully synced by the filesystem. If I add
to the beginning of the bash script (run-lsf-neutral-rmsd-safe.sh and neutral-sams-rmsd-safe.yaml), I seem to get the same sort of behavior where replicas stall out, though I get a bit more information because verbose is on. See
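Whatever the exact snippet added was, the idea described is a filesystem-sync guard before the next block starts. A minimal sketch of one way to express it, where the file path, polling interval, and use of GNU stat are all assumptions rather than the author's actual script:

    # Hypothetical guard: poll the NetCDF store until its size is stable
    # across one polling interval, i.e. the previous job's writes appear
    # to have been flushed. Path and interval are illustrative.
    NC_FILE="experiments/experiment.nc"   # assumed location of the store
    prev_size=-1
    size=$(stat -c %s "$NC_FILE" 2>/dev/null || echo 0)   # GNU stat assumed
    while [ "$size" != "$prev_size" ]; do
        prev_size=$size
        sleep 30
        size=$(stat -c %s "$NC_FILE" 2>/dev/null || echo 0)
    done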
Closing this for now. Let me know if more NetCDF corruptions crop up.
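For anyone checking whether a NetCDF store has actually been corrupted, dumping its header is a quick first test. A minimal sketch, assuming the standard netCDF command-line tools are installed; the filename is an illustrative assumption:

    # If ncdump cannot read the header, the file is likely truncated
    # or corrupted; the filename is an illustrative assumption.
    if ncdump -h experiment.nc > /dev/null 2>&1; then
        echo "NetCDF header readable"
    else
        echo "NetCDF header unreadable -- possible corruption"
    fi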