Scrapli fails with a timeout error even though timeouts are increased.
Describe the bug
Hello. When I perform a long-running operation, scrapli fails with a timeout exception even though the timeouts are increased.
To Reproduce
```python
from scrapli.driver.core import IOSXEDriver


def test_scrapli():
    my_device = {
        "host": "X.X.X.X",
        "auth_username": "username",
        "auth_password": "password",
        "auth_strict_key": False,
        "port": 22,
        "timeout_ops": 120,
        "timeout_socket": 120,
        "timeout_transport": 120,
        "timeout_exit": False,
        "ssh_config_file": False,
    }
    conn = IOSXEDriver(**my_device)
    conn.open()
    # long-running command that triggers the timeout
    response = conn.send_command("cellular 0 lte sim activate slot 0")
    print(response.result)
```
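For context: if I recall the scrapli docs of this era correctly, the ssh2 and paramiko transports use `timeout_transport` as the session read timeout, so a command running longer than 120 seconds can still be cut off even when every timeout looks generous. Whether that is the culprit here is an assumption on my part; a minimal sketch of that workaround, with illustrative values, would be:

```python
from scrapli.driver.core import IOSXEDriver

# Sketch (assumption): with the ssh2 transport, timeout_transport acts as the
# session read timeout; setting it to 0 disables it, and timeout_ops is then
# sized to the expected duration of the slow command.
my_device = {
    "host": "X.X.X.X",
    "auth_username": "username",
    "auth_password": "password",
    "auth_strict_key": False,
    "port": 22,
    "timeout_ops": 300,      # must exceed the command's runtime
    "timeout_socket": 120,
    "timeout_transport": 0,  # assumption: 0 disables the ssh2/paramiko session timeout
    "ssh_config_file": False,
    "transport": "ssh2",
}

with IOSXEDriver(**my_device) as conn:
    response = conn.send_command("cellular 0 lte sim activate slot 0")
    print(response.result)
```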
OS (please complete the following information):
- OS: Linux Ubuntu 18
- scrapli version: scrapli==2020.7.4
- version of any optional extras (paramiko|ssh2-python|textfsm, etc.): scrapli-ssh2==2020.6.6
Additional context
logs.txt (attached)
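For anyone wanting to reproduce a trace like the attached logs.txt: scrapli logs through the standard library `logging` module, so a minimal sketch to capture a debug log (the file name is illustrative) is:

```python
import logging

# Capture scrapli's debug output to a file; scrapli logs under the stdlib
# "scrapli" logger hierarchy, so enabling DEBUG on the root logger is enough.
logging.basicConfig(filename="scrapli_debug.log", level=logging.DEBUG)
```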
Top GitHub Comments
@carlmontanari Checked it. Works perfectly. The script waits for the prompt as expected. Thank you very much for your library and the fast response :)
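The thread does not record the exact change that fixed this. Purely as an illustration, one pattern that avoids raising every timeout globally is to bump `timeout_ops` only around the slow command; whether reassigning it on an open connection propagates to the channel in this scrapli release is an assumption:

```python
# Illustrative only: temporarily raise the per-operation timeout for the
# slow command, then restore it (assumes timeout_ops is settable at runtime).
saved_timeout = conn.timeout_ops
conn.timeout_ops = 600  # generous ceiling for the SIM activation
try:
    response = conn.send_command("cellular 0 lte sim activate slot 0")
finally:
    conn.timeout_ops = saved_timeout
```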
Hi @carlmontanari. Thanks for the instant response. I'll try to cover all your questions:
I hope this helps with the debugging. If you have any questions, please feel free to ask.