napalm grains not available during template rendering
Describe the bug
Grains collected by the napalm grains module (napalm.py) are not available during the rendering of pillar SLS files.
Steps To Reproduce
I have a minion with the following grains:
# salt-sproxy cr-testing* grains.get id
cr-testing01.lab1:
    cr-testing01.lab1
# salt-sproxy cr-testing* grains.get model
cr-testing01.lab1:
    VMX
Adding a test pillar:
# cat /srv/pillar/shared/test.sls
foo: {{ grains.get('id') }}
bar: {{ grains.get('model') }}
# cat /srv/pillar/top.sls
base:
  '*':
    - 'shared.general'
  'cr-testing01.lab1':
    - shared.test
Result:
# salt-sproxy cr-testing* pillar.get foo
cr-testing01.lab1:
    cr-testing01.lab1
# salt-sproxy cr-testing* pillar.get bar
cr-testing01.lab1:
    None
Expected behavior
Expect the "bar" pillar to contain the value of grains['model'].
Versions Report
Salt Version:
Salt: 3001.1
Salt SProxy: 2020.7.0
Dependency Versions:
Ansible: Not Installed
cffi: 1.14.2
dateutil: 2.7.3
docker-py: Not Installed
gitdb: 2.0.5
gitpython: 2.1.11
Jinja2: 2.10
junos-eznc: 2.5.3
jxmlease: Not Installed
libgit2: 0.27.7
M2Crypto: Not Installed
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.5.6
NAPALM: 3.1.0
ncclient: 0.6.9
Netmiko: 3.2.0
paramiko: 2.7.2
pycparser: 2.19
pycrypto: 2.6.1
pycryptodome: 3.6.1
pyeapi: 0.8.3
pygit2: 0.27.4
PyNetBox: Not Installed
PyNSO: Not Installed
Python: 3.7.3 (default, Jul 25 2020, 13:03:44)
python-gnupg: Not Installed
PyYAML: 3.13
PyZMQ: 17.1.2
scp: 0.13.2
smmap: 2.0.5
textfsm: 1.1.0
timelib: Not Installed
Tornado: 4.5.3
ZMQ: 4.3.1
System Versions:
dist: debian 10 buster
locale: utf-8
machine: x86_64
release: 4.19.0-10-amd64
system: Linux
version: Debian GNU/Linux 10 buster
Additional context
It seems that the napalm grains (salt/grains/napalm.py) are only merged into the grains dict after the pillars have been rendered:
2020-10-13 12:34:05,142 [salt.template :127 ][DEBUG ][14313] Rendered data from file: /srv/pillar/shared/test.sls:
[...]
2020-10-13 12:34:08,780 [salt.loaded.ext.runners.proxy:665 ][DEBUG ][14313] Caching Grains for cr-testing01.lab1
2020-10-13 12:34:08,780 [salt.loaded.ext.runners.proxy:666 ][DEBUG ][14313] OrderedDict([('foo', 'bar'), ('cwd', '/root'), ('ip_gw', True), ('ip4_gw', '62.138.167.49'), ('ip6_gw', False), ('dns', {'nameservers': ['80.237.128.144', '80.237.128.145', '8.8.8.8'], 'ip4_nameservers': ['80.237.128.144', '80.237.128.145', '8.8.8.8'], 'ip6_nameservers': [], 'sortlist': [], 'domain': '', 'search': ['bb.gdinf.net', 'bb.godaddy.com', 'lab.mass.systems', 'cse.mass.systems', 'mass.systems', 'intern.hosteurope.de', 'hosteurope.de'], 'options': []}), ('fqdns', []), ('machine_id', 'f6183af91209426f812aca156ae54f5a'), ('master', 'salt'), ('hwaddr_interfaces', {'lo': '00:00:00:00:00:00', 'eth0': '02:ce:0a:5d:c0:49'}), ('id', 'cr-testing01.lab1'), ('kernelparams', [('BOOT_IMAGE', '/boot/vmlinuz-4.19.0-10-amd64'), ('root', None), ('ro', None), ('quiet', None)]), ('locale_info', {}), ('num_gpus', 0), ('gpus', []), ('kernel', 'proxy'), ('nodename', 'salt-gbone.lab.mass.systems'), ('kernelrelease', 'proxy'), ('kernelversion', 'proxy'), ('cpuarch', 'x86_64'), ('osrelease', 'proxy'), ('os', 'junos'), ('os_family', 'proxy'), ('osfullname', 'proxy'), ('osarch', 'x86_64'), ('mem_total', 0), ('virtual', 'LXC'), ('ps', 'ps -efHww'), ('osrelease_info', ('proxy',)), ('osfinger', 'proxy-proxy'), ('path', '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'), ('systempath', ['/usr/local/sbin', '/usr/local/bin', '/usr/sbin', '/usr/bin', '/sbin', '/bin']), ('pythonexecutable', '/usr/bin/python3'), ('pythonpath', ['/usr/lib/python3/dist-packages/git/ext/gitdb', '/usr/local/bin', '/usr/lib/python37.zip', '/usr/lib/python3.7', '/usr/lib/python3.7/lib-dynload', '/usr/local/lib/python3.7/dist-packages', '/usr/lib/python3/dist-packages', '/usr/lib/python3/dist-packages/gitdb/ext/smmap']), ('pythonversion', [3, 7, 3, 'final', 0]), ('saltpath', '/usr/lib/python3/dist-packages/salt'), ('saltversion', '3001.1'), ('saltversioninfo', [3001, 1]), ('zmqversion', '4.3.1'), ('disks', []), ('ssds', ['sdb', 'sda']), ('shell', '/bin/bash'), ('username', None), ('groupname', 'root'), ('pid', 14313), ('gid', 0), ('uid', 0), ('zfs_support', False), ('zfs_feature_flags', False), ('host', 'cr-testing01.lab1'), ('hostname', 'cr-testing01.lab1'), ('interfaces', ['ge-0/0/0', 'lc-0/0/0', 'pfe-0/0/0', 'pfh-0/0/0', 'ge-0/0/1', 'ge-0/0/2', 'ge-0/0/3', 'ge-0/0/4', 'ge-0/0/5', 'ge-0/0/6', 'ge-0/0/7', 'ge-0/0/8', 'ge-0/0/9', 'cbp0', 'demux0', 'dsc', 'em1', 'esi', 'fxp0', 'gre', 'ipip', 'irb', 'jsrv', 'lo0', 'lsi', 'mtun', 'pimd', 'pime', 'pip0', 'pp0', 'rbeb', 'tap', 'vtep']), ('model', 'VMX'), ('optional_args', {'config_lock': False, 'keepalive': 5}), ('serial', 'VM5B598A6585'), ('uptime', 2505569), ('vendor', 'Juniper'), ('version', '17.4R1.16')])
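As a side note, one way to see which grains have already been cached for this proxy is to query the cache runner on the master (a quick sketch, assuming the grains end up in the master's cache, which the runner log above suggests):
# salt-run cache.grains tgt='cr-testing01.lab1'
If the napalm keys (model, vendor, serial, version, ...) only appear there after a run has completed, that matches the ordering seen in the log.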
After modifying the test.sls like so:
# cat /srv/pillar/shared/test.sls
foo: {{ grains.get('id') }}
bar: {{ grains.get('model') }}
{% for k in grains.keys() %}
{%- do salt.log.error(k) -%}
{% endfor %}
these are the grain keys available at SLS rendering time:
# salt-sproxy cr-testing* pillar.get bar
[ERROR ] cwd
[ERROR ] ip_gw
[ERROR ] ip4_gw
[ERROR ] ip6_gw
[ERROR ] dns
[ERROR ] fqdns
[ERROR ] machine_id
[ERROR ] master
[ERROR ] hwaddr_interfaces
[ERROR ] id
[ERROR ] kernelparams
[ERROR ] locale_info
[ERROR ] num_gpus
[ERROR ] gpus
[ERROR ] kernel
[ERROR ] nodename
[ERROR ] kernelrelease
[ERROR ] kernelversion
[ERROR ] cpuarch
[ERROR ] osrelease
[ERROR ] os
[ERROR ] os_family
[ERROR ] osfullname
[ERROR ] osarch
[ERROR ] mem_total
[ERROR ] virtual
[ERROR ] ps
[ERROR ] osrelease_info
[ERROR ] osfinger
[ERROR ] path
[ERROR ] systempath
[ERROR ] pythonexecutable
[ERROR ] pythonpath
[ERROR ] pythonversion
[ERROR ] saltpath
[ERROR ] saltversion
[ERROR ] saltversioninfo
[ERROR ] zmqversion
[ERROR ] disks
[ERROR ] ssds
[ERROR ] shell
[ERROR ] username
[ERROR ] groupname
[ERROR ] pid
[ERROR ] gid
[ERROR ] uid
[ERROR ] zfs_support
[ERROR ] zfs_feature_flags
cr-testing01.lab1:
    None
As you can see, it's missing the following keys:
- host
- hostname
- interfaces
- model
- optional_args
- serial
- uptime
- vendor
- version
which, as far as I can see, are all gathered in napalm.py.
Top GitHub Comments
Hi @mirceaulinic, yes you are right, grains.get('model') should work now that the pillar is compiled a second time. I had that in there initially, but changed it when it wasn't working: the template was only rendered once, before the grain was available, and thus target_software_version was always empty. To track down the issue I changed it to the above to actually get an error message. I will change it back to get rid of the error message now. Thank you!
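For illustration only, here is a hypothetical reconstruction of the kind of pillar line described in the comment above (the actual pillar is not shown in the issue, so the structure is an assumption): a value filled from the napalm 'version' grain, which stays empty when the template is rendered before the napalm grains are merged in:
{# hypothetical sketch, not the actual pillar from the issue #}
target_software_version: {{ grains.get('version') }}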
Hi @syntax-terr0r. I see what happens: the error is purely cosmetic, and you can prevent it by using if grains.get('model') instead, which would work whether you have cached data or not. However, when relying on cached data, it wouldn't hurt to look it up during the initial pillar compilation: #193.
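Along those lines, a guarded version of the test pillar might look like the sketch below (based on the suggestion above, not the original file); it renders cleanly whether or not the napalm grains are available yet, and simply omits the key on the first pass instead of writing a literal None:
{# sketch of a guarded pillar, based on the suggestion above #}
foo: {{ grains.get('id') }}
{% if grains.get('model') %}
bar: {{ grains['model'] }}
{% endif %}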