IndexError when running her_ddpg_fetchreach.py
See original GitHub issue.

(garage) liemzuvon@liemzuvon-MS-7B23:~/code/garage$ ./examples/tf/her_ddpg_fetchreach.py
WARNING: Logging before flag parsing goes to stderr.
W0716 21:13:03.309802 140498767222592 deprecation_wrapper.py:119] From /home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/tf/algos/ddpg.py:78: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.
W0716 21:13:03.642822 140498767222592 deprecation.py:506] From /home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
python -m garage.experiment.experiment_wrapper --snapshot_mode 'last' --seed '1' --exp_name 'experiment_2019_07_16_21_13_02_0001' --log_dir '/home/liemzuvon/code/garage/data/local/experiment/experiment_2019_07_16_21_13_02_0001' --use_cloudpickle 'True' --args_data 'gASV1wcAAAAAAACMF2Nsb3VkcGlja2xlLmNsb3VkcGlja2xllIwOX2ZpbGxfZnVuY3Rpb26Uk5QoaACMD19tYWtlX3NrZWxfZnVuY5STlGgAjA1fYnVpbHRpbl90eXBllJOUjAhDb2RlVHlwZZSFlFKUKEsBSwBLCUsZS0dD4nQAfABkAY0Bj859AnQBdAJqA2QCgwGDAX0DdAR8A2oFZANkBI0CfQR0BnwDagVkBWQGZAZkBmcDdAdqCGoJdAdqCGoKZAdkCI0GfQV0C3wDagVkCWQGZAZkBmcDdAdqCGoJZAdkCo0FfQZ0DHwDagV0DWQLgwFkDGQNfANqDmQOjQV9B3QPfANqBXwFZA9kD3wGfAdkEGQRZAxkEmQTfAR0B2oQahF0B2oQahFkBmQHZBSNEH0IfAJqEnwIfANkFY0CAQB8AmoQZBZkDGQRZBeNAwEAVwBkAFEAUgBYAGQAUwCUKE6MD3NuYXBzaG90X2NvbmZpZ5SFlIwNRmV0Y2hSZWFjaC12MZRHP8mZmZmZmZqMBXNpZ21hlIWUjAZQb2xpY3mUTQABiCiMCGVudl9zcGVjlIwEbmFtZZSMDGhpZGRlbl9zaXplc5SME2hpZGRlbl9ub25saW5lYXJpdHmUjBNvdXRwdXRfbm9ubGluZWFyaXR5lIwSaW5wdXRfaW5jbHVkZV9nb2FslHSUjAlRRnVuY3Rpb26UKGgRaBJoE2gUaBZ0lEdBLoSAAAAAAEtkRz/ZmZmZmZmaKGgRjBNzaXplX2luX3RyYW5zaXRpb25zlIwMdGltZV9ob3Jpem9ulIwIcmVwbGF5X2uUjApyZXdhcmRfZnVulHSURz9QYk3S8an8Rz+pmZmZmZmaSxRLKEc/7MzMzMzMzShoEYwGcG9saWN5lIwJcG9saWN5X2xylIwFcWZfbHKUjAJxZpSMDXJlcGxheV9idWZmZXKUjBF0YXJnZXRfdXBkYXRlX3RhdZSMDm5fZXBvY2hfY3ljbGVzlIwPbWF4X3BhdGhfbGVuZ3RolIwNbl90cmFpbl9zdGVwc5SMCGRpc2NvdW50lIwUZXhwbG9yYXRpb25fc3RyYXRlZ3mUjBBwb2xpY3lfb3B0aW1pemVylIwMcWZfb3B0aW1pemVylIwRYnVmZmVyX2JhdGNoX3NpemWUaBZ0lIwEYWxnb5SMA2VudpSGlEsyjAhuX2Vwb2Noc5SMCmJhdGNoX3NpemWUaCWHlHSUKIwLTG9jYWxSdW5uZXKUjAVUZkVudpSMA2d5bZSMBG1ha2WUjApPVVN0cmF0ZWd5lIwEc3BlY5SMHENvbnRpbnVvdXNNTFBQb2xpY3lXaXRoTW9kZWyUjAJ0ZpSMAm5ulIwEcmVsdZSMBHRhbmiUjBZDb250aW51b3VzTUxQUUZ1bmN0aW9ulIwPSGVyUmVwbGF5QnVmZmVylIwDaW50lIwOY29tcHV0ZV9yZXdhcmSUjARERFBHlIwFdHJhaW6UjA1BZGFtT3B0aW1pemVylIwFc2V0dXCUdJQoaAuMAV+UjAZydW5uZXKUaC+MDGFjdGlvbl9ub2lzZZRoH2giaCOMBGRkcGeUdJSMJC4vZXhhbXBsZXMvdGYvaGVyX2RkcGdfZmV0Y2hyZWFjaC5weZSMCHJ1bl90YXNrlEsXQ1IAAQwBDgIOAgIBBAECAQgBBgEGAQgDAgEEAQIBCAEGAQgDAgEEAQYBAgECAQoCAgEEAQIBAgECAQIBAgECAQIBAgECAQIBAgEGAQYBAgEIAw4ClCkpdJRSlEr/////fZQojAtfX3BhY2thZ2VfX5ROjAhfX25hbWVfX5SMCF9fbWFpbl9flIwIX19maWxlX1+UjCQuL2V4YW1wbGVzL3RmL2hlcl9kZHBnX2ZldGNocmVhY2gucHmUdYeUUpR9lCiMB2dsb2JhbHOUfZQoaDyMKnRlbnNvcmZsb3cucHl0aG9uLnV0aWwuZGVwcmVjYXRpb25fd3JhcHBlcpSMEkRlcHJlY2F0aW9uV3JhcHBlcpSTlCmBlIwKdGVuc29yZmxvd5RiaESMFGdhcmFnZS50Zi5hbGdvcy5kZHBnlGhEk5RoQYwmZ2FyYWdlLnJlcGxheV9idWZmZXIuaGVyX3JlcGxheV9idWZmZXKUaEGTlGg7jDNnYXJhZ2UudGYucG9saWNpZXMuY29udGludW91c19tbHBfcG9saWN5X3dpdGhfbW9kZWyUaDuTlGg1jCFnYXJhZ2UuZXhwZXJpbWVudC5sb2NhbF90Zl9ydW5uZXKUaDWTlGg2jBNnYXJhZ2UudGYuZW52cy5iYXNllGg2k5RoOYwsZ2FyYWdlLm5wLmV4cGxvcmF0aW9uX3N0cmF0ZWdpZXMub3Vfc3RyYXRlZ3mUaDmTlGhAjC9nYXJhZ2UudGYucV9mdW5jdGlvbnMuY29udGludW91c19tbHBfcV9mdW5jdGlvbpRoQJOUaDdoAIwJc3ViaW1wb3J0lJOUaDeFlFKUdYwIZGVmYXVsdHOUTowEZGljdJR9lIwOY2xvc3VyZV92YWx1ZXOUTowGbW9kdWxllGhWaBJoT4wDZG9jlE6MF19jbG91ZHBpY2tsZV9zdWJtb2R1bGVzlF2UjAthbm5vdGF0aW9uc5R9lIwIcXVhbG5hbWWUaE+MCmt3ZGVmYXVsdHOUTnV0Ui4=' --variant_data 'gAN9cQBYCAAAAGV4cF9uYW1lcQFYIwAAAGV4cGVyaW1lbnRfMjAxOV8wN18xNl8yMV8xM18wMl8wMDAxcQJzLg=='
WARNING: Logging before flag parsing goes to stderr.
W0716 21:13:04.870317 139828127979328 deprecation_wrapper.py:119] From /home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/experiment/deterministic.py:24: The name tf.set_random_seed is deprecated. Please use tf.compat.v1.set_random_seed instead.
/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/sampler/parallel_sampler.py:76: LoggerWarning: No outputs have been added to the logger.
  logger.log('Setting seed to %d' % seed)
/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/sampler/parallel_sampler.py:76: LoggerWarning: Log data of type str was not accepted by any output
  logger.log('Setting seed to %d' % seed)
W0716 21:13:05.236229 139828127979328 deprecation_wrapper.py:119] From /home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/tf/algos/ddpg.py:78: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.
W0716 21:13:05.249488 139828127979328 deprecation.py:506] From /home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
W0716 21:13:05.584244 139828127979328 deprecation_wrapper.py:119] From /home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/experiment/local_tf_runner.py:77: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
2019-07-16 21:13:05.585273: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2019-07-16 21:13:05.597303: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-07-16 21:13:05.597587: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
2019-07-16 21:13:05.597771: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-07-16 21:13:05.598617: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-07-16 21:13:05.599407: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
2019-07-16 21:13:05.599594: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
2019-07-16 21:13:05.600615: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
2019-07-16 21:13:05.601400: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
2019-07-16 21:13:05.603958: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-07-16 21:13:05.604068: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-07-16 21:13:05.604369: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-07-16 21:13:05.604588: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-07-16 21:13:05.604889: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-07-16 21:13:05.654016: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-07-16 21:13:05.654348: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55da71cc0030 executing computations on platform CUDA. Devices:
2019-07-16 21:13:05.654366: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): GeForce GTX 1060 6GB, Compute Capability 6.1
2019-07-16 21:13:05.656022: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2019-07-16 21:13:05.656218: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55da72983170 executing computations on platform Host. Devices:
2019-07-16 21:13:05.656249: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2019-07-16 21:13:05.656601: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-07-16 21:13:05.656835: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
2019-07-16 21:13:05.656875: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-07-16 21:13:05.656889: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-07-16 21:13:05.656901: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
2019-07-16 21:13:05.656913: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
2019-07-16 21:13:05.656925: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
2019-07-16 21:13:05.656938: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
2019-07-16 21:13:05.656951: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-07-16 21:13:05.656991: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-07-16 21:13:05.657237: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-07-16 21:13:05.657452: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-07-16 21:13:05.657479: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-07-16 21:13:05.658175: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-07-16 21:13:05.658186: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0
2019-07-16 21:13:05.658191: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N
2019-07-16 21:13:05.658290: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-07-16 21:13:05.658605: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-07-16 21:13:05.658838: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5581 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
W0716 21:13:05.659701 139828127979328 deprecation_wrapper.py:119] From /home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/experiment/local_tf_runner.py:92: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.
W0716 21:13:06.520743 139828127979328 deprecation_wrapper.py:119] From /home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/tf/policies/continuous_mlp_policy_with_model.py:94: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
W0716 21:13:06.521972 139828127979328 deprecation_wrapper.py:119] From /home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/tf/policies/continuous_mlp_policy_with_model.py:96: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
W0716 21:13:06.522377 139828127979328 deprecation.py:323] From /home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/tf/core/mlp.py:85: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
W0716 21:13:07.224435 139828127979328 deprecation.py:506] From /home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/tf/core/layers.py:334: calling RandomUniform.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
W0716 21:13:07.757966 139828127979328 deprecation.py:323] From /home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py:1354: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
2019-07-16 21:13:07.828616: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
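(An aside on the opaque --args_data and --variant_data strings in the wrapper command above: they are base64-encoded pickles, so they can be inspected offline to see exactly what configuration the experiment wrapper received. A minimal sketch using only the standard library; decoding --args_data would additionally require cloudpickle and garage to be importable, since it is a cloudpickled function:)

```python
import base64
import pickle

# --variant_data is a plain base64-encoded pickle; decoding it shows the
# experiment variant passed to the wrapper.
variant_b64 = ('gAN9cQBYCAAAAGV4cF9uYW1lcQFYIwAAAGV4cGVyaW1lbnRfMjAxOV8wN18x'
               'Nl8yMV8xM18wMl8wMDAxcQJzLg==')
print(pickle.loads(base64.b64decode(variant_b64)))
# -> {'exp_name': 'experiment_2019_07_16_21_13_02_0001'}
```

The run then crashes with the traceback below.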
Traceback (most recent call last):
  File "/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/experiment/experiment_wrapper.py", line 294, in <module>
    run_experiment(sys.argv)
  File "/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/experiment/experiment_wrapper.py", line 204, in run_experiment
    method_call(snapshot_config, variant_data)
  File "./examples/tf/her_ddpg_fetchreach.py", line 74, in run_task
    runner.train(n_epochs=50, batch_size=100, n_epoch_cycles=20)
  File "/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/experiment/local_tf_runner.py", line 338, in train
    return self.algo.train(self, batch_size)
  File "/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/tf/algos/off_policy_rl_algorithm.py", line 73, in train
    runner.step_itr, batch_size)
  File "/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/experiment/local_tf_runner.py", line 190, in obtain_samples
    return self.sampler.obtain_samples(itr, batch_size)
  File "/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/tf/samplers/off_policy_vectorized_sampler.py", line 97, in obtain_samples
    itr, obs_normalized, self.algo.policy)
  File "/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/np/exploration_strategies/ou_strategy.py", line 82, in get_actions
    actions, agent_infos = policy.get_actions(observations)
  File "/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/garage-2019.6.0.dev0-py3.6.egg/garage/tf/policies/continuous_mlp_policy_with_model.py", line 150, in get_actions
    flat_obs = self.observation_space.flatten_n(observations)
  File "/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/akro-0.0.6-py3.6.egg/akro/dict.py", line 88, in flatten_n
  File "/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/akro-0.0.6-py3.6.egg/akro/dict.py", line 88, in <listcomp>
  File "/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/akro-0.0.6-py3.6.egg/akro/dict.py", line 57, in flatten
  File "/home/liemzuvon/anaconda3/envs/garage/lib/python3.6/site-packages/akro-0.0.6-py3.6.egg/akro/dict.py", line 57, in <listcomp>
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
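Reading the traceback bottom-up: the off-policy sampler passes obs_normalized to OUStrategy.get_actions, which forwards it to policy.get_actions, and the policy then asks the akro.Dict observation space to flatten it; the list comprehension inside akro's flatten fails while indexing. That error message is exactly what NumPy raises when a plain ndarray is indexed with a string key, which suggests the observations reaching the Dict space are no longer dicts by that point. A minimal, self-contained sketch of that failure mode (flatten_dict_obs below is a hypothetical stand-in for the Dict space's flatten, not akro's or garage's actual code):

```python
import numpy as np

# FetchReach-v1 is a goal-conditioned env whose observations are dicts
# with these keys.
KEYS = ('observation', 'achieved_goal', 'desired_goal')

def flatten_dict_obs(obs):
    """Stand-in for a Dict space's flatten(): index by key, then concatenate."""
    return np.concatenate([np.asarray(obs[k]).ravel() for k in KEYS])

# Works while the observation is still a dict, as the env returns it:
obs = {'observation': np.zeros(10),
       'achieved_goal': np.zeros(3),
       'desired_goal': np.zeros(3)}
print(flatten_dict_obs(obs).shape)  # -> (16,)

# But once something upstream has turned the observation into a plain
# ndarray, obs[k] indexes the array with a string, and NumPy raises:
# "IndexError: only integers, slices (`:`), ellipsis (`...`),
#  numpy.newaxis (`None`) and integer or boolean arrays are valid indices"
flatten_dict_obs(np.zeros(16))
```

If this sketch matches what is happening here, the bug is in what the sampler hands the Dict space, not in the space itself.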
Issue Analytics
- Created: 4 years ago
- Comments: 9 (9 by maintainers)
Top GitHub Comments
I am fixing it in #1009
@13331151 let us know if you find any more issues!