
There is no Python 3.4 build with Keras 2.1.4 and TensorFlow 1.0.1, so I installed Python 3.5 with Keras 2.1.4 and TensorFlow 1.0.1

See original GitHub issue

Conda does not offer a Python 3.4 build that combines Keras 2.1.4 and TensorFlow 1.0.1:

@TitanX:~$ conda search tensorflow
Loading channels: done
Name                       Version                   Build  Channel        
tensorflow                 0.10.0rc0           np111py27_0  https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorflow                 0.10.0rc0           np111py27_0  https://mirrors.ustc.edu.cn/anaconda/pkgs/free
tensorflow                 0.10.0rc0           np111py27_0  defaults       
tensorflow                 0.10.0rc0           np111py34_0  https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorflow                 0.10.0rc0           np111py34_0  https://mirrors.ustc.edu.cn/anaconda/pkgs/free
tensorflow                 0.10.0rc0           np111py34_0  defaults       
tensorflow                 0.10.0rc0           np111py35_0  https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorflow                 0.10.0rc0           np111py35_0  https://mirrors.ustc.edu.cn/anaconda/pkgs/free
tensorflow                 0.10.0rc0           np111py35_0  defaults       
tensorflow                 1.0.1               np112py27_0  https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorflow                 1.0.1               np112py27_0  https://mirrors.ustc.edu.cn/anaconda/pkgs/free
tensorflow                 1.0.1               np112py27_0  defaults       
tensorflow                 1.0.1               np112py35_0  https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorflow                 1.0.1               np112py35_0  https://mirrors.ustc.edu.cn/anaconda/pkgs/free
tensorflow                 1.0.1               np112py35_0  defaults       
tensorflow                 1.0.1               np112py36_0  https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorflow                 1.0.1               np112py36_0  https://mirrors.ustc.edu.cn/anaconda/pkgs/free
tensorflow                 1.0.1               np112py36_0  defaults       
tensorflow                 1.1.0               np111py27_0  https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
tensorflow                 1.1.0               np111py27_0  https://mirrors.ustc.edu.cn/anaconda/pkgs/free
keras                      2.1.2                    py35_0  defaults       
keras                      2.1.2                    py36_0  defaults       
keras                      2.1.3                    py27_0  defaults       
keras                      2.1.3                    py35_0  defaults       
keras                      2.1.3                    py36_0  defaults       
keras                      2.1.4                    py27_0  defaults       
keras                      2.1.4                    py35_0  defaults       
keras                      2.1.4                    py36_0  defaults       
keras                      2.1.5                    py27_0  defaults 
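
For reference, the matching environment can be created by pinning those versions in a fresh conda env. This is only a sketch: the environment name dronet is made up here, and whether the solver still finds these old builds depends on the channels configured (the mirrors above, or defaults).

conda create -n dronet python=3.5 numpy=1.12 tensorflow=1.0.1 keras=2.1.4
source activate dronet    # "conda activate dronet" on newer conda versions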

So I installed Python 3.5 with Keras 2.1.4 and TensorFlow 1.0.1. Then the following error occurred:

/home/anaconda2/envs/dronet/bin/python -u /home/pytest/dronet/rpg_public_dronet-master1/cnn.py --experiment_rootdir='./model/test_1' --train_dir='/home/datafile/dronet_data/collision_dataset/training' --val_dir='/home/datafile/dronet_data/collision_dataset/validation' --batch_size=16 --epochs=150 --log_rate=25
Using TensorFlow backend.
Found 63169 images belonging to 132 experiments.
Found 1035 images belonging to 3 experiments.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 200, 200, 1)  0                                            
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 100, 100, 32) 832         input_1[0][0]                    
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 49, 49, 32)   0           conv2d_1[0][0]                   
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 49, 49, 32)   128         max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 49, 49, 32)   0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 25, 25, 32)   9248        activation_1[0][0]               
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 25, 25, 32)   128         conv2d_2[0][0]                   
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 25, 25, 32)   0           batch_normalization_2[0][0]      
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 25, 25, 32)   1056        max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 25, 25, 32)   9248        activation_2[0][0]               
__________________________________________________________________________________________________
add_1 (Add)                     (None, 25, 25, 32)   0           conv2d_4[0][0]                   
                                                                 conv2d_3[0][0]                   
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 25, 25, 32)   128         add_1[0][0]                      
__________________________________________________________________________________________________
activation_3 (Activation)       (None, 25, 25, 32)   0           batch_normalization_3[0][0]      
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 13, 13, 64)   18496       activation_3[0][0]               
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 13, 13, 64)   256         conv2d_5[0][0]                   
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 13, 13, 64)   0           batch_normalization_4[0][0]      
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 13, 13, 64)   2112        add_1[0][0]                      
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 13, 13, 64)   36928       activation_4[0][0]               
__________________________________________________________________________________________________
add_2 (Add)                     (None, 13, 13, 64)   0           conv2d_7[0][0]                   
                                                                 conv2d_6[0][0]                   
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 13, 13, 64)   256         add_2[0][0]                      
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 13, 13, 64)   0           batch_normalization_5[0][0]      
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 7, 7, 128)    73856       activation_5[0][0]               
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 7, 7, 128)    512         conv2d_8[0][0]                   
__________________________________________________________________________________________________
activation_6 (Activation)       (None, 7, 7, 128)    0           batch_normalization_6[0][0]      
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 7, 7, 128)    8320        add_2[0][0]                      
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 7, 7, 128)    147584      activation_6[0][0]               
__________________________________________________________________________________________________
add_3 (Add)                     (None, 7, 7, 128)    0           conv2d_10[0][0]                  
                                                                 conv2d_9[0][0]                   
__________________________________________________________________________________________________
flatten_1 (Flatten)             (None, 6272)         0           add_3[0][0]                      
__________________________________________________________________________________________________
activation_7 (Activation)       (None, 6272)         0           flatten_1[0][0]                  
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 6272)         0           activation_7[0][0]               
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 1)            6273        dropout_1[0][0]                  
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 1)            6273        dropout_1[0][0]                  
__________________________________________________________________________________________________
activation_8 (Activation)       (None, 1)            0           dense_2[0][0]                    
==================================================================================================
Total params: 321,634
Trainable params: 320,930
Non-trainable params: 704
__________________________________________________________________________________________________
None
configure_output_dir: not storing the git diff, probably because you're not in a git repo
Logging data to ./model/test_1/log.txt
/home/anaconda2/envs/dronet/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py:91: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Epoch 1/150
1.0
0.0
Traceback (most recent call last):
  File "/home/anaconda2/envs/dronet/lib/python3.5/site-packages/keras/utils/data_utils.py", line 564, in get
    inputs = self.queue.get(block=True).get()
  File "/home/anaconda2/envs/dronet/lib/python3.5/multiprocessing/pool.py", line 644, in get
    raise self._value
  File "/home/anaconda2/envs/dronet/lib/python3.5/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/home/anaconda2/envs/dronet/lib/python3.5/site-packages/keras/utils/data_utils.py", line 390, in get_index
    return _SHARED_SEQUENCES[uid][i]
  File "/home/anaconda2/envs/dronet/lib/python3.5/site-packages/keras/preprocessing/image.py", line 799, in __getitem__
    return self._get_batches_of_transformed_samples(index_array)
  File "/home/anaconda2/envs/dronet/lib/python3.5/site-packages/keras/preprocessing/image.py", line 845, in _get_batches_of_transformed_samples
    raise NotImplementedError
NotImplementedError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/pytest/dronet/rpg_public_dronet-master1/cnn.py", line 176, in <module>
    main(sys.argv)
  File "/home/pytest/dronet/rpg_public_dronet-master1/cnn.py", line 172, in main
    _main()
  File "/home/pytest/dronet/rpg_public_dronet-master1/cnn.py", line 161, in _main
    trainModel(train_generator, val_generator, model, initial_epoch)
  File "/home/pytest/dronet/rpg_public_dronet-master1/cnn.py", line 89, in trainModel
    initial_epoch=initial_epoch)
  File "/home/anaconda2/envs/dronet/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/home/anaconda2/envs/dronet/lib/python3.5/site-packages/keras/engine/training.py", line 2212, in fit_generator
    generator_output = next(output_generator)
  File "/home/anaconda2/envs/dronet/lib/python3.5/site-packages/keras/utils/data_utils.py", line 570, in get
    six.raise_from(StopIteration(e), e)
  File "<string>", line 2, in raise_from
StopIteration
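
The traceback ends in keras/preprocessing/image.py, where the base Iterator's _get_batches_of_transformed_samples raises NotImplementedError. In this Keras 2.1.x code path the enqueuer fetches batches through __getitem__, which calls that method, so a custom data iterator that only overrides next() fails exactly like this. Below is a minimal sketch of the usual workaround (apparently the "2 functions" mentioned in the first comment); the class name and batch-building details are placeholders, not the actual DroNet code:

import numpy as np
from keras.preprocessing.image import Iterator

class MyDirectoryIterator(Iterator):
    # Hypothetical stand-in for the project's custom data iterator.
    def __init__(self, filenames, target_size=(200, 200), batch_size=32,
                 shuffle=True, seed=None):
        self.filenames = filenames
        self.target_size = target_size
        super(MyDirectoryIterator, self).__init__(len(filenames), batch_size,
                                                  shuffle, seed)

    def _get_batches_of_transformed_samples(self, index_array):
        # This is the method Keras 2.1.x calls through __getitem__, so the
        # batch-building logic has to live here rather than in next().
        batch_x = np.zeros((len(index_array),) + self.target_size + (1,),
                           dtype='float32')
        batch_y = np.zeros((len(index_array),), dtype='float32')
        # ... load and preprocess the images/labels for index_array here ...
        return batch_x, batch_y

    def next(self):
        # Kept for callers that iterate manually; it simply delegates to the
        # method above, mirroring Keras's own DirectoryIterator.
        with self.lock:
            index_array = next(self.index_generator)
        return self._get_batches_of_transformed_samples(index_array)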

Issue Analytics

  • State: open
  • Created: 5 years ago
  • Comments: 11

Top GitHub Comments

3 reactions
MerouaneB commented, May 7, 2018

I used the versions fixed before (Python 3.4 / Keras 2.1.4 / TensorFlow 1.5.0) and substituted the next function with the two functions above. In the trainModel function in cnn.py, "decay" is not recognized as an argument of the compile function, even though it is a learning-rate decay that belongs to the optimizer used to compile the model. So I wrote it like this: optimizer = optimizers.Adam(decay=1e-5), then model.compile(loss=[utils.hard_mining_mse(model.k_mse), utils.hard_mining_entropy(model.k_entropy)], optimizer=optimizer, loss_weights=[model.alpha, model.beta]).
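
Laid out as code, the change described above looks roughly like this. It is only a sketch: model and utils come from the DroNet training scripts, and the loss functions and weights are copied from the comment rather than checked against the repo.

from keras import optimizers

# The repo passed decay to model.compile(), which compile() does not accept
# here; attach the decay to the optimizer itself instead.
optimizer = optimizers.Adam(decay=1e-5)

model.compile(loss=[utils.hard_mining_mse(model.k_mse),
                    utils.hard_mining_entropy(model.k_entropy)],
              optimizer=optimizer,
              loss_weights=[model.alpha, model.beta])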

2 reactions
antonilo commented, May 9, 2018

@MerouaneB thanks for your feedback. I will soon update the repo to adjust for the new changes in Keras.

Read more comments on GitHub >

