Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

UnicodeDecodeError for hello-world example of keras-tuner

See original GitHub issue

When I copy the hello-world example of keras-tuner and run it directly in a Jupyter notebook in my environment (TensorFlow 2.1.0, Python 3.7), I get the following error:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd5 in position 246: invalid continuation byte

Does this mean there is some issue with decoding the MNIST dataset? I am not sure what to do next.
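
For context, the code being run is presumably close to the KerasTuner hello-world example of the time. The sketch below is reconstructed from the search space summary and the traceback frames (the names x, y, val_x, val_y appear in the traceback; everything else is an assumption), not the exact notebook contents:

    import tensorflow as tf
    from kerastuner.tuners import RandomSearch

    # Load MNIST and use 10,000 samples, matching the
    # "Train on 10000 samples" line in the log below.
    (x, y), (val_x, val_y) = tf.keras.datasets.mnist.load_data()
    x = x.astype('float32') / 255.
    val_x = val_x.astype('float32') / 255.
    x, y = x[:10000], y[:10000]

    def build_model(hp):
        model = tf.keras.Sequential()
        model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
        # num_layers 2-20 and units_* 32-512 (step 32), as in the
        # search space summary below.
        for i in range(hp.Int('num_layers', 2, 20)):
            model.add(tf.keras.layers.Dense(
                units=hp.Int('units_' + str(i),
                             min_value=32, max_value=512, step=32),
                activation='relu'))
        model.add(tf.keras.layers.Dense(10, activation='softmax'))
        model.compile(
            optimizer=tf.keras.optimizers.Adam(
                hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])),
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])
        return model

    tuner = RandomSearch(build_model,
                         objective='val_accuracy',
                         max_trials=5)

    tuner.search(x=x, y=y, epochs=3, validation_data=(val_x, val_y))
    tuner.results_summary()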

The detailed error output is:

Search space summary
|-Default search space size: 4
num_layers (Int)
|-default: None
|-max_value: 20
|-min_value: 2
|-sampling: None
|-step: 1
units_0 (Int)
|-default: None
|-max_value: 512
|-min_value: 32
|-sampling: None
|-step: 32
units_1 (Int)
|-default: None
|-max_value: 512
|-min_value: 32
|-sampling: None
|-step: 32
learning_rate (Choice)
|-default: 0.01
|-ordered: True
|-values: [0.01, 0.001, 0.0001]
Train on 10000 samples, validate on 10000 samples
Epoch 1/3
 9728/10000 [============================>.] - ETA: 0s - loss: 2.0014 - accuracy: 0.2521
---------------------------------------------------------------------------
_FallbackException                        Traceback (most recent call last)
C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\ops\gen_io_ops.py in save_v2(prefix, tensor_names, shape_and_slices, tensors, name)
   1700         _ctx._context_handle, tld.device_name, "SaveV2", name,
-> 1701         tld.op_callbacks, prefix, tensor_names, shape_and_slices, tensors)
   1702       return _result

_FallbackException: This function does not handle the case of the path where all inputs are not already EagerTensors.

During handling of the above exception, another exception occurred:

UnicodeDecodeError                        Traceback (most recent call last)
<ipython-input-3-cd4a152dd45c> in <module>
     33              y=y,
     34              epochs=3,
---> 35              validation_data=(val_x, val_y))
     36 
     37 tuner.results_summary()

C:\WWW\anaconda3\envs\tf2\lib\site-packages\kerastuner\engine\base_tuner.py in search(self, *fit_args, **fit_kwargs)
    128 
    129             self.on_trial_begin(trial)
--> 130             self.run_trial(trial, *fit_args, **fit_kwargs)
    131             self.on_trial_end(trial)
    132         self.on_search_end()

C:\WWW\anaconda3\envs\tf2\lib\site-packages\kerastuner\engine\multi_execution_tuner.py in run_trial(self, trial, *fit_args, **fit_kwargs)
     94 
     95             model = self.hypermodel.build(trial.hyperparameters)
---> 96             history = model.fit(*fit_args, **copied_fit_kwargs)
     97             for metric, epoch_values in history.history.items():
     98                 if self.oracle.objective.direction == 'min':

C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    817         max_queue_size=max_queue_size,
    818         workers=workers,
--> 819         use_multiprocessing=use_multiprocessing)
    820 
    821   def evaluate(self,

C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    395                       total_epochs=1)
    396                   cbks.make_logs(model, epoch_logs, eval_result, ModeKeys.TEST,
--> 397                                  prefix='val_')
    398 
    399     return model.history

C:\WWW\anaconda3\envs\tf2\lib\contextlib.py in __exit__(self, type, value, traceback)
    117         if type is None:
    118             try:
--> 119                 next(self.gen)
    120             except StopIteration:
    121                 return False

C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in on_epoch(self, epoch, mode)
    769       if mode == ModeKeys.TRAIN:
    770         # Epochs only apply to `fit`.
--> 771         self.callbacks.on_epoch_end(epoch, epoch_logs)
    772       self.progbar.on_epoch_end(epoch, epoch_logs)
    773 

C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\keras\callbacks.py in on_epoch_end(self, epoch, logs)
    300     logs = logs or {}
    301     for callback in self.callbacks:
--> 302       callback.on_epoch_end(epoch, logs)
    303 
    304   def on_train_batch_begin(self, batch, logs=None):

C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\keras\callbacks.py in on_epoch_end(self, epoch, logs)
    990           self._save_model(epoch=epoch, logs=logs)
    991       else:
--> 992         self._save_model(epoch=epoch, logs=logs)
    993     if self.model._in_multi_worker_mode():
    994       # For multi-worker training, back up the weights and current training

C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\keras\callbacks.py in _save_model(self, epoch, logs)
   1025               self.best = current
   1026               if self.save_weights_only:
-> 1027                 self.model.save_weights(filepath, overwrite=True)
   1028               else:
   1029                 self.model.save(filepath, overwrite=True)

C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\keras\engine\network.py in save_weights(self, filepath, overwrite, save_format)
   1121              'saved.\n\nConsider using a TensorFlow optimizer from `tf.train`.')
   1122             % (optimizer,))
-> 1123       self._trackable_saver.save(filepath, session=session)
   1124       # Record this checkpoint so it's visible from tf.train.latest_checkpoint.
   1125       checkpoint_management.update_checkpoint_state_internal(

C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\training\tracking\util.py in save(self, file_prefix, checkpoint_number, session)
   1166     file_io.recursive_create_dir(os.path.dirname(file_prefix))
   1167     save_path, new_feed_additions = self._save_cached_when_graph_building(
-> 1168         file_prefix=file_prefix_tensor, object_graph_tensor=object_graph_tensor)
   1169     if new_feed_additions:
   1170       feed_dict.update(new_feed_additions)

C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\training\tracking\util.py in _save_cached_when_graph_building(self, file_prefix, object_graph_tensor)
   1114         or context.executing_eagerly() or ops.inside_function()):
   1115       saver = functional_saver.MultiDeviceSaver(named_saveable_objects)
-> 1116       save_op = saver.save(file_prefix)
   1117       with ops.device("/cpu:0"):
   1118         with ops.control_dependencies([save_op]):

C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\training\saving\functional_saver.py in save(self, file_prefix)
    228         # _SingleDeviceSaver will use the CPU device when necessary, but initial
    229         # read operations should be placed on the SaveableObject's device.
--> 230         sharded_saves.append(saver.save(shard_prefix))
    231 
    232     with ops.control_dependencies(sharded_saves):

C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\training\saving\functional_saver.py in save(self, file_prefix)
     70         tensor_slices.append(spec.slice_spec)
     71     with ops.device("cpu:0"):
---> 72       return io_ops.save_v2(file_prefix, tensor_names, tensor_slices, tensors)
     73 
     74   def restore(self, file_prefix):

C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\ops\gen_io_ops.py in save_v2(prefix, tensor_names, shape_and_slices, tensors, name)
   1705         return save_v2_eager_fallback(
   1706             prefix, tensor_names, shape_and_slices, tensors, name=name,
-> 1707             ctx=_ctx)
   1708       except _core._SymbolicException:
   1709         pass  # Add nodes to the TensorFlow graph.

C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\ops\gen_io_ops.py in save_v2_eager_fallback(prefix, tensor_names, shape_and_slices, tensors, name, ctx)
   1727   _attrs = ("dtypes", _attr_dtypes)
   1728   _result = _execute.execute(b"SaveV2", 0, inputs=_inputs_flat, attrs=_attrs,
-> 1729                              ctx=ctx, name=name)
   1730   _result = None
   1731   return _result

C:\WWW\anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     59     tensors = pywrap_tensorflow.TFE_Py_Execute(ctx._handle, device_name,
     60                                                op_name, inputs, attrs,
---> 61                                                num_outputs)
     62   except core._NotOkStatusException as e:
     63     if name is not None:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd5 in position 246: invalid continuation byte

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments:5

Top GitHub Comments

4 reactions
chr6192 commented on May 22, 2020

Yeah, I have found the problem. An overly long filename (path) causes the save to fail. I got the same error when I gave model.save_weights() a long path.
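
That diagnosis fits the traceback: the failure surfaces inside SaveV2 while the ModelCheckpoint callback is writing trial weights, and on Windows, paths beyond the classic path-length limit can fail in exactly this confusing way. If that is the cause here, one hedged workaround is to shorten the path KerasTuner saves under via its directory and project_name arguments (the values below are illustrative, not from the issue):

    # Sketch of the workaround: KerasTuner builds its checkpoint paths
    # from directory + project_name + trial ID, so keeping both short
    # helps stay under the ~260-character Windows path limit.
    # 'C:/kt' and 'mnist' are example values, not from the original issue.
    from kerastuner.tuners import RandomSearch

    tuner = RandomSearch(build_model,
                         objective='val_accuracy',
                         max_trials=5,
                         directory='C:/kt',      # short base directory
                         project_name='mnist')   # short project name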

0 reactions
shaoxiang777 commented on Mar 7, 2022

Quoting the comment above: how did you solve this? Even when I give it a new, shorter name, it doesn't work for me.
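
If a shorter checkpoint name alone doesn't help, note that the full absolute path (tuner directory + project name + trial ID + checkpoint file) may still exceed Windows' classic 260-character MAX_PATH limit; moving the tuner directory nearer the drive root, or enabling long-path support on Windows 10 (the LongPathsEnabled registry setting), are the usual remedies. A quick way to check how long the generated paths actually get (a sketch; the directory name assumes KerasTuner's default project folder):

    import os

    # Find the longest path KerasTuner has written so far and compare it
    # to the ~260-character Windows MAX_PATH limit. 'untitled_project' is
    # KerasTuner's default project directory; adjust if a custom
    # directory/project_name was passed to the tuner.
    tuner_dir = 'untitled_project'
    longest = max((os.path.join(root, name)
                   for root, _, files in os.walk(tuner_dir)
                   for name in files),
                  key=len, default='')
    print(len(longest), longest)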

Read more comments on GitHub >

Top Results From Across the Web

python UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b ...
try this if you get the error utf-8 codec can't decode byte. # Assuming your file is pipe delimited otherwise remove sep='|'....
Read more >
keras-tuner error in hyperparameter tuning - Stack Overflow
I am trying for the first time to get a keras-tuner tuned deep learning model. My tuning code goes like ...
Read more >
Getting started with KerasTuner
In the following code example, we define a Keras model with two Dense layers. We want to tune the number of units in...
Read more >
Introduction to the Keras Tuner | TensorFlow Core
The Keras Tuner is a library that helps you pick the optimal set of hyperparameters for your TensorFlow program. The process of selecting...
Read more >
kerastuneR: Interface to 'Keras Tuner'
The number of randomly generated samples as initial training data for Bayesian optimization. If not specified, a value of 3 times the dimen-...
Read more >
