
Colab: FileNotFoundError: File b'tpu_train/metadata/results.csv' does not exist

See original GitHub issue

The stock Google Colab link in the README.md isn't working correctly. I added a line to download titanic.csv, then hit Run all. Full stack trace below:



Solving a binary_classification problem, maximizing accuracy using tensorflow.

Modeling with field specifications:
PassengerId: numeric
Pclass: categorical
Name: ignore
Sex: categorical
Age: numeric
SibSp: categorical
Parch: categorical
Ticket: ignore
Fare: numeric
Cabin: categorical
Embarked: categorical

0% 0/100 [00:00<?, ?trial/s]
0% 0/20 [00:00<?, ?epoch/s]

---------------------------------------------------------------------------

FileNotFoundError                         Traceback (most recent call last)

<ipython-input-5-17dc9e2d602c> in <module>()
      2                    target_field='Survived',
      3                    model_name='tpu',
----> 4                    tpu_address = tpu_address)

/usr/local/lib/python3.6/dist-packages/automl_gs/automl_gs.py in automl_grid_search(csv_path, target_field, target_metric, framework, model_name, context, num_trials, split, num_epochs, col_types, gpu, tpu_address)
     85         # and append to the metrics CSV.
     86         results = pd.read_csv(os.path.join(train_folder, 
---> 87                                         "metadata", "results.csv"))
     88         results = results.assign(**params)
     89         results.insert(0, 'trial_id', uuid.uuid4())

/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
    707                     skip_blank_lines=skip_blank_lines)
    708 
--> 709         return _read(filepath_or_buffer, kwds)
    710 
    711     parser_f.__name__ = name

/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
    447 
    448     # Create the parser.
--> 449     parser = TextFileReader(filepath_or_buffer, **kwds)
    450 
    451     if chunksize or iterator:

/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py in __init__(self, f, engine, **kwds)
    816             self.options['has_index_names'] = kwds['has_index_names']
    817 
--> 818         self._make_engine(self.engine)
    819 
    820     def close(self):

/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py in _make_engine(self, engine)
   1047     def _make_engine(self, engine='c'):
   1048         if engine == 'c':
-> 1049             self._engine = CParserWrapper(self.f, **self.options)
   1050         else:
   1051             if engine == 'python':

/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py in __init__(self, src, **kwds)
   1693         kwds['allow_leading_cols'] = self.index_col is not False
   1694 
-> 1695         self._reader = parsers.TextReader(src, **kwds)
   1696 
   1697         # XXX

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._setup_parser_source()

FileNotFoundError: File b'tpu_train/metadata/results.csv' does not exist
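The exception is raised by pandas, not by automl_gs itself: `pd.read_csv` fails because `tpu_train/metadata/results.csv` was never written, which usually means the generated training script crashed before producing its metrics file. A minimal sketch of the same failure mode (the `tpu_train` path here is only illustrative, built under a temp directory):

```python
import os
import tempfile

import pandas as pd

# Point at a metadata path that was never created, mimicking a training
# run that died before writing its results file.
train_folder = os.path.join(tempfile.mkdtemp(), "tpu_train")
results_path = os.path.join(train_folder, "metadata", "results.csv")

try:
    results = pd.read_csv(results_path)
except FileNotFoundError:
    # automl_gs hits this same exception; the real fix is to find out
    # why the training subprocess never wrote results.csv.
    print("results.csv missing - training likely failed earlier")
```

So the traceback is a symptom: the useful debugging step is to look at why the generated `tpu_train` script exited without writing its results.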


Issue Analytics

  • State: open
  • Created: 4 years ago
  • Reactions: 3
  • Comments: 10 (1 by maintainers)

Top GitHub Comments

plartoo commented on May 2, 2019 (2 reactions)

I have created a StackOverflow question (https://stackoverflow.com/q/55959256/1330974) related to this issue. If anyone finds a solution to this, please share. Thank you!

thicccatto commented on Sep 1, 2020 (0 reactions)

I tried adding %tensorflow_version 1.x to the Colab notebook, but now I get a different error: an IndexError on the line `train_results = results.tail(1).to_dict('records')[0]`. My guess is that tensorflow.train.cosine_decay does not exist in the current version of TensorFlow. EDIT: I switched away from the TPU model while keeping the pinned TensorFlow version, and now it works!
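Following that comment, the workaround that reportedly works is pinning TensorFlow 1.x and not using the TPU model. A hedged sketch of the Colab cell (the keyword names come from the `automl_grid_search` signature shown in the traceback above; the call itself is left commented out since it only makes sense inside Colab after titanic.csv has been downloaded):

```python
# In a fresh Colab runtime, run this notebook magic first (not plain Python):
#   %tensorflow_version 1.x

# Arguments mirror the call in the traceback, with the framework switched
# from the TPU model to plain TensorFlow per the comment above.
search_kwargs = dict(
    csv_path="titanic.csv",
    target_field="Survived",
    framework="tensorflow",
)

# In Colab, after downloading titanic.csv:
# from automl_gs import automl_grid_search
# automl_grid_search(**search_kwargs)
```

Dropping `tpu_address` entirely means the generated training script runs on the default runtime, sidestepping the TPU path that failed to write `results.csv`.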


