
Keep outputting '0it [00:00, ?it/s]'

See original GitHub issue

Describe the bug

I use the following code to run a demo on the SNLI dataset. It keeps outputting '0it [00:00, ?it/s]'.

The output file looks like this:

 FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
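That '0it [00:00, ?it/s]' line is tqdm's progress meter with zero iterations completed, zero elapsed time, and an unknown rate. A rough stdlib-only sketch of that meter layout (an emulation for illustration, not tqdm's actual code):

```python
# Emulates tqdm's "{n}it [{elapsed}, {rate}it/s]" meter for a bar with no
# known total. This is a sketch of the format, not tqdm internals.
def format_meter(n, elapsed):
    # Rate is unknown ("?") until at least one iteration has completed.
    rate = f"{n / elapsed:.2f}" if n and elapsed else "?"
    mins, secs = divmod(int(elapsed), 60)
    return f"{n}it [{mins:02d}:{secs:02d}, {rate}it/s]"

print(format_meter(0, 0.0))  # -> 0it [00:00, ?it/s]
```

A bar that is created but never advanced, as happens when a library opens a progress bar around an empty or instantly-consumed iterable, prints exactly this zero-progress line.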

Minimal Reproducible Example

from finetune import Entailment  # the Entailment model used below

def trim(string):
    # Keep at most the first 256 whitespace-separated tokens of a line.
    try:
        return ' '.join(string.split(' ')[:256]).rstrip('\n')
    except AttributeError as exc:
        # A non-string value (e.g. None) slipped through; surface it clearly.
        raise ValueError(f'Cannot trim non-string value: {string!r}') from exc

def read_file(file):
    with open(file) as f:
        lines = []
        for line in f:
            lines.append(trim(line))
    return lines

if __name__ == "__main__":

    trainX1 = read_file('premise_snli_1.0_train.txt')
    trainX2 = read_file('hypothesis_snli_1.0_train.txt')
    trainY = read_file('label_snli_1.0_train.txt')

    testX1 = read_file('premise_snli_1.0_test.txt')
    testX2 = read_file('hypothesis_snli_1.0_test.txt')
    testY = read_file('label_snli_1.0_test.txt')

    model = Entailment(verbose=True)
    model.fit(trainX1, trainX2, trainY)
    model.save('./saved_snli_model')
    pred_result = model.predict(testX1, testX2)

premise_snli_1.0_train.txt is a file where each line is a sentence. In the config.py file I set max_length to 258 and batch_size to 8.
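The trim/read_file helpers above can be exercised in isolation with a throwaway temporary file (hypothetical content, not the SNLI data), which confirms the 256-token truncation independently of the model:

```python
import os
import tempfile

def trim(string):
    # Keep at most the first 256 whitespace-separated tokens of a line.
    return ' '.join(string.split(' ')[:256]).rstrip('\n')

def read_file(path):
    with open(path) as f:
        return [trim(line) for line in f]

# Write a two-line file: one short sentence, one over-long one.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write('a short premise\n')
    f.write(' '.join(['tok'] * 300) + '\n')
    path = f.name

lines = read_file(path)
os.remove(path)
print(len(lines[1].split(' ')))  # -> 256 (truncated from 300 tokens)
```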

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 5 (4 by maintainers)

Top GitHub Comments

2 reactions
benleetownsend commented, Jul 27, 2018

The default validation settings will be very aggressive for a dataset like SNLI. By default, Finetune runs validation on 5% of the data every 150 steps. For a dataset this size, something around 0.5% every 5k steps would be much more reasonable. Or, to bring it in line with the OpenAI code, validation can be turned off completely. It's very likely that this makes up a significant amount of the difference in timings.

I have been able to run 2 * 400k lines of data on a comparison task in around 8 hrs on a single 1080ti with a batch size of 2.
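The cost of those defaults can be sketched with back-of-the-envelope arithmetic (assuming SNLI's roughly 550k training pairs and the reporter's batch size of 8; Finetune's exact step accounting may differ):

```python
train_size = 550_000   # approximate SNLI training-set size (assumption)
batch_size = 8         # from the original report

steps_per_epoch = train_size // batch_size  # 68,750 training steps

def val_batches_per_epoch(val_fraction, val_interval):
    # Validation batches per pass, times validation passes per epoch.
    per_pass = int(train_size * val_fraction) // batch_size
    passes = steps_per_epoch // val_interval
    return per_pass * passes

default = val_batches_per_epoch(0.05, 150)      # 5% every 150 steps
suggested = val_batches_per_epoch(0.005, 5000)  # 0.5% every 5k steps
# The default schedule runs hundreds of times more validation batches
# per epoch than the suggested one.
print(default, suggested)
```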

0 reactions
madisonmay commented, Aug 6, 2018

Closing this issue as the original issue with unnecessary TQDM logs has now been resolved on the master branch. Thanks for the bug report, feel free to open another issue if you have other Qs we might be able to help out with!

