Keep outputting '0it [00:00, ?it/s]'
See original GitHub issue

Describe the bug
I use the following code to run a demo on the SNLI dataset. It keeps outputting '0it [00:00, ?it/s]'.
The output file looks like this:

```
FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
```
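For context, '0it [00:00, ?it/s]' is the status line tqdm prints for a progress bar over an iterable of unknown length from which zero items have been consumed. A minimal reproduction of that exact line, assuming tqdm is installed:

```python
import io
from tqdm import tqdm

buf = io.StringIO()
# a plain generator has no __len__, so tqdm cannot show a total;
# with nothing iterated, the bar reads '0it [00:00, ?it/s]'
for _ in tqdm((x for x in []), file=buf):
    pass
print('0it [00:00, ?it/s]' in buf.getvalue())
```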
Minimum Reproducible Example

```python
# `Entailment` is Finetune's entailment model (imported as e.g.
# `from finetune import Entailment`; the exact import depends on the version used).

def trim(string):
    # keep at most the first 256 space-separated tokens, drop the trailing newline
    try:
        string = ' '.join(string.split(' ')[:256]).rstrip('\n')
        return string
    except AttributeError:
        raise ValueError(f'{string}')

def read_file(file):
    with open(file) as f:
        lines = []
        for line in f:
            lines.append(trim(line))
        return lines

if __name__ == "__main__":
    trainX1 = read_file('premise_snli_1.0_train.txt')
    trainX2 = read_file('hypothesis_snli_1.0_train.txt')
    trainY = read_file('label_snli_1.0_train.txt')
    testX1 = read_file('premise_snli_1.0_test.txt')
    testX2 = read_file('hypothesis_snli_1.0_test.txt')
    testY = read_file('label_snli_1.0_test.txt')

    model = Entailment(verbose=True)
    model.fit(trainX1, trainX2, trainY)
    model.save('./saved_snli_model')
    pred_result = model.predict(testX1, testX2)
```
premise_snli_1.0_train.txt is a file where each line is a sentence.
In the config.py file I set max_length to 258 and batch_size to 8.
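As a sanity check on the preprocessing: trim caps each line at 256 space-separated tokens, safely under the max_length of 258. A standalone copy of the helper behaves like this:

```python
def trim(string):
    # keep at most the first 256 space-separated tokens, drop the trailing newline
    return ' '.join(string.split(' ')[:256]).rstrip('\n')

# a synthetic 300-token line is cut down to 256 tokens
line = ' '.join(str(i) for i in range(300)) + '\n'
trimmed = trim(line)
print(len(trimmed.split(' ')))  # 256
```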
Issue Analytics
- Created: 5 years ago
- Comments: 5 (4 by maintainers)

The default validation settings will be very aggressive for a dataset like SNLI. By default, Finetune validates on 5% of the data every 150 steps. For a dataset this size, something closer to 0.5% every 5k steps would be much more reasonable; or, to bring it in line with the OpenAI code, validation can be turned off completely. It's very likely that this accounts for a significant amount of the difference in timings.
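To put rough numbers on that: SNLI's training split has on the order of 550k pairs (an approximation on my part, not a figure from the issue), so at the reported batch size of 8 an epoch is about 68,750 steps. Comparing how many validation passes each setting triggers per epoch:

```python
# approximate SNLI training-set size (assumed here for illustration)
n_examples = 550_000
batch_size = 8
steps_per_epoch = n_examples // batch_size   # 68,750 steps per epoch

default_runs = steps_per_epoch // 150        # validate every 150 steps (default)
suggested_runs = steps_per_epoch // 5_000    # validate every 5k steps (suggested)

print(default_runs, suggested_runs)  # 458 13
```

Roughly 458 validation passes per epoch under the defaults versus 13 under the suggested settings, which is why validation dominates the runtime.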
I have been able to run 2 * 400k lines of data on a comparison task in around 8 hours on a single 1080 Ti with a batch size of 2.
Closing this issue, as the original problem with unnecessary TQDM logs has now been resolved on the master branch. Thanks for the bug report; feel free to open another issue if you have other questions we might be able to help with!