
[Bug] Suspiciously slow calls to `Sequential.predictSoftly`

See original GitHub issue

Calls to Sequential.predictSoftly start out fairly slow and each call takes a bit more time than the last one.

My example model is quite small, but the first call already takes about 25 ms on my machine (for comparison: fitting 1,000 random data points over 10 epochs takes 415 ms). After calling predictSoftly 10,000 times, each successive call takes about a third of a second.
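To get a feel for what this growth pattern costs in aggregate: if each call gets slightly slower than the last, call i costs roughly base + i * slope, and the total time over n calls grows quadratically. This is a back-of-the-envelope illustration only (plain Kotlin, not KotlinDL code), plugging in the figures reported above (~25 ms for the first call, roughly a third of a second around call 10,000):

```kotlin
// Illustrative cost model: if call i (1-based) costs base + (i - 1) * slope
// milliseconds, the total over n calls is an arithmetic series, i.e. quadratic in n.
fun totalTimeMs(n: Int, baseMs: Double, slopeMs: Double): Double =
    (1..n).sumOf { i -> baseMs + (i - 1) * slopeMs }

fun main() {
    // ~25 ms at call 1 and ~333 ms at call 10,000 implies a slope of
    // (333 - 25) / 9_999 ≈ 0.031 ms added per call.
    val slope = (333.0 - 25.0) / 9_999
    // Roughly 1.8 million ms (about half an hour) for 10,000 predictions,
    // versus ~250 seconds if every call stayed at 25 ms.
    println(totalTimeMs(10_000, 25.0, slope))
}
```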

[Figure: chart of per-call prediction time in ms, increasing steadily with the number of calls]

Code to reproduce:

import java.io.File
import kotlin.random.Random
import kotlin.time.Duration
import kotlin.time.ExperimentalTime
import kotlin.time.measureTimedValue

// KotlinDL imports (package paths as of v0.1.x; adjust for newer releases)
import org.jetbrains.kotlinx.dl.api.core.Sequential
import org.jetbrains.kotlinx.dl.api.core.layer.Dense
import org.jetbrains.kotlinx.dl.api.core.layer.Input
import org.jetbrains.kotlinx.dl.api.core.loss.Losses
import org.jetbrains.kotlinx.dl.api.core.metric.Metrics
import org.jetbrains.kotlinx.dl.api.core.optimizer.Adam

@OptIn(ExperimentalTime::class)
fun main() {
    Sequential.of(
        Input(36),
        Dense(36),
        Dense(36),
        Dense(36),
        Dense(36),
        Dense(36),
        Dense(16),
        Dense(8),
        Dense(3),
    ).use { model ->
        model.compile(
            optimizer = Adam(),
            loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
            metric = Metrics.MSE,
        )
        model.init()
        val features = FloatArray(36) { Random.nextFloat() }
        var predictionCalls = 0
        var predictionTimeOfBatch = Duration.ZERO
        val predictionTimes = mutableListOf<Double>()
        repeat(100_000) {
            // predict shows the same per-call slowdown reported for predictSoftly
            val timing = measureTimedValue { model.predict(features) }
            predictionCalls++
            predictionTimeOfBatch += timing.duration
            predictionTimes += timing.duration.inMilliseconds
            if (predictionCalls % 100 == 0) {
                val csv = predictionTimes
                    .withIndex()
                    .joinToString("\n") { (i, t) -> "${i + 1},$t" }
                File("timing.csv").writeText(csv)
                println("$predictionCalls calls done. (${predictionTimeOfBatch / 100} per call)")
                predictionTimeOfBatch = Duration.ZERO
            }
        }
    }
}
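To quantify the slowdown from the timing.csv file the repro writes, a small stdlib-only helper can fit a least-squares line to the per-call timings. This is a sketch assuming the CSV rows have the `index,milliseconds` shape produced above; the fitted slope is the extra time each call adds over the previous one.

```kotlin
import java.io.File

// Fit t = a + b * i by ordinary least squares over (call index, milliseconds)
// pairs: `a` approximates the initial call cost, `b` the growth per call.
fun linearFit(points: List<Pair<Double, Double>>): Pair<Double, Double> {
    val n = points.size.toDouble()
    val sx = points.sumOf { it.first }
    val sy = points.sumOf { it.second }
    val sxx = points.sumOf { it.first * it.first }
    val sxy = points.sumOf { it.first * it.second }
    val b = (n * sxy - sx * sy) / (n * sxx - sx * sx) // ms added per call
    val a = (sy - b * sx) / n                         // estimated first-call cost
    return a to b
}

fun main() {
    val points = File("timing.csv").readLines()
        .map { it.split(",") }
        .map { (i, t) -> i.toDouble() to t.toDouble() }
    val (a, b) = linearFit(points)
    println("first call ≈ $a ms, growth ≈ $b ms per call")
}
```

A slope close to zero would indicate plain warm-up cost; a clearly positive slope confirms that state accumulates with every call, as reported.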

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 3
  • Comments: 7 (1 by maintainers)

Top GitHub Comments

1 reaction
zaleslaw commented, Dec 21, 2020

Great idea for a pet project, so I hope it will be helpful. Please create a separate ticket for this case as a feature request. I will release it in 0.1.1, but it takes time and will be released in mid-January (not earlier, sorry).

1 reaction
LostMekka commented, Dec 21, 2020

c.) There should be a means to call predictSoftly on a batch of inputs. At the moment this is only possible for predict.

That would be awesome as well. My current pet project for trying out this library is a board game AI that crudely learns through self-play, with the neural net as the search heuristic. The AI plays a semi-random game, and for each pair of successive moves I create a training data point (which needs one predictSoftly call per data point). Building these self-play datasets would probably benefit greatly from batch soft-predicting 😄

Should I add a second issue for this?
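The batched call pattern being requested here could look roughly like the following sketch. `predictSoftlyBatch` is hypothetical (no such API existed in KotlinDL at the time of this thread); it is passed in as a function so the sketch runs without KotlinDL and only illustrates how one batched call per game could replace one call per move:

```kotlin
// Sketch of the requested feature: `predictSoftlyBatch` is a hypothetical
// stand-in for a batch version of predictSoftly, injected as a parameter.
fun collectHeuristics(
    boardStates: List<FloatArray>,                          // one encoded state per move
    batchSize: Int,
    predictSoftlyBatch: (List<FloatArray>) -> List<FloatArray>,
): List<FloatArray> =
    boardStates
        .chunked(batchSize)                                 // bounded batches keep memory predictable
        .flatMap { batch -> predictSoftlyBatch(batch) }     // one native call per batch, not per move
```

Compared with calling predictSoftly once per data point, this amortizes the per-call overhead (and, given this issue, any per-call slowdown) across the whole batch.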
