
channel warm up per thread

See original GitHub issue

Hi all. I’m running an experiment with gRPC and found that the first gRPC call made from a thread costs much more time than the subsequent ones doing the same thing. I tested it on a 2-core laptop and on 24-core machines, and the same phenomenon occurred on all of them. So I’m wondering whether I’m using gRPC wrong, or whether something in grpc-java could be improved to avoid this. Below are my code and part of the report.

import java.text.SimpleDateFormat;
import java.util.Date;

import me.gdbc.mocker.MockServer;
import me.proto.query.QueryProtocol.QueryRequest;
import me.proto.query.QueryProtocol.QueryResponse;

import me.proto.query.QueryServiceInterfaceGrpc.METHOD_QUERY;

import io.grpc.CallOptions;
import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;

public class WarmUp {

    public static SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

    public static QueryRequest request = QueryRequest.newBuilder().setQueryString("").build();

    public static class QueryThread extends Thread {

        private String name;
        private ManagedChannel client;

        public QueryThread(String name, ManagedChannel client) {
            this.name = name;
            this.client = client;
        }

        public void run() {
            // Issue the same unary call 300 times on this thread and time each call individually.
            for (int i = 0; i < 300; i++) {
                long start = System.currentTimeMillis();
                try {
                    //ClientCall<QueryRequest, QueryResponse> call = client.newCall(METHOD_QUERY, CallOptions.DEFAULT);
                    QueryResponse ret = io.grpc.stub.ClientCalls.blockingUnaryCall(client, METHOD_QUERY, CallOptions.DEFAULT, request);
                    //QueryResponse ret = io.grpc.stub.ClientCalls.blockingUnaryCall(call, request);
                    long duration = System.currentTimeMillis() - start;
                    String show = "" + ret + "\n" + df.format(new Date()) + "\n" + "cost " + duration + "ms in thread " + name;
                    System.out.println(show);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static void main(String[] args) {
        int port = 9012;
        MockServer server = new MockServer(port);
        try {
            server.start();
            ManagedChannel channel = NettyChannelBuilder.forAddress("127.0.0.1", port).usePlaintext(true).build();
            // One call on the main thread, so the channel is already connected
            // before the worker threads start measuring.
            io.grpc.stub.ClientCalls.blockingUnaryCall(channel, METHOD_QUERY, CallOptions.DEFAULT, request);
            for (int i = 0; i < 4; i++) {
                QueryThread th = new QueryThread(String.valueOf(i), channel);
                th.start();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
May 01, 2016 2:28:09 PM me.gdbc.mocker.MockServer start
INFO: Server started, listenning on 9012
May 01, 2016 2:28:09 PM io.grpc.internal.ManagedChannelImpl <init>
INFO: [ManagedChannelImpl@326de728] Created with target 127.0.0.1:9012
Every call returned the same response body:

result {
  props {
    intVal: 1
  }
}

Per-call latencies, all logged at 2016-05-01 14:28:10 (the report above was cut off after the calls shown here; the first call made from each thread is the expensive one):

thread 0: 76 ms, then 8, 7, 7, 6, 7, 6 ms
thread 1: 67 ms, then 8, 7, 7, 4, 7, 6 ms
thread 2: 66 ms, then 10, 10 ms
thread 3: 68 ms, then 10, 8, 7, 7, 6, 6 ms
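Reading the report: the first call issued from each worker thread costs 66-76 ms, while every later call from the same thread costs 4-10 ms, even though the channel was already used once on the main thread before the workers started. Below is a minimal sketch of how the timed loop could be kept free of that per-thread first-call cost; it reuses the request, METHOD_QUERY, client, df and name fields from the code above and simply issues one untimed warm-up call per thread first. This only moves the cost out of the measurement; it is not a fix, and it is not something confirmed by the maintainers in this issue.

    public void run() {
        // Hedged sketch: one untimed warm-up call so this thread pays its
        // first-call cost before the measured loop starts.
        try {
            io.grpc.stub.ClientCalls.blockingUnaryCall(client, METHOD_QUERY, CallOptions.DEFAULT, request);
        } catch (Exception e) {
            e.printStackTrace();
        }
        for (int i = 0; i < 300; i++) {
            long start = System.currentTimeMillis();
            try {
                QueryResponse ret = io.grpc.stub.ClientCalls.blockingUnaryCall(client, METHOD_QUERY, CallOptions.DEFAULT, request);
                long duration = System.currentTimeMillis() - start;
                System.out.println(ret + "\n" + df.format(new Date()) + "\ncost " + duration + "ms in thread " + name);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }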
Issue Analytics

  • State: open
  • Created: 7 years ago
  • Comments: 16 (4 by maintainers)

Top GitHub Comments

9 reactions
tobegit3hub commented, Apr 28, 2017

We hit the same issue in a similar project, using TensorFlow Serving with Java gRPC clients. Is there any way to fix this? @ejona86

7 reactions
ferrari6666 commented, Oct 4, 2019

Please let us know of any fix or workaround for this issue. @ejona86
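No fix from the maintainers is recorded in this thread. A workaround sometimes used for this kind of per-thread first-call cost is to warm up every worker thread once at startup, before real traffic is timed or served. Below is a minimal sketch assuming a fixed-size thread pool; the class name, the Runnable parameter, and the latch-based saturation trick are illustrative assumptions, not part of the issue. The warm-up action itself would be the same blockingUnaryCall as in the issue code.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;

// Hedged workaround sketch (not from the issue thread): run one warm-up action
// on every thread of a fixed-size pool before timing or serving real traffic.
public final class PoolWarmUp {

    // warmUpCall would be e.g.
    //   () -> ClientCalls.blockingUnaryCall(channel, METHOD_QUERY, CallOptions.DEFAULT, request)
    // using the identifiers from the issue code above.
    public static void warmUp(ExecutorService pool, int poolSize, Runnable warmUpCall)
            throws InterruptedException {
        CountDownLatch allStarted = new CountDownLatch(poolSize);
        CountDownLatch allDone = new CountDownLatch(poolSize);
        for (int i = 0; i < poolSize; i++) {
            pool.execute(() -> {
                allStarted.countDown();
                try {
                    allStarted.await();   // hold this thread until every pool thread has a task
                    warmUpCall.run();     // the first RPC on this thread pays its warm-up cost here
                } catch (Exception ignored) {
                    // a failed warm-up call is not fatal; real traffic retraces the same path
                } finally {
                    allDone.countDown();
                }
            });
        }
        allDone.await();                  // block until every pooled thread has been warmed
    }
}

Holding each task on its thread until all poolSize tasks have started is what guarantees that every pooled thread, not just the fastest one, runs exactly one warm-up call.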


Top Results From Across the Web

  • Understand and Eliminate JVM Warm-up Overhead in Data
    Specifically, we made the following three major findings. First, JVM warm-up overhead is significant even in I/O intensive workloads. We ...
  • Performance best practices with gRPC - Microsoft Learn (see the sketch after this list)
    A gRPC channel should be reused when making gRPC calls. Reusing a channel allows calls to be multiplexed through an existing HTTP/2 connection....
  • Analyzing and Tuning Warm-up - Azul Docs
    Here the JVM monitors the method to build a profile of how many times each method is called, with which code paths, and...
  • Email Warm Up: IP Warming Strategy & Tools To Use
    Email warm-up is an essential component behind the success of your cold emails. It represents a method of preparing an email address for ...
  • Built for IoT - Thread Group
    The Internet runs on IP. From phones, to routers, to connections across the globe, IP is how devices communicate directly with each other,...
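The channel-reuse advice in the Microsoft Learn result matches what the issue code already does: one ManagedChannel shared by all threads. For completeness, here is a small sketch of that pattern using public grpc-java APIs; the class and method names are illustrative assumptions, and getState(true) only asks the channel to connect eagerly so the TCP/HTTP2 handshake is not charged to the first RPC. It does not address the per-thread cost discussed in this issue.

import io.grpc.ConnectivityState;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

// Hedged sketch of the channel-reuse advice: build one channel, share it
// across all threads and stubs, and request the connection eagerly.
public final class SharedChannel {

    public static ManagedChannel create(String host, int port) {
        ManagedChannel channel = ManagedChannelBuilder.forAddress(host, port)
                .usePlaintext()   // recent grpc-java; older versions use usePlaintext(true) as in the issue code
                .build();
        // getState(true) (grpc-java 1.2+) requests a connection without sending an RPC,
        // so the connection setup cost is not paid by the first real call.
        ConnectivityState state = channel.getState(true);
        System.out.println("channel state after eager connect request: " + state);
        return channel;
    }
}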
