This approach combines a thread pool with Future: n requests are submitted concurrently, and the caller then waits for the slowest one to return, so the total response time equals that of the slowest request. The drawback is that the main request thread still blocks while waiting, and under high concurrency this still creates too many threads and causes heavy CPU context switching.
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Test {
    final static ExecutorService executor = Executors.newFixedThreadPool(2);

    public static void main(String[] args) {
        RpcService rpcService = new RpcService();
        HttpService httpService = new HttpService();
        Future<Map<String, String>> future1 = null;
        Future<Integer> future2 = null;
        try {
            // Submit both calls; they run concurrently on the pool threads
            future1 = executor.submit(() -> rpcService.getRpcResult());
            future2 = executor.submit(() -> httpService.getHttpResult());
            // Block until both results are ready; the total wait is
            // bounded by the slower of the two calls
            Map<String, String> rpcResult = future1.get();
            Integer httpResult = future2.get();
        } catch (Exception e) {
            if (future1 != null) {
                future1.cancel(true);
            }
            if (future2 != null) {
                future2.cancel(true);
            }
            throw new RuntimeException(e);
        } finally {
            executor.shutdown();
        }
    }

    static class RpcService {
        Map<String, String> getRpcResult() throws Exception {
            // Call the remote method, taking about 10 ms
            Thread.sleep(10);
            return new HashMap<>();
        }
    }

    static class HttpService {
        Integer getHttpResult() throws Exception {
            // Call the remote method, taking about 30 ms
            Thread.sleep(30);
            return 0;
        }
    }
}
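To illustrate the claim that the total response time is bounded by the slowest request rather than the sum of all requests, here is a minimal sketch (class name, task count, and the simulated latencies are made up for the demonstration) that submits several simulated calls through ExecutorService.invokeAll and measures the elapsed wall-clock time:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SlowestWins {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        long[] delays = {10, 20, 30}; // hypothetical call latencies in ms
        List<Callable<Long>> tasks = new ArrayList<>();
        for (long d : delays) {
            // Each task simulates a remote call by sleeping
            tasks.add(() -> { Thread.sleep(d); return d; });
        }
        long start = System.nanoTime();
        // invokeAll blocks until every task has finished
        List<Future<Long>> results = pool.invokeAll(tasks);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Elapsed time tracks the slowest task (~30 ms), not the sum (60 ms)
        System.out.println("elapsed = " + elapsedMs + " ms");
        pool.shutdown();
    }
}
```

Because the three tasks run on separate pool threads, the elapsed time stays close to the 30 ms of the slowest task instead of the 60 ms sum, which is exactly the benefit of fanning out the calls concurrently.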