
Asynchronous parallel request processing with Future: a code example

Using a thread pool together with Future, the main request thread still blocks while waiting for results, and under high concurrency this leads to too many threads and excessive CPU context switching. Even so, Future lets us issue n requests concurrently and then wait only for the slowest one to return, so the total response time equals the latency of the slowest request rather than the sum of all of them.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Test {

  final static ExecutorService executor = Executors.newFixedThreadPool(2);

  public static void main(String[] args) {
    RpcService rpcService = new RpcService();
    HttpService httpService = new HttpService();
    Future<Map<String, String>> future1 = null;
    Future<Integer> future2 = null;
    try {
      // Submit both requests concurrently; each runs on its own pool thread
      future1 = executor.submit(() -> rpcService.getRpcResult());
      future2 = executor.submit(() -> httpService.getHttpResult());
      // get() blocks until each result is ready, so the total wait
      // equals the latency of the slowest request, not the sum of both
      Map<String, String> rpcResult = future1.get();
      Integer httpResult = future2.get();
    } catch (Exception e) {
      if (future1 != null) {
        future1.cancel(true);
      }
      if (future2 != null) {
        future2.cancel(true);
      }
      throw new RuntimeException(e);
    } finally {
      executor.shutdown();
    }
  }

  static class RpcService {
    Map<String, String> getRpcResult() throws Exception {
      // Call the remote method, taking about 10ms
      return new HashMap<>();
    }
  }

  static class HttpService {
    Integer getHttpResult() throws Exception {
      // Call the remote method, taking about 30ms
      return 0;
    }
  }
}

Python parallel processing: using all CPU cores to speed things up

Recently I have been using Python to process a public image database. The dataset is large, and processing the images one by one in serial took too long, so I decided to process them in parallel and make full use of the CPUs on the host, which greatly reduces the total processing time.

Here we use the concurrent.futures module, which can run work across multiple processes to achieve true parallel computing.

The core principle is this: concurrent.futures runs multiple Python interpreters in parallel as child processes, which lets a Python program use multiple CPU cores to improve execution speed. Because each child process is a separate interpreter, their global interpreter locks are independent of one another, and each child process can fully occupy one CPU core.
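As a minimal illustration of this principle (using a toy CPU-bound function rather than image processing):

```python
import concurrent.futures


def cpu_bound(n):
    # A CPU-bound task: sum of squares below n.
    # Threads could not run several of these truly in parallel because of
    # the GIL, but separate processes each have their own interpreter and GIL.
    return sum(i * i for i in range(n))


if __name__ == '__main__':
    inputs = [100_000, 200_000, 300_000, 400_000]
    with concurrent.futures.ProcessPoolExecutor() as executor:
        results = list(executor.map(cpu_bound, inputs))
    # map preserves input order, so results line up with inputs
    assert results == [cpu_bound(n) for n in inputs]
```

With four inputs and at least four cores, the four calls run on four cores at once instead of one after another.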

The specific implementation is also very simple; the code is as follows. By default, as many worker processes are started as the host has CPU cores.

import concurrent.futures


def function(file):
    # Process a single file here; executor.map calls this
    # once for each element of the files list
    pass


if __name__ == '__main__':
    files = []  # fill in the list of files you want to process
    with concurrent.futures.ProcessPoolExecutor() as executor:
        executor.map(function, files)
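If you want to see or cap how many worker processes are used, ProcessPoolExecutor accepts a max_workers argument; when it is omitted, the pool defaults to the number returned by os.cpu_count(). A small sketch (work is a placeholder task, not part of the original code):

```python
import os
import concurrent.futures


def work(x):
    # Placeholder per-item task; replace with real file processing
    return x * x


if __name__ == '__main__':
    cores = os.cpu_count()  # e.g. 12 on a 12-core host
    # Passing max_workers explicitly pins the pool size;
    # omitting it gives one worker per CPU core
    with concurrent.futures.ProcessPoolExecutor(max_workers=cores) as executor:
        squares = list(executor.map(work, range(8)))
    print(squares)  # the squares of 0..7, in input order
```

Capping max_workers below the core count is useful when each worker also needs a lot of memory, at the cost of leaving some cores idle.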

After changing to parallel processing, all 12 CPU cores on my machine run at full load, and processing is noticeably faster.