What I want to ask is this: I have a big source code divided into several steps, where each step has a virtual computation time and a virtual amount of communication data (I say virtual because I want to model the latency, and I have managed to measure both throughout the source code). I need to test this by making the code sleep for the computation time and by transferring an amount of data equivalent to the communication data. Can you suggest some configuration models for this? My aim is to minimize the overall execution time of the program, so obviously I want to reduce the overhead the process introduces.
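
For concreteness, here is a minimal sketch (assuming MPI in C) of how one step could be simulated; the `step_t` layout and the helper names are illustrative, not from my actual code:

```c
#include <mpi.h>
#include <stdlib.h>
#include <time.h>

typedef struct {
    long compute_us;   /* virtual computation time in microseconds */
    long comm_bytes;   /* virtual communication data in bytes      */
} step_t;

/* Burn the virtual computation time of one step by sleeping. */
static void simulate_compute(const step_t *s) {
    struct timespec ts = { s->compute_us / 1000000,
                           (s->compute_us % 1000000) * 1000 };
    nanosleep(&ts, NULL);
}

/* Transfer a dummy buffer of the step's virtual data size to `dest`. */
static void simulate_comm(const step_t *s, int dest, MPI_Comm comm) {
    char *buf = calloc(s->comm_bytes, 1);
    MPI_Send(buf, (int)s->comm_bytes, MPI_BYTE, dest, 0, comm);
    free(buf);
}
```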
The simplest ones that come to my mind are:
- Do the computation on all processes and send the virtual data to the root process with asynchronous calls (see the first sketch after this list).
- Do the same with synchronous calls.
- Assume the communication time is linear in the communication data, and use some algorithm to divide the tasks among the processes beforehand (inspired by load balancing; see the partition sketch below).
- Start with the first task on the root process, send its data to the next process, sleep on that process, and so on (a pipeline; see the last sketch below).
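
To make strategies 1 and 2 concrete, here is a sketch of the asynchronous variant, reusing `step_t` and `simulate_compute` from the sketch above; the synchronous variant would be the same loop with `MPI_Send` in place of `MPI_Isend`, so the sender stalls for each transfer. The root's matching receive loop is omitted:

```c
#define NSTEPS 4  /* hypothetical number of steps */

/* Strategy 1: simulate each step's compute, then hand the virtual data
 * to the root with a non-blocking send, so the next sleep overlaps the
 * transfer. Called on non-root ranks; the root posts matching receives. */
void run_async(step_t steps[], MPI_Comm comm) {
    MPI_Request reqs[NSTEPS];
    char *bufs[NSTEPS];
    for (int i = 0; i < NSTEPS; ++i) {
        simulate_compute(&steps[i]);                /* virtual work */
        bufs[i] = calloc(steps[i].comm_bytes, 1);
        MPI_Isend(bufs[i], (int)steps[i].comm_bytes, MPI_BYTE,
                  0 /* root */, i /* tag = step */, comm, &reqs[i]);
    }
    MPI_Waitall(NSTEPS, reqs, MPI_STATUSES_IGNORE); /* drain the sends */
    for (int i = 0; i < NSTEPS; ++i) free(bufs[i]);
}
```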
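For strategy 3, one possible static partition under the assumed linear cost model is a greedy pass that always gives the next step to the least-loaded rank; `alpha` (the cost per byte) is a hypothetical constant I would calibrate on the network:

```c
/* Strategy 3 (sketch): assign each step to the currently least-loaded
 * rank, scoring a step as compute_us + alpha * comm_bytes. */
void assign_owner(const step_t steps[], int nsteps, int nranks,
                  double alpha, int owner[]) {
    double *load = calloc(nranks, sizeof *load);
    for (int i = 0; i < nsteps; ++i) {
        int best = 0;
        for (int r = 1; r < nranks; ++r)
            if (load[r] < load[best]) best = r;
        owner[i] = best;
        load[best] += steps[i].compute_us + alpha * steps[i].comm_bytes;
    }
    free(load);
}
```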
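And for strategy 4, a pipeline sketch in which rank `r` owns step `r` (assuming one step per rank): each rank receives the previous stage's virtual data, sleeps for its own compute time, and forwards its output downstream:

```c
/* Strategy 4 (sketch): a pipeline where stage r = rank r. */
void run_pipeline(const step_t steps[], int rank, int size, MPI_Comm comm) {
    if (rank > 0) {
        /* receive the virtual data produced by the previous stage */
        long n = steps[rank - 1].comm_bytes;
        char *in = calloc(n, 1);
        MPI_Recv(in, (int)n, MPI_BYTE, rank - 1, 0, comm, MPI_STATUS_IGNORE);
        free(in);
    }
    simulate_compute(&steps[rank]);   /* this stage's virtual work */
    if (rank < size - 1) {
        /* forward this stage's virtual output to the next stage */
        long n = steps[rank].comm_bytes;
        char *out = calloc(n, 1);
        MPI_Send(out, (int)n, MPI_BYTE, rank + 1, 0, comm);
        free(out);
    }
}
```

(I expect the pipeline only pays off when several inputs stream through it so the stages overlap; for a single run it just serializes the steps.)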
Can you please give me some ideas, or verify whether the choice of strategy makes a big difference?