For parallel algorithms, the execution time consists of two parts: the computation time and the communication time. The execution time reaches a minimum when these two are balanced. This paper proposes a strategy to reduce the execution time when it is dominated by the communication time: use fewer processors to decrease the communication time and thereby reduce the execution time. The authors have successfully reduced the time complexities of semigroup computations. More precisely, any parallel algorithm performing semigroup computations on N data items can be improved if it uses N processors (each holding one data item) and has time complexity O(N^q log^r N), q ≥ 0 an...
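The computation/communication balance this abstract describes can be illustrated with a toy cost model. The linear terms `a*n/p` (computation shrinks with more processors) and `b*p` (communication grows with more processors) are illustrative assumptions, not the paper's formulas:

```python
def execution_time(n, p, a=1.0, b=1.0):
    """Toy model: computation time a*n/p plus communication time b*p.
    (Assumed linear costs for illustration only.)"""
    return a * n / p + b * p

# Sweep processor counts; the minimum lies where the two terms balance.
n = 1024
best_p = min(range(1, n + 1), key=lambda p: execution_time(n, p))
# Analytically, a*n/p = b*p gives p = sqrt(a*n/b), i.e. 32 here.
print(best_p, execution_time(n, best_p))
```

Under this model, using more than 32 processors makes the run slower, which is the paper's point: fewer processors can reduce total execution time when communication dominates.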
Abstract. Megiddo introduced a technique for using a parallel algorithm for one problem to construct...
By restricting weight functions to satisfy the quadrangle inequality or the inverse quadrangle inequ...
Many parallel applications from scientific computing use MPI collective communication operations to ...
Abstract. Suppose we have a completely-connected network of random-access machines which communicate b...
Two-dimensional mesh-connected computers with multiple broadcasting (2-MCCMBs) are studi...
Semigroup and prefix computations on two-dimensional mesh-connected computers with multi...
Consider a network of processor elements arranged in a d-dimensional grid, where each processor can ...
We discuss how to design parallel algorithms based upon the divide-and-conquer strategy....
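The divide-and-conquer strategy mentioned in this abstract can be sketched as a split–solve–combine reduction. The chunking scheme and use of `ThreadPoolExecutor` below are illustrative assumptions, not the paper's algorithm:

```python
from concurrent.futures import ThreadPoolExecutor

def dc_sum(data, workers=4):
    """Divide-and-conquer sum: split the input into chunks (divide),
    reduce each chunk in parallel (conquer), then combine the
    partial results. Illustrative sketch only."""
    chunk = max(1, len(data) // workers)
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        partials = list(ex.map(sum, parts))
    return sum(partials)

print(dc_sum(list(range(100))))  # 4950
```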
We study the effect of limited communication throughput on parallel computation in a setting where t...
Abstract. This paper outlines a theory of parallel algorithms that emphasizes two crucial aspects of p...
Parallelizing large-sized problems on parallel systems has always been a challenge for programmers. Th...
Recent advances in microelectronics have brought closer to feasibility the construction of computer...
The parallelism within an algorithm at any stage of execution can be defined as the number of indepe...
In this paper, we consider a parallel algorithm for patience sorting. The problem is not known t...
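For reference, sequential patience sorting can be sketched as follows; this is the standard pile-based formulation (the paper's parallel algorithm is not reproduced here):

```python
import bisect

def patience_sort_piles(seq):
    """Greedy patience sorting: place each card on the leftmost pile
    whose top is >= the card; otherwise start a new pile.
    The number of piles equals the length of a longest strictly
    increasing subsequence of seq."""
    tops = []   # top card of each pile, kept in sorted order
    piles = []  # full pile contents, for inspection
    for x in seq:
        i = bisect.bisect_left(tops, x)
        if i == len(tops):
            tops.append(x)   # no pile can take x: start a new one
            piles.append([x])
        else:
            tops[i] = x      # place x on the leftmost eligible pile
            piles[i].append(x)
    return piles

piles = patience_sort_piles([3, 1, 4, 1, 5, 9, 2, 6])
print(len(piles))  # 4 piles, matching the LIS 3, 4, 5, 9
```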