Abstract. We describe the design and use of Distributed Maple, an environment for executing parallel computer algebra programs on multiprocessors and heterogeneous clusters. The system embeds kernels of the computer algebra system Maple as computational engines into a networked coordination layer implemented in the programming language Java. On the basis of a comparatively high-level programming model, one may write parallel Maple programs that show good speedups in medium-scaled environments. We report on the use of the system for the parallelization of various functions of the algebraic geometry library CASA and demonstrate how design decisions affect the dynamic behaviour and performance of a parallel application. Numerous experimental res...
This paper describes the design and development of a Java Distributed Computation Library, which pr...
The growing processing power of standard workstations, along with the relatively easy way in which t...
This paper demonstrates that it is possible to obtain good, scalable parallel performance by coordin...
We ported the computer algebra system Maple V to the Intel Paragon, a massively parallel, distribute...
The Maple computer algebra system is described. Brief sample sessions show the user syntax and the m...
Parallel performance optimization is being applied and further improvements are studied for parallel...
of some characteristics of software for parallel computer algebra. SBSH means Sugarbush. PCLBSTM m...
International audienceIn this paper, we focus on a distributed and parallel programming paradigm for...
This paper reports three phases of development of a Java-based distributed system for the implement...
This paper discusses the design of linear algebra libraries for high performance computers. Particul...
Parallel or distributed processing is key to getting highest performance workstations. However, desi...
In symbolic computation on computers, also known as computer algebra, keyboard and display replace t...