We have designed and implemented an asynchronous data-parallel scheduler for the SML/NJ ML compiler. Using this general scheduler we built a data-parallel module that provides new operators for manipulating sequences (i.e., arrays and vectors) in parallel. Parallelization concerns such as thread creation and synchronization are hidden from the application programmer by ML's module abstraction. We find that languages with modules, higher-order functions, and automatic parallel storage management can, in this manner, seamlessly support data-parallel operators. An implementation of applications using the new sequence module on an eight-processor shared-memory machine indicates that in some cases useful speedup is possible with our approach.
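For illustration only, a data-parallel sequence module of the kind described above might expose an interface along the following lines. This is a minimal sketch in Standard ML; the signature name PAR_SEQ and the operators tabulate, map, and reduce are assumptions for this example, not the paper's actual interface.

(* Hypothetical signature for a data-parallel sequence module.
 * The operator names are illustrative; the module described in the
 * paper may differ. Thread creation and synchronization would be
 * hidden inside the structure implementing this signature. *)
signature PAR_SEQ =
sig
  type 'a seq

  (* Build a sequence of length n by applying f to each index,
     potentially evaluating the applications in parallel. *)
  val tabulate : int * (int -> 'a) -> 'a seq

  (* Apply f to every element, potentially in parallel. *)
  val map : ('a -> 'b) -> 'a seq -> 'b seq

  (* Combine elements with an associative operator and identity
     element; the scheduler decides how work is partitioned. *)
  val reduce : ('a * 'a -> 'a) * 'a -> 'a seq -> 'a
end

Because the parallelism lives entirely behind the signature, client code written against such an interface looks like ordinary sequential ML.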