As a widely used programming model for processing large data sets, MapReduce (MR) has become indispensable in data clusters and grids, e.g. a Hadoop environment. However, experienced programmers are needed to decide the number of reducers used during the reduce phase of MR, which causes the quality of MR scripts to vary. In this paper, an extreme learning machine (ELM) is employed to recommend the number of reducers a mapped task needs. Execution time is also predicted so that users can better schedule their tasks. According to the results, our method provides faster prediction than SVM while maintaining similar accuracy.
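The abstract above describes recommending a reducer count (and predicting execution time) with an extreme learning machine. Below is a minimal ELM regression sketch in Python/NumPy showing the general technique, random hidden-layer weights combined with a least-squares output layer; the job features, target values, and hidden-layer size here are illustrative assumptions, not the paper's actual setup.

import numpy as np

# Minimal ELM regression sketch (illustrative; feature and target values are made up).
# X rows: hypothetical MapReduce job features, e.g. [input size in GB, map task count, avg record size in KB].
# y: targets, e.g. a reducer count (or execution time) observed to work well for each job.

def elm_train(X, y, n_hidden=20, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, never trained (ELM)
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Hypothetical usage: recommend a reducer count for a new job profile.
X_train = np.array([[10.0, 40, 1.2], [50.0, 200, 0.8], [120.0, 480, 2.0]])
y_train = np.array([4.0, 16.0, 32.0])
W, b, beta = elm_train(X_train, y_train)
print(int(round(float(elm_predict(np.array([[80.0, 320, 1.5]]), W, b, beta)[0]))))

Because the hidden layer is fixed at random and only the linear output weights are solved for, training reduces to a single pseudoinverse computation, which is what gives ELM its speed advantage over iterative training of an SVM.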
Map Reduce is the preferred computing framework used in large data analysis and processing applicati...
Mobile cloud computing offers an augmented infrastructure that allows resource-constrained devices t...
In order to solve the problem of how to improve the scalability of data processing capabili...
Big data and its analysis are the focus of the current era. The volume of data production is tremendo...
A heterogeneous cloud system, for example, a Hadoop 2.6.0 platform, provides distributed but cohesiv...
Today, we are living in a data-exploding era, in which the volume of data is expanding in an unbelie...
Nowadays MapReduce and its open source implementation, Apache Hadoop, are the most widespread soluti...
Cloud computing has become a feasible mainstream solution for data processing, storage and...
Master's thesis in information and communication technology, IKT590 2011 – Universitetet i Agder, Grims...
This paper presents a novel machine learning algorithm with an improved accuracy and a faster learni...
Traditional resource management techniques that rely on simple heuristics often fail to achieve pred...
© Springer International Publishing AG 2016. Regression is one of the most basic problems in machine...
Nowadays, analyzing large amount of data is of paramount importance for many companies. Big data and...
We are entering a Big Data world. Many sectors of our economy are now guided by data-driven decision...
A large volume of datasets is available in various fields that are stored somewhere which is c...