Amazon has launched another cloud computing service, this time one that automates Hadoop implementations.

Called Amazon Elastic MapReduce, the cloud computing service is aimed at developers whose applications need to crunch large amounts of data, for which Hadoop is ideally suited.

With Amazon Elastic MapReduce, many Hadoop-related tasks that developers would otherwise need to handle manually are automated, the company's Amazon Web Services (AWS) cloud computing division said in an official blog post.

"Using Elastic MapReduce, you can create, run, monitor, and control Hadoop jobs with point-and-click ease. You don't have to go out and buys scads of hardware. You don't have to rack it, network it, or administer it. You don't have to worry about running out of resources or sharing them with other members of your organisation. You don't have to monitor it, tune it, or spend time upgrading the system or application software on it," the blog posting reads.

AWS decided to create this service after learning that customers were already running Hadoop jobs on the Amazon Elastic Compute Cloud (EC2) service, which provides hosted computing capacity. Because Hadoop is becoming increasingly popular, Amazon aims to make it easier for other developers to take advantage of this open source implementation of MapReduce.

Elastic MapReduce works in conjunction with EC2 and the Amazon Simple Storage Service (S3) hosted storage cloud service. "Elastic MapReduce automatically spins up a Hadoop implementation of the MapReduce framework on Amazon EC2 instances, sub-dividing the data in a job flow into smaller chunks so that they can be processed - the 'map' function - in parallel, and eventually recombining the processed data into the final solution - the 'reduce' function. Amazon S3 serves as the source for the data being analysed, and as the output destination for the end results," according to a separate description of the service.
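The "map" and "reduce" functions themselves are ordinary programs supplied by the developer. As a rough illustration of the kind of code a job flow runs, here is the classic word-count job written as a pair of Hadoop Streaming scripts in Python; Streaming is one way to write Hadoop jobs, and the file names are illustrative only.

```python
#!/usr/bin/env python
# mapper.py: read lines from standard input and emit "word<TAB>1"
# for every whitespace-separated word.
import sys

for line in sys.stdin:
    for word in line.split():
        print("%s\t%d" % (word, 1))
```

```python
#!/usr/bin/env python
# reducer.py: Hadoop sorts the mapper output by key, so all counts for
# a given word arrive together; sum them and emit one line per word.
import sys

current_word = None
current_count = 0

for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word = word
        current_count = int(count)

if current_word is not None:
    print("%s\t%d" % (current_word, current_count))
```

In an Elastic MapReduce job flow, the input path for the mapper and the output path for the reducer would both point at S3 locations, matching the description above of S3 as the data source and the destination for end results.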

As with other AWS cloud services, Amazon charges for Elastic MapReduce based on usage, with no minimum fee.