Pig and Hadoop support in Amazon Elastic MapReduce
Written by Alex Denham
Tuesday, 13 December 2011
Amazon has announced support for running job flows using Hadoop 0.20.205 and Pig 0.9.1 in Amazon Elastic MapReduce.
Elastic MapReduce is a web service that you can use to process large amounts of data. It makes use of a hosted Hadoop framework running on the web-scale infrastructure of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3).

The Apache Hadoop software library is a framework for the distributed processing of large data sets across clusters of computers using a simple programming model. Apache Pig is an open source analytics package that runs on top of Hadoop. With Pig you write your queries in a SQL-like language called Pig Latin, giving your users the means to summarize and query data sources stored in Amazon S3. Pig Latin also supports map/reduce functions and complex, extensible user-defined data types, so you can create queries that work on complex and unstructured data sources such as text documents.

In addition to the support for the new versions of Hadoop and Pig, Amazon has added support for running job flows in an Amazon Virtual Private Cloud (Amazon VPC). This overcomes potential security worries if you need to process sensitive data or access resources on your internal network. See Running Job Flows on an Amazon VPC for more information on this.
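To give a flavour of Pig Latin, here is a minimal sketch of the kind of query described above: summarizing data stored in Amazon S3. The bucket name, input file and field layout are invented for illustration, not taken from the announcement:

```pig
-- Hypothetical example: the S3 paths and schema below are assumptions.
-- Load tab-separated log records from S3, declaring a schema.
raw = LOAD 's3://my-bucket/logs/access.log' USING PigStorage('\t')
      AS (user:chararray, url:chararray, bytes:long);

-- Group the records by user.
by_user = GROUP raw BY user;

-- Sum the bytes transferred per user.
totals = FOREACH by_user GENERATE group AS user, SUM(raw.bytes) AS total_bytes;

-- Write the summary back to S3.
STORE totals INTO 's3://my-bucket/output/user-totals';
```

On Elastic MapReduce a script like this would be submitted as a Pig step in a job flow, with Hadoop handling the underlying map/reduce execution across the cluster.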
Last Updated ( Tuesday, 13 December 2011 )