Getting Started with Hadoop and Map Reduce

Have you been wanting to learn Hadoop, but have no idea how to get started? Carlo Scarioni has a basic Hadoop tutorial that covers installing Hadoop, setting up the Hadoop Distributed File System (HDFS), moving files into HDFS, and creating a simple Hadoop application. The tutorial also introduces the basic concepts of Map Reduce.
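To give a flavor of the "moving files into HDFS" step, here's a minimal Java sketch using Hadoop's FileSystem API. The file paths are placeholders, and the tutorial may well use the hadoop fs command-line tools instead:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Copies a local file into HDFS using Hadoop's FileSystem API.
// Both paths below are placeholders; point them at your own files.
public class HdfsCopy {
  public static void main(String[] args) throws Exception {
    // Picks up the cluster address from core-site.xml on the classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    fs.copyFromLocalFile(new Path("/tmp/input.txt"),
                         new Path("/user/hadoop/input/input.txt"));
    fs.close();
  }
}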

It doesn’t, however, get into distributing the application, which is the main point of using Hadoop in the first place. Scarioni leaves that to a future tutorial. But if you want to get your feet wet with Hadoop and/or Map Reduce, this seems like a pretty good place to start.


Scarioni also gives us a pretty concise explanation of what Hadoop is:

Hadoop is an open source project for processing large datasets in parallel on low-cost commodity machines.

Hadoop is built on two main parts: a special file system called the Hadoop Distributed File System (HDFS), and the Map Reduce framework.

HDFS is a file system optimized for distributed processing of very large datasets on commodity hardware.

The Map Reduce framework processes the data in two main phases: the Map phase and the Reduce phase.
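To make the two phases concrete, here is a sketch of the canonical word-count job in Java, written against the newer org.apache.hadoop.mapreduce API (the tutorial itself may use the older org.apache.hadoop.mapred interfaces). The Map phase emits a (word, 1) pair for every word it sees; the Reduce phase sums the counts for each word:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every word in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Packaged into a jar, this runs with something like hadoop jar wordcount.jar WordCount <input dir> <output dir>, where the input and output directories live in HDFS.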

See also: The Rise of the Data Scientist.
