What you’ll learn

  • Commissioning & Decommissioning in Hadoop
  • How NameNodes Work
  • Setting up the Environment on different operating systems
  • Basics to Start with an Introduction


Hadoop is an open-source framework that allows you to store and process big data in a distributed environment across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.

This course provides a quick introduction to Big Data, the MapReduce algorithm, and the Hadoop Distributed File System (HDFS).
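To give a feel for the "simple programming model" mentioned above, here is a minimal sketch of the MapReduce idea in plain Python. It is not the Hadoop API — just an illustration of the map → shuffle → reduce flow, using the classic word-count example.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in the input.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key (the word).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts collected for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big ideas", "data drives decisions"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)
# {'big': 2, 'data': 2, 'ideas': 1, 'drives': 1, 'decisions': 1}
```

In real Hadoop the map and reduce functions run in parallel across many machines, and the framework handles the shuffle, fault tolerance, and data locality for you.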

What Comes Under Big Data?

Big data involves the data produced by different devices and applications. Given below are some of the fields that come under the umbrella of Big Data.

  • Black Box Data − It is a component of helicopters, airplanes, jets, etc. It captures the voices of the flight crew, recordings from microphones and earphones, and the performance information of the aircraft.
  • Social Media Data − Social media such as Facebook and Twitter hold information and the views posted by millions of people across the globe.
  • Stock Exchange Data − The stock exchange data holds information about the ‘buy’ and ‘sell’ decisions made by customers on shares of different companies.
  • Power Grid Data − The power grid data holds information about the power consumed by a particular node with respect to a base station.
  • Transport Data − Transport data includes the model, capacity, distance, and availability of a vehicle.
  • Search Engine Data − Search engines retrieve lots of data from different databases.

This course has been prepared for professionals aspiring to learn the basics of Big Data Analytics using Hadoop Framework and become a Hadoop Developer. Software Professionals, Analytics Professionals, and ETL developers are the key beneficiaries of this course.

Before proceeding with this course, we assume that you have prior exposure to Core Java, database concepts, and any flavor of the Linux operating system.