HADOOP (ADMIN & DEVELOPMENT)


APACHE HADOOP is an open-source software framework for the storage and large-scale processing of data sets on clusters of commodity hardware. HADOOP is an Apache top-level project built and used by a global community of contributors and users.

All the modules in HADOOP are designed with the fundamental assumption that hardware failures (of individual machines or whole racks of machines) are common and should therefore be handled automatically in software by the framework. Apache HADOOP's MapReduce and HDFS components were originally derived from Google's MapReduce and Google File System (GFS) papers, respectively.

HISTORY OF HADOOP:

Hadoop was created by DOUG CUTTING and MIKE CAFARELLA in 2005. Cutting, who was working at YAHOO! at the time, named it after his son’s toy elephant. It was originally developed to support distribution for the NUTCH search engine project.

 

OUR FACULTY PROFILE FOR HADOOP:

Veer A Nagaraju

Mr. Nagaraju has over 19 years of diversified IT experience in the areas of Project/Program Management, Service Management, Application Development and Maintenance, ETL & Data Warehousing, and Education/Training.

He is a seasoned trainer in Hadoop, .NET and Oracle technologies. He has also conducted many workshops on Project Management topics such as PMP certification and Microsoft Office Project. He has more than 4 years of teaching experience, including two years on Hadoop technology. So far he has trained 400+ professionals on Hadoop through classroom, online and corporate trainings. He provides mentoring and consulting services on Hadoop for individuals and companies.

He has handled large, medium and small mission-critical projects in various domains such as Public Transportation, Finance, Banking/Credit Card Processing, e-Commerce, Content Management, and Healthcare & Health Insurance. He has served world-class clients and delivered solutions on Hadoop, Java, PHP, ASP, Oracle, SQL Server, Sybase, MySQL, ETL and Data Warehousing.

Nagaraju worked in the USA for 9 years and served Fortune 500 companies such as American Express, ProgressRail Services, the Indiana State Department of Health, and various other medium and small companies.

 

HADOOP ADMINISTRATION AND DEVELOPMENT

TRAINING CURRICULUM

1.       Introduction

1.1   Big Data Introduction

  • What is Big Data?
  • Data Analytics
  • Big Data Challenges
  • Technologies supported by Big Data

 

1.2   Hadoop Introduction

  • What is Hadoop?
  • History of Hadoop
  • Basic Concepts
  • Future of Hadoop
  • The Hadoop Distributed File System
  • Anatomy of a Hadoop Cluster
  • Breakthroughs of Hadoop
  • Hadoop Distributions:
    • Apache Hadoop
    • Cloudera Hadoop
    • Hortonworks Hadoop
    • MapR Hadoop

 

2.       Hadoop Daemon Processes

  • Name Node
  • Data Node
  • Secondary Name Node
  • Job Tracker
  • Task Tracker

 

3.       HDFS (Hadoop Distributed File System)

  • Blocks and Input Splits
  • Data Replication
  • Hadoop Rack Awareness
  • Cluster Architecture and Block Placement
  • Accessing HDFS
    • JAVA Approach
    • CLI Approach
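
A minimal sketch of the Java approach above, using the Hadoop FileSystem API to read a file from HDFS (the path /user/training/sample.txt is only a placeholder):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsCat {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();           // picks up core-site.xml from the classpath
            FileSystem fs = FileSystem.get(conf);                // handle to the configured file system (HDFS)
            Path file = new Path("/user/training/sample.txt");   // placeholder HDFS path
            try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(fs.open(file)))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);                    // print each line of the HDFS file
                }
            }
        }
    }

The CLI approach reads the same file with hdfs dfs -cat /user/training/sample.txt (or hadoop fs -cat on older releases).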

     

 4.       Hadoop Installation Modes and HDFS

  • Local Mode
  • Pseudo-distributed Mode
  • Fully distributed mode
  • Pseudo Mode installation and configurations
  • HDFS basic file operations
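
Typical basic file operations, once the pseudo-distributed cluster is running, look like the following (directory and file names are placeholders; older releases use hadoop fs instead of hdfs dfs):

    hdfs dfs -mkdir -p /user/training/input                      # create a directory in HDFS
    hdfs dfs -put localfile.txt /user/training/input              # copy a local file into HDFS
    hdfs dfs -ls /user/training/input                             # list the directory
    hdfs dfs -cat /user/training/input/localfile.txt              # view the file contents
    hdfs dfs -get /user/training/input/localfile.txt copy.txt     # copy it back to the local file system
    hdfs dfs -rm /user/training/input/localfile.txt               # delete the file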

 5.       Hadoop Developer Tasks

5.1   Writing a MapReduce Program

 

  • Basic API Concepts
  • The Driver Class
  • The Mapper Class
  • The Reducer Class
  • The Combiner Class
  • The Partitioner Class
  • Examining a Sample MapReduce Program with several examples
  • Hadoop’s Streaming API
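
To give a feel for the Driver, Mapper, Reducer and Combiner classes listed above, here is a minimal word-count sketch (class names and paths are placeholders, not the exact programs used in class):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Mapper: emits (word, 1) for every word in its input split
        public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        // Reducer (also usable as a Combiner): sums the counts for each word
        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        // Driver: configures and submits the job
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class);              // combiner aggregates map output locally
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));     // HDFS input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1]));   // HDFS output directory (must not exist)
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }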

 

5.2   Performing several Hadoop jobs

 

  • Sequence Files
  • Record Reader
  • Record Writer
  • Role of Reporter
  • Output Collector
  • Processing XML files
  • Counters
  • Directly Accessing HDFS
  • ToolRunner
  • Using The Distributed Cache
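
As one small illustration of ToolRunner from the list above (the class name MyJobDriver is hypothetical), a driver written against the Tool interface lets generic options such as -D, -files and -libjars be supplied on the command line:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class MyJobDriver extends Configured implements Tool {
        @Override
        public int run(String[] args) throws Exception {
            // getConf() already contains any -D options parsed by ToolRunner
            Job job = Job.getInstance(getConf(), "my job");   // mapper/reducer setup omitted in this sketch
            return job.waitForCompletion(true) ? 0 : 1;
        }

        public static void main(String[] args) throws Exception {
            System.exit(ToolRunner.run(new Configuration(), new MyJobDriver(), args));
        }
    }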

 

5.3   Advanced MapReduce Programming

 

  • A Recap of the MapReduce Flow
  • The Secondary Sort
  • Customized Input Formats and Output Formats
  • Map-Side Joins
  • Reduce-Side Joins

 

5.4   Monitoring and debugging on a Production Cluster

 

  • Counters
  • Skipping Bad Records
  • Rerunning failed tasks with Isolation Runner

 

5.5   Tuning for Performance in MapReduce

 

  • Reducing network traffic with the Combiner and Partitioner classes
  • Reducing the amount of input data using compression
  • Reusing the JVM
  • Running with speculative execution
  • Input Formats
  • Output Formats
  • Schedulers:
    • FIFO Scheduler
    • Fair Scheduler
    • Capacity Scheduler
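
With a ToolRunner-based driver, several of the tunings above can be switched on from the command line; the property names below are the Hadoop 2.x names (older releases use the mapred.* equivalents), and the jar, driver class and paths are placeholders:

    # compress intermediate map output and enable speculative execution of slow map tasks
    hadoop jar wordcount.jar MyJobDriver \
      -D mapreduce.map.output.compress=true \
      -D mapreduce.map.speculative=true \
      /user/training/input /user/training/output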

 

5.6   Debugging MapReduce Programs

 

  • Testing with MRUnit
  • Logging
  • Other Debugging Strategies
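
A small MRUnit test for the word-count mapper sketched earlier might look like this (assuming the MRUnit and JUnit jars are on the classpath; class names are placeholders):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.mrunit.mapreduce.MapDriver;
    import org.junit.Test;

    public class TokenMapperTest {
        @Test
        public void emitsOnePerWord() throws Exception {
            MapDriver<LongWritable, Text, Text, IntWritable> driver =
                MapDriver.newMapDriver(new WordCount.TokenMapper());   // mapper from the earlier sketch
            driver.withInput(new LongWritable(0), new Text("hadoop hadoop"))
                  .withOutput(new Text("hadoop"), new IntWritable(1))  // expected output, in order
                  .withOutput(new Text("hadoop"), new IntWritable(1))
                  .runTest();
        }
    }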

 

 6.       Hadoop Ecosystems

6.1   PIG

  • PIG concepts
  • Install and configure PIG on a cluster
  • PIG vs MapReduce and SQL
  • PIG vs HIVE
  • Write sample PIG Latin scripts
  • Modes of running PIG
  • Programming in Eclipse
  • Running as Java program
  • PIG UDFs
  • PIG Macros
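
A sample Pig Latin script of the kind written in this module (the input path is a placeholder); it can be run with pig script.pig, or pig -x local script.pig in local mode:

    -- word count in Pig Latin
    lines  = LOAD '/user/training/input' AS (line:chararray);
    words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
    grpd   = GROUP words BY word;
    counts = FOREACH grpd GENERATE group AS word, COUNT(words) AS cnt;
    DUMP counts;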

 

6.2   HIVE

 

  • Hive concepts
  • Hive architecture
  • Installing and configuring HIVE
  • Managed tables and external tables
  • Partitioned tables
  • Bucketed tables
  • Joins in HIVE
  • Multiple ways of inserting data in HIVE tables
  • CTAS, views, alter tables
  • User defined functions in HIVE
  • Hive UDF
  • Hive UDAF
  • Hive UDTF
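
A short HiveQL illustration of the managed, partitioned tables and data loading listed above (table name, columns and paths are placeholders):

    -- managed table partitioned by date
    CREATE TABLE page_views (
      user_id STRING,
      url     STRING
    )
    PARTITIONED BY (view_date STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

    -- move a file already in HDFS into one partition
    LOAD DATA INPATH '/user/training/views_20140101.tsv'
    INTO TABLE page_views PARTITION (view_date = '2014-01-01');

    -- query a single partition
    SELECT url, COUNT(*) AS hits
    FROM page_views
    WHERE view_date = '2014-01-01'
    GROUP BY url;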

 

6.3   SQOOP

 

  • SQOOP concepts
  • SQOOP architecture
  • Install and configure SQOOP
  • Connecting to RDBMS
  • Internal mechanism of import/export
  • Import data from Oracle/MySQL to HIVE
  • Export data to Oracle/MySQL
  • Other SQOOP commands
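
Sample SQOOP commands for the import/export topics above (the JDBC URL, credentials and table names are placeholders):

    # import a MySQL table into a Hive table, using 4 parallel map tasks
    sqoop import \
      --connect jdbc:mysql://dbhost/salesdb \
      --username training -P \
      --table customers \
      --hive-import --hive-table customers \
      -m 4

    # export an HDFS directory back into a MySQL table
    sqoop export \
      --connect jdbc:mysql://dbhost/salesdb \
      --username training -P \
      --table customer_summary \
      --export-dir /user/hive/warehouse/customer_summary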

 

6.4   HBASE

 

  • HBASE concepts
  • ZOOKEEPER concepts
  • HBASE and Region server architecture
  • File storage architecture
  • NoSQL vs SQL
  • Defining Schema and basic operations
  • DDLs
  • DMLs
  • HBASE use cases
  • Access data stored in HBASE using clients like the CLI and Java
  • MapReduce client to access the HBASE data
  • HBASE admin tasks
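
Basic DDL and DML operations from the HBase shell, as covered above (table, column family and values are placeholders):

    create 'users', 'info'                        # table 'users' with one column family 'info'
    put 'users', 'row1', 'info:name', 'Raju'      # insert a cell
    get 'users', 'row1'                           # read a single row
    scan 'users'                                  # scan the whole table
    disable 'users'                               # a table must be disabled before it is dropped
    drop 'users'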

 

6.5   OOZIE

 

  • OOZIE concepts
  • OOZIE architecture
  • Workflow engine
  • Job coordinator
  • Install and configure OOZIE
  • HPDL and XML for creating workflows
  • Nodes in OOZIE:
    • Action nodes
    • Control nodes
  • Accessing OOZIE jobs through the CLI and web console
  • Develop sample workflows in OOZIE on various Hadoop distributions:
    • Run HDFS file operations
    • Run MapReduce programs
    • Run PIG scripts
    • Run HIVE jobs
    • Run SQOOP imports/exports
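
A skeleton OOZIE workflow in HPDL/XML that runs a PIG script, of the kind developed in this module (the workflow name, script name and ${...} properties are illustrative placeholders):

    <workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.4">
        <start to="pig-node"/>
        <action name="pig-node">
            <pig>
                <job-tracker>${jobTracker}</job-tracker>
                <name-node>${nameNode}</name-node>
                <script>wordcount.pig</script>
            </pig>
            <ok to="end"/>
            <error to="fail"/>
        </action>
        <kill name="fail">
            <message>PIG action failed</message>
        </kill>
        <end name="end"/>
    </workflow-app>

The workflow is typically submitted from the CLI with oozie job -oozie http://<oozie-host>:11000/oozie -config job.properties -run.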

 

 

6.6   FLUME

 

  • FLUME Concepts
  • FLUME architecture
  • Installation and configurations
  • Executing FLUME jobs
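
A minimal single-agent FLUME configuration of the kind used in the exercises (the agent, source, channel and sink names are placeholders): a netcat source feeding a memory channel and a logger sink.

    # example.conf: netcat source -> memory channel -> logger sink
    agent1.sources  = src1
    agent1.channels = ch1
    agent1.sinks    = snk1

    agent1.sources.src1.type     = netcat
    agent1.sources.src1.bind     = localhost
    agent1.sources.src1.port     = 44444
    agent1.sources.src1.channels = ch1

    agent1.channels.ch1.type = memory

    agent1.sinks.snk1.type    = logger
    agent1.sinks.snk1.channel = ch1

The agent is started with flume-ng agent --name agent1 --conf conf --conf-file example.conf.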

 

 7.       Integrations

  • MapReduce and HIVE integration
  • MapReduce and HBASE integration
  • Java and HIVE integration
  • HIVE – HBASE Integration

 

8.       Hadoop Administrative Tasks:

       Set up a Hadoop cluster: Apache, Cloudera and VMware

  • Install and configure Apache Hadoop on a multi node cluster
  • Install and configure Cloudera Hadoop distribution in fully distributed mode
  • Install and configure different ecosystems
  • Monitoring the cluster
  • Name Node in Safe mode
  • Meta Data Backup
  • Integrating Kerberos security in Hadoop
  • Ganglia and Nagios – Cluster monitoring
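
A few of the day-to-day administration commands behind the topics above (Hadoop 2.x syntax; the backup directory is a placeholder):

    hdfs dfsadmin -report                 # capacity and health report for every Data Node
    hdfs dfsadmin -safemode get           # check whether the Name Node is in safe mode
    hdfs dfsadmin -safemode leave         # take the Name Node out of safe mode
    hdfs dfsadmin -fetchImage /backups    # download a copy of the fsimage for metadata backup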

 

 9.       Course Deliverables

 

  • Workshop style coaching
  • Interactive approach
  • Course material
  • Hands on practice exercises for each topic
  • Quiz at the end of each major topic
  • Tips and techniques on the Cloudera Certification Examination
  • Linux concepts and basic commands
  • On Demand Services
  • Mock interviews will be conducted for each individual on a need basis
  • SQL basics on a need basis
  • Core Java concepts on a need basis
  • Resume preparation and guidance
  • Interview questions

 

Duration of the classes will be 60-70 hours.
For more information please contact us at +91 9985432343.

 
