Radical Technologies
Aundh:+91 8055223360 | Kharadi: +91 8448448706

HADOOP DEV + SPARK & SCALA

  • Solution to the Big Data problem
  • Open-source technology
  • Based on open-source platforms
  • Contains several tools covering the entire ETL data-processing framework
  • It can process distributed data, with no need to store the entire dataset in centralized storage as SQL-based tools require.

Classroom training batch schedules:

Location Day/Duration Date Time Type
Kharadi Weekend 21/07/2018 02:00 PM New Batch

Event batch schedules:

Location Day/Duration Start Date Price
Pune 2 days 31/03/2018 ₹ 12000.00

Online training batch schedules:

Mode Day/Duration Start Date End Date Price
Online 8 Weeks, 4 Days 30/07/2018 27/09/2018 ₹ 16000.00

Hadoop Developer / Analyst / Spark + Scala / Hadoop (Java + Non-Java) Track

HADOOP DEV + SPARK & SCALA + NoSQL + Splunk + HDFS (Storage) + YARN (Hadoop Processing Framework) + MapReduce using Java (Processing Data) + Apache Hive + Apache Pig + HBase (Real NoSQL) + Sqoop + Flume + Oozie + Kafka with ZooKeeper + Cassandra + MongoDB + Splunk

Best Big Data Hadoop training with 2 real-time projects and a 1 TB dataset

Duration of the training: 8 to 10 weekends

 

Bigdata Hadoop Syllabus

Who is Hadoop for?

IT professionals who want to move into one of the most in-demand technologies, sought by clients across almost all domains for the reasons below:

  •  Hadoop is open source (cheaper, so it saves cost)
  •  Hadoop solves Big Data problems that are very difficult or impossible to solve with expensive commercial tools
  •  It can process distributed data, with no need to store the entire dataset in centralized storage as other tools require
  •  Many existing tools and technologies are seeing job cuts nowadays because clients are moving toward a cheaper and more efficient solution: Hadoop
  •  Industry estimates project almost 4.4 million Big Data jobs in the market in the coming years

Please refer below mentioned links:

http://www.computerworld.com/article/2494662/business-intelligence/hadoop-will-be-in-most-advanced-analytics-products-by-2015–gartner-says.html

Can I Learn Hadoop If I Don't Know Java?

Yes.

It is a big myth that someone who doesn't know Java can't learn Hadoop. The truth is that only the MapReduce framework needs Java; all the other components follow different paradigms: Hive is similar to SQL, HBase is similar to an RDBMS, and Pig is script-based.

Only MapReduce requires Java, and many organizations now also hire for specific skill sets, such as HBase developers or Pig- and Hive-specific roles. Knowing MapReduce as well makes you an all-rounder in Hadoop, ready for any requirement.

Why Hadoop?

  • Solution to the Big Data problem
  • Open-source technology
  • Based on open-source platforms
  • Contains several tools covering the entire ETL data-processing framework
  • It can process distributed data, with no need to store the entire dataset in centralized storage as SQL-based tools require.

 

Training Syllabus

HADOOP DEV + SPARK & SCALA + NoSQL + Splunk + HDFS (Storage) + YARN (Hadoop Processing Framework) + MapReduce using Java (Processing Data) + Apache Hive + Apache Pig + HBase (Real NoSQL) + Sqoop + Flume + Oozie + Kafka with ZooKeeper + Cassandra + MongoDB + Splunk

Big data

Distributed computing

Data management – Industry Challenges

Overview of Big Data

Characteristics of Big Data

Types of data

Sources of Big Data

Big Data examples

What is streaming data?

Batch vs Streaming data processing

Overview of Analytics

Big data Hadoop opportunities

Hadoop                                      

Why we need Hadoop

Data centers and Hadoop Cluster overview

Overview of Hadoop Daemons

Hadoop Cluster and Racks

Learning Linux required for Hadoop

Hadoop ecosystem tools overview

Understanding Hadoop configuration and installation

HDFS (Storage)

HDFS

HDFS Daemons – Namenode, Datanode, Secondary Namenode

Hadoop FS and Processing Environment’s UIs

Fault Tolerance

High Availability

Block Replication

How to read and write files

Hadoop FS shell commands
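The HDFS ideas above (files split into blocks, each block replicated across datanodes) can be sketched in plain Python. This is purely an illustration, not Hadoop code: the tiny block size, the node names, and the round-robin placement are made up for the example, whereas a real cluster uses 128 MB blocks and rack-aware placement.

```python
# Illustrative sketch (not Hadoop code): how HDFS-style storage splits a
# file into fixed-size blocks and assigns each block to several datanodes.

BLOCK_SIZE = 4          # bytes, tiny on purpose; HDFS defaults to 128 MB
REPLICATION = 3
DATANODES = ["dn1", "dn2", "dn3", "dn4"]

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Chop the file into fixed-size blocks, as the HDFS client does."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_replicas(num_blocks: int, nodes=DATANODES, replication=REPLICATION):
    """Assign each block to `replication` distinct datanodes (round-robin
    here; real HDFS places one replica locally and two on another rack)."""
    placement = {}
    for b in range(num_blocks):
        placement[b] = [nodes[(b + r) % len(nodes)] for r in range(replication)]
    return placement

blocks = split_into_blocks(b"hello hdfs world")
placement = place_replicas(len(blocks))
print(len(blocks), placement[0])
```

Because every block lives on three distinct nodes, losing any single datanode never loses data, which is the fault-tolerance property listed above.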

YARN (Hadoop Processing Framework)

YARN

YARN Daemons – Resource Manager, NodeManager etc.

Job assignment & Execution flow

MapReduce using Java (Processing Data)

Introduction to MapReduce

MapReduce Architecture

Data flow in MapReduce

Understand Difference Between Block and InputSplit

Role of RecordReader

Basic Configuration of MapReduce

MapReduce life cycle

How MapReduce Works

Writing and Executing the Basic MapReduce Program using Java

Submission & Initialization of MapReduce Job.

File Input/Output Formats in MapReduce Jobs

Text Input Format

Key Value Input Format

Sequence File Input Format

NLine Input Format

Joins

Map-side Joins

Reducer-side Joins
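The reducer-side join listed above can be sketched in plain Python: mappers tag each record with its source table, the shuffle groups records by join key, and the reducer pairs them up. The table names and fields here are hypothetical, and real MapReduce would run this in Java across many nodes.

```python
from collections import defaultdict

# Illustrative sketch of a reduce-side join (not actual Hadoop code).
users  = [("u1", "Alice"), ("u2", "Bob")]
orders = [("u1", "order-100"), ("u2", "order-101"), ("u1", "order-102")]

def map_phase():
    """Emit (join_key, (source_tag, value)) pairs from both datasets."""
    for key, value in users:
        yield key, ("user", value)
    for key, value in orders:
        yield key, ("order", value)

def shuffle(pairs):
    """Group tagged records by join key, as the MapReduce shuffle does."""
    grouped = defaultdict(list)
    for key, tagged in pairs:
        grouped[key].append(tagged)
    return grouped

def reduce_phase(grouped):
    """For each key, pair every user record with every order record."""
    joined = []
    for key, tagged_values in grouped.items():
        names = [v for tag, v in tagged_values if tag == "user"]
        ords  = [v for tag, v in tagged_values if tag == "order"]
        for name in names:
            for o in ords:
                joined.append((key, name, o))
    return sorted(joined)

result = reduce_phase(shuffle(map_phase()))
print(result)
```

A map-side join, by contrast, would load the small `users` table into every mapper's memory and join before the shuffle, avoiding the grouping step entirely.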

Word Count example (or Election Vote Count)

We will cover five to ten MapReduce examples with real-time data.
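The classic word-count flow above can be sketched in plain Python, as an illustration of the map, shuffle and reduce phases. Real MapReduce does this in Java and distributes the work across the cluster; the input lines here are made up.

```python
from collections import defaultdict

# Plain-Python sketch of word count: map emits (word, 1) pairs, the
# shuffle groups pairs by key, and reduce sums each group.

def map_phase(lines):
    for line in lines:
        for word in line.lower().split():
            yield word, 1

def shuffle(pairs):
    grouped = defaultdict(list)
    for word, one in pairs:
        grouped[word].append(one)
    return grouped

def reduce_phase(grouped):
    return {word: sum(ones) for word, ones in grouped.items()}

lines = ["Hadoop is fast", "Spark is faster"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)   # {'hadoop': 1, 'is': 2, 'fast': 1, 'spark': 1, 'faster': 1}
```

An election vote count is the same program with candidate names in place of words.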

 Apache Hive

Data warehouse basics

OLTP vs OLAP Concepts

Hive

Hive Architecture

Metastore DB and Metastore Service

Hive Query Language (HQL)

Managed and External Tables

Partitioning & Bucketing
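The partitioning and bucketing ideas above can be sketched in plain Python (this is an illustration, not HiveQL; the column names and bucket count are made up). Partitioning stores rows under one "directory" per partition-column value, so a query on one day scans only that partition; bucketing further splits each partition by a hash of a column.

```python
import zlib
from collections import defaultdict

NUM_BUCKETS = 4
# partition value -> bucket number -> rows, mimicking Hive's on-disk layout
table = defaultdict(lambda: defaultdict(list))

def insert(row):
    partition = row["order_date"]                               # partition column
    bucket = zlib.crc32(row["user_id"].encode()) % NUM_BUCKETS  # bucketing column
    table[partition][bucket].append(row)

def query_one_day(day):
    """Partition pruning: only the matching partition is scanned."""
    return [r for bucket in table[day].values() for r in bucket]

insert({"order_date": "2018-07-21", "user_id": "u1"})
insert({"order_date": "2018-07-21", "user_id": "u2"})
insert({"order_date": "2018-07-22", "user_id": "u1"})
print(len(query_one_day("2018-07-21")))  # 2
```

In Hive the same layout comes from `PARTITIONED BY` and `CLUSTERED BY ... INTO n BUCKETS` in the table DDL; the engine, not your code, does the pruning and hashing.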

Query Optimization

Hiveserver2 (Thrift server)

JDBC , ODBC connection to Hive

Hive Transactions

Hive UDFs

Working with Avro Schema and AVRO file format

Hands on Multiple Real Time datasets. 

Apache Pig

Apache Pig

Advantage of Pig over MapReduce

Pig Latin (Scripting language for Pig)

Schema and Schema-less data in Pig

Structured , Semi-Structure data processing in Pig

Pig UDFs

HCatalog

Pig vs Hive Use case

Hands-on: two more daily-use-case data-analysis examples (Google data), plus analysis on a date/time dataset

HBASE (Real NoSQL )

Introduction to HBASE

Basic Configurations of HBASE

Fundamentals of HBase

What is NoSQL?

HBase Data Model

Table and Row.

Column Family and Column Qualifier.

Cell and its Versioning
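The HBase data model listed above (row key, column family, column qualifier, versioned cells) can be sketched with nested Python dicts. This is only an illustration of the model, not the HBase API, and the table and family names are made up.

```python
import itertools

# HBase-style model: row key -> column family -> qualifier -> {timestamp: value}
_ts = itertools.count(1)   # stand-in for real cell timestamps
table = {}

def put(row, family, qualifier, value):
    """Write a new version of a cell; old versions are kept."""
    cell = table.setdefault(row, {}).setdefault(family, {}).setdefault(qualifier, {})
    cell[next(_ts)] = value

def get(row, family, qualifier):
    """Return the latest version of the cell, like HBase's default Get."""
    versions = table[row][family][qualifier]
    return versions[max(versions)]

put("user1", "info", "city", "Pune")
put("user1", "info", "city", "Kharadi")   # newer version of the same cell
print(get("user1", "info", "city"))       # Kharadi
```

Note how a "column" need not exist for every row, which is why HBase handles sparse data so naturally compared with an RDBMS.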

Categories of NoSQL Data Bases

Key-Value Database

Document Database

Column Family Database

HBASE Architecture

HMaster

Region Servers

Regions

MemStore

Store

SQL vs. NOSQL

How HBase differs from an RDBMS

HDFS vs. HBase

Client-side buffering or bulk uploads

HBase Designing Tables

HBase Operations

Get

Scan

Put

Delete

Live Dataset

Sqoop

Sqoop commands

Sqoop practical implementation 

Importing data to HDFS

Importing data to Hive

Exporting data to RDBMS

Sqoop connectors

Flume

Flume commands

Configuration of Source, Channel and Sink

Fan-out flume agents

How to load data into Hadoop coming from a web server or other storage

How to load streaming Twitter data into HDFS

Oozie

Oozie

Action Node and Control Flow node

Designing workflow jobs

How to schedule jobs using Oozie

How to schedule time-based jobs

Oozie Conf file

Scala

Scala 

Syntax formation, Datatypes , Variables

Classes and Objects

Basic Types and Operations

Functional Objects

Built-in Control Structures

Functions and Closures

Composition and Inheritance

Scala’s Hierarchy

Traits

Packages and Imports

Working with Lists, Collections

Abstract Members

Implicit Conversions and Parameters

For Expressions Revisited

The Scala Collections API

Extractors

Modular Programming Using Objects

Spark

Spark

Architecture and Spark APIs

Spark components 

Spark master

Driver

Executor

Worker

Significance of Spark context

Concept of Resilient distributed datasets (RDDs)

Properties of RDD

Creating RDDs

Transformations in RDD

Actions in RDD

Saving data through RDD

Key-value pair RDD
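The RDD concepts above, especially the split between lazy transformations and actions, can be sketched with a tiny Python class. This is not the real Spark API: `MiniRDD` is a made-up name, and it only shows that transformations record lineage while an action triggers the actual computation.

```python
# Minimal sketch of RDD-style lazy evaluation (not the real Spark API).
class MiniRDD:
    def __init__(self, compute):
        self._compute = compute          # zero-arg function producing the data

    @staticmethod
    def parallelize(data):
        return MiniRDD(lambda: list(data))

    def map(self, f):                    # transformation: lazy, returns new RDD
        return MiniRDD(lambda: [f(x) for x in self._compute()])

    def filter(self, pred):              # transformation: lazy, returns new RDD
        return MiniRDD(lambda: [x for x in self._compute() if pred(x)])

    def collect(self):                   # action: runs the whole lineage now
        return self._compute()

rdd = MiniRDD.parallelize([1, 2, 3, 4, 5])
doubled_evens = rdd.filter(lambda x: x % 2 == 0).map(lambda x: x * 10)
print(doubled_evens.collect())   # only now does any work happen
```

Because each transformation holds a reference to its parent's compute step, the chain of closures is exactly the RDD lineage graph, which real Spark uses to recompute lost partitions.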

Invoking Spark shell

Loading a file in shell

Performing some basic operations on files in Spark shell

Spark application overview

Job scheduling process

DAG scheduler

RDD graph and lineage

Life cycle of spark application

How to choose between the different persistence levels for caching RDDs

Submit in cluster mode

Web UI – application monitoring

Important spark configuration properties

Spark SQL overview

Spark SQL demo

SchemaRDD and data frames

Joining, Filtering and Sorting Dataset

Spark SQL example program demo and code walk through

Kafka With ZooKeeper

What is Kafka

Cluster architecture With Hands On

Basic operation

Integration with spark

Integration with Camel

Additional Configuration

Security and Authentication

Apache Kafka With Spring Boot Integration

Running 

Use case
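The core Kafka abstractions in this module, a topic made of append-only partition logs with per-consumer-group offsets, can be sketched in plain Python. This is an illustration of the model only, not the Kafka client API; `MiniTopic` and the keying scheme are made up for the example.

```python
import zlib
from collections import defaultdict

class MiniTopic:
    """A topic: a set of append-only partition logs plus, per consumer
    group, an independent read offset for each partition."""
    def __init__(self, num_partitions=2):
        self.partitions = [[] for _ in range(num_partitions)]
        self.offsets = defaultdict(lambda: [0] * num_partitions)

    def produce(self, key, value):
        """Records with the same key land in the same partition, which is
        how Kafka preserves per-key ordering."""
        p = zlib.crc32(key.encode()) % len(self.partitions)
        self.partitions[p].append(value)
        return p

    def consume(self, group, partition):
        """Deliver the next unread record for this group and advance its offset."""
        off = self.offsets[group][partition]
        if off >= len(self.partitions[partition]):
            return None                      # nothing new yet
        self.offsets[group][partition] += 1
        return self.partitions[partition][off]

topic = MiniTopic()
p = topic.produce("user1", "click-1")
topic.produce("user1", "click-2")        # same key -> same partition, ordered
print(topic.consume("analytics", p))     # click-1
print(topic.consume("analytics", p))     # click-2
print(topic.consume("billing", p))       # click-1: independent group offset
```

The key design point the sketch shows: consuming a record does not delete it, so any number of consumer groups can read the same stream at their own pace.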

Apache Splunk

Introduction & Installing Splunk

Play with Data and Feed the Data

Searching & Reporting

Visualizing Your Data

Advanced Splunk Concepts 

Cassandra + MongoDB 

Introduction of NoSQL 

What is NoSQL & NoSQL Data Types

System Setup Process

MongoDB Introduction

MongoDB Installation 

DataBase Creation in MongoDB

ACID and CAP Theorem

What is JSON and what are its features?

Differences between JSON and XML

CRUD Operations – Create , Read, Update, Delete
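The CRUD operations above can be sketched with an in-memory document store in plain Python. This is an illustration of the document model, not the MongoDB API; the collection, field names, and id scheme are made up, but the shape matches how document databases map CRUD onto JSON-like records.

```python
import json

collection = {}   # stand-in for a MongoDB collection: id -> document
_next_id = 0

def create(doc):
    """Insert a document and return its generated id."""
    global _next_id
    _next_id += 1
    collection[_next_id] = dict(doc)
    return _next_id

def read(doc_id):
    return collection.get(doc_id)

def update(doc_id, changes):
    """Merge new field values into an existing document."""
    collection[doc_id].update(changes)

def delete(doc_id):
    collection.pop(doc_id, None)

doc_id = create({"name": "Alice", "city": "Pune"})
update(doc_id, {"city": "Kharadi"})
print(json.dumps(read(doc_id)))    # documents serialize naturally to JSON
delete(doc_id)
print(read(doc_id))                # None
```

Because documents are just JSON, two records in the same collection can have different fields, which is the schema flexibility that distinguishes MongoDB from a relational table.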

Cassandra Introduction

Cassandra – Different Data Supports 

Cassandra – Architecture in Detail 

Cassandra’s SPOF & Replication Factor

Cassandra – Installation & Different Data Types

Database Creation in Cassandra 

Tables Creation in Cassandra 

Cassandra Database and Table Schema and Data 

Update, Delete, Insert Data in Cassandra Table 

Insert Data From File in Cassandra Table 

Add & Delete Columns in Cassandra Table 

Cassandra Collections

Summary

Student review: "I completed my Hadoop course at Radical. My trainer's teaching is probably the best training one could get. You don't only learn; you also get experience-level training here." Rating: 5/5

Available certifications under Cloudera and Hortonworks

Depending on the student's requirements and the topics covered in the curriculum, we can prepare candidates for the exams.

Cloudera Certified Associate (CCA)

CCA Spark and Hadoop Developer

CCA Data Analyst

CCA Administrator

Cloudera Certified Professional (CCP)

CCP Data Engineer

HDPCD

HDP CERTIFIED DEVELOPER

for Hadoop developers using frameworks like Pig, Hive, Sqoop and Flume.

HDPCD-Spark

HDP CERTIFIED APACHE SPARK DEVELOPER

for developers responsible for developing Spark Core and Spark SQL applications in Scala or Python.

HDPCD-Java

HDP CERTIFIED JAVA DEVELOPER

for developers who design, develop and architect Hadoop-based solutions written in the Java programming language.

HDPCA

HDP CERTIFIED ADMINISTRATOR

Hortonworks certification for administrators who deploy and manage Hadoop clusters.

HCA

HORTONWORKS CERTIFIED ASSOCIATE

An entry point covering the fundamental skills required to progress to the higher levels of the Hortonworks big data certification program.

Hadoop Certifications: Radical is accredited with Pearson VUE, Kryterion, etc. We conduct exams every month and have a 100% pass record for all students who completed the course at Radical Technologies. The most in-demand Hadoop exams are the Hortonworks and Cloudera certifications.

Exam Preparation: After the course, we provide all our candidates a free exam-preparation session, which guides them through passing the respective Hadoop exam modules.

The trainer has 17 years of experience in IT, including 10 years in data warehousing & ETL. For the past six years he has worked extensively with Big Data ecosystem toolsets for banking, retail and manufacturing clients. He is a certified HDP Spark Developer and a Cloudera-certified HBase specialist. He has also delivered corporate sessions and seminars in India and abroad; recently he was engaged by Pune University for a 40-hour session on Big Data analytics for senior professors.

All faculty members at our organization currently work on these technologies in reputed organizations. The curriculum imparted is not just theory or a talk over some PPTs; we frame the forum so that lessons are delivered in easy language and well absorbed by candidates. Every session is backed by hands-on assignments, and because the faculty have industry experience, they share practical stories from real projects during the course.

  • How we are different from others: We cover each topic with real-time examples. The course includes 8 real-time projects and more than 72 assignments, divided into basic, intermediate and advanced levels. The trainer comes from real-time industry with 9 years of experience in DWH, working as a BI and Hadoop consultant with 3+ years in real-time Big Data & Hadoop implementation and migration.
    This is completely hands-on training, covering 90% practical and 10% theory. Here at Radical Technologies, we cover all prerequisites, like Java and SQL, required to learn Hadoop developer and analytical skills. This way we accommodate technology beginners and technical experts in the same session, and at the end of the training they gain the confidence that they have up-skilled to a different level.
    • 8 domain-based projects with real-time data (two projects per trainer; if you require more projects, you are free to attend other trainers' project-orientation sessions)
    • 5 POCs
    • 72 assignments
    • 25 real-time scenarios on a 16-node cluster (AWS cloud setup)
    • Basic Java
    • DWH concepts
    • Pig | Hive | MapReduce | NoSQL | HBase | ZooKeeper | Sqoop | Flume | Oozie | YARN | Hue | Spark | Scala

    42 hours of classroom sessions

    30 hours of assignments

    25 hours for one project and 50 hours for two projects (candidates should prepare with mentor support; the 50 hours mentioned is the total time spent on projects with each trainer)

    350+ interview questions

    Administration and manual installation of Hadoop, along with other domain-based projects, are covered on a regular basis apart from our normal batch schedule.

    We have projects from healthcare, financial, automotive, insurance, banking, retail, etc., which are given to our students as per their requirements.

    • Training by a real-time trainer with 14+ years of experience
    • A pool of 200+ real-time practical sessions on Big Data Hadoop
    • Scenarios and assignments to make sure you compete with current industry standards
    • World-class training methods
    • Training until the candidate gets placed
    • Certification and placement support until you get certified and placed
    • All training at a reasonable cost
    • 10000+ satisfied candidates
    • 5000+ placement records
    • Corporate and online training at a reasonable cost
    • Complete end-to-end project with each course
    • World-class lab facility with i3/i5/i7 servers and Cisco UCS servers
    • Covers topics beyond books, as required by the IT industry
    • Resume and interview preparation with 100% hands-on practical sessions
    • Doubt-clearing sessions any time after the course
    • Happy to help you any time after the course

ML and GraphX, 'R' Language

Data Analytics / Science

