Radical Technologies
Call: +91 8055223360 | 8103400400

Data Science and Data Analytics

Data Science and Data Analytics  – Python / R / SAS

Learn Data Science, Deep Learning, and Machine Learning using Python / R / SAS, with live Machine Learning and Deep Learning projects.

Duration: 3 months – weekends, 3 hours each on Saturday and Sunday

Real-time projects, assignments, and scenarios are part of this course.

Data sets, installations, interview preparation, and the option to repeat sessions for up to 6 months are all attractions of this course.

One-time classroom registration fee: Rs. 1000/-

Classroom training batch schedules:

Location     Day/Duration  Date        Time      Type
Kharadi      Weekend       29/02/2020  12:00 PM  Demo Batch
Hinjewadi    Weekend       29/02/2020  10:00 AM  Demo Batch
Hinjewadi    Weekend       01/03/2020  09:00 AM  New Batch
Aundh        Weekend       29/02/2020  05:00 PM  Demo Batch
Karve Road   Weekend       01/03/2020  02:00 PM  Demo Batch


Trainer: Experienced Data Science Consultant

Want to be a future Data Scientist?

Introduction: This course does not require a prior quantitative or mathematics background. It starts by introducing basic concepts such as the mean, median, and mode, and eventually covers all aspects of an analytics / data science career, from analyzing and preparing raw data to visualizing your findings. Whether you are a programmer or a fresh graduate looking to switch into an exciting new career track, or a data analyst looking to make the transition into the tech industry, this course will teach you the basic to advanced techniques used by real-world industry data scientists.

Data Science and Statistics with Python / R / SAS: This course is an introduction to Data Science and Statistics using the R programming language, Python, or SAS. It covers both the theoretical aspects of statistical concepts and their practical implementation using R, Python, or SAS. If you're new to Python, don't worry – the course starts with a crash course; whether you have done some programming before or are new to programming, you should pick it up quickly. The course shows you how to get set up on Microsoft Windows-based PCs; the sample code will also run on macOS or Linux desktop systems.

Analytics: Using Spark and Scala, you can analyze and explore your data in an interactive environment with fast feedback. The course shows how to leverage the power of RDDs and DataFrames to manipulate data with ease.

Machine Learning and Data Science: Spark's core functionality and built-in libraries make it easy to implement complex algorithms like recommendations with very few lines of code. We'll cover a variety of datasets and algorithms, including PageRank and MapReduce, as well as graph datasets.

Real life examples: Every concept is explained with the help of examples, case studies and source code in R wherever necessary. The examples cover a wide array of topics and range from A/B testing in an Internet company context to the Capital Asset Pricing Model in a quant finance context.  

Target audience:

  • Engineering or Management graduates, post-graduates, and fresher students who want to build a career in the Data Science industry or become future Data Scientists.
  • Engineers who want to use a distributed computing engine for batch or stream processing or both
  • Analysts who want to leverage Spark for analyzing interesting datasets
  • Data Scientists who want a single engine for analyzing and modelling data as well as productionizing it.
  • MBA Graduates or business professionals who are looking to move to a heavily quantitative role.
  • Engineering Graduate/Professionals who want to understand basic statistics and lay a foundation for a career in Data Science
  • Working professionals or fresh graduates who have mostly worked in descriptive analytics, or have not yet worked in analytics, and want to make the shift to being data scientists
  • Professionals who’ve worked mostly with tools like Excel and want to learn how to use R for statistical analysis. 

Course Curriculum

Data Science, Deep Learning, & Machine Learning with Python, R, or SAS, with Live Machine Learning & Deep Learning Projects

### Level 01 of 08 : Basic Python ###

  1. Introduction
  2. Data Types and Variables
  3. List
  4. Operators
  5. Control Flows

5.1 if … elif … elif … else

5.2 While Loop:

5.3 For Loop:
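A minimal sketch tying these three constructs together (the marks value below is made up for illustration):

```python
marks = 72

# if ... elif ... else
if marks >= 75:
    grade = "distinction"
elif marks >= 60:
    grade = "first class"
else:
    grade = "pass"

# while loop: halve until below 10
n = marks
while n >= 10:
    n //= 2

# for loop (inside a generator expression): sum the digits of the score
digit_sum = sum(int(d) for d in str(marks))

print(grade, n, digit_sum)  # first class 9 9
```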

### Level 02 of 08: Advanced Python ###

  6. Functions, Methods, Modules & Packages

6.1 Functions

6.1.1 System defined Functions

6.1.2 User Defined Functions (UDFs)

6.1.3 Tuple

6.1.4 Scope of objects in functions

6.1.5 Scope of objects in functions / Nested Functions

6.1.6 Default and flexible arguments

6.2 Methods

6.3 Modules

6.4 Packages

6.4.1 User Defined Packages

6.4.2 System defined Packages

  7. Dictionary
  8. Lambda functions
  9. Syntax Errors and Exceptions
  10. Iterables & Iterators
  11. List comprehensions
  12. Generators
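A short sketch of lambdas, list comprehensions and generators together (the temperature readings are made up):

```python
# Hypothetical temperature readings in Celsius
readings = [18.5, 21.0, 19.75, 23.5]

# lambda with sorted(): order readings by distance from 20 degrees
closest_first = sorted(readings, key=lambda t: abs(t - 20))

# list comprehension: convert Celsius to Fahrenheit
fahrenheit = [c * 9 / 5 + 32 for c in readings]

# generator: lazily yield only the warm readings
def warm(temps, threshold=20):
    for t in temps:
        if t > threshold:
            yield t

print(closest_first[0])      # 19.75
print(list(warm(readings)))  # [21.0, 23.5]
```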

### Level 03 of 08: Python Packages for Data Science ###


  13. NumPy package
  14. Pandas (powerful Python data analysis toolkit)

14.1 Introduction

14.2 Slicing Dataframe

14.3 Filtering Dataframe

14.4 Transforming Dataframe

14.5 Advanced indexing

14.6 Stack and unstack

14.7 Groupby and aggregations
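A minimal pandas sketch of filtering (14.3) and groupby with aggregation (14.7); the tiny sales table is made up, and pandas is assumed to be installed:

```python
import pandas as pd

# A made-up sales table for illustration
df = pd.DataFrame({
    "region": ["East", "West", "East", "West"],
    "units":  [10, 7, 3, 5],
})

# filtering: a boolean mask keeps rows with more than 4 units
big = df[df["units"] > 4]

# groupby + aggregation: total units per region
totals = df.groupby("region")["units"].sum()

print(len(big))        # 3
print(totals["East"])  # 13
```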

  16. Matplotlib data visualization
  17. Seaborn data visualization
  18. Bokeh data visualization
  19. Import Data from Flat Files
  20. Import Data from Excel Files
  21. Import SAS and STATA Files
  22. Import HDF5 Files
  23. Import from Relational Database (Ex: SQLite)
  24. Import web data
  25. Import using urllib and requests packages
  26. Read HTML with BeautifulSoup package
  27. Import JSON File
  28. Movie and Wikipedia APIs
  29. Twitter API
  30. Cleaning Data (ETL)

30.1 Melt() data

30.2 Pivot (un-melting data)

30.3 Concatenating

30.4 Merge/Joins

30.5 Data Types Conversion


30.6 Regular expression operations

30.7 Dropping duplicate data

30.8 Filling missing data

30.9 Testing with asserts
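A small sketch combining several of the cleaning steps above – regular expressions (30.6), type conversion (30.5), dropping duplicates (30.7), filling missing data (30.8), and testing with asserts (30.9); the "dirty" values are made up:

```python
import re

# A made-up dirty column of price strings
raw = ["$1,200", "$950", None, "$1,200"]

cleaned = []
for value in raw:
    if value is None:
        cleaned.append(0)                 # fill missing data
        continue
    digits = re.sub(r"[^\d]", "", value)  # regex: strip currency symbols/commas
    cleaned.append(int(digits))           # data type conversion: str -> int

deduped = sorted(set(cleaned))            # drop duplicate data

# testing with asserts: every entry is a non-negative integer
assert all(isinstance(v, int) and v >= 0 for v in cleaned)
print(deduped)  # [0, 950, 1200]
```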

  31. Time Series Analysis

### Level 04 of 08: Machine Learning Models ###

  32. Machine learning

32.1 Supervised learning

32.1.1 k-Nearest Neighbours (k-NN): introduction, measuring model performance, hyperparameter tuning with GridSearchCV

32.1.2 Linear models: logistic regression, understanding the classification report, confusion matrix and ROC curve, AUC computation, hyperparameter tuning with GridSearchCV, linear regression, ridge and lasso regression

32.1.3 Support Vector Machines (SVM): introduction, Support Vector Classification (SVC), tuning parameters of SVM (SVC), Support Vector Regression (SVR)

32.1.4 Pre-processing of machine learning data: outliers, working with categorical features, regression with categorical features using the ridge algorithm, handling missing data

32.1.5 ML pipeline (putting it all together)

32.1.6 Tree-based models: decision tree for classification, logistic regression vs. decision tree classification, Information Gain (IG), entropy and information gain, Gini index

32.1.7 Decision tree for regression

32.1.8 Linear regression vs. regression tree

32.2 Unsupervised Learning

32.2.1 k-means clustering
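As a taste of supervised learning, here is a from-scratch sketch of the k-nearest neighbours idea from 32.1.1 (in class this is done with scikit-learn and GridSearchCV; the toy points and labels are made up):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of ((x, y), label); predict the majority label of
    the k points closest to `query` by Euclidean distance."""
    neighbours = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Two made-up clusters of labelled points
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((9, 8), "B"), ((8, 9), "B")]

print(knn_predict(train, (2, 2)))  # A
print(knn_predict(train, (8, 7)))  # B
```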


### Level 05 of 08: Deep Learning ###

  33. Deep learning

33.1 Introduction

33.2 Forward propagation

33.3 Activation functions

33.4 Deeper networks

33.5 Need for optimization

33.6 Gradient descent

33.7 Backpropagation

33.8 Creating Keras regression models

33.9 Creating Keras classification models

33.10 Using models

33.11 Understanding Model Optimization

33.12 Model Validation

33.13 Model Capacity
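Forward propagation (33.2) and activation functions (33.3) can be sketched by hand before reaching for Keras; the weights below are made-up numbers, not a trained model:

```python
def relu(x):
    # ReLU activation: pass positives through, clamp negatives to zero
    return max(0.0, x)

def forward(inputs, hidden_weights, output_weights):
    # One hidden layer: weighted sum of inputs per node, then ReLU
    hidden = [relu(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # Output node: weighted sum of hidden activations
    return sum(w * h for w, h in zip(output_weights, hidden))

inputs = [2.0, 3.0]
hidden_weights = [[1.0, 1.0],    # node 1:  2 + 3 = 5
                  [-1.0, 1.0]]   # node 2: -2 + 3 = 1
output_weights = [2.0, -1.0]     # output: 2*5 - 1*1 = 9

print(forward(inputs, hidden_weights, output_weights))  # 9.0
```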

### Level 06 of 08: Project on Deep Learning ###

  34. Project using Keras and TensorFlow
  35. Convolutional Neural Networks (CNN)

### Level 07 of 08: NLU / NLP / Text Analytics/ Text Mining ###

  36. Natural Language Understanding/Processing (NLU/NLP)

36.1 Introduction

36.2 Regular Expressions

36.3 Tokenization

36.4 Advanced tokenization with regex

36.5 Charting word length with nltk

36.6 Word counts with bag of words

36.7 Text pre-processing

36.8 Gensim

36.9 Tf-idf with gensim

36.10 Named Entity Recognition


36.11 Introduction to SpaCy

36.12 Multilingual NER with polyglot

36.13 Building a “fake news” classifier

36.14 Dialog Flow

36.15 RASA NLU
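The bag-of-words (36.6) and tf-idf (36.9) ideas can be sketched without nltk or gensim; the three documents are made up, and the idf formula used is the plain unsmoothed variant:

```python
from collections import Counter
import math

# Made-up toy corpus
docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "cats and dogs"]

tokenized = [d.lower().split() for d in docs]     # naive tokenization
bags = [Counter(tokens) for tokens in tokenized]  # bag of words

def tf_idf(term, doc_index):
    tf = bags[doc_index][term] / len(tokenized[doc_index])
    df = sum(1 for bag in bags if term in bag)    # document frequency
    return tf * math.log(len(docs) / df)          # unsmoothed idf

print(bags[0]["the"])           # 2
print(round(tf_idf("cat", 0), 3))  # 0.183
```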

### Level 08 of 08: Projects on NLU/NLP ###

  37. Introduction
  38. EchoBot
  39. ChitChat Bot
  40. Text Munging with regular expressions
  41. Understanding intents and entities
  42. Word vectors
  43. Intents and classification
  44. Entity Extraction
  45. Robust NLU with Rasa
  46. Building a virtual assistant

46.1 Access data from SQLite with parameters

46.2 Exploring a DB with natural language

46.3 Incremental slot filling and negation

  47. Dialogue

47.1 Stateful bots

47.2 Asking questions & queuing answers

Project 1: Build your own image recognition model with TensorFlow

Project 2: Predict fraud with data visualization & predictive modeling

Project 3: Spam detection

Project 4: Build your own recommendation system

Project 5: Build your own predictive modeling, regression analysis & machine learning model in Python



  • Statistics and Data Science With R Language


    • Introduction to R
    • R and R studio Installation & Lab Setup
    • Descriptive Statistics

    Descriptive Statistics

    • Mean, Median, Mode
    • Our first foray into R : Frequency Distributions
    • Draw your first plot : A Histogram
    • Computing Mean, Median, Mode in R
    • What is IQR (Inter-quartile Range)?
    • Box and Whisker Plots
    • The Standard Deviation
    • Computing IQR and Standard Deviation in R
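For reference, the same descriptive statistics the R lessons compute can be sketched with Python's statistics module (the ten sample values are made up):

```python
import statistics

# Made-up sample of ten observations
data = [4, 8, 6, 5, 3, 2, 8, 9, 2, 8]

mean   = statistics.mean(data)    # 5.5
median = statistics.median(data)  # 5.5
mode   = statistics.mode(data)    # 8
stdev  = statistics.stdev(data)   # sample standard deviation

q1, _, q3 = statistics.quantiles(data, n=4)  # quartiles
iqr = q3 - q1                                # inter-quartile range

print(mean, median, mode, round(stdev, 2), iqr)  # 5.5 5.5 8 2.68 5.25
```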

    Inferential Statistics

    • Drawing inferences from data
    • Random Variables are ubiquitous
    • The Normal Probability Distribution
    • Sampling is like fishing
    • Sample Statistics and Sampling Distributions

    Case studies in Inferential Statistics

    • Case Study 1 : Football Players (Estimating Population Mean from a Sample)
    • Case Study 2 : Election Polling (Estimating Population Proportion from a Sample)
    • Case Study 3 : A Medical Study (Hypothesis Test for the Population Mean)
    • Case Study 4 : Employee Behavior (Hypothesis Test for the Population Proportion)
    • Case Study 5: A/B Testing (Comparing the means of two populations)
    • Case Study 6: Customer Analysis (Comparing the proportions of 2 populations)

    Diving into R

    • Harnessing the power of R
    • Assigning Variables
    • Printing an output
    • Numbers are of type numeric
    • Characters and Dates
    • Logicals


    Vectors

    • Data Structures are the building blocks of R
    • Creating a Vector
    • The Mode of a Vector
    • Vectors are Atomic
    • Doing something with each element of a Vector
    • Aggregating Vectors
    • Operations between vectors of the same length
    • Operations between vectors of different length
    • Generating Sequences
    • Using conditions with Vectors
    • Find the lengths of multiple strings using Vectors
    • Generate a complex sequence (using recycling)
    • Vector Indexing (using numbers)
    • Vector Indexing (using conditions)
    • Vector Indexing (using names)


    Arrays

    • Creating an Array
    • Indexing an Array
    • Operations between 2 Arrays
    • Operations between an Array and a Vector
    • Outer Products


    Matrices

    • A Matrix is a 2-Dimensional Array
    • Creating a Matrix
    • Matrix Multiplication
    • Merging Matrices
    • Solving a set of linear equations


    Factors

    • What is a factor?
    • Find the distinct values in a dataset (using factors)
    • Replace the levels of a factor
    • Aggregate factors with table()
    • Aggregate factors with tapply()

    Lists and Data Frames

    • Introducing Lists
    • Introducing Data Frames
    • Reading Data from files
    • Indexing a Data Frame
    • Aggregating and Sorting a Data Frame
    • Merging Data Frames

    Regression quantifies relationships between variables

    • Linear Regression in Excel : Preparing the data.
    • Linear Regression in Excel : Using LINEST()

    Linear Regression in R

    • Linear Regression in R : Preparing the data
    • Linear Regression in R : lm() and summary()
    • Multiple Linear Regression
    • Adding Categorical Variables to a linear model
    • Robust Regression in R : rlm()
    • Parsing Regression Diagnostic Plots

    Data Visualization in R

    • Data Visualization
    • The plot() function in R
    • Control color palettes with RColorBrewer
    • Drawing bar plots
    • Drawing a heatmap
    • Drawing a Scatterplot Matrix
    • Plot a line chart with ggplot



Data Science using SAS and Python


Course Name: Data Science using SAS and Python

Course Duration: 25 Days (2.5+ Hrs. Approx. per day)

Training Mode: Classroom

Course Resources: materials + software installation + practical business case studies

Course Content: Given Below

Days 1 & 2:

Introduction to Data Science:                                                                             

  • Welcome/General Discussion about expectation from course
  • Definition of Data
  • Difference between data management and data analytics
  • Data Science components

Programming using SAS:

  • Base SAS Overview
  • Data Step and PROC Step processing
  • Concept of PDV, Input Buffer
  • Concept of SAS library and SAS Catalog
  • Variable Types in SAS
  • Reading Data stored external to SAS
  • Importing Data by using Proc Import
  • Data Step SAS statements
  • SAS Functions
  • Appending and Merging using SAS
  • SAS procedures such as PROC MEANS, PROC UNIVARIATE, PROC APPEND, PROC FREQ, and PROC EXPORT
  • SAS Macros

Days 3 and 4:

Q&A/Project discussion based on previous days

Programming using Python:

  • Python Overview
  • Python Data Types
  • Python operations using Numbers, String, Logical, Arithmetic and so on
  • Python Strings
  • Python Lists
  • Python Tuple
  • Python Dictionary
  • FOR and WHILE loops
  • IF/ELIF/ELSE in Python
  • Data Manipulation using Python and Pandas

Days 5, 6 and 7:

Q&A/Project discussion based on previous days


  • Levels of Measurement and Variable types
  • Descriptive Statistics and Picturing Distributions
  • Confidence Interval for the Mean

Hypothesis Testing and ANOVA

  • One Sample t-test of comparing means
  • Two Sample t-test of comparing means
  • One Way ANOVA
  • Assumptions of ANOVA Modeling
  • n-way ANOVA
  • ANOVA Post Hoc Studies
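The one-sample t-test statistic listed above can be computed from first principles as t = (xbar - mu0) / (s / sqrt(n)); the sample values and the hypothesised mean of 50 are made up:

```python
import math
import statistics

# Made-up sample and hypothesised population mean
sample = [52.1, 48.9, 53.4, 51.0, 49.5, 54.2, 50.8, 52.6]
mu0 = 50.0

n = len(sample)
xbar = statistics.mean(sample)   # sample mean
s = statistics.stdev(sample)     # sample standard deviation
t = (xbar - mu0) / (s / math.sqrt(n))

print(round(t, 2))  # 2.39
```

The resulting t would then be compared to the t distribution with n - 1 degrees of freedom to get a p-value.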

Days 8, 9 and 10:

Q&A/Project discussion based on previous days

Exploratory Data Analysis

  • Data Exploration by using Scatter Plots
  • Pearson and Spearman correlations

Linear Regression

  • Fit Simple Linear Regression Model
  • Assumptions of Linear Regression Model
  • Analyze the output of the Linear Regression
  • Producing Predicted Values
  • Difference between Simple Linear Regression and Multiple Linear Regression Models
  • Fit Multiple Linear Regression Model
  • Stepwise Regression/Model Selection Techniques
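A from-scratch sketch of the simple linear regression fit: slope = cov(x, y) / var(x) and intercept = ybar - slope * xbar (the five points are made up and lie near y = 2x + 1):

```python
import statistics

# Made-up (x, y) points
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [3.1, 4.9, 7.2, 9.0, 10.8]

xbar, ybar = statistics.mean(x), statistics.mean(y)

# Least-squares estimates
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
intercept = ybar - slope * xbar

# Producing predicted values
predicted = [intercept + slope * xi for xi in x]

print(round(slope, 2), round(intercept, 2))  # 1.95 1.15
```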

Days 11 and 12:

Q&A/Project discussion based on previous days

Regression Diagnostics

  • Residual Analysis
  • Influential Observation
  • Difference between Influential Observation and Outliers
  • Collinearity Diagnostics

Model Building Process 

Days 13, 14 and 15:

Q&A/Project discussion based on previous days

Categorical Data Analysis

  • Examining Distributions
  • Test of Associations by using chi-square test
  • Fisher’s Exact p-values for Pearson Chi-square test

Logistic Regression

  • Odds and Odds Ratio
  • Simple Logistic Regression
  • Multiple Logistic Regression with categorical predictors
  • Analyze the output of Logistic Regression
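Odds and the odds ratio can be sketched in a few lines (the event rates for the two groups are made up):

```python
def odds(p):
    # Odds of an event with probability p
    return p / (1 - p)

# Made-up event rates: 30% in the treated group, 15% in control
p_treated, p_control = 0.30, 0.15

odds_treated = odds(p_treated)   # 0.30 / 0.70
odds_control = odds(p_control)   # 0.15 / 0.85
odds_ratio = odds_treated / odds_control

print(round(odds_treated, 3), round(odds_ratio, 2))  # 0.429 2.43
```

Logistic regression models the log of these odds as a linear function of the predictors, which is why its coefficients are interpreted through odds ratios.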

Days 16, 17 and 18:

Q&A/Project discussion based on previous days

Measure Model Performance

  • Apply the principles of honest assessment to model performance measurement
  • Assess classifier performance using the confusion matrix
  • Model selection and validation using training and validation data
  • Create and interpret graphs (ROC, lift, and gains charts) for model comparison and selection
  • Establish effective decision cut-off values for scoring
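A from-scratch sketch of confusion-matrix metrics (the two label lists are made up):

```python
# Made-up actual vs predicted binary labels
actual    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
predicted = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# Confusion matrix cells
tp = sum(a == p == 1 for a, p in zip(actual, predicted))
tn = sum(a == p == 0 for a, p in zip(actual, predicted))
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))

accuracy  = (tp + tn) / len(actual)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)   # true positive rate

print(tp, tn, fp, fn)               # 4 4 1 1
print(accuracy, precision, recall)  # 0.8 0.8 0.8
```

Sweeping the decision cut-off and recomputing these rates is what produces the ROC curve mentioned above.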

Days 19, 20, 21 and 22:

Q&A/Project discussion based on previous days

Decision Tree Modeling


I have just completed Data Science training at Radical institute. The trainer has in-depth knowledge and excellent teaching skills. He can identify the learning capacity of each student and trains from the grass roots. He starts from very basic points and gets all the prerequisites ready, so no one is in trouble. He is very attentive, always kept the sessions interesting and interactive, and explained concepts with real-time industry scenarios. I just want to say: sir, you are the best, because you brought out the best in us. I highly recommend his Python training as well. I also thank Radical Technologies for keeping such good and experienced faculties, through which we get high-quality training.

DataQubez University creates meaningful Big Data & Data Science certifications that are recognized in the industry as a confident measure of qualified, capable big data experts. How do we accomplish that mission? DataQubez certifications are exclusively hands-on, performance-based exams that require you to complete a set of tasks and demonstrate your expertise with the most sought-after technical skills. Big data success requires professionals who can prove their mastery of the tools and techniques of the Hadoop stack. However, experts predict a major shortage of advanced analytics skills over the next few years. At DataQubez, we are drawing on our industry leadership and a corpus of real-world experience to address the big data & Data Science talent gap.

How To Become a Certified Data Science Professional Engineer

Certification Code – DQCP – 501

Certification Description – DataQubez Certified Professional Data Science Engineer

Exam Objectives

Configuration :-

Define and deploy a rack topology script, Change the configuration of a service using Apache Hadoop, Configure the Capacity Scheduler, Create a home directory for a user and configure permissions, Configure the include and exclude DataNode files

Troubleshooting :-

Restart a cluster service, View an application's log file, Configure and manage alerts, Troubleshoot a failed job

High Availability :-

Configure NameNode, Configure ResourceManager, Copy data between two clusters, Create a snapshot of an HDFS directory, Recover a snapshot, Configure HiveServer2

Data Ingestion – with Sqoop & Flume :-

Import data from a table in a relational database into HDFS, Import the results of a query from a relational database into HDFS, Import a table from a relational database into a new or existing Hive table, Insert or update data from HDFS into a table in a relational database, Given a Flume configuration file, start a Flume agent, Given a configured sink and source, configure a Flume memory channel with a specified capacity

Data Transformation Using Pig :-

Write and execute a Pig script, Load data into a Pig relation without a schema, Load data into a Pig relation with a schema, Load data from a Hive table into a Pig relation, Use Pig to transform data into a specified format, Transform data to match a given Hive schema, Group the data of one or more Pig relations, Use Pig to remove records with null values from a relation, Store the data from a Pig relation into a folder in HDFS, Store the data from a Pig relation into a Hive table, Sort the output of a Pig relation, Remove the duplicate tuples of a Pig relation, Specify the number of reduce tasks for a Pig MapReduce job, Join two datasets using Pig, Perform a replicated join using Pig

Data Analysis Using Hive :-

Write and execute a Hive query, Define a Hive-managed table, Define a Hive external table, Define a partitioned Hive table, Define a bucketed Hive table, Define a Hive table from a select query, Define a Hive table that uses the ORCFile format, Create a new ORCFile table from the data in an existing non-ORCFile Hive table, Specify the storage format of a Hive table, Specify the delimiter of a Hive table, Load data into a Hive table from a local directory, Load data into a Hive table from an HDFS directory, Load data into a Hive table as the result of a query, Load a compressed data file into a Hive table, Update a row in a Hive table, Delete a row from a Hive table, Insert a new row into a Hive table, Join two Hive tables, Set a Hadoop or Hive configuration property from within a Hive query.

Data Processing through Spark & Spark SQL& Python :-

Frame big data analysis problems as Apache Spark scripts, Optimize Spark jobs through partitioning, caching, and other techniques, Develop distributed code using the Scala programming language, Build, deploy, and run Spark scripts on Hadoop clusters, Transform structured data using SparkSQL and DataFrames

Recommendation Engine using Spark MLlib & Python :-

Using MLlib to produce a recommendation engine, Run the PageRank algorithm, Using DataFrames with MLlib, Machine Learning with Spark

Stream Data Processing using Spark Streaming & Python :-

Process stream data using Spark Streaming.

Regression with Spark & Python :-

Introduction to Linear Regression, Introduction to Regression Section, Linear Regression Documentation, Alternate Linear Regression Data CSV File, Linear Regression Walkthrough, Linear Regression Project

Classification with Spark & Python :-

Classification, Classification Documentation, Spark Classification – Logistic Regression, Logistic Regression Amendments, Classification Project

Clustering with Spark & Python :-

Clustering with Spark & Python, KMeans, Example of KMeans with Spark & Python, Clustering Project

Model Evaluation & Python :-

Model Evaluation, Spark Model Evaluation, Spark – Model Evaluation – Regression

R Programming :-

Program in R, Create Data Visualizations, Use R to manipulate data easily, Use R for Data Science, Use R for Data Analysis, Use R to handle CSV, Excel, and SQL files or web scraping, Use R for Machine Learning Algorithms, Machine Learning with R – Linear Regression, Machine Learning with R – Logistic Regression

For exam registration for the DataQubez Certified Professional Data Science Engineer, click here:







The trainer for the Big Data & Data Science course has 11 years of experience in these technologies and is an industry expert. He is Cloudera certified, along with AWS (Solutions Architect) and GCP (Google Cloud Platform) certifications, and is also a certified data scientist from The University of Chicago.

  • Training by a real-time trainer with 11+ years of experience
  • A pool of 200+ real-time practical sessions on Data Science and Analytics
  • Scenarios and assignments to make sure you compete with current industry standards
  • World-class training methods
  • Training until the candidate gets placed
  • Certification and placement support until you get certified and placed
  • All training at a reasonable cost
  • 10,000+ satisfied candidates
  • 5,000+ placement records
  • Corporate and online training at a reasonable cost
  • Complete end-to-end project with each course
  • World-class lab facility with i3/i5/i7 servers and Cisco UCS servers
  • Covers topics beyond books that the IT industry requires
  • Resume and interview preparation with 100% hands-on practical sessions
  • Doubt-clearing sessions any time after the course
  • Happy to help you any time after the course

In the classroom we solve real-time problems, and we also push students to create at least a demo model and push their code into Git; in class we work through real-time Kaggle and data-world problems.

At Radical Technologies, we believe that the best way to learn job skills is from industry professionals. So we are building an alternate higher-education system, where you can learn job skills from industry experts and get certified by companies. We complete the course in classroom mode with 85% practical scenarios and complete hands-on work on each and every point of the course, and if a student faces any issue in future, he/she can also join the next batch. These courses are delivered through a live, interactive classroom platform.


Big Data with Cloud Computing (AWS) – Amazon Web Services

Big Data with Cloud Computing (GCP) – Google Cloud Platform

Big Data & Data Science with Cloud Computing (AWS) – Amazon Web Services

Big Data & Data Science with Cloud Computing (GCP) – Google Cloud Platform

Data Science with R & Spark with Python & Scala

Machine Learning with Google Cloud Platform with TensorFlow
