ADVANCED BIG DATA SCIENCE TRAINING
Learn Data Science, Deep Learning & Machine Learning with Python / R / SAS, with live Machine Learning & Deep Learning projects
Duration: 3 months – weekends, 3 hours each on Saturday and Sunday
Real-time projects, assignments and scenarios are part of this course
Data sets, installations, interview preparation and the option to repeat sessions for up to 6 months are all attractions of this particular course
Trainer: Experienced Data Science Consultant
Why Radical Technologies
Want to be a future Data Scientist?
Introduction: This course does not require a prior quantitative or mathematics background. It starts by introducing basic concepts such as the mean, median and mode, and eventually covers all aspects of an analytics (or data science) career, from analyzing and preparing raw data to visualizing your findings. If you’re a programmer or a fresh graduate looking to switch into an exciting new career track, or a data analyst looking to make the transition into the tech industry – this course will teach you the basic to advanced techniques used by real-world industry data scientists.
Data Science, Statistics with Python / R / SAS: This course is an introduction to Data Science and Statistics using the R programming language, Python or SAS. It covers both the theoretical aspects of statistical concepts and their practical implementation using R / Python / SAS. If you’re new to Python, don’t worry – the course starts with a crash course. If you’ve done some programming before, you should pick it up quickly. This course shows you how to get set up on Microsoft Windows-based PCs; the sample code will also run on macOS or Linux desktop systems.
Analytics: Using Spark and Scala, you can analyze and explore your data in an interactive environment with fast feedback. The course shows how to leverage the power of RDDs and DataFrames to manipulate data with ease.
Machine Learning and Data Science: Spark’s core functionality and built-in libraries make it easy to implement complex algorithms like recommendations with very few lines of code. We’ll cover a variety of datasets and algorithms, including PageRank, MapReduce-style processing and graph datasets.
Real life examples: Every concept is explained with the help of examples, case studies and source code in R wherever necessary. The examples cover a wide array of topics and range from A/B testing in an Internet company context to the Capital Asset Pricing Model in a quant finance context.
Who is the target audience?
- Engineering/Management graduates or post-graduate freshers who want to build their career in the Data Science industry or become future Data Scientists.
- Engineers who want to use a distributed computing engine for batch or stream processing or both
- Analysts who want to leverage Spark for analyzing interesting datasets
- Data Scientists who want a single engine for analyzing and modelling data as well as productionizing it.
- MBA Graduates or business professionals who are looking to move to a heavily quantitative role.
- Engineering Graduate/Professionals who want to understand basic statistics and lay a foundation for a career in Data Science
- Working professionals or fresh graduates who have mostly worked in descriptive analytics, or have not worked anywhere yet, and want to make the shift to being data scientists
- Professionals who’ve worked mostly with tools like Excel and want to learn how to use R for statistical analysis.
Course Outline
Introduction To Data Science
- What is Data Science?
- Why Python for data science?
- Relevance in industry and need of the hour
- How leading companies are harnessing the power of Data Science with Python
- Different phases of a typical Analytics/Data Science project and the role of Python
- Anaconda vs. Python
Python Essentials (Core)
- Overview of Python- Starting with Python
- Introduction to installation of Python
- Introduction to Python editors & IDEs (Canopy, PyCharm, Jupyter, Rodeo, IPython, etc.)
- Understand Jupyter notebook & Customize Settings
- Concept of packages/libraries – important packages (NumPy, SciPy, scikit-learn, Pandas, Matplotlib, etc.)
- Installing & loading packages & namespaces
- Data Types & Data objects/structures (strings, Tuples, Lists, Dictionaries)
- List and Dictionary Comprehensions
- Variable & Value Labels – Date & Time Values
- Basic Operations – Mathematical – string – date
- Reading and writing data
- Simple plotting
- Control flow & conditional statements
- Debugging & Code profiling
- How to create class and modules and how to call them?
- Scientific distributions used in Python for Data Science – NumPy, SciPy, Pandas, scikit-learn, statsmodels, NLTK, etc.
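To make these essentials concrete, here is a minimal sketch of the core data structures, comprehensions and basic operations listed above; all names and values are illustrative only.

from datetime import date

course = "Data Science"                                   # string
duration = (3, "months")                                  # tuple
topics = ["Python", "Statistics", "Machine Learning"]     # list
batch = {"mode": "weekend", "starts": date(2024, 1, 6)}   # dictionary (illustrative date)

# List and dictionary comprehensions
lengths = [len(t) for t in topics]                        # [6, 10, 16]
topic_index = {t: i for i, t in enumerate(topics)}        # {'Python': 0, ...}

# Basic operations on strings, numbers and dates
print(course.upper(), duration[0] * 4, "weekends")
print(lengths, topic_index)
print("Days until start:", (batch["starts"] - date.today()).days)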
Accessing/Importing And Exporting Data Using Python Modules
- Importing data from various sources (CSV, TXT, Excel, Access, etc.)
- Database Input (Connecting to database)
- Viewing Data objects – subsetting, methods
- Exporting Data to various formats
- Important Python modules: Pandas, BeautifulSoup
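A minimal sketch of importing and exporting data with Pandas and a database connection; the file names, table name and column used below (sales.csv, company.db, customers, region) are placeholders, not part of the course material.

import sqlite3

import pandas as pd

# Importing data from flat files (file names are placeholders)
df_csv = pd.read_csv("sales.csv")
df_xls = pd.read_excel("sales.xlsx", sheet_name="Q1")

# Database input: connect to a database and pull a table into a DataFrame
conn = sqlite3.connect("company.db")
df_db = pd.read_sql("SELECT * FROM customers", conn)

# Viewing and subsetting the resulting data objects
print(df_csv.head())
print(df_db[df_db["region"] == "West"])

# Exporting data to various formats
df_csv.to_excel("sales_export.xlsx", index=False)
df_db.to_json("customers.json", orient="records")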
Data Manipulation – Cleansing – Munging Using Python Modules
- Cleansing Data with Python
- Data manipulation steps (sorting, filtering, duplicates, merging, appending, subsetting, derived variables, sampling, data type conversions, renaming, formatting, etc.)
- Data manipulation tools (operators, functions, packages, control structures, loops, arrays, etc.)
- Python built-in functions (text, numeric, date, utility functions)
- Python User Defined Functions
- Stripping out extraneous information
- Normalizing data
- Formatting data
- Important Python modules for data manipulation (Pandas, NumPy, re, math, string, datetime, etc.)
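A small sketch of typical cleansing and manipulation steps in Pandas (de-duplication, type conversion, renaming, derived variables, merging, filtering and sorting); the tiny order/customer tables are invented purely for illustration.

import pandas as pd

# Invented sample data, stored with deliberate quirks to clean up
orders = pd.DataFrame({
    "order_id": [1, 2, 2, 3],
    "cust_id":  [101, 102, 102, 103],
    "amount":   ["250", "400", "400", "150"],   # numbers stored as text
})
customers = pd.DataFrame({
    "cust_id": [101, 102, 103],
    "city": ["Pune", "Mumbai", "Pune"],
})

orders = orders.drop_duplicates()                           # remove duplicate rows
orders["amount"] = orders["amount"].astype(int)             # data type conversion
orders = orders.rename(columns={"cust_id": "customer_id"})  # renaming
orders["amount_thousands"] = orders["amount"] / 1000        # derived variable

customers = customers.rename(columns={"cust_id": "customer_id"})
merged = orders.merge(customers, on="customer_id", how="left")   # merging

# Filtering and sorting
big_orders = merged[merged["amount"] > 200].sort_values("amount", ascending=False)
print(big_orders)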
Data Analysis – Visualization Using Python
- Introduction to exploratory data analysis
- Descriptive statistics, Frequency Tables and summarization
- Univariate Analysis (Distribution of data & Graphical Analysis)
- Bivariate Analysis (cross tabs, distributions & relationships, graphical analysis)
- Creating graphs (bar/pie/line chart/histogram/boxplot/scatter/density, etc.)
- Important packages for exploratory analysis (NumPy arrays, Matplotlib, seaborn, Pandas, scipy.stats, etc.)
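A compact sketch of univariate and bivariate exploration with Pandas, Matplotlib and seaborn; the data set is synthetic (generated with NumPy) purely for illustration.

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

# Synthetic data set for demonstration only
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "age": rng.normal(35, 8, 200).round(),
    "income": rng.normal(50000, 12000, 200),
    "segment": rng.choice(["A", "B", "C"], 200),
})

print(df.describe())                                # descriptive statistics
print(df["segment"].value_counts())                 # frequency table
print(pd.crosstab(df["segment"], df["age"] > 40))   # cross tab

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
df["age"].plot(kind="hist", ax=axes[0], title="Univariate: histogram")
sns.boxplot(x="segment", y="income", data=df, ax=axes[1])   # distribution by group
axes[2].scatter(df["age"], df["income"])                    # bivariate relationship
axes[2].set_title("Bivariate: scatter")
plt.tight_layout()
plt.show()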
Basic Statistics & Implementation Of Stats Methods In Python
- Basic Statistics – Measures of Central Tendencies and Variance
- Building blocks – Probability Distributions – Normal distribution – Central Limit Theorem
- Inferential Statistics -Sampling – Concept of Hypothesis Testing
- Statistical Methods – Z/t-tests (one sample, independent, paired), ANOVA, correlation and chi-square
- Important modules for statistical methods: NumPy, SciPy, Pandas
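A minimal sketch of the tests listed above using scipy.stats; the two samples and the 2x2 contingency table are synthetic and purely illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(100, 15, 50)   # synthetic samples for illustration
group_b = rng.normal(108, 15, 50)

# One-sample t-test: is the mean of group_a different from 100?
t1, p1 = stats.ttest_1samp(group_a, popmean=100)

# Independent two-sample t-test between the groups
t2, p2 = stats.ttest_ind(group_a, group_b)

# Pearson correlation and a chi-square test of independence
r, p3 = stats.pearsonr(group_a, group_b)
chi2, p4, dof, _ = stats.chi2_contingency([[30, 20], [15, 35]])

print(f"one-sample p={p1:.3f}, two-sample p={p2:.3f}, "
      f"correlation r={r:.2f}, chi-square p={p4:.3f}")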
Python: Machine Learning -Predictive Modeling – Basics
- Introduction to Machine Learning & Predictive Modeling
- Types of Business problems – Mapping of Techniques – Regression vs. classification vs. segmentation vs. Forecasting
- Major Classes of Learning Algorithms -Supervised vs Unsupervised Learning
- Different Phases of Predictive Modeling (Data Pre-processing, Sampling, Model Building, Validation)
- Overfitting (Bias-Variance Trade off) & Performance Metrics
- Feature engineering & dimension reduction
- Concept of optimization & cost function
- Concept of gradient descent algorithm
- Concept of cross validation (bootstrapping, K-fold validation, etc.)
- Model performance metrics (R-square, RMSE, MAPE, AUC, ROC curve, recall, precision, sensitivity, specificity, confusion matrix)
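A brief sketch of the model-building workflow described above – hold-out sampling, K-fold cross validation and performance metrics – using scikit-learn on a synthetic classification data set (no real business data is assumed).

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic classification data stands in for a business data set
X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1)          # sampling / hold-out split

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# K-fold cross validation guards against overfitting
cv_scores = cross_val_score(model, X_train, y_train, cv=5)

# Performance metrics: confusion matrix and AUC on unseen data
pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]
print("CV accuracy:", cv_scores.mean())
print(confusion_matrix(y_test, pred))
print("AUC:", roc_auc_score(y_test, proba))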
Machine Learning Algorithms & Applications – Implementation In Python
- Linear & Logistic Regression
- Segmentation – Cluster Analysis (K-Means)
- Decision Trees (CART/C5.0)
- Ensemble Learning (Random Forest, Bagging & boosting)
- Artificial Neural Networks(ANN)
- Support Vector Machines(SVM)
- Other Techniques (KNN, Naïve Bayes, PCA)
- Introduction to Text Mining using NLTK
- Introduction to Time Series Forecasting (Decomposition & ARIMA)
- Important Python modules for Machine Learning (scikit-learn, statsmodels, SciPy, NLTK, etc.)
- Fine-tuning the models using hyperparameters, grid search, pipelines, etc.
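As one example of fine-tuning, the sketch below chains scaling and a Random Forest in a scikit-learn Pipeline and searches hyperparameters with GridSearchCV; the data set and parameter grid are illustrative choices, not prescribed by the course.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A pipeline chains preprocessing and the estimator into one object
pipe = Pipeline([("scale", StandardScaler()),
                 ("rf", RandomForestClassifier(random_state=0))])

# Grid search over hyperparameters with cross validation
grid = GridSearchCV(pipe,
                    param_grid={"rf__n_estimators": [100, 300],
                                "rf__max_depth": [None, 5, 10]},
                    cv=5)
grid.fit(X_train, y_train)

print("best params:", grid.best_params_)
print("test accuracy:", grid.score(X_test, y_test))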
Project – Consolidate Learnings
- Applying different algorithms to solve the business problems and benchmarking the results
Introduction To Big Data
- Introduction and Relevance
- Uses of Big Data analytics in various industries like Telecom, E-commerce, Finance and Insurance, etc.
- Problems with Traditional Large-Scale Systems
Hadoop(Big Data) Eco-System
- Motivation for Hadoop
- Different types of projects by Apache
- Role of projects in the Hadoop Ecosystem
- Key technology foundations required for Big Data
- Limitations and Solutions of existing Data Analytics Architecture
- Comparison of traditional data management systems with Big Data management systems
- Evaluate key framework requirements for Big Data analytics
- Hadoop Ecosystem & Hadoop 2.x core components
- Explain the relevance of real-time data
- Explain how to use Big Data and real-time data as a Business planning tool
Hadoop Cluster-Architecture-Configuration Files
- Hadoop Master-Slave Architecture
- The Hadoop Distributed File System – Concept of data storage
- Explain different types of cluster setups (fully distributed, pseudo-distributed, etc.)
- Hadoop cluster set up – Installation
- Hadoop 2.x Cluster Architecture
- A Typical enterprise cluster – Hadoop Cluster Modes
- Understanding cluster management tools like Cloudera Manager/Apache Ambari
Hadoop-HDFS & MapReduce (YARN)
- HDFS Overview & Data storage in HDFS
- Getting data into Hadoop from the local machine (data loading techniques) and vice versa
- Map Reduce Overview (Traditional way Vs. MapReduce way)
- Concept of Mapper & Reducer
- Understanding MapReduce program Framework
- Develop MapReduce programs using Java (Basic)
- Develop MapReduce programs with the Streaming API (Basic)
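The Streaming API lets mappers and reducers be written in any language that reads standard input and writes standard output. Below is a minimal Python word-count sketch; in practice the two roles live in separate scripts (names such as mapper.py/reducer.py or wordcount.py are placeholders) and are submitted through the hadoop-streaming jar with its -mapper, -reducer, -input and -output options.

#!/usr/bin/env python
# Minimal word-count sketch for the Hadoop Streaming API.
import sys


def mapper():
    # Mapper: emit "word<TAB>1" for every word read from standard input.
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")


def reducer():
    # Reducer: input arrives sorted by key, so counts for a word are adjacent.
    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t", 1)
        if word == current:
            count += int(value)
        else:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, int(value)
    if current is not None:
        print(f"{current}\t{count}")


if __name__ == "__main__":
    # Run as "python wordcount.py map" for the map phase, otherwise reduce.
    mapper() if sys.argv[1:] == ["map"] else reducer()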
Data Integration Using Sqoop & Flume
- Integrating Hadoop into an Existing Enterprise
- Loading Data from an RDBMS into HDFS by Using Sqoop
- Managing Real-Time Data Using Flume
- Accessing HDFS from Legacy Systems
Data Analysis Using Pig
- Introduction to Data Analysis Tools
- Apache PIG – MapReduce Vs Pig, Pig Use Cases
- PIG’s Data Model
- PIG Streaming
- Pig Latin Program & Execution
- Pig Latin: Relational Operators, File Loaders, Group Operator, COGROUP Operator, Joins and COGROUP, Union, Diagnostic Operators, Pig UDFs
- Writing Java UDFs
- Embedding Pig in Java
- PIG Macros
- Parameter Substitution
- Use Pig to automate the design and implementation of MapReduce applications
- Use Pig to apply structure to unstructured Big Data
Data Analysis Using Hive
- Apache Hive – Hive Vs. PIG – Hive Use Cases
- Discuss the Hive data storage principle
- Explain the File formats and Records formats supported by the Hive environment
- Perform operations with data in Hive
- Hive QL: Joining Tables, Dynamic Partitioning, Custom Map/Reduce Scripts
- Hive Script, Hive UDF
- Hive Persistence formats
- Loading data in Hive – Methods
- Serialization & Deserialization
- Handling Text data using Hive
- Integrating external BI tools with Hadoop Hive
Data Analysis Using Impala
- Impala & Architecture
- How Impala executes Queries and its importance
- Hive vs. PIG vs. Impala
- Extending Impala with User Defined functions
Introduction To Other Ecosystem Tools
- NoSQL database – Hbase
- Introduction to Oozie
Spark: Introduction
- Introduction to Apache Spark
- Streaming Data Vs. In Memory Data
- Map Reduce Vs. Spark
- Modes of Spark
- Spark Installation Demo
- Overview of Spark on a cluster
- Spark Standalone Cluster
Spark: Spark In Practice
- Invoking Spark Shell
- Creating the Spark Context
- Loading a File in Shell
- Performing Some Basic Operations on Files in Spark Shell
- Caching Overview
- Distributed Persistence
- Spark Streaming Overview (Example: Streaming Word Count)
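A minimal PySpark sketch of the shell workflow above – creating the context, loading a file, basic transformations/actions and caching; the file name logs.txt and the search term ERROR are placeholders.

from pyspark import SparkContext

# Creating the Spark context (the interactive shell provides one as `sc`)
sc = SparkContext("local[*]", "SparkInPractice")

# Loading a file and performing some basic operations
lines = sc.textFile("logs.txt")      # file name is a placeholder
lines.cache()                        # caching for repeated use

errors = lines.filter(lambda l: "ERROR" in l)
words = lines.flatMap(lambda l: l.split())
counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)

print("total lines:", lines.count())
print("error lines:", errors.count())
print("top words:", counts.takeOrdered(5, key=lambda kv: -kv[1]))

sc.stop()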
Spark: Spark Meets Hive
- Analyze Hive and Spark SQL Architecture
- Analyze Spark SQL
- Context in Spark SQL
- Implement a sample example for Spark SQL
- Integrating hive and Spark SQL
- Support for JSON and Parquet file formats
- Implement data visualization in Spark
- Loading of Data
- Hive Queries through Spark
- Performance Tuning Tips in Spark
- Shared Variables: Broadcast Variables & Accumulators
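A short PySpark sketch tying together Hive queries through Spark SQL, the JSON/Parquet file formats and a broadcast variable; it assumes a configured Hive metastore, and the table name, file paths and lookup values are placeholders.

from pyspark.sql import SparkSession

# SparkSession with Hive support (requires a configured Hive metastore)
spark = (SparkSession.builder
         .appName("SparkMeetsHive")
         .enableHiveSupport()
         .getOrCreate())

# Running a Hive query through Spark SQL (table name is illustrative)
sales = spark.sql("SELECT region, SUM(amount) AS total "
                  "FROM sales GROUP BY region")
sales.show()

# Reading and writing the JSON and Parquet formats (paths are placeholders)
events = spark.read.json("events.json")
events.write.mode("overwrite").parquet("events_parquet")

# Shared variables: a broadcast variable is sent once to every executor
lookup = spark.sparkContext.broadcast({"W": "West", "E": "East"})
print(lookup.value["W"])

spark.stop()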
Spark Streaming
- Extract and analyze data from Twitter using Spark Streaming
- Comparison of Spark and Storm – Overview
Spark GraphX
- Overview of the GraphX module in Spark
- Creating graphs with GraphX
Introduction To Machine Learning Using Spark
- Understand Machine learning framework
- Implement some of the ML algorithms using Spark MLLib
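A minimal sketch of an MLlib workflow in PySpark – assembling features and fitting a logistic regression; the tiny in-memory data set is invented for illustration, and a real project would load data from HDFS/Hive and use a proper train/test split or cross validation.

from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("MLlibIntro").getOrCreate()

# Invented in-memory data; a real project would read from HDFS or Hive
df = spark.createDataFrame(
    [(35.0, 50000.0, 0.0), (42.0, 82000.0, 1.0), (29.0, 40000.0, 0.0),
     (51.0, 95000.0, 1.0), (38.0, 61000.0, 1.0), (23.0, 30000.0, 0.0)],
    ["age", "income", "label"])

# MLlib expects the features packed into a single vector column
assembler = VectorAssembler(inputCols=["age", "income"], outputCol="features")
data = assembler.transform(df)

model = LogisticRegression(featuresCol="features", labelCol="label").fit(data)

# Evaluated on the training data here only to keep the sketch short
predictions = model.transform(data)
auc = BinaryClassificationEvaluator(labelCol="label").evaluate(predictions)
print("Training AUC:", auc)

spark.stop()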
Project
- Consolidate all the learnings
- Working on Big Data Project by integrating various key components
Learn Advanced Big Data Science – Course in Pune with Training, Certification & Guaranteed Job Placement Assistance!
Online Batches Available for the Areas
Ambegaon Budruk | Aundh | Baner | Bavdhan Khurd | Bavdhan Budruk | Balewadi | Shivajinagar | Bibvewadi | Bhugaon | Bhukum | Dhankawadi | Dhanori | Dhayari | Erandwane | Fursungi | Ghorpadi | Hadapsar | Hingne Khurd | Karve Nagar | Kalas | Katraj | Khadki | Kharadi | Kondhwa | Koregaon Park | Kothrud | Lohagaon | Manjri | Markal | Mohammed Wadi | Mundhwa | Nanded | Parvati (Parvati Hill) | Panmala | Pashan | Pirangut | Shivane | Sus | Undri | Vishrantwadi | Vitthalwadi | Vadgaon Khurd | Vadgaon Budruk | Vadgaon Sheri | Wagholi | Wanwadi | Warje | Yerwada | Akurdi | Bhosari | Chakan | Charholi Budruk | Chikhli | Chimbali | Chinchwad | Dapodi | Dehu Road | Dighi | Dudulgaon | Hinjawadi | Kalewadi | Kasarwadi | Maan | Moshi | Phugewadi | Pimple Gurav | Pimple Nilakh | Pimple Saudagar | Pimpri | Ravet | Rahatani | Sangvi | Talawade | Tathawade | Thergaon | Wakad