What you’ll learn

  • OOP and Functional Programming in Scala
  • The Apache Spark Framework
  • Advanced Spark Programming
  • Integrating Spark with Kafka
  • Spark MLlib – Machine Learning
  • Spark Streaming, Spark SQL, Spark GraphX, and more


This course on Apache Spark and Scala aims to provide advanced expertise in the big data Hadoop ecosystem. It builds a skill set that helps a Big Data Hadoop developer grow into a specialist role.

Apache Spark is a lightning-fast cluster computing framework designed for fast computation.

The course starts with a detailed description of the limitations of MapReduce and how Spark helps overcome them. It then takes a deeper dive into the Scala programming language.
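As a taste of the Scala material, here is a minimal sketch combining Scala's object-oriented and functional sides: a sealed trait with case classes, pattern matching, and higher-order collection functions. The names (`Shape`, `area`, and so on) are illustrative, not taken from the course:

```scala
// Illustrative sketch of Scala OOP + FP (names are hypothetical, not course code).
// A sealed trait with case classes enables exhaustive pattern matching.
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rectangle(width: Double, height: Double) extends Shape

object ShapeDemo {
  // Pattern matching dispatches on the concrete case class.
  def area(s: Shape): Double = s match {
    case Circle(r)       => math.Pi * r * r
    case Rectangle(w, h) => w * h
  }

  def main(args: Array[String]): Unit = {
    val shapes = List(Circle(1.0), Rectangle(2.0, 3.0))
    // Higher-order functions: map each shape to its area, then sum.
    val total = shapes.map(area).sum
    println(f"Total area: $total%.2f")
  }
}
```

Case classes give immutable data with structural equality for free, and `sealed` lets the compiler warn when a `match` misses a case, which is why this pattern recurs throughout Spark code as well.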

Moving on, it covers running Spark as a standalone cluster and builds an understanding of Resilient Distributed Datasets (RDDs).
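RDD transformations such as `flatMap`, `map`, `filter`, and `reduceByKey` closely mirror Scala collection operations. Below is a local sketch of the classic RDD word count, with plain Scala collections standing in for an RDD so it runs without a cluster; the comments note the corresponding Spark calls:

```scala
// Local sketch of the classic RDD word count.
// Plain Scala collections stand in for an RDD, so this runs without a cluster;
// on Spark the pipeline would be roughly:
//   sc.textFile(path).flatMap(_.split("\\s+")).map((_, 1)).reduceByKey(_ + _)
object WordCountSketch {
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split("\\s+"))           // like rdd.flatMap: lines -> words
      .filter(_.nonEmpty)                 // like rdd.filter: drop empty tokens
      .map(word => (word.toLowerCase, 1)) // like rdd.map: word -> (key, 1) pair
      .groupBy(_._1)                      // groupBy + sum simulates reduceByKey(_ + _)
      .map { case (word, pairs) => (word, pairs.map(_._2).sum) }

  def main(args: Array[String]): Unit = {
    val lines = Seq("spark makes big data simple", "big data needs spark")
    wordCount(lines).toSeq.sortBy(-_._2).foreach(println)
  }
}
```

The key difference on a real cluster is that each transformation is lazy and distributed across partitions; nothing executes until an action (such as `collect` or `saveAsTextFile`) is called.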

The course also covers Spark SQL, running SQL queries through SQLContext and Hive queries through HiveContext.
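As a sketch of what that looks like, the snippet below registers a small dataset as a temporary table and queries it through a SQLContext (Spark 1.x style, matching the SQLContext/HiveContext APIs named above). It assumes the `spark-core` and `spark-sql` dependencies are on the classpath, and the `Person` data is invented for illustration:

```scala
// Sketch of Spark SQL via SQLContext (requires spark-core and spark-sql on the classpath).
// For Hive queries, a HiveContext from spark-hive would replace the SQLContext below.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Case class used to infer the table schema by reflection (illustrative data).
case class Person(name: String, age: Int)

object SqlSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SqlSketch").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Turn an RDD of case classes into a DataFrame and expose it to SQL.
    val people = sc.parallelize(Seq(Person("alice", 34), Person("bob", 28))).toDF()
    people.registerTempTable("people")

    // Plain SQL through the SQLContext.
    val adults = sqlContext.sql("SELECT name FROM people WHERE age >= 30")
    adults.show()

    sc.stop()
  }
}
```

In later Spark versions both contexts are unified under `SparkSession`, but the query pattern is the same.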

This course provides the material required for building a career path from Big Data Hadoop developer to Big Data Hadoop architect.

This course has been prepared for professionals aspiring to learn the basics of Big Data analytics using the Spark framework and become Spark developers. It is also useful for analytics professionals and ETL developers.

Before proceeding with this course, we assume that you have prior exposure to Scala programming, database concepts, and a flavor of the Linux operating system.
