Google Cloud Platform is catching up, and many companies have already started moving their infrastructure to GCP. This course provides practical solutions to real-world data engineering use cases on the cloud. It is designed around the end-to-end lifecycle of a typical Big Data ETL project, covering both batch processing and real-time streaming and analytics. Focusing on the most important components of any batch or streaming job, this course covers:

- Writing ETL jobs using PySpark from scratch
- Storage components on GCP (GCS and Dataproc HDFS)
- Loading data into the data warehousing tool on GCP (BigQuery)
- Handling and writing data orchestration and dependencies using Apache Airflow (Cloud Composer) in Python, from scratch
- Batch data ingestion using Sqoop, Cloud SQL, and Apache Airflow
- Real-time data streaming and analytics using the latest API, Spark Structured Streaming with Python
- Micro-batching using PySpark Streaming and Hive on Dataproc

The coding tutorials and problem statements in this course are comprehensive and will give you the confidence to take on new challenges in the Big Data / Hadoop ecosystem on the cloud and to approach problem statements and job interviews without inhibition.

Note that this course uses Ubuntu 18.04 as the local operating system. Although most of the code is run and triggered on the cloud, you are expected to be able to set up the Google Cloud SDK, Python, and a GCP account on your own machine, since the choice of local operating system does not affect your ability to succeed in this course.

P.S.: Use code 88BA1461141F3A2A6E2D for half price.