Apache Beam is a unified and portable programming model for both batch and streaming use cases. Earlier, Spark, Flink and Cloud Dataflow jobs could only run on their respective clusters. Apache Beam now offers a portable programming model with which we can build language-agnostic Big Data pipelines and run them on any supported engine: Apache Spark, Apache Flink, Google Cloud Dataflow and many more. Apache Beam is the future of building Big Data processing pipelines and is set to be widely adopted thanks to its portability; many large companies have already started deploying Beam pipelines on their production servers. (A minimal pipeline sketch follows this overview.)

What's included in the course?

- Complete Apache Beam concepts explained from scratch to real-time implementation.
- Each and every Apache Beam concept is explained with a HANDS-ON example.
- Covers even those concepts whose explanation is not very clear in Apache Beam's official documentation.
- Build 2 real-time Big Data case studies using Beam.
- Load data to Google BigQuery tables from a Beam pipeline.
- Code and datasets used in the lectures are attached to the course for your convenience.
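To give a flavour of the portability described above, here is a minimal sketch of a Beam pipeline in the Python SDK (the input path, project, dataset and table names are placeholders, not taken from the course): it counts words and writes the result to a BigQuery table, and the same code can be submitted to the Spark, Flink or Dataflow runner simply by changing the runner pipeline option.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# The runner is just a pipeline option; switching from DirectRunner to
# SparkRunner, FlinkRunner or DataflowRunner does not change the pipeline code.
options = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadLines" >> beam.io.ReadFromText("input.txt")            # hypothetical input file
        | "SplitWords" >> beam.FlatMap(lambda line: line.split())
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "CountPerWord" >> beam.CombinePerKey(sum)
        | "ToRow" >> beam.Map(lambda kv: {"word": kv[0], "count": kv[1]})
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            table="my-project:my_dataset.word_counts",                # hypothetical table
            schema="word:STRING,count:INTEGER",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```

Running the same script on Dataflow, for example, only requires passing the Dataflow-specific options (project, region, temp location) and setting the runner option to DataflowRunner; the transforms themselves stay untouched.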