The SnowPro Advanced: Data Engineer mock exam tests the advanced knowledge and skills used to apply comprehensive data engineering principles using Snowflake. This practice test assesses a candidate's ability to:

- Source data from data lakes, APIs, and on-premises systems
- Transform, replicate, and share data across cloud platforms
- Design end-to-end near real-time streams
- Design scalable compute solutions for data engineering workloads
- Evaluate performance metrics

| Domain | Estimated Percentage Range of Exam Questions |
| --- | --- |
| 1.0 Data Movement | 35-40% |
| 2.0 Performance Optimization | 20-25% |
| 3.0 Storage and Data Protection | 10-15% |
| 4.0 Security | 10-15% |
| 5.0 Data Transformation | 15-20% |

## 1.0 Domain: Data Movement

**1.1 Given a data set, load data into Snowflake.**
- Outline considerations for data loading
- Define data loading features and potential impact

**1.2 Ingest data of various formats through the mechanics of Snowflake.**
- Required data formats
- Outline stages

**1.3 Troubleshoot data ingestion.**

**1.4 Design, build, and troubleshoot continuous data pipelines.** (See the stream-and-task sketch after this outline.)
- Design a data pipeline that forces uniqueness but is not unique
- Stages
- Tasks
- Streams
- Snowpipe
- Auto-ingest as compared to the REST API

**1.5 Analyze and differentiate types of data pipelines.**

**1.6 Install, configure, and use connectors to connect to Snowflake.**

**1.7 Design and build data sharing solutions.** (See the secure-view sketch after this outline.)
- Implement a data share
- Create a secure view
- Implement row-level filtering

**1.8 Outline when to use external tables and define how they work.**
- Partitioning external tables
- Materialized views
- Partitioned data unloading

## 2.0 Domain: Performance Optimization

**2.1 Troubleshoot underperforming queries.**
- Identify underperforming queries
- Outline telemetry around the operation
- Increase efficiency
- Identify the root cause

**2.2 Given a scenario, configure a solution for the best performance.**
- Scale out vs. scale in
- Cluster vs. increase warehouse size
- Query complexity
- Micro-partitions and the impact of clustering
- Materialized views
- Search optimization

**2.3 Outline and use caching features.**

**2.4 Monitor continuous data pipelines.**
- Snowpipe
- Stages

## 3.0 Domain: Storage and Data Protection

**3.1 Implement data recovery features in Snowflake.**
- Time Travel
- Fail-safe

**3.2 Outline the impact of streams on Time Travel.**

**3.3 Use system functions to analyze micro-partitions.** (See the clustering sketch after this outline.)
- Clustering depth
- Cluster keys

**3.4 Use Time Travel and cloning to create new development environments.** (See the cloning sketch after this outline.)
- Back up databases
- Test changes before deployment
- Rollback

## 4.0 Domain: Security

**4.1 Outline Snowflake security principles.**
- Authentication methods (Single Sign-On, key pair authentication, username/password, MFA)
- Role-Based Access Control (RBAC)

**4.2 Outline the system-defined roles and when they should be applied.**
- The purpose of each of the system-defined roles, including best-practice usage in each case
- The primary differences between the SECURITYADMIN and USERADMIN roles
- The difference between the purpose and usage of the USERADMIN/SECURITYADMIN roles and SYSADMIN

**4.3 Outline column-level security.** (See the masking-policy sketch after this outline.)
- Explain the options available to support column-level security, including Dynamic Data Masking and External Tokenization
- DDL required to manage Dynamic Data Masking
- Methods and best practices for creating and applying masking policies on data

## 5.0 Domain: Data Transformation

**5.1 Define User-Defined Functions (UDFs) and outline how to use them.** (See the UDF sketch after this outline.)
- Secure UDFs
- SQL UDFs
- JavaScript UDFs
- Returning a table value vs. a scalar value

**5.2 Define and create external functions.**
- Secure external functions

**5.3 Design, build, and leverage stored procedures.**
- Transaction management

**5.4 Handle and transform semi-structured data.** (See the semi-structured sketch after this outline.)
- Traverse and transform semi-structured data to structured data
- Transform structured data to semi-structured data

**5.5 Outline different data schemas.**
- Star
- Data lake
- Data vault
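The sketches below make a few of the outlined topics concrete. First, for section 1.4, a minimal continuous pipeline: a stream captures changes on a landing table and a scheduled task merges them into a target. All object names (`raw_orders`, `orders_stream`, `orders_final`, `merge_orders_task`, `transform_wh`) are hypothetical. The `MERGE` keyed on `order_id` is one way a pipeline can "force uniqueness" in the target even though Snowflake does not enforce unique constraints.

```sql
-- Capture row-level changes (inserts, updates, deletes) on the landing table.
CREATE OR REPLACE STREAM orders_stream ON TABLE raw_orders;

-- Scheduled task that only runs when the stream actually has data.
CREATE OR REPLACE TASK merge_orders_task
  WAREHOUSE = transform_wh
  SCHEDULE  = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
  MERGE INTO orders_final t
  USING orders_stream s
    ON t.order_id = s.order_id
  WHEN MATCHED THEN UPDATE SET t.amount = s.amount
  WHEN NOT MATCHED THEN INSERT (order_id, amount)
    VALUES (s.order_id, s.amount);

-- Tasks are created suspended; resume to start the schedule.
ALTER TASK merge_orders_task RESUME;
```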
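For section 1.7, a sketch of row-level filtering behind a secure view, assuming hypothetical `sales` and `region_map` tables. A mapping table keyed on `CURRENT_ROLE()` restricts which rows each role sees; for views published to consumer accounts via a share, `CURRENT_ACCOUNT()` is the usual filter instead.

```sql
-- Mapping table: which role may see which region (hypothetical schema).
CREATE OR REPLACE TABLE region_map (role_name STRING, region STRING);
INSERT INTO region_map VALUES ('EMEA_ANALYST', 'EMEA'), ('APAC_ANALYST', 'APAC');

-- SECURE hides the view definition and prevents optimizer-level data leakage.
CREATE OR REPLACE SECURE VIEW sales_filtered AS
  SELECT s.*
  FROM sales s
  JOIN region_map m
    ON  m.region    = s.region
    AND m.role_name = CURRENT_ROLE();
```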
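For section 3.3, the two system functions most often used to analyze micro-partition clustering, run here against a hypothetical `sales` table with `order_date` as the candidate clustering key. A clustering depth close to 1 indicates little micro-partition overlap on that column.

```sql
-- Average overlap depth for the given column(s); lower is better clustered.
SELECT SYSTEM$CLUSTERING_DEPTH('sales', '(order_date)');

-- Detailed JSON output: partition counts, overlap/depth histograms, notes.
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(order_date)');
```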
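For section 3.4, zero-copy cloning combined with Time Travel, assuming hypothetical `prod_db`/`dev_db` names and a placeholder query ID. A clone shares micro-partitions with its source, so a development environment or rollback copy consumes no additional storage until the data diverges.

```sql
-- Development environment: clone production as it existed one hour ago.
CREATE DATABASE dev_db CLONE prod_db AT (OFFSET => -3600);

-- Rollback: recreate a table as it was just before a bad statement ran
-- ('<query_id>' is a placeholder for the offending statement's query ID).
CREATE OR REPLACE TABLE prod_db.public.orders_restored
  CLONE prod_db.public.orders BEFORE (STATEMENT => '<query_id>');
```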
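For section 4.3, the core Dynamic Data Masking DDL: create a masking policy, attach it to a column, and detach it. The policy, table, and role names (`email_mask`, `customers`, `PII_ADMIN`) are hypothetical.

```sql
-- Masking policy: unmask only for an authorized role; evaluated at query time.
CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() = 'PII_ADMIN' THEN val
    ELSE '***MASKED***'
  END;

-- Apply the policy to a column...
ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;

-- ...and remove it when no longer required.
ALTER TABLE customers MODIFY COLUMN email UNSET MASKING POLICY;
```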
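For section 5.1, the difference between a scalar UDF and a tabular UDF (UDTF), plus the `SECURE` keyword. Function and table names are hypothetical.

```sql
-- Scalar SQL UDF: returns one value per call.
CREATE OR REPLACE FUNCTION area_of_circle(radius FLOAT)
  RETURNS FLOAT
  AS 'PI() * radius * radius';

-- Secure tabular SQL UDF: returns a set of rows; SECURE hides the body.
CREATE OR REPLACE SECURE FUNCTION orders_for(cust_id NUMBER)
  RETURNS TABLE (order_id NUMBER, amount NUMBER)
  AS 'SELECT order_id, amount FROM orders WHERE customer_id = cust_id';

-- Scalar UDFs are called inline; tabular UDFs via TABLE().
SELECT area_of_circle(2.0);
SELECT * FROM TABLE(orders_for(42));
```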
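Finally, for section 5.4, both directions of semi-structured transformation, assuming a hypothetical `raw_json` table with a `VARIANT` column `v` and a structured `order_lines` table: `LATERAL FLATTEN` explodes a JSON array into rows, and `OBJECT_CONSTRUCT` rebuilds a document per row.

```sql
-- Semi-structured -> structured: cast VARIANT paths, explode the items array.
SELECT
  r.v:customer.id::NUMBER AS customer_id,
  i.value:sku::STRING     AS sku,
  i.value:qty::NUMBER     AS qty
FROM raw_json r,
     LATERAL FLATTEN(input => r.v:items) i;

-- Structured -> semi-structured: one VARIANT document per row.
SELECT OBJECT_CONSTRUCT(
         'customer_id', customer_id,
         'sku',         sku,
         'qty',         qty) AS doc
FROM order_lines;
```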