Solve real-world data problems and create data-driven workflows for easy data movement and processing at scale with Azure Data Factory
Key Features
- Learn how to load and transform data from various sources, both on-premises and in the cloud
- Use Azure Data Factory’s visual environment to build and manage hybrid ETL pipelines
- Discover how to prepare, transform, process, and enrich data to generate key insights
Book Description
This new edition of the Azure Data Factory Cookbook, fully updated to reflect ADF V2, will help you get up and running by showing you how to create and execute your first job in ADF.
You’ll learn how to branch and chain activities, create custom activities, and schedule pipelines, as well as discover the benefits of cloud data warehousing, Azure Synapse Analytics, and Azure Data Lake Storage Gen2.
With practical recipes, you’ll learn how to actively engage with analytical tools from Azure’s data services and leverage your on-premises infrastructure with cloud-native tools to get relevant business insights. As you advance, you’ll be able to integrate the most commonly used Azure services into ADF and understand how Azure services can be useful in designing ETL pipelines. You’ll familiarize yourself with the common errors that you may encounter while working with ADF and find out how to use the Azure portal to monitor pipelines. You’ll also understand error messages and resolve problems in connectors and data flows with the debugging capabilities of ADF.
Two new chapters covering Azure Data Explorer and key best practices have been added, along with new recipes throughout.
By the end of this book, you’ll be able to use ADF as the main ETL and orchestration tool for your Data Warehouse or Data Platform projects.
What you will learn
- Create an orchestration and transformation job in ADF
- Develop, execute, and monitor Data Flows using Azure Synapse Analytics
- Create Big Data pipelines using Databricks and Delta tables
- Work with Big Data in Azure Data Lake Storage Gen2 using Spark pools
- Migrate on-premises SSIS jobs to ADF
- Integrate ADF with commonly used Azure services such as Azure ML, Azure Logic Apps, and Azure Functions
- Run big data compute jobs within HDInsight and Azure Databricks
- Copy data from AWS S3 and Google Cloud Storage to Azure Storage using ADF's built-in connectors
Who this book is for
This book is for ETL developers, data warehouse and ETL architects, software professionals, and anyone else who wants to learn about the common and not-so-common challenges faced while developing traditional and hybrid ETL solutions using Microsoft's Azure Data Factory. You’ll also find this book useful if you are looking for recipes to improve or enhance your existing ETL pipelines. Basic knowledge of data warehousing is a prerequisite.
Table of Contents
- Getting Started with ADF
- Orchestration and Control Flow
- Setting up Synapse Analytics
- Working with Data Lake and Spark Pools
- Working with Big Data and Databricks
- Data Migration – Azure Data Factory and Other Cloud Services
- Extending Azure Data Factory with Logic Apps and Azure Functions
- Microsoft Fabric and Power BI, Azure ML and Cognitive Services
- Managing Deployment Processes with Azure DevOps
- Monitoring and Troubleshooting Data Pipelines
- Working with Azure Data Explorer
- The Best Practices of Working with ADF
Dmitry Foshin is a business intelligence team leader whose main goal is delivering business insights to the management team through data engineering, analytics, and visualization. He has led and executed complex full-stack BI solutions (from ETL processes to building DWHs and reporting) using Azure technologies, Data Lake, Data Factory, Databricks, MS Office 365, Power BI, and Tableau. He has also successfully launched numerous data analytics projects – both on-premises and cloud – that help achieve corporate goals for international companies in the FMCG, banking, and manufacturing industries.
Tonya Chernyshova is an experienced Data Engineer with over 10 years in the field, including time at Amazon. Specializing in Data Modeling, Automation, Cloud Computing (AWS and Azure), and Data Visualization, she has a strong track record of delivering scalable, maintainable data products. Her expertise drives data-driven insights and business growth, showcasing her proficiency in leveraging cloud technologies to enhance data capabilities.
Dmitry Anoshin is a data-centric technologist and a recognized expert in building and implementing big data and analytics solutions. He has a successful track record of implementing business and digital intelligence projects in numerous industries, including retail, finance, marketing, and e-commerce. Dmitry possesses in-depth knowledge of digital/business intelligence, ETL, data warehousing, and big data technologies. He has extensive experience in the data integration process and is proficient in using various data warehousing methodologies. Dmitry has consistently exceeded project expectations while working in the financial, machine tool, and retail industries. He has completed a number of multinational full BI/DI solution life cycle implementation projects. With expertise in data modeling, Dmitry also has a background and business experience in multiple relational databases, OLAP systems, and NoSQL databases. He is also an active speaker at data conferences and helps people adopt cloud analytics.
Xenia Ireton is a Senior Software Engineer at Microsoft. She has extensive knowledge of building distributed services, data pipelines, and data warehouses.