Introduction to Azure Data Factory

APPLIES TO: Azure Data Factory and Azure Synapse Analytics

In the world of big data, raw, unorganized data is often stored in relational, non-relational, and other storage systems. However, on its own, raw data doesn't have the proper context or meaning to provide meaningful insights to analysts, data scientists, or business decision makers.

Big data requires a service that can orchestrate and operationalize processes to refine these enormous stores of raw data into actionable business insights. Azure Data Factory is a managed cloud service that's built for these complex hybrid extract-transform-load (ETL), extract-load-transform (ELT), and data integration projects.

Usage scenarios

For example, imagine a gaming company that collects petabytes of game logs that are produced by games in the cloud. The company wants to analyze these logs to gain insights into customer preferences, demographics, and usage behavior. It also wants to identify up-sell and cross-sell opportunities, develop compelling new features, drive business growth, and provide a better experience to its customers.

To analyze these logs, the company needs to use reference data such as customer information, game information, and marketing campaign information that is in an on-premises data store. The company wants to utilize this data from the on-premises data store, combining it with additional log data that it has in a cloud data store.

To extract insights, it hopes to process the joined data by using a Spark cluster in the cloud (Azure HDInsight), and publish the transformed data into a cloud data warehouse such as Azure Synapse Analytics to easily build a report on top of it. They want to automate this workflow, and monitor and manage it on a daily schedule. They also want to execute it when files land in a blob store container.

Azure Data Factory is the platform that solves such data scenarios. It is the cloud-based ETL and data integration service that allows you to create data-driven workflows for orchestrating data movement and transforming data at scale. Using Azure Data Factory, you can create and schedule data-driven workflows (called pipelines) that can ingest data from disparate data stores. You can build complex ETL processes that transform data visually with data flows or by using compute services such as Azure HDInsight Hadoop, Azure Databricks, and Azure SQL Database.

Additionally, you can publish your transformed data to data stores such as Azure Synapse Analytics for business intelligence (BI) applications to consume. Ultimately, through Azure Data Factory, raw data can be organized into meaningful data stores and data lakes for better business decisions.

How does it work?

Data Factory contains a series of interconnected systems that provide a complete end-to-end platform for data engineers.

(Figure: a visual guide providing a detailed overview of the complete Data Factory architecture.)

Connect and collect

Enterprises have data of various types, structured, unstructured, and semi-structured, located in disparate sources on-premises and in the cloud, all arriving at different intervals and speeds.

The first step in building an information production system is to connect to all the required sources of data and processing, such as software-as-a-service (SaaS) services, databases, file shares, and FTP web services. The next step is to move the data as needed to a centralized location for subsequent processing.

Without Data Factory, enterprises must build custom data movement components or write custom services to integrate these data sources and processing. It's expensive and hard to integrate and maintain such systems. In addition, they often lack the enterprise-grade monitoring, alerting, and controls that a fully managed service can offer.

With Data Factory, you can use the Copy Activity in a data pipeline to move data from both on-premises and cloud source data stores to a centralized data store in the cloud for further analysis. For example, you can collect data in Azure Data Lake Storage and transform the data later by using an Azure Data Lake Analytics compute service. You can also collect data in Azure Blob storage and transform it later by using an Azure HDInsight Hadoop cluster.
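
As a concrete illustration, here is a minimal sketch of publishing a one-activity pipeline with a Copy Activity through the azure-mgmt-datafactory Python SDK (one of the authoring options listed under Next steps). The subscription ID, resource group, factory name, and dataset names are placeholders, and the referenced datasets are assumed to already exist:

```python
# Minimal sketch: publish a pipeline containing a single Copy Activity.
# Assumes the azure-identity and azure-mgmt-datafactory packages; the
# factory and both datasets are assumed to exist, and all names are
# placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobSink,
    BlobSource,
    CopyActivity,
    DatasetReference,
    PipelineResource,
)

adf_client = DataFactoryManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

# Copy data from a source blob dataset to a sink blob dataset.
copy = CopyActivity(
    name="CopyRawLogs",
    inputs=[DatasetReference(type="DatasetReference",
                             reference_name="RawLogsDataset")],
    outputs=[DatasetReference(type="DatasetReference",
                              reference_name="StagedLogsDataset")],
    source=BlobSource(),
    sink=BlobSink(),
)

# Group the activity into a pipeline and publish it to the factory.
adf_client.pipelines.create_or_update(
    "myResourceGroup", "myDataFactory", "CopyRawLogsPipeline",
    PipelineResource(activities=[copy]),
)
```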

Transform and enrich

After data is present in a centralized data store in the cloud, process or transform the collected data by using ADF mapping data flows. Data flows enable data engineers to build and maintain data transformation graphs that execute on Spark without needing to understand Spark clusters or Spark programming.

If you prefer to code transformations by hand, ADF supports external activities for executing your transformations on compute services such as HDInsight Hadoop, Spark, Data Lake Analytics, and Machine Learning.

CI/CD and publish

Data Factory offers full support for CI/CD of your data pipelines using Azure DevOps and GitHub. This allows you to incrementally develop and deliver your ETL processes before publishing the finished product. After the raw data has been refined into a business-ready consumable form, load the data into Azure Synapse Analytics, Azure SQL Database, Azure Cosmos DB, or whichever analytics engine your business users can point to from their business intelligence tools.

Monitor

After you have successfully built and deployed your data integration pipeline, providing business value from refined data, monitor the scheduled activities and pipelines for success and failure rates. Azure Data Factory has built-in support for pipeline monitoring via Azure Monitor, API, PowerShell, Azure Monitor logs, and health panels on the Azure portal.
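
As a sketch of programmatic monitoring, assuming an authenticated DataFactoryManagementClient named adf_client as in the earlier example (resource names remain placeholders), you can query the pipeline runs of the last 24 hours:

```python
# Sketch: list pipeline runs from the last 24 hours with their status.
# Assumes an authenticated adf_client as in the earlier sketch; resource
# names are placeholders.
from datetime import datetime, timedelta

from azure.mgmt.datafactory.models import RunFilterParameters

filters = RunFilterParameters(
    last_updated_after=datetime.utcnow() - timedelta(days=1),
    last_updated_before=datetime.utcnow(),
)
runs = adf_client.pipeline_runs.query_by_factory(
    "myResourceGroup", "myDataFactory", filters
)
for run in runs.value:
    print(run.pipeline_name, run.run_id, run.status)
```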

Top-level concepts

An Azure subscription might have one or more Azure Data Factory instances (or data factories). Azure Data Factory is composed of the following key components:

  • Pipelines
  • Activities
  • Datasets
  • Linked services
  • Data Flows
  • Integration Runtimes

These components work together to provide the platform on which you can compose data-driven workflows with steps to move and transform data.

Pipeline

A data factory might have one or more pipelines. A pipeline is a logical grouping of activities that performs a unit of work. Together, the activities in a pipeline perform a task. For example, a pipeline can contain a group of activities that ingests data from an Azure blob, and then runs a Hive query on an HDInsight cluster to partition the data.

The benefit of this is that the pipeline allows you to manage the activities as a set instead of managing each one individually. The activities in a pipeline can be chained together to operate sequentially, or they can operate independently in parallel.
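
A hedged sketch of this blob-ingest-then-Hive pattern with the Python SDK, assuming an authenticated adf_client as in the earlier example (the cluster, script path, and dataset names are placeholders):

```python
# Sketch: a pipeline that ingests blob data, then runs a Hive script on an
# HDInsight cluster; the two activities are chained sequentially through an
# activity dependency. All resource names are placeholders.
from azure.mgmt.datafactory.models import (
    ActivityDependency,
    BlobSink,
    BlobSource,
    CopyActivity,
    DatasetReference,
    HDInsightHiveActivity,
    LinkedServiceReference,
    PipelineResource,
)

ingest = CopyActivity(
    name="IngestLogs",
    inputs=[DatasetReference(type="DatasetReference",
                             reference_name="RawLogsDataset")],
    outputs=[DatasetReference(type="DatasetReference",
                              reference_name="StagedLogsDataset")],
    source=BlobSource(),
    sink=BlobSink(),
)

partition = HDInsightHiveActivity(
    name="PartitionLogs",
    # Compute linked service: the HDInsight cluster that runs the query.
    linked_service_name=LinkedServiceReference(
        type="LinkedServiceReference", reference_name="HDInsightCompute"),
    # Storage linked service that holds the Hive script file.
    script_linked_service=LinkedServiceReference(
        type="LinkedServiceReference",
        reference_name="BlobStorageLinkedService"),
    script_path="scripts/partitionlogs.hql",
    # Chain: run only after the copy activity succeeds.
    depends_on=[ActivityDependency(activity="IngestLogs",
                                   dependency_conditions=["Succeeded"])],
)

adf_client.pipelines.create_or_update(
    "myResourceGroup", "myDataFactory", "IngestAndPartitionPipeline",
    PipelineResource(activities=[ingest, partition]),
)
```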

Mapping data flows

Create and manage graphs of data transformation logic that you can use to transform any-sized data. You can build up a reusable library of data transformation routines and execute those processes in a scaled-out manner from your ADF pipelines. Data Factory executes your logic on a Spark cluster that spins up and spins down when you need it. You won't ever have to manage or maintain clusters.

Activity

Activities represent a processing step in a pipeline. For example, you might use a copy activity to copy data from one data store to another data store. Similarly, you might use a Hive activity, which runs a Hive query on an Azure HDInsight cluster, to transform or analyze your data. Data Factory supports three types of activities: data movement activities, data transformation activities, and control activities.

Datasets

Datasets represent data structures within the data stores, which simply point to or reference the data you want to use in your activities as inputs or outputs.
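
For example, a dataset that points at a CSV blob through an existing linked service might look like this in the Python SDK (a sketch; the linked service name and blob path are placeholders):

```python
# Sketch: define a dataset that references a CSV blob via a linked service.
# Assumes an authenticated adf_client as in the earlier sketch; the linked
# service name and blob path are placeholders.
from azure.mgmt.datafactory.models import (
    AzureBlobDataset,
    DatasetResource,
    LinkedServiceReference,
)

raw_logs = DatasetResource(properties=AzureBlobDataset(
    linked_service_name=LinkedServiceReference(
        type="LinkedServiceReference",
        reference_name="BlobStorageLinkedService"),
    folder_path="gamelogs/raw",
    file_name="events.csv",
))
adf_client.datasets.create_or_update(
    "myResourceGroup", "myDataFactory", "RawLogsDataset", raw_logs
)
```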

Linked services

Linked services are much like connection strings, which define the connection information that's needed for Data Factory to connect to external resources. Think of it this way: a linked service defines the connection to the data source, and a dataset represents the structure of the data. For example, an Azure Storage linked service specifies a connection string to connect to the Azure Storage account, while an Azure blob dataset specifies the blob container and the folder that contains the data (see the sketch after the list below).

Linked services are used for two purposes in Data Factory:

  • To represent a data store that includes, but isn't limited to, a SQL Server database, Oracle database, file share, or Azure blob storage account. For a list of supported data stores, see the copy activity article.

  • To represent a compute resource that can host the execution of an activity. For example, the HDInsightHive activity runs on an HDInsight Hadoop cluster. For a list of transformation activities and supported compute environments, see the transform data article.
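
As a sketch of the first purpose, here is an Azure Storage linked service that the blob dataset shown earlier can reference (assuming an authenticated adf_client as before; the connection string is a placeholder):

```python
# Sketch: register an Azure Storage linked service. Assumes an
# authenticated adf_client as in the earlier sketch; the connection string
# value is a placeholder.
from azure.mgmt.datafactory.models import (
    AzureStorageLinkedService,
    LinkedServiceResource,
    SecureString,
)

storage_ls = LinkedServiceResource(properties=AzureStorageLinkedService(
    connection_string=SecureString(
        value="DefaultEndpointsProtocol=https;"
              "AccountName=<account>;AccountKey=<key>"),
))
adf_client.linked_services.create_or_update(
    "myResourceGroup", "myDataFactory", "BlobStorageLinkedService",
    storage_ls
)
```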

Integration Runtime

In Data Factory, an activity defines the action to be performed. A linked service defines a target data store or a compute service. An integration runtime provides the bridge between the activity and linked services. It's referenced by the linked service or activity, and provides the compute environment where the activity either runs or gets dispatched from. This way, the activity can be performed in the region closest to the target data store or compute service, in the most performant way, while meeting security and compliance needs.
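
For instance, a linked service to an on-premises SQL Server would typically route its connections through a self-hosted integration runtime. A hedged sketch, assuming a runtime named SelfHostedIR is already registered and using a placeholder connection string:

```python
# Sketch: a linked service that reaches an on-premises SQL Server through a
# self-hosted integration runtime. Assumes the runtime "SelfHostedIR" is
# already registered and an authenticated adf_client as in the earlier
# sketch; the connection string is a placeholder.
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeReference,
    LinkedServiceResource,
    SecureString,
    SqlServerLinkedService,
)

onprem_sql = LinkedServiceResource(properties=SqlServerLinkedService(
    connection_string=SecureString(
        value="Server=onprem-sql;Database=Crm;"
              "User ID=<user>;Password=<password>"),
    # Route connections through the self-hosted integration runtime.
    connect_via=IntegrationRuntimeReference(
        type="IntegrationRuntimeReference", reference_name="SelfHostedIR"),
))
adf_client.linked_services.create_or_update(
    "myResourceGroup", "myDataFactory", "OnPremSqlLinkedService", onprem_sql
)
```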

Triggers

Triggers represent the unit of processing that determines when a pipeline execution needs to be kicked off. There are different types of triggers for different types of events.
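
A sketch of a schedule trigger that runs the earlier pipeline daily, assuming an authenticated adf_client (note that older versions of the SDK expose triggers.start instead of triggers.begin_start):

```python
# Sketch: attach a daily schedule trigger to an existing pipeline and start
# it. Assumes an authenticated adf_client as in the earlier sketch; names
# are placeholders. Older SDK versions use triggers.start(...) instead of
# triggers.begin_start(...).
from datetime import datetime

from azure.mgmt.datafactory.models import (
    PipelineReference,
    ScheduleTrigger,
    ScheduleTriggerRecurrence,
    TriggerPipelineReference,
    TriggerResource,
)

daily = TriggerResource(properties=ScheduleTrigger(
    recurrence=ScheduleTriggerRecurrence(
        frequency="Day", interval=1,
        start_time=datetime.utcnow(), time_zone="UTC"),
    pipelines=[TriggerPipelineReference(
        pipeline_reference=PipelineReference(
            type="PipelineReference", reference_name="CopyRawLogsPipeline"),
        parameters={},
    )],
))
adf_client.triggers.create_or_update(
    "myResourceGroup", "myDataFactory", "DailyTrigger", daily
)
adf_client.triggers.begin_start(
    "myResourceGroup", "myDataFactory", "DailyTrigger").result()
```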

Pipeline runs

A pipeline run is an instance of the pipeline execution. Pipeline runs are typically instantiated by passing the arguments to the parameters that are defined in pipelines. The arguments can be passed manually or within the trigger definition.
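
A minimal sketch, assuming an authenticated adf_client as in the earlier examples and a pipeline that declares a (hypothetical) inputPath parameter:

```python
# Sketch: start a pipeline run manually, passing an argument for a
# hypothetical inputPath parameter, then poll its status. Assumes an
# authenticated adf_client as in the earlier sketch.
run = adf_client.pipelines.create_run(
    "myResourceGroup", "myDataFactory", "CopyRawLogsPipeline",
    parameters={"inputPath": "gamelogs/2023-02-27"},
)
status = adf_client.pipeline_runs.get(
    "myResourceGroup", "myDataFactory", run.run_id
)
print(status.status)  # e.g. Queued, InProgress, Succeeded, Failed
```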

Parameters

Parameters are key-value pairs of read-only configuration, defined in the pipeline. The arguments for the defined parameters are passed during execution from the run context that was created by a trigger or by a pipeline that was executed manually. Activities within the pipeline consume the parameter values.
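
A sketch of declaring a pipeline parameter and supplying its argument when creating a run (the parameter name and the Wait placeholder activity are illustrative):

```python
# Sketch: declare a read-only string parameter on a pipeline and pass its
# argument when the run is created. Activities reference the value with the
# expression @pipeline().parameters.inputPath. Assumes an authenticated
# adf_client as in the earlier sketch; names are placeholders.
from azure.mgmt.datafactory.models import (
    ParameterSpecification,
    PipelineResource,
    WaitActivity,
)

parameterized = PipelineResource(
    parameters={"inputPath": ParameterSpecification(type="String")},
    # A placeholder activity; a real pipeline would consume the parameter.
    activities=[WaitActivity(name="Placeholder", wait_time_in_seconds=1)],
)
adf_client.pipelines.create_or_update(
    "myResourceGroup", "myDataFactory", "ParameterizedPipeline",
    parameterized
)
adf_client.pipelines.create_run(
    "myResourceGroup", "myDataFactory", "ParameterizedPipeline",
    parameters={"inputPath": "gamelogs/raw"},
)
```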

A dataset is a strongly typed parameter and a reusable/referenceable entity. An activity can reference datasets and can consume the properties that are defined in the dataset definition.

A linked service is also a strongly typed parameter that contains the connection information to either a data store or a compute environment. It is also a reusable/referenceable entity.

Control flow

Control flow is an orchestration of pipeline activities that includes chaining activities in a sequence, branching, defining parameters at the pipeline level, and passing arguments while invoking the pipeline on demand or from a trigger. It also includes custom state passing and looping containers, such as for-each iterators.
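
As a sketch of a looping container, the following pipeline invokes a child pipeline once per element of an array parameter (the child pipeline name and the files parameter are hypothetical):

```python
# Sketch: a for-each looping container that invokes a child pipeline once
# per item in an array parameter. Assumes an authenticated adf_client and
# an existing child pipeline "ProcessFilePipeline"; names are placeholders.
from azure.mgmt.datafactory.models import (
    ExecutePipelineActivity,
    Expression,
    ForEachActivity,
    ParameterSpecification,
    PipelineReference,
    PipelineResource,
)

loop = ForEachActivity(
    name="ForEachFile",
    # Iterate over the array argument passed to the pipeline.
    items=Expression(value="@pipeline().parameters.files"),
    activities=[ExecutePipelineActivity(
        name="ProcessOneFile",
        pipeline=PipelineReference(
            type="PipelineReference",
            reference_name="ProcessFilePipeline"),
    )],
)
adf_client.pipelines.create_or_update(
    "myResourceGroup", "myDataFactory", "ProcessAllFilesPipeline",
    PipelineResource(
        parameters={"files": ParameterSpecification(type="Array")},
        activities=[loop],
    ),
)
```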

Variables

Variables can be used inside of pipelines to store temporary values and can also be used in conjunction with parameters to enable passing values between pipelines, data flows, and other activities.
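
A sketch of declaring a pipeline variable and setting it from an activity (the variable name and expression are illustrative):

```python
# Sketch: declare a string variable on a pipeline and set it at run time
# with a Set Variable activity. Assumes an authenticated adf_client as in
# the earlier sketch; the variable name and expression are illustrative.
from azure.mgmt.datafactory.models import (
    ParameterSpecification,
    PipelineResource,
    SetVariableActivity,
    VariableSpecification,
)

with_variable = PipelineResource(
    parameters={"inputPath": ParameterSpecification(type="String")},
    variables={"latestPath": VariableSpecification(type="String")},
    activities=[SetVariableActivity(
        name="RememberLatestPath",
        variable_name="latestPath",
        value="@pipeline().parameters.inputPath",
    )],
)
adf_client.pipelines.create_or_update(
    "myResourceGroup", "myDataFactory", "VariableDemoPipeline",
    with_variable
)
```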

Next steps

Here are important next-step documents to explore:

  • Dataset and linked services
  • Pipelines and activities
  • Integration runtime
  • Mapping Data Flows
  • Data Factory UI in the Azure portal
  • Copy Data tool in the Azure portal
  • PowerShell
  • .NET
  • Python
  • REST
  • Azure Resource Manager template

FAQs

What is Azure Data Factory? ›

Azure Data Factory is Azure's cloud ETL service for scale-out serverless data integration and data transformation. It offers a code-free UI for intuitive authoring and single-pane-of-glass monitoring and management. You can also lift and shift existing SSIS packages to Azure and run them with full compatibility in ADF.

What do I need to learn for Azure Data Factory? ›

Learning objectives:

  • Describe data integration patterns.
  • Explain the Data Factory process.
  • Understand Azure Data Factory components.
  • Describe Azure Data Factory security.

Is Azure Data Factory an ETL? ›

With Azure Data Factory, it's fast and easy to build code-free or code-centric ETL and ELT processes. In this scenario, learn how to create code-free pipelines within an intuitive visual environment. In today's data-driven world, big data processing is a critical task for every organization.

Is Azure Data Factory ETL or ELT? ›

With Azure Data Factory, it is fast and easy to build code-free or code-centric ETL and ELT processes.

Is Azure Data Factory a database? ›

Azure Data Factory is a cloud-based data integration service that allows you to create data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. ADF does not store any data itself.

What is the difference between Azure Data Factory and Azure Data Lake? ›

ADF helps in transforming, scheduling, and loading data as per project requirements, whereas Azure Data Lake is massively scalable and secure data lake storage for optimized analytics workloads. It can store structured, semi-structured, and unstructured data seamlessly.

Does Azure Data Factory need coding? ›

No, coding is not required. Azure Data Factory lets you create workflows very quickly. It offers more than 90 built-in connectors, and you can transform the data using mapping data flow activities without programming skills or Spark cluster knowledge.

Which three types of activities can you run in Microsoft Azure Data Factory? ›

Data Factory supports three types of activities: data movement activities, data transformation activities, and control activities.

Is Azure Data Factory PaaS or SaaS? ›

Azure Data Factory (ADF) is a Microsoft Azure PaaS solution for data transformation and load. ADF supports data movement between many on-premises and cloud data sources.

What is the main use of Azure Data Factory? ›

It is the cloud-based ETL and data integration service that allows you to create data-driven workflows for orchestrating data movement and transforming data at scale. Using Azure Data Factory, you can create and schedule data-driven workflows (called pipelines) that can ingest data from disparate data stores.

How many types of activities are in Azure Data Factory? ›

Data Factory supports three types of activities: data movement activities, data transformation activities, and control activities. Data movement and transformation activities can have zero or more input datasets and produce one or more output datasets.

Is Azure Data Factory serverless? ›

Azure Data Factory is Azure's fully managed, serverless data integration service.

How many days it will take to learn Azure Data Factory? ›

Some instructor-led courses cover Azure Data Lake and Data Factory fundamentals in as little as two days, although hands-on proficiency typically takes longer.

How to practice Azure Data Factory for free? ›

Unfortunately, there is no sandbox application. However, you can use an Azure free trial subscription, which includes a $200 credit, to create a data factory and explore it with a few activities.

What are the 3 types of data that can be stored in Azure? ›

Azure storage types include objects, managed files and managed disks. Customers should understand their often-specific uses before implementation. Each storage type has different pricing tiers -- usually based on performance and availability -- to make each one accessible to companies of every size and type.

What is Azure Data Factory vs Databricks? ›

Key differences between Azure Data Factory and Databricks

Azure Data Factory is a data integration service and orchestration tool for performing the ETL process and orchestrating data movements. Azure Databricks provides a unified collaborative platform for data scientists and data engineers.

Why use Azure Data Factory instead of SSIS? ›

Azure Data Factory has built-in support for Azure HDInsight, a managed Hadoop service. This means that the service can be used to process big data sets, something that would be difficult to do with SSIS. Azure Data Factory supports both batch and streaming data processes while SSIS supports only batch processes.

What are the top 3 certifications in Azure? ›

Microsoft Certified: Azure Security Engineer Associate – Exam AZ-500. Microsoft Certified: Azure Network Engineer Associate – Exam AZ-700. Microsoft Certified: Azure AI Engineer Associate – Exam AI-102. Microsoft Certified: Azure Data Scientist Associate – Exam DP-100.

What is difference between pipeline and data flow in ADF? ›

In ADF, a pipeline is the orchestration unit: a logical grouping of activities that together perform a task, including movement, transformation, and control steps. A data flow is one kind of activity within a pipeline; it defines visual data transformation logic that Data Factory executes on a managed Spark cluster.

Can Azure Data Factory run Python? ›

Yes. You can create and manage Azure Data Factory pipelines with the Python SDK, and pipelines can execute Python code through activities such as a custom activity on Azure Batch or a Python notebook or script on Azure Databricks.

Is Azure Data Factory in demand? ›

It provides several certifications for mastering specific Azure skills. According to Microsoft, almost 365,000 businesses register for the Azure platform each year. This indicates that Microsoft Azure Data Engineers are in high demand.

How many pipelines can an Azure Data Factory have? ›

A Data Factory or Synapse workspace can have one or more pipelines.

What is Azure Data Factory architecture? ›

Data Factory is a managed service that orchestrates and automates data movement and data transformation. In a typical analytics architecture, it coordinates the various stages of the ELT process.

Is Azure Data Factory a tool? ›

Azure Data Factory is a tool with a graphical user interface that offers a visual way to create and manage activities and pipelines.

What is similar to Azure Data Factory? ›

We have compiled a list of solutions that reviewers voted as the best overall alternatives and competitors to Azure Data Factory, including AWS Glue, IBM InfoSphere DataStage, Matillion ETL, and Apache NiFi.

What are the ETL tools in Azure? ›

Extract, transform, and load (ETL) is a data pipeline pattern used to collect data from various sources, transform the data according to business rules, and load it into a destination data store. In Azure, Azure Data Factory is the primary managed ETL service; you can also lift and shift existing SSIS packages to run inside it.

What are the advantages of Azure Data Factory? ›

Azure Data Factory is a good option if you have a multi-cloud architecture. With it, you can integrate and centralize data stored across various clouds. Also, it's a good choice if your various applications write user data to different locations.

What are the three components of Azure? ›

A wide range of Microsoft's software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) products are hosted on Azure. Azure offers three core areas of functionality: virtual machines, cloud services, and app services.

What is the purpose of data factory? ›

The purpose of Data Factory is to retrieve data from one or more data sources and convert it into a format that you can process. The data sources might present data in different ways and contain noise that you need to filter out.

What is Azure Data Factory in simple words? ›

Azure Data Factory is the platform that solves such data scenarios. It is the cloud-based ETL and data integration service that allows you to create data-driven workflows for orchestrating data movement and transforming data at scale.

What is the difference between Data Factory and Databricks? ›

The last and most significant difference between the two tools is that ADF is generally used for data movement, the ETL process, and data orchestration, whereas Databricks helps in data streaming and data collaboration in real time.

Is Azure Data Factory the same as SSIS? ›

Azure Data Factory supports both batch and streaming data processes while SSIS supports only batch processes. Azure Data Factory allows you to define a series of tasks that need to be performed on data, such as copying data from one location to another, analyzing it and storing it in a database.
