In today’s installment in our Azure Databricks mini-series, I’ll cover running a Databricks notebook using Azure Data Factory (ADF). With Databricks, you can run notebooks using different contexts; in my example, I’ll be using Python.
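As a rough illustration of what that looks like on the Databricks side, here is a minimal sketch of the kind of Python notebook an ADF Databricks Notebook activity could call. The parameter name input_path and the table name processed_events are placeholders for this sketch; ADF baseParameters show up in the notebook as widgets, and dbutils.notebook.exit hands a value back to the pipeline.

```python
# Minimal sketch of a Python notebook that an ADF "Databricks Notebook" activity could run.
# The widget "input_path" and output table "processed_events" are illustrative placeholders.

# ADF baseParameters arrive in the notebook as widgets.
dbutils.widgets.text("input_path", "")          # declare the widget with a default value
input_path = dbutils.widgets.get("input_path")  # read the value passed in from the pipeline

# Do some simple work with the notebook's built-in SparkSession.
df = spark.read.json(input_path)
df.write.mode("overwrite").saveAsTable("processed_events")

# Return a value to ADF; it appears in the activity's output as runOutput.
dbutils.notebook.exit(f"rows_written={df.count()}")
```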
3Cloud, formerly Pragmatic Works Consulting, helped a large school district in Georgia use Power BI to easily and effectively pinpoint struggling students so they can get the help they need to graduate, saving the district over $300,000 in staff time and resources.
Do you want to learn the basics of developing Mapping and Wrangling Data Flows in Azure Data Factory (ADF)? In a recent webinar, Sr. BI Consultant Andie Letourneau teaches you how to build pipelines without managing and maintaining server clusters or writing complex code.
Do you want to learn how to build data quality projects in Azure Data Factory, using data flows to prepare data for analytics at scale? In a recent webinar, Mark Kromer, Sr. Program Manager on the Azure Data Factory team, shows you how to do this without writing any Spark code.
Do you want to learn how to manage and execute SSIS in Azure using “Lift and Shift”? In a recent webinar, Manuel Quintana discusses some of the potential issues you could encounter and how this approach compares to Azure Data Factory.
What do you know about Azure Databricks? If you’re unsure of what it is and how it’s used, I’m here today to clear that up with a high-level overview of the tool. Databricks is great for data engineering and data analytics.
Are you using Azure DevOps and want to know how to use it as a code repository? A benefit of using DevOps (or any code repository) is that you can preserve a working version of your code while you’re making modifications. In this post I’ll show you how to connect an existing Azure Data Factory project to an Azure DevOps code repository.
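For readers who prefer to script the connection rather than click through the ADF authoring UI, here is a rough sketch using the azure-mgmt-datafactory Python SDK. The subscription, resource group, factory, DevOps organization, project, repository, and region names are all placeholders for this sketch, not values from the post.

```python
# Sketch: attach an existing Data Factory to an Azure DevOps Git repository
# using the azure-mgmt-datafactory SDK. All names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import FactoryRepoUpdate, FactoryVSTSConfiguration

subscription_id = "<subscription-id>"
client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

repo_config = FactoryVSTSConfiguration(
    account_name="my-devops-org",      # Azure DevOps organization
    project_name="DataPlatform",       # DevOps project
    repository_name="adf-pipelines",   # Git repository holding the ADF JSON
    collaboration_branch="main",       # branch ADF publishes from
    root_folder="/",                   # folder within the repo for factory resources
)

client.factories.configure_factory_repo(
    "eastus",                          # the factory's region
    FactoryRepoUpdate(
        factory_resource_id=(
            "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
            "/providers/Microsoft.DataFactory/factories/<factory-name>"
        ),
        repo_configuration=repo_config,
    ),
)
```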