CHAPTER 2
In the previous chapter we briefly touched upon the concepts of serverless and functions. In this chapter we’re going to write Azure functions and see them in action. There are various ways to create and deploy functions to Azure. The easiest is to create a function app in the Azure portal and add new functions there. It’s also possible to write and deploy functions using Visual Studio. Lastly, it’s possible to deploy functions using Azure DevOps. We’re going to explore all three. We’ll see triggers as well as input and output bindings. Finally, we’re going to write a durable function.
The easiest way to create an Azure function app is through the portal. First, we need a function app, a specific sort of App Service tailored to host functions. While function apps are shown in the App Service overview, you can’t create a function app from here. Instead, click + Add in the upper-left corner, or go to Create a resource and search for Function App.
The page that opens is the same one we saw in the previous chapter. It’s straightforward. Assign or create a resource group, pick a name for your function app that’s unique across Azure, specify that you want to publish code rather than containers, select .NET Core as your runtime stack, and select a region that’s closest to you (or your customers). The function app name becomes part of the function’s URL, [function name].azurewebsites.net (the same pattern used for an app service).
On the Hosting blade, we need to specify a storage account. For this purpose, I recommend creating a new one, which will be placed in the same resource group as your function app, so you can delete the entire resource group once we’re done. Choose Windows for your operating system and Consumption for your plan type.
Tip: Once your function and storage account are created, head over to the storage account. The storage account was probably created as general purpose v1, which is outdated. Go to the Configuration blade in the menu and click Upgrade. This will upgrade your account to general purpose v2, which is faster and cheaper, and has additional functionality. That said, all the samples in this chapter can be completed with a general purpose v1 storage account.
On the Monitoring blade, enable Application Insights. You can skip tags altogether. Go to Review + create and click Create. This will deploy an empty function app.
The plan types need a bit of explanation.
Currently, there are three plans that decide how your functions will run and how you will be billed. The Consumption plan is the default, but you can also opt for an app service plan or a premium plan.
The Consumption plan is the default serverless plan: you get servers dynamically allocated to you, providing automatic scaling, and you pay per execution and GB-s. This is potentially the cheapest plan and the plan that needs the least configuration. This is the only true serverless plan in the sense that if your function is not running, you won’t get servers allocated to you—and you’ll pay nothing for it.
With an App Service plan, you can use a hosting plan that you may already have in place for an (underutilized) app service. With this option you get a regular service plan that can run your workloads, but it doesn’t scale as much. You can still scale up (upgrade your plan, for example from Basic to Standard) or scale out (add additional instances, either manually or automatically). The pros of this option are predictable costs (the fixed monthly cost of your plan) and a longer timeout: the default is 30 minutes, but it can be extended to unlimited execution duration, meaning no timeout at all. On top of that, you can enable “Always On” in the function app settings, which eliminates startup latency because the app doesn’t have to start every time. Obviously, this isn’t serverless, but it has some use cases when you still want to use functions.
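The timeout itself is configured in the function app’s host.json file through the functionTimeout setting. A minimal sketch for stretching it to two hours on an App Service plan (the exact values your runtime version accepts may differ, so treat this as an illustration and check the documentation):

{
  "version": "2.0",
  "functionTimeout": "02:00:00"
}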
The Premium plan combines features of the Consumption and App Service plans, but it is also potentially the most expensive. Billing is based on the number of core seconds and the amount of memory used. A so-called “warm instance” is always available to eliminate startup latency, but that also means there’s a minimum cost starting at about €70. Furthermore, you get unlimited execution duration, you can use the plan for multiple function apps, and you can connect to virtual networks or a VPN. You still enjoy the scaling benefits that come with the Consumption plan.
Which plan type to choose is up to you and your specific situation. For the remainder of this chapter, we’ll use the Consumption plan.
When the function app is created, we can start to write functions. Go to the function app and click the + symbol next to Functions.

Figure 4: Creating an Azure Function
Here we can choose how we want to write our function, using Visual Studio or Visual Studio Code (the open source, lightweight, multiplatform code editor from Microsoft), any other editor, or in-portal. We’re going to use the In-portal option, so we don’t need any editors right now. Choose In-portal and click Continue at the bottom of the page.
Next, we need to select a template. There are two default templates to choose from, Webhook + API and Timer, but we can also choose More templates. There we find the Azure Queue Storage trigger, Azure Service Bus Queue and Topic triggers, Blob Storage trigger, Event Hub trigger, Cosmos DB trigger, Durable Functions, and more. The Webhook + API template and the HTTP trigger template found under More templates are effectively the same. If you pick the first, you immediately get a function named HttpTrigger1; if you pick the latter, you first get to choose a name and an authorization level (leave the defaults).
This should leave you with a function named HttpTrigger1 and some default sample code that echoes a name either from a URL parameter or a request body. You can test the function right away, which lets you choose GET or POST for the HTTP method. You can add headers and query parameters, and there’s a default request body. The default will print Hello, Azure, as you would expect from the code. Try changing the code a bit, for example by changing Hello to Bye, and see that it works by running the test again.
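For reference, the generated code (run.csx) looks roughly like the following; the details can vary slightly between runtime versions:

#r "Newtonsoft.Json"

using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;

public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    // Read the name from the query string...
    string name = req.Query["name"];

    // ...or from a JSON request body.
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    return name != null
        ? (ActionResult)new OkObjectResult($"Hello, {name}")
        : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}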
Another way to test this is by getting the URL and pasting it in your browser. You can get the URL at the top by clicking </> Get function URL. It will look like: https://[your-app-service-name].azurewebsites.net/api/HttpTrigger1?code=[code].

Figure 5: Testing an Azure Function
If you try it out, you’ll get an error saying a name should be provided, so simply add &name=Azure at the end of the URL, and it will work again. That’s it: you can now change the function as you please and see it work. The sample shows how to read query and body parameters. Referencing assemblies, like Newtonsoft.Json, is done by using the #r directive and the assembly name. Other than that, it’s just C# code. The HttpRequest and ILogger arguments are injected by the Azure Functions runtime, but we’ll see more of that later. Let’s first dive into a bit of security for your functions.
When you got the URL from your function, you may have noticed the code=[code] part. If you created the function from the HTTP trigger template rather than the Webhook + API template, you could even choose an authorization level. Basically, your function is public to anyone, and the only way to limit access is by securing it with a key. There are three kinds of keys: function keys, host keys, and master keys. There are also three levels of authorization: function, admin, and anonymous.
The authorization level of a function can be changed under the Integrate menu option of a function; we’ll look at that in a minute. Function keys can be managed from the Manage menu option under a function.

Figure 6: Managing Function Keys
The default authorization level is function, which means you need any of the three keys to access the function. The function key is specific to the function. The host key can be used to access any function in the function app. Function and host keys can be created, renewed, and revoked. The master key cannot be revoked or created, but it can be renewed.
The admin authorization level needs the master key for access. Function and host keys cannot be used on this level. Function keys can be shared with other parties that need to connect to your function, as can host keys. Giving out the master key is a very bad idea—it cannot be revoked, and there is only one.
For example, say the Contoso and Acme companies need access to your function app. You can create two new host keys and give each one of them to one of the companies. If access needs to be revoked for one of the companies, you can simply revoke its key. Likewise, if any of the keys get compromised, you can renew it for that company. If a company only needs access to one or two functions, you can hand out the specific function keys.
The third authorization level is anonymous, and it simply means no keys are required and everyone with the URL can access your function.
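A caller can present a key either in the code query string parameter, as we saw in the URL earlier, or in the x-functions-key request header. Here’s a minimal sketch using HttpClient; the URL and key are placeholders:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class FunctionClient
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Option 1: pass the key as the code query string parameter.
        string viaQuery = await client.GetStringAsync(
            "https://[your-app-service-name].azurewebsites.net/api/HttpTrigger1?code=[function-key]&name=Azure");

        // Option 2: pass the key in the x-functions-key header.
        client.DefaultRequestHeaders.Add("x-functions-key", "[function-key]");
        string viaHeader = await client.GetStringAsync(
            "https://[your-app-service-name].azurewebsites.net/api/HttpTrigger1?name=Azure");

        Console.WriteLine(viaQuery);
        Console.WriteLine(viaHeader);
    }
}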
Authorization levels are important for HTTP triggers. Other trigger types, such as timers, can never be triggered externally. Those functions do not have function keys, but they still have host and master keys because those are not specific for a function.
An important aspect of functions is triggers and bindings. We’ve seen some of the trigger templates, like the HTTP trigger, the timer trigger, and the Service Bus trigger. Next to triggers, a function can have bindings that translate function input to .NET Core types, or output to input for other Azure services, such as blob storage, table storage, Service Bus, or Cosmos DB. The output of a function, an IActionResult in the default example, is another example of a binding. A trigger is a specific sort of binding that differs from input and output bindings in that you can only have one. You can find triggers and bindings within the Integrate menu option under your function.

Figure 7: Function Triggers and Bindings
On the Integrate screen, you can change the behavior of triggers. For example, you can change the authorization level or the allowed HTTP methods. In the case of a timer trigger, you can change the CRON expression, which indicates at what times or intervals the function triggers. For Cosmos DB triggers, you can specify what collection and database name you want to connect to. Every trigger has its own configuration.
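Under the hood, these settings end up in the function’s function.json file. As an illustration, a timer trigger that fires every five minutes would look roughly like this; the schedule uses the six-field NCRONTAB format ({second} {minute} {hour} {day} {month} {day-of-week}):

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */5 * * * *"
    }
  ]
}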
Let’s start by adding an output binding, since that’s the easiest one to work with. You’ll notice that the return value is an output binding as well. If you click it, you’ll notice that Use function return value is checked. This can only be checked for one output binding per function.
Let’s create a new output binding. In the Integrate window, click + New Output and select Azure Blob Storage. You have to select it and then scroll down with the outer scroll bar to click Select. (It sounds simple, but it’s not obvious on small screens.) On the next screen, you probably need to install an extension, so simply click Install. After that, click Save, and the output binding will be created. The defaults for the binding dictate that the contents of the blob are specified in an output parameter named outputBlob, and the blob itself is created in the same storage account as the function in a container named outcontainer, with a random GUID as a name. We now need to change the code of the function so that it sets the outputBlob parameter.
Code Listing 1
[…]
public static IActionResult Run(HttpRequest req, out string outputBlob, ILogger log)
{
    […]
    outputBlob = $"Hello, {name}";
    return name != null
    […]
}
If you now run the function and then head to your storage account, you’ll notice a container with a file that has the contents Hello, [name].
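Behind the scenes, the portal stores this output binding as an entry in the function’s function.json file. It looks roughly like this (the connection setting name the portal generates for your storage account may differ):

{
  "type": "blob",
  "direction": "out",
  "name": "outputBlob",
  "path": "outcontainer/{rand-guid}",
  "connection": "AzureWebJobsStorage"
}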
We’ll use output bindings in Visual Studio later, which has a couple of ways to define them. The main point to take away is that there’s a binding that binds, or converts, a string to a blob in a storage account.
Next is the input binding. This works pretty much the same. Click + New Input and select Azure Blob Storage. To make this more realistic, we’re going to change the Path property to incontainer/{customer}.txt. Click Save. Now, head over to your trigger and add {customer} to your Route template. The URL for your function is now https://[your-app-service-name].azurewebsites.net/api/{customer}?code=[code]&name=[name]. Now refresh your browser with F5, or the changes won’t be picked up.
Open your code again and make sure you get the input blob as an argument to your function.
Code Listing 2
[…]
public static IActionResult Run(HttpRequest req, string inputBlob, out string outputBlob, ILogger log)
{
    […]
    return name != null
    […]
}
Next, make sure the input blob is available in your storage account. Go to your storage account and create a new container named incontainer. On your local computer, create a file named customer1.txt and add some text to it. Upload it to your container. Now, head back to your function and test it. Make sure you add customer to your query with the value of customer1. This should load customer1.txt from incontainer and pass its contents to inputBlob in your function. If all went well, you should see the text inside your text file printed to the output.
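For reference, the trigger and the input binding behind this setup show up in function.json roughly as follows, alongside the output binding from before (again, the generated connection name may differ):

{
  "type": "httpTrigger",
  "direction": "in",
  "name": "req",
  "authLevel": "function",
  "methods": [ "get", "post" ],
  "route": "{customer}"
},
{
  "type": "blob",
  "direction": "in",
  "name": "inputBlob",
  "path": "incontainer/{customer}.txt",
  "connection": "AzureWebJobsStorage"
}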
Using input and output bindings, it takes almost no code to transform strings (or sometimes slightly more complex classes) into blobs, records in table storage or Cosmos DB, messages on queues, or even emails and SMS messages. We’ll see more of these bindings in a later example in Visual Studio.
A very important part of any application, and far too often an afterthought, is logging and monitoring. While regular app services have diagnostics logs, these are disabled for functions. In the code sample we already saw an ILogger and a call to log.LogInformation, but this isn’t the ILogger<T> that you are probably using in your ASP.NET Core applications. Even with functions, you can still use your app service’s log stream, alerts, and metrics options, though.
You can monitor your functions by using the Monitor menu option. Functions use Application Insights for monitoring, and as such, you can get very detailed information about your functions and individual invocations. When you open the Monitor blade, you can see all function invocations of the last 30 days, with a delay of about five minutes. Clicking an invocation shows some details about it. The information from log.LogInformation in your code is shown here as well. The ILogger can be used to log messages with various log levels to this blade. An error is shown when your function throws an exception and you don’t catch it in the function. This is important: if you write a try-catch block and don’t re-throw the exception, the invocation will always be reported as a success, even when it wasn’t.
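As a minimal sketch, here’s a function that logs at several levels and re-throws after catching, so a failure still shows up as a failed invocation in the Monitor blade:

using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class LoggingSample
{
    [FunctionName("LoggingSample")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Invocation started");               // shown in the invocation details
        log.LogWarning("This is logged with a higher severity");

        try
        {
            string raw = req.Query["value"];
            if (!int.TryParse(raw, out int value))
            {
                throw new ArgumentException("Query parameter 'value' must be a number.");
            }

            return new OkObjectResult(value * 2);
        }
        catch (Exception ex)
        {
            // Log and re-throw; if you swallow the exception here,
            // the invocation is reported as a success in the Monitor blade.
            log.LogError(ex, "The function failed");
            throw;
        }
    }
}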

Figure 8: Monitoring in Azure Functions
By clicking Run in Application Insights, you can go further back or narrow down your search criteria. Application Insights works with a SQL-like syntax named Kusto Query Language (KQL), and the blade that opens gives you a couple of tools to help you write your queries. KQL is beyond the scope of this book, but you should be able to figure out some basics. For example, you can query the last 60 days by simply changing the time filter in the default query to 60 days.
Code Listing 3
requests
| project timestamp, id, operation_Name, success, resultCode, duration, operation_Id, cloud_RoleName, invocationId=customDimensions['InvocationId']
| where timestamp > ago(60d)
| where cloud_RoleName =~ 'SuccinctlyFunction' and operation_Name == 'HttpTrigger1'
| order by timestamp desc
| take 20
It is also possible to get live feedback from your function while it is running. Go back to the Monitor blade of your function and click Live app metrics to get live feedback. This is especially useful if someone is using your function right now but isn’t getting the expected result. The blade that opens is somewhat big and convoluted, but it shows a lot of information, and you should be able to find what you need. Especially when you’re trying to debug a single request, this is indispensable.

Figure 9: Live Metrics Stream
Application Insights offers a lot of tools for monitoring, but they are not within the scope of this book. However, it is possible to monitor availability, failures, and performance; to see an application map that can show interaction with other components such as databases, storage accounts, and other functions; and to send notifications, such as emails, on failures.
Now that we’ve seen functions in the portal, we’re going to write some functions in Visual Studio. While the portal has its merits—it’s easy and a low barrier to entry—it’s hard to write any serious code, and you can’t utilize tools such as source control and continuous integration. I’m using the latest version of Visual Studio 2019 Community Edition (which you can download here) with .NET Core 3.1 (which you can download here), but the examples should work with Visual Studio 2017 and .NET Core 2.1 as well. When installing Visual Studio, you can select .NET Core workload, and .NET Core will be installed for you. In order to be able to develop functions using Visual Studio, you need to select the Azure development tools in the Visual Studio installer.
Once you’ve got everything set up, open Visual Studio, create a new project, and search for Function. This should bring up the Azure Functions project template. If you can’t find it, you can click Install more tools and features, which opens the installer and allows you to install the Azure development tools.

Figure 10: Creating an Azure Functions Project in Visual Studio 2019
Once you click the Azure Functions project template, Visual Studio will ask you for a project and solution name, like you’re used to. I’ve named mine SuccinctlyFunctionApp, but you can name yours whatever you like. Once you click Create, you can choose your runtime: Azure Functions v1 for the .NET Framework, or Azure Functions v2 and v3 for .NET Core. I’m going for Azure Functions v3 (.NET Core). You also have to pick a trigger, like we had to do in the portal. Let’s choose the Blob trigger option this time. On the right side, you have to choose a storage account, either an actual Azure storage account or a storage emulator. This is the storage account the function app itself uses, not the storage account containing the blobs that will trigger your function. The emulator, unfortunately, does not support every trigger. It’s fine for a timer trigger, but not for our blob trigger. So, click Browse instead, and select the storage account that you used in the previous examples.
We must also enter a connection string name, which will hold the connection string of the storage account that triggers our function, and a path within that storage account that will be monitored for new blobs. Name the connection string StorAccConnString and leave the path at the default samples-workitems. Then, click Create.

Figure 11: Configuring an Azure Functions Project in Visual Studio 2019
When you try to run the generated sample code, you’ll get an error that StorAccConnString is not configured. Open the local.settings.json file, duplicate the AzureWebJobsStorage line, and rename the key of the copy to StorAccConnString (a sketch of the resulting file follows after the output below). Then head over to the storage account in the Azure portal, go to your blob storage, and create a new container named samples-workitems. Run your sample function and upload any random file to the container. Your function should now trigger, and it should print something like:
C# Blob trigger function Processed blob
Name: dummy.txt
Size: 10 Bytes
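After that change, local.settings.json should look roughly like this (connection string values shortened to placeholders):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=[account];AccountKey=[key]",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "StorAccConnString": "DefaultEndpointsProtocol=https;AccountName=[account];AccountKey=[key]"
  }
}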
Tip: You’ll notice that when you stop your function using Visual Studio, the debug window doesn’t close. There’s an option in Visual Studio, under Tools > Options > Debugging > General, to Automatically close the console when debugging stops. It seems to work in the latest version of Visual Studio, but in the past, I’ve had problems with this option. If your breakpoints aren’t hit during debugging, make sure this option is turned off.
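The generated sample function itself looks roughly like this (the connection string name matches what you entered in the wizard):

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class Function1
{
    [FunctionName("Function1")]
    public static void Run(
        [BlobTrigger("samples-workitems/{name}", Connection = "StorAccConnString")] Stream myBlob,
        string name,
        ILogger log)
    {
        // Logs the blob name and its size whenever a new blob lands in the container.
        log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
    }
}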
There are a few things to notice in the example. First, the blob is represented in code as a stream. Second, the blob path in the BlobTrigger attribute contains a {name} placeholder, whose value is also passed to the function. The name is handy, but not mandatory.
Let’s say we upload text files to the blob storage, and we only want to know the contents of the file. I already mentioned that many triggers can bind to various C# types. Instead of the stream, we can have the file represented as a string. Change the type of myBlob to string and remove the name variable. Instead of logging the name and the length, log myBlob directly. The code should now look as follows.
Code Listing 4
public static void Run(
    [BlobTrigger("samples-workitems/{name}", Connection = "StorAccConnString")] string myBlob,
    ILogger log)
{
    log.LogInformation($"File contents: {myBlob}");
}
If you now upload a simple text file to the blob container, the function will trigger and print the contents. If you upload an image or something that doesn’t contain text, it still tries to convert the bytes to text, and ends up with some weird string like IDATx??L#O??.
Tip: A blob can be represented by stream, TextReader, string, byte[], a Plain Old CLR Object (or POCO) serializable as JSON, and various ICloudBlobs. The different triggers all have various code representations, and it’s not obvious from looking at the code or the trigger. The documentation is pretty good though, so be sure to check it. For example, you can find the various usages of the blob trigger here.
The Azure portal was a bit limited in binding options. For example, it wasn’t possible to attach multiple bindings to the return value. For the next example we’re going to quickly create a Service Bus. We’re going to look at the Service Bus in more depth in Chapter 4.
Go to the Azure portal, create a new resource, and search for Service Bus. In the Service Bus creation blade, enter a name that’s unique across Azure, for example SuccinctlyBus. Choose the Basic pricing tier, your subscription, resource group, and location. Once the Service Bus is created, look it up, find Queues in the left-hand menu, and create a new queue. Name it myqueue and leave all the defaults. When the queue is created, click it, and it will show 0 active messages. Next, find your Shared access policies in the Service Bus menu, click RootManageSharedAccessKey, and copy one of the connection strings. We’ll need it in a minute.
Next, go back to the code in Visual Studio. The first thing we need to do is install the Microsoft.Azure.WebJobs.Extensions.ServiceBus package using NuGet. Now, go to the local.settings.json file and create a new setting under StorAccConnString, name it BusConnString, and copy the Service Bus connection string as its value.
We can now use the Service Bus binding in the code. Above the function, add a ServiceBusAttribute to the return value, passing it myqueue as the queue name and BusConnString as the connection. To make things more fun, we’re going to add an additional output binding with the BlobAttribute so we can copy our incoming blob. You can add this attribute to the return value as well, and it will compile, but only one attribute will work. The function should now look as follows.
Code Listing 5
[FunctionName("Function1")] [return: ServiceBus("myqueue", Connection = "BusConnString")] public static string Run([BlobTrigger("samples-workitems/{name}", Connection = "StorAccConnString")]string myBlob, [Blob("samples-workitems-copy/{name}_copy", FileAccess.Write, Connection = "StorAccConnString")]out string outputBlob, ILogger log) { log.LogInformation($"File contents: {myBlob}"); outputBlob = myBlob; return myBlob; } |
When you now upload a text file to the blob container, the function will trigger, and it will return the file contents as string. The BlobAttribute on the out parameter will now translate this string to a new blob with _copy appended to the original name: myfile.txt_copy. We have to do this in another container, or the copy will trigger the function, and we’ll be stuck in an endless loop with _copy_copy_copy_copy. The Service Bus binding on the return value will put the string as a message on the myqueue queue that we just created. We can see this in the portal if we open the queue. It should now say there is one active message on the queue.
Now, let’s create a new function that responds to the queue.
Code Listing 6
[FunctionName("ReadQueue")] public static void DoWork([ServiceBusTrigger("myqueue", Connection = "BusConnString")]string message, ILogger log) { log.LogInformation($"Queue contents: {message}"); } |
The ServiceBusTrigger looks the same as the ServiceBus binding to the return value. We specify a queue name and a connection name. The variable can be a string, a byte array, a message, or a custom type if the message contains JSON. The FunctionName attribute is used to specify the function’s display name in the Azure portal. When you place a blob in the container, you can see that the first function will read it, create a copy, and put a message on the queue. The second function will trigger instantly and log the message.

Figure 12: Read the Blob, Queue the Message, and Read the Bus
We’re now going to change the first function a bit so that it will place a JSON string on the queue. We can then read it in the second function, and read the blob copy as an input parameter. The first thing we need to do is create a class that we can write to and read from the queue.
Code Listing 7
public class BlobFile
{
    public string FileName { get; set; }
}
Next, we need to return a JSON string instead of the file contents in our first function. We only need to pass the name of the file to the queue, because we can read the contents using an input parameter. Note that this uses the name variable again, so add the string name parameter (bound to the {name} placeholder in the blob path) back to the Run signature.
Code Listing 8
using Newtonsoft.Json;
[…]
log.LogInformation($"File contents: {myBlob}");
outputBlob = myBlob;
return JsonConvert.SerializeObject(new { FileName = name });
Now, in the second function, we can add an input parameter, bind the Service Bus trigger to BlobFile, and use the FileName property in the input parameter.
Code Listing 9
[FunctionName("ReadQueue")] public static void DoWork([ServiceBusTrigger("myqueue", Connection = "BusConnString")]BlobFile message, [Blob("samples-workitems-copy/{FileName}_copy", FileAccess.Read, Connection = "StorAccConnString")]string blob, ILogger log) { log.LogInformation($"Queue contents: {message.FileName}"); log.LogInformation($"Blob contents: {blob}"); } |
When you now upload a text file to your blob container, the first function will read it, create a copy, and place a JSON message containing the file name on the queue. The second function then reads the message from the queue and uses the file name to read the copy blob, and passes its contents as an input parameter. You can then use the object from the queue and the blob from the storage account.
When you’re on a Consumption plan, your app isn’t running all the time, and you get a small startup delay the first time a function executes after it’s gone idle. With that in mind, it’s important to keep your startup time short. So far, we haven’t seen any startup logic. We have some static functions in a static class, so the class can’t have instance state or constructors. However, functions don’t have to be static. This is especially useful when you want to use dependency injection.
The first thing we need to do is create an interface and a class we want to inject.
Code Listing 10
public interface IMyInterface
{
    string GetText();
}

public class MyClass : IMyInterface
{
    public string GetText() => "Hello from MyClass!";
}
Next, we need to install the Microsoft.Azure.Functions.Extensions package using NuGet. Once that is installed, we can create a new class that will have some startup logic.
Code Listing 11
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(SuccinctlyFunctionApp.Startup))]

namespace SuccinctlyFunctionApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddTransient<IMyInterface, MyClass>();
        }
    }
}
The one thing to note here is the assembly attribute that specifies the Startup class. The class itself inherits from FunctionsStartup and overrides the Configure method, which lets you build your dependency injection container. It uses the default .NET Core DI libraries.
You can now remove all the static keywords from the Function1 class and add a constructor that takes IMyInterface as argument.
Code Listing 12
public class Function1
{
    private readonly IMyInterface myInterface;

    public Function1(IMyInterface myInterface)
    {
        this.myInterface = myInterface;
    }

    […]
You can then use myInterface in any function in the Function1 class.
Code Listing 13
log.LogInformation(myInterface.GetText());
It’s perfectly valid to use DI in your function classes. However, you can’t use state to “remember” values from one function to another. Due to the dynamic scaling of functions, you’re never sure if your functions will run in the same instance. Write your functions as if they were static, except for some injected variables.
Once your function is done, you need to deploy it to Azure. There are various ways to do so. The easiest is deploying your function app directly from Visual Studio. Another method is using Azure DevOps. While a complete discussion of Azure DevOps is not within the scope of this book, we’re going to explore both methods of deployment in the following sections.
In Visual Studio, in the Solution Explorer, simply right-click the project that contains your functions and click Publish. After that it’s self-explanatory. You can choose the Consumption plan, Premium plan, or an App Service plan with Windows or Linux. You can also choose an existing function app or create a new one.

Figure 13: Picking a Publish Target Using Visual Studio
The Run from package file (recommended) option needs some explanation. Normally, a web app would run files from the wwwroot folder. With Run from package, your web app is read from a zip file that is mounted as a read-only file drive on wwwroot. That means deploying is as easy as uploading a new zip file and pointing to it. You’ll never have locked or conflicting files. That leads to atomicity and predictability: a function app is deployed at once, and all files are included. Versioning also becomes easier if you correctly version your zip files. Cold starts also become faster, which becomes noticeable if you run Node.js with thousands of files in npm modules. The downside to running from packages is that it only runs on Windows, so Linux is not supported, and your site becomes read-only, so you can’t change it from the Azure portal.
Back to the actual deployment. We’re going to create a new Azure function. I’ve had some bad experiences with overwriting an existing function with a new packaged function; it doesn’t always work. So, you can create a publishing profile and fill out the data to create a new function app, and then just click Create.

Figure 14: Creating a New Resource Using Visual Studio
The plus side of using Visual Studio is, of course, that it’s easy to deploy. The downside, however, is that you always have to deploy a local copy. When deploying software, you often need to do some additional work, like change some configuration, change a version, or run some tests. That means you must always remember to do those steps. What’s more, you can now deploy from your machine, but a coworker may be using another editor, or might somehow be unable to build the software while they still need to be able to do deployments. Deployments may now suffer from the “it works on my machine” syndrome.
As I mentioned, DevOps is not within the scope of this book, but I wanted to mention it and give you a quick how-to guide in case you’re already familiar with Azure DevOps, so prior knowledge is assumed here. First, you need a Git repository in DevOps. Copy the solution to your repository and commit and push it to DevOps. For simplicity, be sure to add the local.settings.json file to source control (which is excluded from Git by default).
Next, you need to create a build pipeline in DevOps. I’m using the classic editor without YAML with the Azure Functions for .NET template. Pick the repository you just created and click Save. You shouldn’t have to change anything.
The same goes for the release pipeline. Create a new pipeline and choose the Deploy a function app to Azure Functions pipeline. In this step you need an existing function app, so create one in the Azure portal. It is possible to create function apps from DevOps by using ARM templates (or PowerShell or the Azure CLI), but I won’t cover that here. We’ll see some ARM templates in the next chapter, though. Once the function app is created, you can return to DevOps and enter the necessary values.

Figure 15: Deploying a Function App Using DevOps
Set the artifact to the latest version of the build pipeline you just created and save the pipeline. If you now create a new build and a new release, the function app should be released to Azure.
Note: Most resources in Azure can be deployed using ARM templates. In the next chapter, we’ll see an example of an ARM template. When ARM isn’t available for a certain resource or setting, you can use PowerShell or the Azure CLI. Sometimes it’s easier (and safer) to manage resources manually, like users and access.
Durable functions are an extension to the Azure Functions runtime. They allow for stateful functions in a serverless environment. Durable functions can be written in C#, JavaScript in version 2, and F# in version 1, but the long-term goal is to support all languages. The typical use cases for durable functions are function chaining, fan-out/fan-in, async HTTP APIs, monitoring, human interaction, and aggregating event data. I’ll briefly discuss some scenarios in the following sections.
The default template is for the async HTTP API scenario. You can initiate some process by making an HTTP request. This returns a response with some URLs that can get you information about the initiated task. Meanwhile, the process runs asynchronously. With the provided URLs you can request the status of the process, the result if applicable, and when it was created and last updated, or you can cancel it. You can read more about the various scenarios in the documentation.
Durable functions share the runtime of regular functions, but with a lot of new functionality, and restrictions as well. Discussing all possibilities and restrictions of durable functions could be a book on its own, but in the following sections we’re going to look at some basics that should get you started. There is also plenty of official documentation on durable functions, so you should be good to go.
You can create durable functions in the Azure portal or in Visual Studio. When you’re creating them in the portal, you’ll need to install an extension, which you’re prompted for, and which takes about two minutes. In the portal you’ll find three sorts of durable functions: starter, activity, and orchestrator. In Visual Studio, these are created for you in a single file. We’re going to use Visual Studio to create our durable functions.
Open Visual Studio, create a new solution, and choose the Azure Functions template. Pick an empty function and browse for your Azure storage account. When the project is created, right-click the project and select Add > New Azure Function, pick a name (or leave the default), click Add, and then choose the Durable Functions Orchestration function. The functions that are created need some explanation.
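They look roughly like this. This is the template as generated, which still uses the older OrchestrationClient types; we’ll bring it up to date in a moment.

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class Function1
{
    [FunctionName("Function1")]
    public static async Task<List<string>> RunOrchestrator(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        var outputs = new List<string>();

        // The orchestrator chains three activity calls and collects their results.
        outputs.Add(await context.CallActivityAsync<string>("Function1_Hello", "Tokyo"));
        outputs.Add(await context.CallActivityAsync<string>("Function1_Hello", "Seattle"));
        outputs.Add(await context.CallActivityAsync<string>("Function1_Hello", "London"));

        return outputs;
    }

    [FunctionName("Function1_Hello")]
    public static string SayHello([ActivityTrigger] string name, ILogger log)
    {
        log.LogInformation($"Saying hello to {name}.");
        return $"Hello {name}!";
    }

    [FunctionName("Function1_HttpStart")]
    public static async Task<HttpResponseMessage> HttpStart(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestMessage req,
        [OrchestrationClient] DurableOrchestrationClient starter,
        ILogger log)
    {
        // Start a new orchestration instance and return the management URLs to the caller.
        string instanceId = await starter.StartNewAsync("Function1", null);
        log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
        return starter.CreateCheckStatusResponse(req, instanceId);
    }
}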
There are three functions: Function1, Function1_Hello, and Function1_HttpStart. As the name suggests, Function1_HttpStart sets everything in motion. It creates a new orchestration instance with an ID by calling starter.StartNewAsync("Function1", null), and returns that ID to the caller. You can see this in action by starting the application, opening your browser, and browsing to http://localhost:7071/api/Function1_HttpStart (your port may vary; see the console output). This returns a JSON response.
Code Listing 14
{ "id": "1341f5df5d8149679acdbd5e0fd07043", "statusQueryGetUri": "[URL]", "sendEventPostUri": "[URL]", "terminatePostUri": "[URL]", "rewindPostUri": "[URL]", "purgeHistoryDeleteUri": "[URL]" } |
The next step, in code, is that Function1 is executed asynchronously. This is the orchestrator function that determines the status and output of the function. You can get the status of the function by using the statusQueryGetUri URL from the response of Function1_HttpStart. If you’re fast enough, you’ll see the status Pending or Running, but you’ll probably see Completed. If the function raises an exception, you’ll see the status Failed. Other than that, you’ll see the function name, ID, input, output, the creation time, and when it was last updated.
{ "name": "Function1", "instanceId": "1341f5df5d8149679acdbd5e0fd07043", "runtimeStatus": "Completed", "input": null, "customStatus": null, "output": ["Hello Tokyo!", "Hello Seattle!", "Hello London!"], "createdTime": "2020-01-23T10:13:27Z", "lastUpdatedTime": "2020-01-23T10:14:09Z" } |
The actual work takes place in Function1_Hello, the activity function, which is called from Function1 three times with CallActivityAsync. The result from Hello is added to the output, which is returned by the orchestrator function. You can test the asynchronous character a bit better by adding Thread.Sleep(3000); to the activity function. This allows you to comfortably check out the status of the orchestrator.
Unfortunately, the Durable Functions template is not up to date, and some manual work is required to bring it up to the latest version. First, go to NuGet and update the Microsoft.Azure.WebJobs.Extensions.DurableTask package. Next, in your code, add a using statement for the Microsoft.Azure.WebJobs.Extensions.DurableTask namespace. Then, in the starter function, replace the OrchestrationClient attribute with the DurableClient attribute, and DurableOrchestrationClient with IDurableClient or IDurableOrchestrationClient. In the orchestrator function, replace DurableOrchestrationContext with IDurableOrchestrationContext. You could also replace the Function1_Hello string with a constant to avoid typos, or missing an occurrence if you ever decide to rename the function.
Version 2 has some new features, like durable entities, which allow you to read and update small pieces of state, and durable HTTP, which lets you call HTTP APIs directly from orchestrator functions, with automatic client-side HTTP 202 status polling and built-in support for Azure managed identities.
Let’s look at another scenario: human interaction. A common example is an approval flow. An employee requests something, perhaps a day off, and a manager has to approve or deny the request. If nobody responds within a set period, say three days, the request is automatically approved or denied. We can use events for this.
Let’s first create the starter function.
Code Listing 15
[FunctionName("StartApproval")] public static async Task<HttpResponseMessage> StartApproval( [HttpTrigger(AuthorizationLevel.Anonymous, "get")]HttpRequestMessage req, [DurableClient]IDurableOrchestrationClient client) { string name = req.RequestUri.ParseQueryString()["name"]; string instanceId = await client.StartNewAsync("ApprovalFlow", null, name); return client.CreateCheckStatusResponse(req, instanceId); } |
The only thing we do here is get a name variable from the request URL and pass it to the ApprovalFlow orchestrator function.
The orchestrator function is a bit of a beast. First, we create a cancellation token that we pass to a timer. The timer indicates when the approval request expires, which we set to one minute after the orchestration starts. The real magic is in the context.WaitForExternalEvent function, paired with Task.WhenAny. WaitForExternalEvent returns a task that completes when the event is raised. Task.WhenAny waits until either the task from the external event completes or the timer task completes (meaning it goes off). If the approvalEvent completes first, we cancel the cancelToken so that the timer stops. Next, we check whether the approval event returned true (approved) or false (denied). When the timer completes first, the request is automatically denied. With context.GetInput<string> we can get the input that was passed to the function from the starter function, in this case the value of the name URL parameter.
Code Listing 16
[FunctionName("ApprovalFlow")] public static async Task<object> ApprovalFlow([OrchestrationTrigger]IDurableOrchestrationContext context) { // Possibly write the request to a database here. //await context.CallActivityAsync("RequestToDb", context.GetInput<string>()); using (var cancelToken = new CancellationTokenSource()) { DateTime dueTime = context.CurrentUtcDateTime.AddMinutes(1); Task durableTimeout = context.CreateTimer(dueTime, cancelToken.Token); Task<bool> approvalEvent = context.WaitForExternalEvent<bool>("ApprovalEvent"); if (approvalEvent == await Task.WhenAny(approvalEvent, durableTimeout)) { cancelToken.Cancel(); if (approvalEvent.Result) { return await context.CallActivityAsync<object>("ApproveActivity", context.GetInput<string>()); } else { return await context.CallActivityAsync<object>("DenyActivity", context.GetInput<string>()); } } else { return await context.CallActivityAsync<object>("DenyActivity", context.GetInput<string>()); } } } [FunctionName("ApproveActivity")] public static object ApproveActivity([ActivityTrigger] string name) { // Probably update some record in the database here. return new { Name = name, Approved = true }; } [FunctionName("DenyActivity")] public static object DenyActivity([ActivityTrigger] string name) { // Probably update some record in the database here. return new { Name = name, Approved = false }; } |
The ApproveActivity and DenyActivity functions are the activity functions, and they simply return whether a request was approved or denied, but in a real-world scenario you’d probably do some updates in a database, and possibly send out some emails.
That leaves two more functions: one for approval, and one for denial. Or alternatively, one function with a parameter approved or denied. In any case, here’s the function for approval.
Code Listing 17
[FunctionName("Approve")] public static async Task<HttpResponseMessage> Approve( [HttpTrigger(AuthorizationLevel.Anonymous, "get")]HttpRequestMessage req, [DurableClient]IDurableOrchestrationClient client) { string instanceId = req.RequestUri.ParseQueryString()["instanceId"]; await client.RaiseEventAsync(instanceId, "ApprovalEvent", true); return req.CreateResponse(HttpStatusCode.OK, "Approved"); } |
For this to work, we need to provide the durable function’s instance ID in the query parameters. With this instance ID, we can raise an event by using client.RaiseEventAsync. This will complete the approval event in the orchestrator function, so it approves the request.
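A matching denial function is not shown in the listing, but it would look the same except that it raises the event with false. A quick sketch:

[FunctionName("Deny")]
public static async Task<HttpResponseMessage> Deny(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestMessage req,
    [DurableClient] IDurableOrchestrationClient client)
{
    string instanceId = req.RequestUri.ParseQueryString()["instanceId"];
    // Raising the event with false makes the orchestrator call DenyActivity.
    await client.RaiseEventAsync(instanceId, "ApprovalEvent", false);
    return req.CreateResponse(HttpStatusCode.OK, "Denied");
}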
You can test this by starting up the function app, browsing to the URL of the StartApproval function, and providing a name in the parameters, like http://localhost:7071/api/StartApproval?name=Sander. This returns a JSON response with the statusQueryGetUri and the id. Open the statusQueryGetUri in a separate browser tab to see that the status is currently Running. Next, browse to the URL of the approval function and supply the ID that was returned from the initial start: http://localhost:7071/api/Approve?instanceId=[ID]. Now refresh the status tab, and you should see that the status is Completed, the input was Sander (or whatever name you provided), and the output is a JSON object with the name and whether the request was approved. If you wait longer than a minute to approve the request, it’s denied.
Azure Functions is a powerful tool in your toolbox. With numerous triggers and bindings, it integrates with your other Azure services almost seamlessly. With durable functions and keeping state, your serverless options greatly expand. There are limitations though, so functions aren’t always a good option. If you need a quick function, you can use the Azure portal, or if you’re building some serious, big, serverless solutions, you can use Visual Studio and Azure DevOps.