
Taking notes for AZ-900 Microsoft Azure Fundamentals proved to be a good idea, so let’s do it again. This time for AZ-204 Developing Solutions for Microsoft Azure. Same approach: start with the exam skills outline (May 18, 2020 version) and follow the learning paths. Additionally, check the Pluralsight course Developing Solutions for Microsoft Azure (AZ-204) and take the related practice exams offered by Kaplan.

UPDATE 2021-02-24

A few remarks after I passed the exam:

  • The Microsoft Learning Path proved to be almost a complete waste of time. I could cover only about 15-20% of the exam skills outline by following the Learning Path; the rest covered interesting material, but nothing related to the exam.
  • The Kaplan tests, again, proved to be outdated. Not as badly as for AZ-900, but I still had to ignore lots of questions (about AKS, for example).
  • The Pluralsight videos were to the point and very useful, but it’s just impossible to cover everything by video. The 30-minute Pluralsight Exam Alert videos were extremely useful: they pinpoint the main topics and show you what kind of questions you might get in the exam.

Develop Azure compute solutions (25-30%)

Implement IaaS solutions

  • provision VMs

    Things to consider:

    • Network:
      • Address space: When you set up a virtual network, you specify the available address spaces, subnets, and security. If the VNet will be connected to other VNets, you must select address ranges that are not overlapping.
      • Segregation: After deciding the virtual network address space(s), you can create one or more subnets for your virtual network. You do this to break up your network into more manageable sections. For example, you might assign 10.1.0.0 to VMs, 10.2.0.0 to back-end services, and 10.3.0.0 to SQL Server VMs.
      • Secure the network: By default, there is no security boundary between subnets, so services in each of these subnets can talk to one another. However, you can set up Network Security Groups (NSGs), which allow you to control the traffic flow to and from subnets and to and from VMs. NSGs act as software firewalls, applying custom rules to each inbound or outbound request at the network interface and subnet level.
    • Plan VM deployment: What does the server communicate with? Which ports are open? Which OS is used? How much disk space is in use? What kind of data does this use? Are there restrictions (legal or otherwise) with not having it on-premises? What sort of CPU, memory, and disk I/O load does the server have? Is there burst traffic to account for?
    • Name the VM: This name also defines a manageable Azure resource, and it’s not trivial to change later. That means you should choose names that are meaningful and consistent, so you can easily identify what the VM does. A good convention is to include the following information in the name: Environment, Location, Instance, Product or Service, Role. Ex: devusc-webvm01
    • Location: The location can limit your available options. Each region has different hardware available and some configurations are not available in all regions. There are price differences between locations.
    • Size Options: General purpose, Compute optimized, Memory optimized, Storage optimized, GPU, High performance compute
    • Pricing Model: There are two separate costs the subscription will be charged for every VM: compute and storage.
      • Compute costs - Compute expenses are priced on a per-hour basis but billed on a per-minute basis.
      • Storage costs - You are charged separately for the storage the VM uses.
    • Storage: Best practice is that all Azure virtual machines will have at least two virtual hard disks (VHDs). Virtual disks can be backed by either Standard or Premium Storage accounts. When you create disks, you will have two options for managing the relationship between the storage account and each VHD:
      • Unmanaged disks: you are responsible for the storage accounts that are used to hold the VHDs that correspond to your VM disks.
      • Managed disks: the newer and recommended disk storage model. You specify the size of the disk, up to 4 TB, and Azure creates and manages both the disk and the storage.
    • Operating system: Windows or Linux. However, if you can’t find a suitable OS image, you can create your disk image with what you need, upload it to Azure storage, and use it to create an Azure VM. Keep in mind that Azure only supports 64-bit operating systems.
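
    Beyond the portal and CLI, VMs can also be provisioned from code. Below is a minimal, hedged sketch using the older Microsoft.Azure.Management.Fluent packages; the resource names, region, SSH key, and the azureauth.properties credentials file are placeholders chosen for illustration, not values from this post:

      using Microsoft.Azure.Management.Compute.Fluent;
      using Microsoft.Azure.Management.Compute.Fluent.Models;
      using Microsoft.Azure.Management.Fluent;
      using Microsoft.Azure.Management.ResourceManager.Fluent;
      using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

      // Authenticate with a service principal described in azureauth.properties
      var credentials = SdkContext.AzureCredentialsFactory.FromFile("azureauth.properties");
      var azure = Azure.Configure()
          .Authenticate(credentials)
          .WithDefaultSubscription();

      // Contents of your public key file (placeholder)
      var sshPublicKey = "<contents of id_rsa.pub>";

      // Provision a small Linux VM with SSH key authentication
      var vm = azure.VirtualMachines.Define("devusc-webvm01")
          .WithRegion(Region.USEast)
          .WithExistingResourceGroup("my-rg")
          .WithNewPrimaryNetwork("10.1.0.0/24")
          .WithPrimaryPrivateIPAddressDynamic()
          .WithNewPrimaryPublicIPAddress("devusc-webvm01")
          .WithPopularLinuxImage(KnownLinuxVirtualMachineImage.UbuntuServer16_04_Lts)
          .WithRootUsername("azureuser")
          .WithSsh(sshPublicKey)
          .WithSize(VirtualMachineSizeTypes.StandardDS1V2)
          .Create();
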
  • configure VMs for remote access

    Secure Shell (SSH) is an encrypted connection protocol that allows secure sign-ins over unsecured connections. SSH allows you to connect to a terminal shell from a remote location using a network connection. There are two approaches we can use to authenticate an SSH connection: username and password, or an SSH key pair. VMs created using SSH keys are by default configured with passwords disabled, which greatly increases the difficulty of brute-force guessing attacks.

    There are two parts to an SSH key pair: a public key and a private key.

    • The public key is placed on your Linux VM or any other service that you wish to use with public-key cryptography. This can be shared with anyone.
    • The private key is what you present to verify your identity to your Linux VM when you make an SSH connection. Consider this confidential information and protect this like you would a password or any other private data.

    With a public IP, we can interact with the VM over the Internet. Public IP addresses in Azure are dynamically allocated by default. That means the IP address can change over time - for VMs the IP address assignment happens when the VM is restarted. You can pay more to assign static addresses.

    To connect to the VM via SSH, you need:

    • the public IP address of the VM
    • the username of the local account on the VM
    • a public key configured in that account
    • access to the corresponding private key
    • port 22 open on the VM

    Remote Desktop (RDP) provides remote connectivity to the UI of Windows-based computers. Microsoft provides RDP clients for the following operating systems: Windows (built-in), macOS, iOS and Android. There are also open-source Linux clients, such as Remmina that enable you to connect to a Windows PC from an Ubuntu distribution.

    You can secure your management ports with just-in-time access. You can lock down inbound traffic to your Azure Virtual Machines with Azure Security Center’s just-in-time (JIT) virtual machine (VM) access feature. You can request access to a JIT-enabled VM from Security Center, Azure virtual machines, PowerShell, or the REST API.

  • create ARM templates

    Resource Manager templates are JSON files that define the resources you need to deploy for your solution. Create resource templates from the Automation section for a specific VM by selecting the Export template option.

    Advantages:

    • Declarative syntax
    • Repeatable results
    • Orchestration: You don’t have to worry about the complexities of ordering operations. Resource Manager orchestrates the deployment of interdependent resources so they’re created in the correct order. When possible, Resource Manager deploys resources in parallel so your deployments finish faster than serial deployments.
    • Modular files: You can break your templates into smaller, reusable components and link them together at deployment time.
    • Extensibility: With deployment scripts, you can add PowerShell or Bash scripts to your templates.
    • Testing: You can make sure your template follows recommended guidelines by testing it with the ARM template tool kit (arm-ttk). This test kit is a PowerShell script that you can download from GitHub.
    • Preview changes: You can use the what-if operation to get a preview of changes before deploying the template. With what-if, you see which resources will be created, updated, or deleted, and any resource properties that will be changed. The what-if operation checks the current state of your environment and eliminates the need to manage state.
    • Built-in validation: Your template is deployed only after passing validation.
    • Tracked deployments: In the Azure portal, you can review the deployment history and get information about the template deployment. You can see the template that was deployed, the parameter values passed in, and any output values.
    • Policy as code: Azure Policy is a policy as code framework to automate governance. If you’re using Azure policies, policy remediation is done on non-compliant resources when deployed through templates.
    • Deployment Blueprints: You can take advantage of Blueprints provided by Microsoft to meet regulatory and compliance standards. These blueprints include pre-built templates for various architectures.
    • CI/CD integration
    • Exportable code: You can get a template for an existing resource group by either exporting the current state of the resource group, or viewing the template used for a particular deployment.
    • Authoring tools

    The template has the following sections:

    • Parameters - Provide values during deployment that allow the same template to be used with different environments.
    • Variables - Define values that are reused in your templates. They can be constructed from parameter values.
    • User-defined functions - Create customized functions that simplify your template.
    • Resources - Specify the resources to deploy.
    • Outputs - Return values from the deployed resources.

    Example: to deploy a VM (in a new virtual network with a single subnet) using ARM template with C# you need the following files:

    • CreateVMTemplate.json
    • Parameters.json
    • azureauth.properties
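
    A minimal, hedged sketch of the C# side using the older fluent management SDK (the deployment name and resource group below are placeholders, and both JSON files are read as raw strings):

      using System.IO;
      using Microsoft.Azure.Management.Fluent;
      using Microsoft.Azure.Management.ResourceManager.Fluent;
      using Microsoft.Azure.Management.ResourceManager.Fluent.Core;
      using Microsoft.Azure.Management.ResourceManager.Fluent.Models;

      // Authenticate using the service principal details in azureauth.properties
      var credentials = SdkContext.AzureCredentialsFactory.FromFile("azureauth.properties");
      var azure = Azure.Configure().Authenticate(credentials).WithDefaultSubscription();

      // Deploy the exported template together with its parameter file
      var template = File.ReadAllText("CreateVMTemplate.json");
      var parameters = File.ReadAllText("Parameters.json");

      azure.Deployments.Define("myDeployment")      // placeholder deployment name
          .WithExistingResourceGroup("my-rg")       // placeholder resource group
          .WithTemplate(template)
          .WithParameters(parameters)
          .WithMode(DeploymentMode.Incremental)
          .Create();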

    After creating your template, you may wish to share it with other users in your organization. Template specs enable you to store a template as a resource type.

    Commands:

    • PowerShell: New-AzResourceGroupDeployment -Name myDeploymentName -ResourceGroupName myResourceGroup -TemplateFile $templateFile
    • CLI: az deployment group create --name myDeploymentName --resource-group myResourceGroup --template-file $templateFile
  • create container images for solutions by using Docker

    How to:

    • Use the docker build command to create the container image and tag it: docker build ./aci-helloworld -t aci-tutorial-app
    • Before you deploy the container to Azure Container Instances, use docker run to run it locally and confirm that it works. The -d switch lets the container run in the background, while -p allows you to map an arbitrary port on your computer to port 80 in the container: docker run -d -p 8080:80 aci-tutorial-app
  • publish an image to the Azure Container Registry

    Container Registry is an Azure service that you can use to create your own private Docker registries. Like Docker Hub, Container Registry is organized around repositories that contain one or more images. Container Registry also lets you automate tasks such as redeploying an app when an image is rebuilt.

    Security is an important reason to choose Container Registry instead of Docker Hub:

    • You have much more control over who can see and use your images.
    • You can sign images to increase trust and reduce the chances of an image becoming accidentally (or intentionally) corrupted or otherwise infected.
    • All images stored in a container registry are encrypted at rest.
    • Container Registry runs in Azure. The registry can be replicated to store images near where they’re likely to be deployed.
    • Container Registry is highly scalable, providing enhanced throughput for Docker pulls that can span many nodes concurrently. The Premium SKU of Container Registry includes 500 GiB of storage.

    You create a registry by using either the Azure portal or the Azure CLI acr create command. In addition to storing and hosting images, you can also use Container Registry to build images. Instead of building an image yourself and pushing it to Container Registry, use the CLI to upload the Docker file and other files that make up your image. Container Registry will then build the image for you. Use the acr build command to run a build.

    You can use the tasks feature of Container Registry to automatically rebuild your image whenever its source code changes. You configure a Container Registry task to monitor the GitHub repository that contains your code and trigger a build each time it changes. Container Registry tasks must be created from the command line. Example: this command uses ACR Tasks to run docker build in the cloud: az acr build --registry $ACR_NAME --image helloacrtasks:v1 .

  • run containers by using Azure Container Instance

    Azure Container Instances is useful for scenarios that can operate in isolated containers, including simple applications, task automation, and build jobs. Here are some of the benefits:

    • Fast startup: Launch containers in seconds.
    • Per second billing: Incur costs only while the container is running.
    • Hypervisor-level security: Isolate your application as completely as it would be in a VM.
    • Custom sizes: Specify exact values for CPU cores and memory.
    • Persistent storage: Mount Azure Files shares directly to a container to retrieve and persist state.
    • Linux and Windows: Schedule both Windows and Linux containers using the same API.

    Azure Container Instances enables exposing your container groups directly to the internet with an IP address and a fully qualified domain name (FQDN). When you create a container instance, you can specify a custom DNS name label so your application is reachable at customlabel.azureregion.azurecontainer.io.

    Azure Container Instances has three restart-policy options:

    • Always: Containers in the container group are always restarted. This policy makes sense for long-running tasks such as a web server. This is the default setting applied when no restart policy is specified at container creation.
    • Never: Containers in the container group are never restarted. The containers run one time only.
    • OnFailure: Containers in the container group are restarted only when the process executed in the container fails (when it terminates with a nonzero exit code). The containers are run at least once. This policy works well for containers that run short-lived tasks.

    By default, Azure Container Instances are stateless. If the container crashes or stops, all of its state is lost. To persist state beyond the lifetime of the container, you must mount a volume from an external store. To mount an Azure file share as a volume in Azure Container Instances, you need these three values:

    • The storage account name
    • The share name
    • The storage account access key

    Commands:

    • Create container:
        az container create \
            --resource-group learn-deploy-aci-rg \
            --name mycontainer \
            --image microsoft/aci-helloworld \
            --ports 80 \
            --dns-name-label $DNS_NAME_LABEL \
            --location eastus \
            --environment-variables \
                DB_ENDPOINT=$DB_ENDPOINT \
                DB_MASTERKEY=$DB_MASTERKEY
      

      (To use secure environment variables, you use the --secure-environment-variables argument instead of the --environment-variables argument)

    • Get logs:
        az container logs \
            --resource-group learn-deploy-aci-rg \
            --name mycontainer
      
    • Get container events (attach). The az container attach command shows container events and logs. By contrast, az container logs shows only the logs, not the startup events.
        az container attach \
            --resource-group learn-deploy-aci-rg \
            --name mycontainer
      
    • Execute commands in container:
        az container exec \
            --resource-group learn-deploy-aci-rg \
            --name mycontainer \
            --exec-command /bin/sh
      

Create Azure App Service Web Apps

  • create an Azure App Service Web App

    Azure App Service is a fully managed web application hosting platform. This platform as a service (PaaS) offered by Azure allows you to focus on designing and building your app while Azure takes care of the infrastructure to run and scale your applications.

    Using the Azure portal, you can easily add deployment slots to an App Service web app. For instance, you can create a staging deployment slot where you can push your code to test on Azure. Once you are happy with your code, you can easily swap the staging deployment slot with the production slot.

    The Azure portal provides out-of-the-box continuous integration and deployment with Azure DevOps, GitHub, Bitbucket, FTP, or a local Git repository on your development machine. Connect your web app with any of the above sources and App Service will do the rest for you by automatically syncing your code and any future changes on the code into the web app.

    Baked into the web app is the ability to scale up/down or scale out. Depending on the usage of the web app, you can scale your app up/down by increasing/decreasing the resources of the underlying machine that is hosting your web app. Resources can be the number of cores or the amount of RAM available.

    Creating a web app allocates a set of hosting resources in App Service, which you can use to host any web-based application that is supported by Azure, whether it be ASP.NET Core, Node.js, Java, Python, etc.

    Among other things, a web app requires the following:

    • Publish type: You can deploy your application to App Service as code or as a ready-to-run Docker image. Selecting Docker image will activate the Docker tab of the wizard, where you provide information about the Docker registry from which App Service will retrieve your image.
    • Runtime stack: If you choose to deploy your application as code, App Service needs to know what runtime your application uses (examples include Node.js, Python, Java, and .NET). If you deploy your application as a Docker image, you will not need to choose a runtime stack, since your image will include it.
    • Operating system: If you are deploying your app as code, many of the available runtime stacks are limited to one operating system or the other. If your application is packaged as a Docker image, choose the operating system on which your image is designed to run. Selecting Windows activates the Monitoring tab, where you have the option to enable Application Insights. Application Insights can be used from Linux-hosted apps as well, but this turnkey, no-code option is only available on Windows.
    • App Service plans: An App Service plan is a set of virtual server resources that run App Service apps. A plan’s size (sometimes referred to as its sku or pricing tier) determines the performance characteristics of the virtual servers that run the apps assigned to the plan and the App Service features that those apps have access to. Every App Service web app you create must be assigned to a single App Service plan that runs it. App Service plans are the unit of billing for App Service. The size of each App Service plan in your subscription, in addition to the bandwidth resources used by the apps deployed to those plans, determines the price that you pay. The number of web apps deployed to your App Service plans has no effect on your bill.

    Commands:

    • Deploy (it uses ZIP deploy): az webapp up --sku F1 --name <app-name> --os-type linux (Running it again in the same session will reuse cached values from .azure/config file: az webapp up --os-type linux)
  • enable diagnostics logging

    App logs are the output of runtime trace statements in app code. The types of logging available through Azure App Service depend on the code framework of the app, and on whether the app is running on a Windows or Linux app host:

    | App environment | Host | Log levels | Save location |
    | --- | --- | --- | --- |
    | ASP.NET | Windows | Error, Warning, Information, Verbose | File system, Blob storage |
    | ASP.NET Core | Windows | Error, Warning, Information, Verbose | File system, Blob storage |
    | ASP.NET Core | Linux | Error | File system |
    | Node.js | Windows | Error (STDERR), Information (STDOUT), Warning, Verbose | File system, Blob storage |
    | Node.js | Linux | Error | File system |
    | Java | Linux | Error | File system |

    Live log streaming is an easy and efficient way to view live logs for troubleshooting purposes. Live log streaming is designed to provide a quick view of all messages that are being sent to the app logs in the file system, without having to go through the process of locating and opening these logs. To use live logging, you connect to the live log service from the command line, and can then see text being written to the app’s logs in real time.

    Alternatives to app diagnostics. Azure Application Insights is a site extension that provides additional performance monitoring features, such as detailed usage and performance data, and is designed for production app deployments as well as being a potentially useful development tool.

    All Azure Web apps have an associated Source Control Management (SCM) service site. This site runs the Kudu service and other Site Extensions; it is Kudu that manages deployment and troubleshooting for Azure Web Apps, including options for viewing and downloading log files. One way to access the Kudu console is to navigate to https://<app name>.scm.azurewebsites.net and sign in using deployment credentials.

    With the Azure Monitor integration, you can create Diagnostic Settings to send logs to Storage Accounts, Event Hubs and Log Analytics.

    Commands:

    • Enable logging: az webapp log config --application-logging true --level verbose --name <app-name> --resource-group <resource-group-name> (there is currently no way to disable application logging by using Azure CLI commands)
    • Open log stream: az webapp log tail --name <app name> --resource-group <resource group name>
  • deploy code to a web app

    Automated deployment, or continuous integration, is a process used to push out new features and bug fixes in a fast and repetitive pattern with minimal impact on end users.

    Azure supports automated deployment directly from several sources. The following options are available:

    • Azure DevOps: You can push your code to Azure DevOps (previously known as Visual Studio Team Services), build your code in the cloud, run the tests, generate a release from the code, and finally, push your code to an Azure Web App.
    • GitHub: Azure supports automated deployment directly from GitHub. When you connect your GitHub repository to Azure for automated deployment, any changes you push to your production branch on GitHub will be automatically deployed for you.
    • Bitbucket: With its similarities to GitHub, you can configure an automated deployment with Bitbucket.
    • OneDrive: Microsoft’s cloud-based storage. You must have a Microsoft Account linked to a OneDrive account to deploy to Azure.
    • Dropbox: Azure supports deployment from Dropbox, which is a popular cloud-based storage system that is similar to OneDrive.

    There are a few options that you can use to manually push your code to Azure:

    • Git: App Service web apps feature a Git URL that you can add as a remote repository. Pushing to the remote repository will deploy your app.
    • az webapp up: webapp up is a feature of the az command-line interface that packages your app and deploys it. Unlike other deployment methods, az webapp up can create a new App Service web app for you if you haven’t already created one.
    • ZIP deploy: Use az webapp deployment source config-zip to send a ZIP of your application files to App Service. ZIP deploy can also be accessed via basic HTTP utilities such as curl.
    • WAR deploy: It’s an App Service deployment mechanism specifically designed for deploying Java web applications using WAR packages. WAR deploy can be accessed using the Kudu HTTP API located at https://<app-name>.scm.azurewebsites.net/api/wardeploy.
    • Visual Studio: Visual Studio features an App Service deployment wizard that can walk you through the deployment process.
    • FTP/S: FTP or FTPS is a traditional way of pushing your code to many hosting environments, including App Service.

    Within a single Azure App Service web app, you can create multiple deployment slots. Each slot is a separate instance of that web app, and it has a separate hostname. You can deploy a different version of your web app into each slot. Deployment slots are available only when your web app uses an App Service plan in the Standard, Premium, or Isolated tier. The new slot is effectively a separate web app with a different hostname. That’s why anyone on the internet can access it if they know that hostname. You can control access to a slot by using IP address restrictions.

    When provisioning and deploying high-scale applications that are composed of highly decoupled microservices, repeatability and predictability are crucial to success. Azure App Service enables you to create microservices that include web apps, mobile back ends, and API apps. Azure Resource Manager enables you to manage all the microservices as a unit, together with resource dependencies such as database and source control settings. Now, you can also deploy such an application using JSON templates and simple PowerShell scripting.

  • configure web app settings including SSL, API, and connection strings

    In App Service, app settings are variables passed as environment variables to the application code. For Linux apps and custom containers, App Service passes app settings to the container using the --env flag to set the environment variable in the container. App settings and connection strings are always encrypted when stored (encrypted-at-rest).

    For ASP.NET and ASP.NET Core developers, setting app settings in App Service is like setting them in _Web.config_ or _appsettings.json_, but the values in App Service override the ones in _Web.config_ or _appsettings.json_. You can keep development settings (for example, local MySQL password) in _Web.config_ or _appsettings.json_ and production secrets (for example, Azure MySQL database password) safely in App Service. Similarly, setting connection strings in App Service is like setting them in _Web.config_, but the values you set in App Service override the ones in _Web.config_. You can keep development settings (for example, a database file) in _Web.config_ and production secrets (for example, SQL Database credentials) safely in App Service. The same code uses your development settings when you debug locally, and it uses your production secrets when deployed to Azure. For other language stacks, it's better to use _app settings_ instead, because connection strings require special formatting in the variable keys in order to access the values.
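
    In .NET code, App Service app settings and connection strings simply show up as environment variables at runtime. A small sketch (the setting and connection string names are made-up examples; the prefix follows the App Service convention for the connection string type, e.g. SQLAZURECONNSTR_ for SQL Azure or CUSTOMCONNSTR_ for Custom):

      using System;

      // An app setting named "MySetting" configured in App Service
      string mySetting = Environment.GetEnvironmentVariable("MySetting");

      // A connection string named "MyDb" of type "SQL Azure" is exposed
      // with a type-specific prefix added by App Service
      string myDb = Environment.GetEnvironmentVariable("SQLAZURECONNSTR_MyDb");

      Console.WriteLine($"MySetting = {mySetting}");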

    You can use the Azure CLI to create and manage settings from the command line:

    • Create: az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings <setting-name>="<value>"
    • Show: az webapp config appsettings list --name <app-name> --resource-group <resource-group-name>

    The following table lists the options you have for adding certificates in App Service:

    | Option | Description |
    | --- | --- |
    | Create a free App Service Managed Certificate (Preview) | A private certificate that’s easy to use if you just need to secure your www custom domain or any non-naked domain in App Service. |
    | Purchase an App Service certificate | A private certificate that’s managed by Azure. It combines the simplicity of automated certificate management and the flexibility of renewal and export options. |
    | Import a certificate from Key Vault | Useful if you use Azure Key Vault to manage your PKCS12 certificates. See Private certificate requirements. |
    | Upload a private certificate | If you already have a private certificate from a third-party provider, you can upload it. See Private certificate requirements. |
    | Upload a public certificate | Public certificates are not used to secure custom domains, but you can load them into your code if you need them to access remote resources. |

  • implement autoscaling rules, including scheduled autoscaling, and scaling by operational or system metrics

    When you create a web app, you can either create a new App Service plan or use an existing one. If you select an existing plan, any other web apps that use the same plan will share resources with your web app. They’ll all scale together, so they need to have the same scaling requirements. If your apps have different requirements, use a separate App Service plan for each one.

    You scale out by adding more instances to an App Service plan, up to the limit available for your selected tier.

    You scale an App Service plan up and down by changing the pricing tier and hardware level that it runs on. Scaling up can cause an interruption in service to client apps running at the time. Also, scaling up can cause the outgoing IP addresses for the web app to change.

    Azure Monitor autoscale applies only to Virtual Machine scale sets, Cloud Services, App Service - Web Apps, and API Management services.

    The following explanation applies to autoscaling:

    • Resource Metrics. Resources emit metrics; these metrics are later processed by rules. Metrics arrive via different methods: virtual machine scale sets use telemetry data from Azure diagnostics agents, whereas telemetry for Web Apps and Cloud Services comes directly from the Azure infrastructure. Some commonly used statistics include CPU usage, memory usage, thread counts, queue length, and disk usage. For a list of what telemetry data you can use, see Autoscale Common Metrics.
    • Custom Metrics. You can also leverage your own custom metrics that your application(s) may be emitting. If you have configured your application(s) to send metrics to Application Insights you can leverage those metrics to make decisions on whether to scale or not.
    • Time. Schedule-based rules are based on UTC. You must set your time zone properly when setting up your rules.
    • Rules. You can have many of them. You can create complex overlapping rules as needed for your situation. Rule types include
      • Metric-based - For example, do this action when CPU usage is above 50%.
      • Time-based - For example, trigger a webhook every Saturday at 8am in a given time zone. Metric-based rules measure application load and add or remove VMs based on that load; schedule-based rules let you scale when you see time patterns in your load and want to scale before a possible load increase or decrease occurs.
    • Actions and automation. Rules can trigger one or more types of actions.
      • Scale - Scale VMs in or out
      • Email - Send email to subscription admins, co-admins, and/or additional email addresses you specify
      • Automate via webhooks - Call webhooks, which can trigger multiple complex actions inside or outside Azure. Inside Azure, you can start an Azure Automation runbook, Azure Function, or Azure Logic App. Examples of third-party endpoints outside Azure include services like Slack and Twilio.

Implement Azure functions

Azure Functions is a serverless application platform. It allows developers to host business logic that can be executed without provisioning infrastructure. Functions provides intrinsic scalability and you are charged only for the resources used. You can write your function code in the language of your choice, including C#, F#, JavaScript, Python, and PowerShell Core. Support for package managers like NuGet and NPM is also included, so you can use popular libraries in your business logic.

An Azure Functions app stores management information, code, and logs in Azure Storage. The storage account must support Azure Blob, Queue, Files, and Table storage; use a general-purpose Azure Storage account for this purpose.

  • implement input and output bindings for a function

    Bindings are a declarative way to connect data and services to your function. Bindings know how to talk to different services, which means you don’t have to write code in your function to connect to data sources and manage connections. The platform takes care of that complexity for you as part of the binding code. Each binding has a direction - your code reads data from input bindings, and writes data to output bindings. Each function can have zero or more bindings to manage the input and output data processed by the function.

    A trigger is a special type of input binding that has the additional capability of initiating execution.

    Three properties are required in all bindings. You may have to supply additional properties based on the type of binding and storage you are using.

    • Name - Defines the function parameter through which you access the data. For example, in a queue input binding, this is the name of the function parameter that receives the queue message content.
    • Type - Identifies the type of binding, i.e., the type of data or service we want to interact with.
    • Direction - Indicates the direction data is flowing, i.e., is it an input or output binding?

    Additionally, most binding types also need a fourth property:

    • Connection - Provides the name of an app setting key that contains the connection string. Bindings use connection strings stored in app settings to keep secrets out of the function code. This makes your code more configurable and secure.

    Bindings are defined in JSON. A binding is configured in your function’s configuration file, which is named function.json and lives in the same folder as your function code.

    A binding expression is specialized text in function.json, function parameters, or code that is evaluated when the function is invoked to yield a value. Types of binding expressions:

    • App settings
    • Trigger file name
    • Trigger metadata
    • JSON payloads
    • New GUID
    • Current date and time
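
    As a concrete illustration, here is a hedged sketch of a C# class-library function (in-process model) that combines a queue trigger, a blob output binding, and a trigger-metadata binding expression; the queue name, blob path, and connection setting name are made-up examples, not values from this post:

      using System.IO;
      using Microsoft.Azure.WebJobs;
      using Microsoft.Extensions.Logging;

      public static class CopyMessageToBlob
      {
          [FunctionName("CopyMessageToBlob")]
          public static void Run(
              // Trigger: fires when a message arrives on the queue; the connection
              // string comes from the "StorageConnection" app setting
              [QueueTrigger("incoming-orders", Connection = "StorageConnection")] string orderId,
              // Output binding: {queueTrigger} is a binding expression that resolves
              // to the text of the queue message
              [Blob("orders/{queueTrigger}.txt", FileAccess.Write, Connection = "StorageConnection")] TextWriter outputBlob,
              ILogger log)
          {
              log.LogInformation($"Processing order {orderId}");
              outputBlob.Write($"Order {orderId} received");
          }
      }

    In the scripting languages the same bindings would instead be declared in function.json; the C# attributes above play the same role.
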
  • implement function triggers by using data operations, timers, and webhooks

    The type of event that starts the function is called a trigger. You must configure a function with exactly one trigger. Azure supports triggers for the following services:

    | Service | Trigger description |
    | --- | --- |
    | Blob storage | Starts a function when a new or updated blob is detected. |
    | Azure Cosmos DB | Starts a function when inserts and updates are detected. |
    | Event Grid | Starts a function when an event is received from Event Grid. |
    | HTTP | Starts a function with an HTTP request. |
    | Microsoft Graph Events | Starts a function in response to an incoming webhook from the Microsoft Graph. Each instance of this trigger can react to one Microsoft Graph resource type. |
    | Queue storage | Starts a function when a new item is received on a queue. The queue message is provided as input to the function. |
    | Service Bus | Starts a function in response to messages from a Service Bus queue. |
    | Timer | Starts a function on a schedule. |

    Secure HTTP triggers: HTTP triggers let you use API keys to block unknown callers by requiring the key to be present on each request. When you create a function, you select the authorization level. By default, it’s set to Function, which requires a function-specific API key, but it can also be set to Admin to use a global “master” key, or Anonymous to indicate that no key is required. You can also change the authorization level through the function properties after creation.

    A blob trigger is a trigger that executes a function when a file is uploaded or updated in Azure Blob storage (data operation). To create a blob trigger, you create an Azure Storage account and provide a location that the trigger monitors.

    One setting that you’ll want to look at is the Path. The Path tells the blob trigger where to monitor to see if a blob is uploaded or updated, e.g. samples-workitems/{name}.png

    A timer trigger is a trigger that executes a function at a consistent interval. To create a timer trigger, you need to supply two pieces of information:

    • A Timestamp parameter name, which is simply an identifier to access the trigger in code.
    • A Schedule, which is a CRON expression that sets the interval for the timer.
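
    A minimal sketch in the in-process C# model (the function name is a made-up example) of a timer trigger whose six-field NCRONTAB schedule fires every five minutes:

      using System;
      using Microsoft.Azure.WebJobs;
      using Microsoft.Extensions.Logging;

      public static class ScheduledCleanup
      {
          [FunctionName("ScheduledCleanup")]
          public static void Run(
              // "0 */5 * * * *" = at second 0 of every 5th minute
              [TimerTrigger("0 */5 * * * *")] TimerInfo myTimer,
              ILogger log)
          {
              log.LogInformation($"Timer trigger executed at {DateTime.UtcNow:o}");
          }
      }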

    An HTTP trigger (webhook) is a trigger that executes a function when it receives an HTTP request. HTTP triggers have many capabilities and customizations, including:

    • Provide authorized access by supplying keys.
    • Restrict which HTTP verbs are supported.
    • Return data back to the caller.
    • Receive data through query string parameters or through the request body.
    • Support URL route templates to modify the function URL.

    When you create an HTTP trigger, you select a programming language, provide a trigger name, and select an Authorization level.

    One setting that’s important to understand is Request parameter name. This setting is a string that represents the name of the parameter that contains the information about an incoming HTTP request. By default, the name of the parameter is req.
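
    Putting that together, here is a hedged sketch of an HTTP-triggered C# function with Function-level authorization, where req is the request parameter (the function name, route verbs, and greeting logic are made-up):

      using Microsoft.AspNetCore.Http;
      using Microsoft.AspNetCore.Mvc;
      using Microsoft.Azure.WebJobs;
      using Microsoft.Azure.WebJobs.Extensions.Http;
      using Microsoft.Extensions.Logging;

      public static class HelloHttp
      {
          [FunctionName("HelloHttp")]
          public static IActionResult Run(
              // Function-level key required; only GET and POST are allowed
              [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req,
              ILogger log)
          {
              // Read a value from the query string and return it to the caller
              string name = req.Query["name"];
              return new OkObjectResult($"Hello, {name ?? "anonymous"}");
          }
      }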

  • implement Azure Durable Functions

    Durable Functions is an extension of Azure Functions that enables you to perform long-lasting, stateful operations in Azure. Azure provides the infrastructure for maintaining state information. You can use Durable Functions to orchestrate a long-running workflow.

    Benefits of using Durable Functions include:

    • They enable you to write event driven code. A durable function can wait asynchronously for one or more external events, and then perform a series of tasks in response to these events.
    • You can chain functions together. You can implement common patterns such as fan-out/fan-in, which uses one function to invoke others in parallel, and then accumulate the results.
    • You can orchestrate and coordinate functions, and specify the order in which functions should execute.
    • The state is managed for you. You don’t have to write your own code to save state information for a long-running function.

    Durable functions allows you to define stateful workflows using an orchestration function. An orchestration function provides these extra benefits:

    • You can define the workflows in code. You don’t need to write a JSON description or use a workflow design tool.
    • Functions can be called both synchronously and asynchronously. Output from the called functions is saved locally in variables and used in subsequent function calls.
    • Azure checkpoints the progress of a function automatically when the function awaits. Azure may choose to dehydrate the function and save its state while the function waits, to preserve resources and reduce costs. When the function starts running again, Azure will rehydrate it and restore its state.

    Function types:

    • Client functions are the entry point for creating an instance of a Durable Functions orchestration. They can run in response to an event from many sources, such as a new HTTP request arriving, a message being posted to a message queue, an event arriving in an event stream. You can write them in any of the supported languages.
    • Orchestrator functions describe how actions are executed, and the order in which they are run. You write the orchestration logic in code (only C# or JavaScript).
    • Activity functions are the basic units of work in a durable function orchestration. An activity function contains the actual work performed by the tasks being orchestrated.

    Application patterns:

    • Function chaining - In this pattern, the workflow executes a sequence of functions in a specified order. The output of one function is applied to the input of the next function in the sequence. The output of the final function is used to generate a result.
    • Fan out/fan in - This pattern runs multiple functions in parallel and then waits for all the functions to finish. The results of the parallel executions can be aggregated or used to compute a final result.
    • Async HTTP APIs - This pattern addresses the problem of coordinating state of long-running operations with external clients. An HTTP call can trigger the long-running action. Then, it can redirect the client to a status endpoint. The client can learn when the operation is finished by polling this endpoint.
    • Monitor - This pattern implements a recurring process in a workflow, possibly looking for a change in state. For example, you could use this pattern to poll until specific conditions are met.
    • Human interaction - This pattern combines automated processes that also involve some human interaction. A manual process within an automated process is tricky because people aren’t as highly available and as responsive as most computers. Human interaction can be incorporated using timeouts and compensation logic that runs if the human fails to interact correctly within a specified response time. An approval process is an example of a process that involves human interaction.
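
    As an illustration of the client/orchestrator/activity split and the function chaining pattern described above, here is a hedged C# sketch using the in-process Durable Functions extension; the function names, activities, and input are made-up examples:

      using System.Threading.Tasks;
      using Microsoft.AspNetCore.Http;
      using Microsoft.Azure.WebJobs;
      using Microsoft.Azure.WebJobs.Extensions.DurableTask;
      using Microsoft.Azure.WebJobs.Extensions.Http;

      public static class OrderWorkflow
      {
          // Client function: starts a new orchestration instance in response to an HTTP call
          [FunctionName("OrderWorkflow_Start")]
          public static async Task<string> Start(
              [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
              [DurableClient] IDurableOrchestrationClient starter)
          {
              // Returns the instance id of the new orchestration
              return await starter.StartNewAsync("OrderWorkflow", null, "order-42");
          }

          // Orchestrator function: chains activities; progress is checkpointed at every await
          [FunctionName("OrderWorkflow")]
          public static async Task<string> RunOrchestrator(
              [OrchestrationTrigger] IDurableOrchestrationContext context)
          {
              string order = context.GetInput<string>();
              string validated = await context.CallActivityAsync<string>("ValidateOrder", order);
              return await context.CallActivityAsync<string>("ShipOrder", validated);
          }

          // Activity functions: the basic units of work being orchestrated
          [FunctionName("ValidateOrder")]
          public static string ValidateOrder([ActivityTrigger] string order) => $"{order}:validated";

          [FunctionName("ShipOrder")]
          public static string ShipOrder([ActivityTrigger] string order) => $"{order}:shipped";
      }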

Develop for Azure storage (10-15%)

Develop solutions that use Cosmos DB storage

  • select the appropriate API for your solution

    Azure Cosmos DB is a globally distributed and elastically scalable database. At the lowest level, Azure Cosmos DB stores data in atom-record-sequence (ARS) format. The data is then abstracted and projected as an API, which you specify when you are creating your database.

    Core (SQL) is the default API for Azure Cosmos DB, which provides you with a view of your data that resembles a traditional NoSQL document store. You can query the hierarchical JSON documents with a SQL-like language. Core (SQL) uses JavaScript’s type system, expression evaluation, and function invocation.

    Azure Cosmos DB’s API for MongoDB supports the MongoDB wire protocol. This API allows existing MongoDB client SDKs, drivers, and tools to interact with the data transparently, as if they are running against an actual MongoDB database. The data is stored in document format, which is the same as using Core (SQL). Azure Cosmos DB’s API for MongoDB is currently compatible with 3.2 version of the MongoDB wire protocol.

    Azure Cosmos DB’s support for the Cassandra API makes it possible to query data by using the Cassandra Query Language (CQL), and your data will appear to be a partitioned row store. Just like the MongoDB API, any clients or tools should be able to connect transparently to Azure Cosmos DB; only your connection settings should need to be updated. Cosmos DB’s Cassandra API currently supports version 4 of the CQL wire protocol.

    Azure Cosmos DB’s Azure Table API provides support for applications that are written for Azure Table Storage that need premium capabilities like global distribution, high availability, scalable throughput. The original Table API only allows for indexing on the Partition and Row keys; there are no secondary indexes. Storing table data in Cosmos DB automatically indexes all the properties, and requires no index management.

    Choosing Gremlin as the API provides a graph-based view over the data. Remember that at the lowest level, all data in any Azure Cosmos DB is stored in an ARS format. A graph-based view on the database means data is either a vertex (which is an individual item in the database), or an edge (which is a relationship between items in the database). You typically use a traversal language to query a graph database, and Azure Cosmos DB supports Apache Tinkerpop’s Gremlin language.

    | | Core (SQL) | MongoDB | Cassandra | Azure Table | Gremlin |
    | --- | --- | --- | --- | --- | --- |
    | New projects being created from scratch | ✓ | | | | |
    | Existing MongoDB, Cassandra, Azure Table, or Gremlin data | | ✓ | ✓ | ✓ | ✓ |
    | Analysis of the relationships between data | | | | | ✓ |
    | All other scenarios | ✓ | | | | |

    Additionally, there are a few questions that you can ask in order to help you define the scenario where the database is going to be used:

    • Does the schema change a lot? A traditional document database is a good fit in these scenarios, making Core (SQL) a good choice.
    • Is there important data about the relationships between items in the database? Relationships that require metadata to be stored for them are best represented in a graph database.
    • Does the data consist of simple key-value pairs? Before Azure Cosmos DB existed, Redis or the Table API might have been a good fit for this kind of data; however, Core (SQL) API is now the better choice, as it offers a richer query experience, with improved indexing over the Table API.

    Depending on which API you use, an Azure Cosmos item can represent either a document in a collection, a row in a table, or a node or edge in a graph. The following table shows the mapping of API-specific entities to an Azure Cosmos item:

    | Cosmos entity | SQL API | Cassandra API | Azure Cosmos DB API for MongoDB | Gremlin API | Table API |
    | --- | --- | --- | --- | --- | --- |
    | Azure Cosmos item | Item | Row | Document | Node or edge | Item |

  • implement partitioning schemes

    Azure Cosmos DB uses partitioning to scale individual containers in a database to meet the performance needs of your application. In partitioning, the items in a container are divided into distinct subsets called logical partitions. Logical partitions are formed based on the value of a partition key that is associated with each item in a container. All the items in a logical partition have the same partition key value.

    A logical partition also defines the scope of database transactions. You can update items within a logical partition by using a transaction with snapshot isolation. When new items are added to a container, new logical partitions are transparently created by the system. There is no limit to the number of logical partitions in your container. Each logical partition can store up to 20GB of data.
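
    Since a logical partition is the transaction scope, the .NET SDK (v3) exposes this through TransactionalBatch; a hedged sketch, assuming an existing container and a made-up Family type with Id and LastName properties:

      using System;
      using Microsoft.Azure.Cosmos;

      // All operations in the batch must target the same logical partition ("Andersen")
      TransactionalBatchResponse batchResponse = await container
          .CreateTransactionalBatch(new PartitionKey("Andersen"))
          .CreateItem(new Family { Id = "1", LastName = "Andersen" })
          .CreateItem(new Family { Id = "2", LastName = "Andersen" })
          .ExecuteAsync();

      // Either every operation succeeded, or none were applied
      Console.WriteLine($"Batch succeeded: {batchResponse.IsSuccessStatusCode}");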

    A container is scaled by distributing data and throughput across physical partitions. Internally, one or more logical partitions are mapped to a single physical partition. Typically smaller containers have many logical partitions but they only require a single physical partition. Unlike logical partitions, physical partitions are an internal implementation of the system and they are entirely managed by Azure Cosmos DB.

    If your container has a property that has a wide range of possible values, it is likely a great partition key choice. One possible example of such a property is the item ID. For small read-heavy containers or write-heavy containers of any size, the item ID is naturally a great choice for the partition key.

  • interact with data using the appropriate SDK

      // Create a new instance of the Cosmos client
      var cosmosClient = new CosmosClient(EndpointUri, PrimaryKey);
      // Create a new database (returns the existing one if it is already there)
      Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync(databaseId);
      // Create a new container partitioned on /LastName
      Container container = await database.CreateContainerIfNotExistsAsync(containerId, "/LastName");
      // Read an item by id and partition key value (familyId and lastName are assumed to be known)
      ItemResponse<Family> familyResponse = await container.ReadItemAsync<Family>(familyId, new PartitionKey(lastName));
      // Query data
      var sqlQueryText = "SELECT * FROM c WHERE c.LastName = 'Andersen'";
      QueryDefinition queryDefinition = new QueryDefinition(sqlQueryText);
      FeedIterator<Family> queryResultSetIterator = container.GetItemQueryIterator<Family>(queryDefinition);
      List<Family> families = new List<Family>();
      while (queryResultSetIterator.HasMoreResults)
      {
          FeedResponse<Family> currentResultSet = await queryResultSetIterator.ReadNextAsync();
          foreach (Family family in currentResultSet)
          {
              families.Add(family);
          }
      }
    
  • set the appropriate consistency level for operations

    Consistency can sometimes be an issue when you are working with distributed systems, but Azure Cosmos DB alleviates this situation by offering you five different consistency levels (from strongest to weakest):

    • strong
    • bounded staleness
    • session
    • consistent prefix
    • eventual

    The linearizability of the strong consistency model is the gold standard of data programmability. But it comes at a steep price: higher write latencies, because data has to replicate and commit across large distances. Strong consistency may also suffer from reduced availability (during failures) because data cannot replicate and commit in every region. Eventual consistency offers higher availability and better performance, but it's more difficult to program applications because data may not be completely consistent across all regions.

    Strong: Strong consistency offers a linearizability guarantee. Linearizability refers to serving requests concurrently. The reads are guaranteed to return the most recent committed version of an item. A client never sees an uncommitted or partial write. Users are always guaranteed to read the latest committed write.

    Bounded staleness: The reads are guaranteed to honor the consistent-prefix guarantee. The reads might lag behind writes by at most “K” versions (that is, “updates”) of an item or by “T” time interval, whichever is reached first. In other words, when you choose bounded staleness, the “staleness” can be configured in two ways:

    • The number of versions (K) of the item
    • The time interval (T) reads might lag behind the writes

    Session: Within a single client session reads are guaranteed to honor the consistent-prefix, monotonic reads, monotonic writes, read-your-writes, and write-follows-reads guarantees. This assumes a single “writer” session or sharing the session token for multiple writers.

    Consistent prefix: Updates that are returned contain some prefix of all the updates, with no gaps. Consistent prefix consistency level guarantees that reads never see out-of-order writes.

    Eventual: There’s no ordering guarantee for reads. In the absence of any further writes, the replicas eventually converge. Eventual consistency is the weakest form of consistency because a client may read the values that are older than the ones it had read before. Eventual consistency is ideal where the application does not require any ordering guarantees.
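
    The default consistency level is configured at the account level, but the .NET SDK lets a client relax it; a hedged sketch (EndpointUri and PrimaryKey are placeholders, as in the SDK snippet above):

      using Microsoft.Azure.Cosmos;

      // A client can only weaken (never strengthen) the account's default consistency
      var clientOptions = new CosmosClientOptions
      {
          ConsistencyLevel = ConsistencyLevel.Session
      };
      var cosmosClient = new CosmosClient(EndpointUri, PrimaryKey, clientOptions);

      // ...or relax it further for an individual operation
      var requestOptions = new ItemRequestOptions
      {
          ConsistencyLevel = ConsistencyLevel.Eventual
      };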

  • create Cosmos DB containers

    Commands:

    • Create a Cosmos container with default index policy
        az cosmosdb sql container create \
            -a $accountName -g $resourceGroupName \
            -d $databaseName -n $containerName \
            -p $partitionKey --throughput $throughput	
      
  • implement scaling (partitions, containers)

    Azure Cosmos DB uses partitioning to scale individual containers in a database to meet the performance needs of your application. In partitioning, the items in a container are divided into distinct subsets called logical partitions. Logical partitions are formed based on the value of a partition key that is associated with each item in a container. All the items in a logical partition have the same partition key value.

    Selecting a partition key with a wide range of possible values ensures that the container is able to scale. Azure Cosmos DB transparently partitions your container using the logical partition key that you specify in order to elastically scale your provisioned throughput and storage. All containers created inside a database with a provisioned throughput must be created with a partition key.

  • implement server-side programming including stored procedures, triggers, and change feed notifications

    A stored procedure is a piece of application logic written in JavaScript that is registered and executed against a collection as a single transaction. In Azure Cosmos DB, JavaScript is hosted in the same memory space as the database. Hence, requests made within stored procedures execute in the same scope of a database session. This process enables Azure Cosmos DB to guarantee ACID for all operations that are part of a single stored procedure.

    The stored procedure resource has a fixed schema. The body property contains the application logic. The following example illustrates the JSON construct of a stored procedure:

      {    
          "id":"SimpleStoredProc",  
          "body":"function (docToCreate, addedPropertyName, addedPropertyValue) { getContext().getResponse().setBody('Hello World'); }",  
          "_rid":"hLEEAI1YjgcBAAAAAAAAgA==",  
          "_ts":1408058682,  
          "_self":"dbs\/hLEEAA==\/colls\/hLEEAI1Yjgc=\/sprocs\/hLEEAI1YjgcBAAAAAAAAgA==\/",  
          "_etag":"00004100-0000-0000-0000-53ed453a0000"  
      }
    

    Merely having All access mode for a particular stored procedure does not allow the user to execute the stored procedure. Instead, the user has to have All access mode at the collection level in order to execute a stored procedure.

    Triggers are pieces of application logic that can be executed before (pre-triggers) and after (post-triggers) the creation, deletion, or replacement of a document. Triggers are written in JavaScript. Neither pre- nor post-triggers take parameters. Like stored procedures, triggers live within the confines of a collection, thus confining the application logic to the collection.

    Similar to stored procedures, the trigger resource has a fixed schema, and the body property contains the application logic. There are two additional required parameters when creating a trigger:

    • triggerOperation - It is the type of operation that invokes the trigger. The acceptable values are: All, Insert, Replace and Delete.
    • triggerType - This specifies when the trigger is fired. The acceptable values are: Pre and Post. Pre triggers fire before an operation, while Post triggers fire after an operation.

    Change feed in Azure Cosmos DB is a persistent record of changes to a container in the order they occur. Change feed support in Azure Cosmos DB works by listening to an Azure Cosmos container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified. The persisted changes can be processed asynchronously and incrementally, and the output can be distributed across one or more consumers for parallel processing. Currently change feed doesn’t log deletes.

    The feature is supported in all Cosmos DB APIs except Table API.

    Change feed items come in the order of their modification time. This sort order is guaranteed per logical partition key. While consuming the change feed in an Eventual consistency level, there could be duplicate events in-between subsequent change feed read operations (the last event of one read operation appears as the first of the next).
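
    One common way to consume the change feed from .NET is the change feed processor in the v3 SDK; a hedged sketch, assuming an existing monitored container, a separate lease container, and the made-up Family type and names used above:

      using System;
      using System.Collections.Generic;
      using System.Threading;
      using System.Threading.Tasks;
      using Microsoft.Azure.Cosmos;

      // Handler invoked with batches of changed documents, in modification order per logical partition
      static Task HandleChangesAsync(IReadOnlyCollection<Family> changes, CancellationToken cancellationToken)
      {
          foreach (var change in changes)
          {
              Console.WriteLine($"Changed item with LastName {change.LastName}");
          }
          return Task.CompletedTask;
      }

      // Build and start the processor; leases track progress in a separate container
      ChangeFeedProcessor processor = monitoredContainer
          .GetChangeFeedProcessorBuilder<Family>("familyProcessor", HandleChangesAsync)
          .WithInstanceName("consoleHost")
          .WithLeaseContainer(leaseContainer)
          .Build();

      await processor.StartAsync();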

Develop solutions that use blob storage

Azure Storage supports three kinds of blobs:

| Blob type | Description |
| --- | --- |
| Block blobs | Block blobs are used to hold text or binary files up to ~5 TB (50,000 blocks of 100 MB) in size. The primary use case for block blobs is the storage of files that are read from beginning to end, such as media files or image files for websites. They are named block blobs because files larger than 100 MB must be uploaded as small blocks. These blocks are then consolidated (or committed) into the final blob. |
| Page blobs | Page blobs are used to hold random-access files up to 8 TB in size. Page blobs are used primarily as the backing storage for the VHDs used to provide durable disks for Azure Virtual Machines (Azure VMs). They are named page blobs because they provide random read/write access to 512-byte pages. |
| Append blobs | Append blobs are made up of blocks like block blobs, but they are optimized for append operations. These blobs are frequently used for logging information from one or more sources into the same blob. For example, you might write all of your trace logging to the same append blob for an application running on multiple VMs. A single append blob can be up to 195 GB. |

  • move items in Blob storage between storage accounts or containers

    Azure doesn’t include any process to move blobs. To perform a move, you’ll first copy the data and then delete the source data. Azure does provide several tools you can use to copy blobs to a destination:

    • Azure CLI
    • AzCopy utility
    • .NET Storage Client library

    The Azure CLI provides access to Azure Storage through the az storage series of commands. The basic commands to upload and download blobs between blob storage and the local file system are synchronous. You transfer blobs between containers and storage accounts using the az storage blob copy command. Unlike the upload and download operations, this command runs asynchronously and uses the Azure Storage service to manage the copy process. This command also supports a batch mode that enables you to copy multiple blobs.

    Commands:

    • Copy (delete the source blob afterwards to complete the move):
        az storage blob copy start \
            --destination-container destContainer \
            --destination-blob myBlob \
            --source-account-name mySourceAccount \
            --source-account-key mySourceAccountKey \
            --source-container myContainer \
            --source-blob myBlob	
      
    • Check status:
        az storage blob show \
            --container-name destContainer \
            --name myBlob
      

    The AzCopy utility was written specifically for transferring data into, out of, and between Azure Storage accounts. A key strength of AzCopy over the Azure CLI is that all operations run asynchronously, and they’re recoverable.

    As with the Azure CLI, AzCopy makes use of the Azure Storage service to transfer blobs between storage accounts. The AzCopy command lacks the ability to select blobs based on their modification dates. However, AzCopy does provide comprehensive support for hierarchical containers and blob selection by pattern matching (two features not available with the Azure CLI).

    You can control the performance and resource utilization of the AzCopy command by setting the AZCOPY_CONCURRENCY_VALUE environment variable. AzCopy uses the value of this variable to specify the number of concurrent threads it will use for transferring data. By default, it’s set to 300.

    Commands:

    • Move: azcopy copy "https://sourceaccount.blob.core.windows.net/sourcecontainer/*?<source sas token>" "https://destaccount.blob.core.windows.net/destcontainer/*?<dest sas token>"

    The .NET Storage Client library is a collection of objects and methods that you can use to build custom applications that manipulate items held in Azure Storage.

    Code:

          // Get a reference to the destination blob (v11 Microsoft.Azure.Storage client library).
          CloudBlockBlob destBlob = destContainer.GetBlockBlobReference(sourceBlob.Name);
          // Start a server-side copy, reading the source blob through a SAS URI.
          await destBlob.StartCopyAsync(new Uri(GetSharedAccessUri(sourceBlob.Name, sourceContainer)));
    

    The StartCopyAsync method initiates the blob copy operation and the process runs in the background. You can check on the progress of the operation by retrieving a reference to the destination blob, and querying its CopyState.

  • set and retrieve properties and metadata

    Blob containers support system properties and user-defined metadata, in addition to the data they contain:

    • System properties: System properties exist on each Blob storage resource. Some of them can be read or set, while others are read-only. Under the covers, some system properties correspond to certain standard HTTP headers. The Azure Storage client library for .NET maintains these properties for you.
    • User-defined metadata: User-defined metadata consists of one or more name-value pairs that you specify for a Blob storage resource. You can use metadata to store additional values with the resource. Metadata values are for your own purposes only, and do not affect how the resource behaves.

    Metadata name/value pairs are valid HTTP headers, and so should adhere to all restrictions governing HTTP headers. Metadata names must be valid HTTP header names and valid C# identifiers, may contain only ASCII characters, and should be treated as case-insensitive. Metadata values containing non-ASCII characters should be Base64-encoded or URL-encoded.

    Code:

          // Fetch some container properties and write out their values.
          var properties = await container.GetPropertiesAsync();
          Console.WriteLine($"Properties for container {container.Uri}");
          Console.WriteLine($"Public access level: {properties.Value.PublicAccess}");
          Console.WriteLine($"Last modified time in UTC: {properties.Value.LastModified}");

          // Add some metadata to the container.
          var metadata = new Dictionary<string, string>();
          metadata.Add("docType", "textDocuments");
          metadata.Add("category", "guidance");

          // Set the container's metadata.
          await container.SetMetadataAsync(metadata);

          // Re-fetch the properties so the response includes the new metadata, then enumerate it.
          properties = await container.GetPropertiesAsync();
          Console.WriteLine("Container metadata:");
          foreach (var metadataItem in properties.Value.Metadata)
          {
              Console.WriteLine($"\tKey: {metadataItem.Key}");
              Console.WriteLine($"\tValue: {metadataItem.Value}");
          }
    
  • interact with data using the appropriate SDK

    Use the Azure Blob Storage client library v12 for .NET to:

    • Create a container
    • Upload a blob to Azure Storage
    • List all of the blobs in a container
    • Download the blob to your local computer
    • Delete a container
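
    A minimal sketch of these operations with the v12 SDK (Azure.Storage.Blobs), assuming connectionString holds the storage account connection string and a local file ./sample.txt exists; the container and blob names are illustrative:

          using Azure.Storage.Blobs;
          using Azure.Storage.Blobs.Models;

          var serviceClient = new BlobServiceClient(connectionString);

          // Create a container (the name must be lowercase and unique within the account).
          BlobContainerClient container = await serviceClient.CreateBlobContainerAsync("sample-container");

          // Upload a local file as a block blob.
          BlobClient blob = container.GetBlobClient("sample.txt");
          await blob.UploadAsync("./sample.txt", overwrite: true);

          // List all of the blobs in the container.
          await foreach (BlobItem item in container.GetBlobsAsync())
              Console.WriteLine(item.Name);

          // Download the blob to the local file system.
          await blob.DownloadToAsync("./sample-downloaded.txt");

          // Delete the container and the blobs it contains.
          await container.DeleteAsync();
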
  • implement data archiving and retention

    Immutable storage for Azure Blob storage enables users to store business-critical data objects in a WORM (Write Once, Read Many) state. This state makes the data non-erasable and non-modifiable for a user-specified interval. For the duration of the retention interval, blobs can be created and read, but cannot be modified or deleted. Immutable storage is available for general-purpose v1, general-purpose v2, BlobStorage, and BlockBlobStorage accounts in all Azure regions.

    Immutable storage supports the following features:

    • Time-based retention policy support: Users can set policies to store data for a specified interval. After the retention period has expired, blobs can be deleted but not overwritten.
    • Legal hold policy support: If the retention interval is not known, users can set legal holds to store immutable data until the legal hold is cleared. Each legal hold is associated with a user-defined alphanumeric tag (such as a case ID, event name, etc.) that is used as an identifier string.
    • Support for all blob tiers: WORM policies are independent of the Azure Blob storage tier and apply to all the tiers: hot, cool, and archive.
    • Container-level configuration: Users can configure time-based retention policies and legal hold tags at the container level. By using simple container-level settings, users can create and lock time-based retention policies, extend retention intervals, set and clear legal holds, and more. These policies apply to all the blobs in the container, both existing and new.
    • Audit logging support: Each container includes a policy audit log. It shows up to seven time-based retention commands for locked time-based retention policies and contains the user ID, command type, time stamps, and retention interval.
  • implement hot, cool, and archive storage

    Azure storage offers different access tiers, allowing you to store blob object data in the most cost-effective manner. Available access tiers include:

    • Hot - Optimized for storing data that is accessed frequently.
    • Cool - Optimized for storing data that is infrequently accessed and stored for at least 30 days.
    • Archive - Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements, on the order of hours.

    The following considerations apply to the different access tiers:

    • The access tier can be set on a blob during or after upload.
    • Only the hot and cool access tiers can be set at the account level. The archive access tier can only be set at the blob level.
    • Data in the cool access tier has slightly lower availability, but still has high durability, retrieval latency, and throughput characteristics similar to hot data. For cool data, slightly lower availability and higher access costs are acceptable trade-offs for lower overall storage costs compared to hot data. For more information, see SLA for storage.
    • Data in the archive access tier is stored offline. The archive tier offers the lowest storage costs but also the highest access costs and latency.
    • The hot and cool tiers support all redundancy options. The archive tier supports only LRS, GRS, and RA-GRS.
    • Data storage limits are set at the account level and not per access tier. You can choose to use all of your limit in one tier or across all three tiers.

    Object storage data tiering between hot, cool, and archive is supported in Blob Storage and General Purpose v2 (GPv2) accounts. General Purpose v1 (GPv1) accounts don’t support tiering.

    Account-level tiering. Blobs in all three access tiers can coexist within the same account. Any blob that doesn’t have an explicitly assigned tier infers the tier from the account access tier setting.

    Blob-level tiering. Blob-level tiering allows you to upload data to the access tier of your choice using the Put Blob or Put Block List operations and change the tier of your data at the object level using the Set Blob Tier operation or lifecycle management feature.

    While a blob is in the archive access tier, it’s considered offline and can’t be read or modified. The blob metadata remains online and available, allowing you to list the blob and its properties. Reading and modifying blob data is only available with online tiers such as hot or cool. There are two options to retrieve and access data stored in the archive access tier:

    • Rehydrate an archived blob to an online tier - Rehydrate an archive blob to hot or cool by changing its tier using the Set Blob Tier operation.
    • Copy an archived blob to an online tier - Create a new copy of an archive blob by using the Copy Blob operation. Specify a different blob name and a destination tier of hot or cool.
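
    A minimal sketch of changing a blob's tier with the .NET SDK (the Set Blob Tier operation), for example to rehydrate an archived blob back to the hot tier; blobClient is assumed to point at the archived blob:

          using Azure.Storage.Blobs;
          using Azure.Storage.Blobs.Models;

          // Move the blob to the hot tier; rehydration from archive can take hours.
          await blobClient.SetAccessTierAsync(AccessTier.Hot, rehydratePriority: RehydratePriority.Standard);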

Implement Azure security (15-20%)

Implement user authentication and authorization

  • implement OAuth2 authentication

    To protect an API with Azure AD:

    1. Register an application in Azure AD that represents the API.
    2. Every client application that calls the API needs to be registered as an application in Azure AD.
    3. Once you have registered both applications to represent the API and the client, grant permissions to allow the client-app to call the backend-app.
    4. Enable OAuth 2.0 user authorization in the client app (from API Management instance). At this point the client app can obtain access tokens from Azure AD.
    5. Enable OAuth 2.0 user authorization for your API. This enables the client app to know that it needs to obtain an access token on behalf of the user, before making calls to your API.
    6. Configure a JWT (JSON Web Token) validation policy to pre-authorize requests; without it, API Management does not validate the access token.
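
    A minimal sketch of a client application acquiring an OAuth 2.0 access token from Azure AD with MSAL.NET (client credentials flow) before calling the API; httpClient is assumed to exist, and the IDs, secret, and scope are placeholders:

          using Microsoft.Identity.Client;

          IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
              .Create("<client-app-id>")
              .WithClientSecret("<client-secret>")
              .WithAuthority(new Uri("https://login.microsoftonline.com/<tenant-id>"))
              .Build();

          // Request a token for the backend API (scope = the backend app's App ID URI + /.default).
          AuthenticationResult result = await app
              .AcquireTokenForClient(new[] { "api://<backend-app-id>/.default" })
              .ExecuteAsync();

          // Send the token as a bearer token; a validate-jwt policy can then pre-authorize the call.
          httpClient.DefaultRequestHeaders.Authorization =
              new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", result.AccessToken);
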
  • create and implement shared access signatures

    A shared access signature is a signed URI that points to one or more storage resources. The URI includes a token that contains a special set of query parameters. The token indicates how the resources may be accessed by the client. One of the query parameters, the signature, is constructed from the SAS parameters and signed with the key that was used to create the SAS. This signature is used by Azure Storage to authorize access to the storage resource. Use a SAS to give secure access to resources in your storage account to any client who does not otherwise have permissions to those resources.

    A shared access signature (SAS) provides secure delegated access to resources in your storage account. With a SAS, you have granular control over how a client can access your data. For example:

    • What resources the client may access.
    • What permissions they have to those resources.
    • How long the SAS is valid.

    Azure Storage supports three types of shared access signatures:

    • User delegation SAS. A user delegation SAS is secured with Azure Active Directory (Azure AD) credentials and also by the permissions specified for the SAS. A user delegation SAS applies to Blob storage only.
    • Service SAS. A service SAS is secured with the storage account key. A service SAS delegates access to a resource in only one of the Azure Storage services: Blob storage, Queue storage, Table storage, or Azure Files.
    • Account SAS. An account SAS is secured with the storage account key. An account SAS delegates access to resources in one or more of the storage services. All of the operations available via a service or user delegation SAS are also available via an account SAS.

    A shared access signature can take one of the following two forms:

    • Ad hoc SAS. When you create an ad hoc SAS, the start time, expiry time, and permissions are specified in the SAS URI. Any type of SAS can be an ad hoc SAS.
    • Service SAS with stored access policy. A stored access policy is defined on a resource container, which can be a blob container, table, queue, or file share. The stored access policy can be used to manage constraints for one or more service shared access signatures. When you associate a service SAS with a stored access policy, the SAS inherits the constraints—the start time, expiry time, and permissions—defined for the stored access policy.

    You can sign a SAS token with a user delegation key or with a storage account key (Shared Key). Microsoft recommends using a user delegation SAS when possible for superior security.

    • You can sign a SAS token by using a user delegation key that was created using Azure Active Directory (Azure AD) credentials. A user delegation SAS is signed with the user delegation key. To get the key, and then create the SAS, an Azure AD security principal must be assigned an Azure role that includes the Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey action.
    • Both a service SAS and an account SAS are signed with the storage account key. To create a SAS that is signed with the account key, an application must have access to the account key.

    The SAS token is a string that you generate on the client side, for example by using one of the Azure Storage client libraries. The SAS token is not tracked by Azure Storage in any way. You can create an unlimited number of SAS tokens on the client side. After you create a SAS, you can distribute it to client applications that require access to resources in your storage account.
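
    A minimal sketch of creating an ad hoc service SAS for a single blob with the .NET SDK, assuming blobClient was constructed with a StorageSharedKeyCredential so it can sign the SAS:

          using Azure.Storage.Sas;

          // Define the resource, permissions, and validity window for the SAS.
          var sasBuilder = new BlobSasBuilder
          {
              BlobContainerName = blobClient.BlobContainerName,
              BlobName = blobClient.Name,
              Resource = "b",                                  // b = blob, c = container
              ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
          };
          sasBuilder.SetPermissions(BlobSasPermissions.Read);

          // The SAS is generated client side; Azure Storage never stores or tracks it.
          Uri sasUri = blobClient.GenerateSasUri(sasBuilder);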

  • register apps and use Azure Active Directory to authenticate users

    Azure AD can be used for all your applications, and by centralizing your application management you can use the same identity management features, tools, and policies across your entire app portfolio. Doing so provides a unified solution that improves security, reduces costs, increases productivity, and helps you ensure compliance. It also gives you remote access to on-premises apps.

  • control access to resources by using role-based access controls (RBAC)

    Azure RBAC is an authorization system built on Azure Resource Manager that provides fine-grained access management of Azure resources. You control access to resources using Azure RBAC by creating role assignments, which control how permissions are enforced. You can think of these elements as “who”, “what”, and “where”.

    • Security principal (who): A security principal is just a fancy name for a user, group, or application that you want to grant access to.
    • Role definition (what you can do): A role definition is a collection of permissions.
    • Scope (where): Scope is where the access applies to. In Azure, you can specify a scope at four levels: management group, subscription, resource group, or resource. Scopes are structured in a parent-child relationship. You can assign roles at any of these levels of scope.

    Azure RBAC supports deny assignments in a limited way. Similar to a role assignment, a deny assignment attaches a set of deny actions to a user, group, service principal, or managed identity at a particular scope for the purpose of denying access. Deny assignments take precedence over role assignments.

Implement secure cloud solutions

  • secure app configuration data by using the App Configuration and KeyVault API

    Azure App Configuration provides a service to centrally manage application settings and feature flags. The easiest way to add an App Configuration store to your application is through a client library provided by Microsoft.

    App Configuration offers the following benefits:

    • A fully managed service that can be set up in minutes
    • Flexible key representations and mappings
    • Tagging with labels
    • Point-in-time replay of settings
    • Dedicated UI for feature flag management
    • Comparison of two sets of configurations on custom-defined dimensions
    • Enhanced security through Azure-managed identities
    • Encryption of sensitive information at rest and in transit
    • Native integration with popular frameworks
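
    A minimal sketch of pulling settings from an App Configuration store into an ASP.NET Core app via the Microsoft.Extensions.Configuration.AzureAppConfiguration provider; the connection string is assumed to be available as an environment variable:

          // In Program.cs, add App Configuration as an additional configuration source.
          public static IHostBuilder CreateHostBuilder(string[] args) =>
              Host.CreateDefaultBuilder(args)
                  .ConfigureWebHostDefaults(webBuilder =>
                      webBuilder.ConfigureAppConfiguration(config =>
                      {
                          var connectionString = Environment.GetEnvironmentVariable("AppConfigConnectionString");
                          config.AddAzureAppConfiguration(connectionString);
                      })
                      .UseStartup<Startup>());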

    If you are developing a project and need to share source code securely, use Azure Key Vault:

    • Create a Key Vault in your Azure subscription.
    • Grant you and your team members access to the Key Vault. If you have a large team, you can create an Azure Active Directory group and add that security group access to the Key Vault. If you already have your web app created, grant the web app access to the Key Vault so it can access the key vault without storing secret configuration in App Settings or files.
    • Add your secret to Key Vault on the Azure portal. For nested configuration settings, replace ‘:’ with ‘–’ so the Key Vault secret name is valid. ‘:’ is not allowed to be in the name of a Key Vault secret.
    • You can then access the secrets programmatically via the KeyVaultClient that can be registered when the host builder is created: builder.AddAzureKeyVault(keyVaultEndpoint, keyVaultClient, new DefaultKeyVaultSecretManager());

    To work with secrets from Azure Key Vault in your App Service or Azure Functions you can use Key Vault references:

    • Create a key vault.
    • Create a system-assigned managed identity for your application.
    • Create an access policy in Key Vault for the application identity you created earlier. Enable the “Get” secret permission on this policy.
    • In your Application Settings you can reference the KeyVault secrets: @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret)
  • manage keys, secrets, and certificates by using the KeyVault API

    Azure Key Vault is a centralized cloud service for storing application secrets such as encryption keys, certificates, and server-side tokens. The Key Vault resource provider supports two resource types: vaults and managed HSMs. Key Vault access has two facets: the management of the Key Vault itself (management plane), and accessing the data contained in the Key Vault (data plane).

    Microsoft and your apps don’t have access to the stored keys directly once a key is created or added to a key vault. Applications must use your keys by calling cryptography methods on the Key Vault service. The Key Vault service performs the requested operation within its hardened boundary. The application never has direct access to the keys. There are two variations on keys in Key Vault: hardware-protected (HSM), and software-protected.

    Secrets are small (less than 10 KB) data blobs protected by an HSM-generated key created with the Key Vault.

    Azure Key Vault manages X.509 based certificates that can come from several sources.

    • First, you can create self-signed certificates directly in the Azure portal. This process creates a public/private key pair and signs the certificate with its own key. These certificates can be used for testing and development.
    • Second, you can create an X.509 certificate signing request (CSR). This creates a public/private key pair in Key Vault along with a CSR you can pass over to your certification authority (CA). The signed X.509 certificate can then be merged with the held key pair to finalize the certificate in Key Vault.
    • Third, you can connect your Key Vault with a trusted certificate issuer (referred to as an integrated CA) and create the certificate directly in Azure Key Vault. This approach requires a one-time setup to connect the certificate authority.
    • Finally, you can import existing certificates - this allows you to add certificates to Key Vault that you are already using. The imported certificate can be in either PFX or PEM format and must contain the private key.

    Commands:

    • Create vault: New-AzKeyVault -Name <your-unique-vault-name> -ResourceGroupName <resource-group>
    • Create key: $key = Add-AzKeyVaultKey -VaultName 'contoso' -Name 'MyFirstKey' -Destination 'HSM'
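
    A minimal sketch of the data-plane side with the Key Vault .NET SDK, assuming the caller's identity has an access policy on the vault; the vault name and secret are placeholders:

          using Azure.Identity;
          using Azure.Security.KeyVault.Secrets;

          // DefaultAzureCredential works with developer credentials locally and managed identities in Azure.
          var client = new SecretClient(
              new Uri("https://<your-unique-vault-name>.vault.azure.net/"),
              new DefaultAzureCredential());

          // Create (or update) a secret, then read it back.
          await client.SetSecretAsync("MyFirstSecret", "s3cr3t-value");
          KeyVaultSecret secret = await client.GetSecretAsync("MyFirstSecret");
          Console.WriteLine(secret.Value);
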
  • implement Managed Identities for Azure resources

    On Azure, managed identities eliminate the need for developers to manage credentials: they provide an identity for the Azure resource in Azure AD and use it to obtain Azure Active Directory (Azure AD) tokens. This also helps with accessing Azure Key Vault, where developers can store credentials securely. Managed identities for Azure resources solve this problem by providing Azure services with an automatically managed identity in Azure AD.

    Benefits of using Managed identities:

    • You don’t need to manage credentials. Credentials are not even accessible to you.
    • You can use managed identities to authenticate to any Azure service that supports Azure AD authentication including Azure Key Vault.
    • Managed identities can be used without any additional cost.

    There are two types of managed identities:

    • System-assigned. Some Azure services allow you to enable a managed identity directly on a service instance. When you enable a system-assigned managed identity an identity is created in Azure AD that is tied to the lifecycle of that service instance. So when the resource is deleted, Azure automatically deletes the identity for you. By design, only that Azure resource can use this identity to request tokens from Azure AD.
    • User-assigned. You may also create a managed identity as a standalone Azure resource. You can create a user-assigned managed identity and assign it to one or more instances of an Azure service. In the case of user-assigned managed identities, the identity is managed separately from the resources that use it.

    Regardless of the type of identity chosen a managed identity is a service principal of a special type that may only be used with Azure resources.
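
    A minimal sketch of using a managed identity from code via Azure.Identity: the credential obtains Azure AD tokens for the resource's identity, so no secret is stored in configuration; the vault URI and secret name are placeholders:

          using Azure.Identity;
          using Azure.Security.KeyVault.Secrets;

          // ManagedIdentityCredential uses the system- or user-assigned identity of the Azure resource.
          var credential = new ManagedIdentityCredential();

          // Any Azure SDK client that accepts a TokenCredential can authenticate with it, for example Key Vault:
          var secretClient = new SecretClient(new Uri("https://<your-vault>.vault.azure.net/"), credential);
          KeyVaultSecret dbPassword = await secretClient.GetSecretAsync("DbPassword");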

Monitor, troubleshoot, and optimize Azure solutions (10-15%)

Integrate caching and content delivery within solutions

  • develop code to implement CDNs in solutions

    Azure Content Delivery Network (CDN) is a global CDN solution for delivering high-bandwidth content. It can be hosted in Azure or any other location. With Azure CDN, you can cache static objects loaded from Azure Blob storage, a web application, or any publicly accessible web server, by using the closest point of presence (POP) server. Azure CDN can also accelerate dynamic content, which cannot be cached, by leveraging various network and routing optimizations.

    Before you can create a CDN endpoint, you must have created at least one CDN profile, which can contain one or more CDN endpoints. To organize your CDN endpoints by internet domain, web application, or some other criteria, you can use multiple profiles.

  • configure cache and expiration policies for FrontDoor, CDNs, or Redis caches; store and retrieve data in Azure Redis cache

    Front Door is a modern Content Delivery Network (CDN) with dynamic site acceleration and load balancing, it also supports caching behaviors just like any other CDN. Key features included with Front Door:

    • Accelerated application performance by using split TCP-based anycast protocol.
    • Intelligent health probe monitoring for backend resources.
    • URL-path based routing for requests.
    • Enables hosting of multiple websites for efficient application infrastructure.
    • Cookie-based session affinity.
    • SSL offloading and certificate management.
    • Define your own custom domain.
    • Application security with integrated Web Application Firewall (WAF).
    • Redirect HTTP traffic to HTTPS with URL redirect.
    • Custom forwarding path with URL rewrite.
    • Native support of end-to-end IPv6 connectivity and HTTP/2 protocol.

    Azure Cache for Redis provides an in-memory data store based on the Redis software. Redis improves the performance and scalability of an application that uses backend data stores heavily. It is able to process large volumes of application requests by keeping frequently accessed data in server memory, where it can be written and read quickly. Redis brings a critical low-latency and high-throughput data storage solution to modern applications.
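
    A minimal sketch of storing and retrieving data in Azure Cache for Redis using the StackExchange.Redis client; the connection string comes from the cache's Access keys blade and is a placeholder here:

          using StackExchange.Redis;

          // The multiplexer is expensive to create; share a single instance per application.
          ConnectionMultiplexer redis = ConnectionMultiplexer.Connect(
              "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");

          IDatabase db = redis.GetDatabase();

          // Store a value with a 30 minute expiration, then read it back.
          await db.StringSetAsync("user:42:displayName", "Jane", TimeSpan.FromMinutes(30));
          string displayName = await db.StringGetAsync("user:42:displayName");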

    Azure Content Delivery Network (CDN) offers two ways to control how your files are cached:

    • Caching rules: You can use caching rules to set or modify default cache expiration behavior both globally and with custom conditions, such as a URL path and file extension. Azure CDN provides two types of caching rules:
      • Global caching rules: You can set one global caching rule for each endpoint in your profile, which affects all requests to the endpoint. The global caching rule overrides any HTTP cache-directive headers, if set.
      • Custom caching rules: You can set one or more custom caching rules for each endpoint in your profile. Custom caching rules match specific paths and file extensions, are processed in order, and override the global caching rule, if set.
    • Query string caching: You can adjust how the Azure CDN treats caching for requests with query strings. If the file is not cacheable (based on caching rules and CDN default behaviors), the query string caching setting has no effect.

    For global and custom caching rules, you can specify the following Caching behavior settings:

    • Bypass cache: Do not cache and ignore origin-provided cache-directive headers.
    • Override: Ignore origin-provided cache duration; use the provided cache duration instead. This will not override cache-control: no-cache.
    • Set if missing: Honor origin-provided cache-directive headers, if they exist; otherwise, use the provided cache duration.

    For global and custom caching rules, you can specify the cache expiration duration in days, hours, minutes, and seconds:

    • For the Override and Set if missing Caching behavior settings, valid cache durations range between 0 seconds and 366 days. For a value of 0 seconds, the CDN caches the content, but must revalidate each request with the origin server.
    • For the Bypass cache setting, the cache duration is automatically set to 0 seconds and cannot be changed.

    For custom cache rules, two match conditions are available:

    • Path: This condition matches the path of the URL, excluding the domain name, and supports the wildcard symbol (*). For example, /myfile.html, /my/folder/*, and /my/images/*.jpg. The maximum length is 260 characters.
    • Extension: This condition matches the file extension of the requested file. You can provide a list of comma-separated file extensions to match. For example, .jpg, .mp3, or .png. The maximum number of extensions is 50 and the maximum number of characters per extension is 16.

    Azure Front Door delivers large files without a cap on file size. Front Door uses a technique called object chunking. When a large file is requested, Front Door retrieves smaller pieces of the file from the backend. After receiving a full or byte-range file request, the Front Door environment requests the file from the backend in chunks of 8 MB. After the chunk arrives at the Front Door environment, it’s cached and immediately served to the user. Front Door then pre-fetches the next chunk in parallel. This pre-fetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested) or the client closes the connection.

    Front Door can dynamically compress content on the edge, resulting in a smaller and faster response to your clients. For a file to be eligible for compression, caching must be enabled and the file must be of a MIME type that is eligible for compression. Front Door supports the following compression encodings: Gzip (GNU zip) and Brotli.

    With Front Door, you can control how files are cached for a web request that contains a query string:

    • Ignore query strings: In this mode, Front Door passes the query strings from the requestor to the backend on the first request and caches the asset. All ensuing requests for the asset that are served from the Front Door environment ignore the query strings until the cached asset expires.
    • Cache every unique URL: In this mode, each request with a unique URL, including the query string, is treated as a unique asset with its own cache.

    If no Cache-Control header is present, the default behavior is that Front Door caches the resource for X amount of time, where X is randomly picked between 1 and 3 days. The following order of headers is used to determine how long an item will be stored in the cache:

    1. Cache-Control: s-maxage=
    2. Cache-Control: max-age=
    3. Expires:

Instrument solutions to support monitoring and logging

  • configure instrumentation in an app or service by using Application Insights

    Application Insights, a feature of Azure Monitor, is an extensible Application Performance Management (APM) service for developers and DevOps professionals. How does it work? You install a small instrumentation package (SDK) in your application or enable Application Insights using the Application Insights Agent when supported. The instrumentation monitors your app and directs the telemetry data to an Azure Application Insights Resource using a unique GUID that we refer to as an Instrumentation Key. You can instrument not only the web service application, but also any background components, and the JavaScript in the web pages themselves. The application and its components can run anywhere - it doesn’t have to be hosted in Azure.

    Live Metrics can be used to quickly verify if Application Insights monitoring is configured correctly. It shows CPU usage of the running process in near real-time. It can also show other telemetry like Requests, Dependencies, Traces, etc.

    The default configuration collects ILogger logs of severity Warning and above.

    Performance Counters. SDK versions 2.8.0 and later support the CPU/memory counter on Linux; no other counters are supported on Linux. The recommended way to get system counters on Linux (and other non-Windows environments) is by using EventCounters.

    If your application has client-side components, you can enable client-side telemetry for web applications by using: @inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet.

    The Application Insights SDK for ASP.NET Core supports both fixed-rate and adaptive sampling. Adaptive sampling is enabled by default.

    Use telemetry initializers when you want to enrich telemetry with additional information.
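
    A minimal sketch of enabling Application Insights in ASP.NET Core and enriching all telemetry with a telemetry initializer; the role name is an arbitrary example:

          using Microsoft.ApplicationInsights.Channel;
          using Microsoft.ApplicationInsights.Extensibility;

          // Runs for every telemetry item before it is sent to Application Insights.
          public class CloudRoleNameInitializer : ITelemetryInitializer
          {
              public void Initialize(ITelemetry telemetry)
              {
                  telemetry.Context.Cloud.RoleName = "orders-api"; // example role name
              }
          }

          // In Startup.ConfigureServices:
          services.AddApplicationInsightsTelemetry();                               // reads the instrumentation key from configuration
          services.AddSingleton<ITelemetryInitializer, CloudRoleNameInitializer>(); // register the enricher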

  • analyze log data and troubleshoot solutions by using Azure Monitor

    Azure Monitor helps you maximize the availability and performance of your applications and services. It delivers a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments.

    Azure Monitor Logs is a feature of Azure Monitor that collects and organizes log and performance data from monitored resources. Data from different sources such as platform logs from Azure services, log and performance data from virtual machines agents, and usage and performance data from applications can be consolidated into a single workspace so they can be analyzed together using a sophisticated query language that’s capable of quickly analyzing millions of records.

    Data collected by Azure Monitor Logs is stored in one or more Log Analytics workspaces. The workspace defines the geographic location of the data, access rights defining which users can access data, and configuration settings such as the pricing tier and data retention. You must create at least one workspace to use Azure Monitor Logs.

    Once you create a Log Analytics workspace, you must configure different sources to send their data. No data is collected automatically. This configuration will be different depending on the data source.

    Log data from Application Insights is also stored in Azure Monitor Logs, but it's stored differently depending on how your application is configured.

    Data is retrieved from a Log Analytics workspace using a log query which is a read-only request to process data and return results. Log queries are written in Kusto Query Language (KQL), which is the same query language used by Azure Data Explorer. You can write log queries in Log Analytics to interactively analyze their results, use them in alert rules to be proactively notified of issues, or include their results in workbooks or dashboards. Insights include prebuilt queries to support their views and workbooks.

  • implement Application Insights Web Test and Alerts

    Availability checking and Alerting are features of Application Insights. Application Insight’s web tests will ping your application from multiple locations to check availability and then alert you when it is down.

    There are three types of availability tests:

    • URL ping test: a simple test that you can create in the Azure portal.
    • Multi-step web test: A recording of a sequence of web requests, which can be played back to test more complex scenarios. Multi-step web tests are created in Visual Studio Enterprise and uploaded to the portal for execution.
    • Custom Track Availability Tests: If you decide to create a custom application to run availability tests, the TrackAvailability() method can be used to send the results to Application Insights.

    Creating a Web Test:

    • In Azure portal -> Application Insights -> Availability tile -> Add Web Test
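
    A minimal sketch of a custom availability test (for example, from a timer-triggered Azure Function) that reports its result with TrackAvailability(); httpClient is assumed to exist and the test name, URL, and location are examples:

          using Microsoft.ApplicationInsights;
          using Microsoft.ApplicationInsights.Extensibility;

          var telemetryClient = new TelemetryClient(TelemetryConfiguration.CreateDefault());

          var timer = System.Diagnostics.Stopwatch.StartNew();
          bool success = false;
          try
          {
              using var response = await httpClient.GetAsync("https://contoso.example.com/health");
              success = response.IsSuccessStatusCode;
          }
          finally
          {
              timer.Stop();
              telemetryClient.TrackAvailability(
                  name: "contoso-health-check",
                  timeStamp: DateTimeOffset.UtcNow,
                  duration: timer.Elapsed,
                  runLocation: "westeurope-function",
                  success: success);
              telemetryClient.Flush();
          }
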
  • implement code that handles transient faults

    Example - Polly:

          var policy = Policy.Handle<Exception>().WaitAndRetryAsync(
              retryCount: 3, // Retry 3 times
              sleepDurationProvider: attempt => TimeSpan.FromMilliseconds(200 * Math.Pow(2, attempt - 1)), // Exponential backoff based on an initial 200 ms delay.
              onRetry: (exception, sleepDuration, attempt, context) =>
              {
                  // Capture some information for logging/telemetry.
                  logger.LogWarning($"ExecuteReaderWithRetryAsync: Retry {attempt} due to {exception}.");
              });

          // Retry the following call according to the policy.
          await policy.ExecuteAsync<SqlDataReader>(async token =>
          {
              // This code is executed within the policy.

              if (conn.State != System.Data.ConnectionState.Open) await conn.OpenAsync(token);
              return await command.ExecuteReaderAsync(System.Data.CommandBehavior.Default, token);

          }, cancellationToken);
    

Connect to and consume Azure services and third-party services (25-30%)

Develop an App Service Logic App

  • create a Logic App

    Azure Logic Apps is a cloud service that helps you schedule, automate, and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organizations.

    Every logic app must start with a trigger, which fires when a specific event happens, or when a specific condition is met.

  • create a custom connector for Logic Apps

    In Azure Logic Apps, you must first create the custom connector resource before defining the behavior of the connector using an OpenAPI definition or a Postman collection.

    To create a custom connector, you must describe the API you want to connect to so that the connector understands the API’s operations and data structures.

  • create a custom template for Logic Apps

    To get you started creating workflows more quickly, Logic Apps provides templates, which are prebuilt logic apps that follow commonly used patterns. Use these templates as provided or edit them to fit your scenario.

    Azure Logic Apps provides a prebuilt logic app Azure Resource Manager template that you can reuse, not only for creating logic apps, but also to define the resources and parameters to use for deployment. You can use this template for your own business scenarios or customize the template to meet your requirements.

Implement API Management

The Azure API Management (APIM) service enables you to construct an API from a set of disparate microservices. Azure API Management (APIM) is a fully managed cloud service that you can use to publish, secure, transform, maintain, and monitor APIs. It helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. API Management handles all the tasks involved in mediating API calls, including request authentication and authorization, rate limit and quota enforcement, request and response transformation, logging and tracing, and API version management. APIM enables you to create and manage modern API gateways for existing backend services no matter where they are hosted.

By default the new API is created under the azure-api.net domain.

  • create an APIM instance

    In Azure Portal: Function App -> Api Management -> Create new.

    Composing an API using API Management has advantages that include:

    • Client apps are coupled to the API expressing business logic, not the underlying technical implementation with individual microservices. You can change the location and definition of the services without necessarily reconfiguring or updating the client apps.
    • API Management acts as an intermediary. It forwards requests to the right microservice, wherever it is located, and returns responses to users. Users never see the different URIs where microservices are hosted.
    • You can use API Management policies to enforce consistent rules on all microservices in the product. For example, you can transform all XML responses into JSON, if that is your preferred format.
    • Policies also enable you to enforce consistent security requirements.

    API Management also includes helpful tools - you can test each microservice and its operations to ensure that they behave in accordance with your requirements. You can also monitor the behavior and performance of deployed services.

    Azure API Management supports importing Azure Function Apps as new APIs or appending them to existing APIs. The process automatically generates a host key in the Azure Function App, which is then assigned to a named value in Azure API Management.

  • configure authentication for APIs

    When you publish APIs through Azure API Management, it’s easy and common to secure access to those APIs by using subscription keys. Client applications that need to consume the published APIs must include a valid subscription key in HTTP requests when they make calls to those APIs. To get a subscription key for accessing APIs, a subscription is required.

    API Management provides the capability to secure access to APIs (i.e., client to API Management) using client certificates. You can validate incoming certificate and check certificate properties against desired values using policy expressions.

    Policies can be configured to check the issuer and subject of a client certificate. Example:

          <choose>
              <when condition="@(context.Request.Certificate == null || !context.Request.Certificate.Verify() || context.Request.Certificate.Issuer != "trusted-issuer" || context.Request.Certificate.SubjectName.Name != "expected-subject-name")" >
                  <return-response>
                      <set-status code="403" reason="Invalid client certificate" />
                  </return-response>
              </when>
          </choose>	
    
  • define policies for APIs

    In Azure API Management (APIM), policies are a powerful capability of the system that allow the publisher to change the behavior of the API through configuration. Policies are a collection of Statements that are executed sequentially on the request or response of an API. Popular Statements include format conversion from XML to JSON and call rate limiting to restrict the amount of incoming calls from a developer. Many more policies are available out of the box.

    The policy definition is a simple XML document that describes a sequence of inbound and outbound statements. The XML can be edited directly in the definition window. A list of statements is provided to the right and statements applicable to the current scope are enabled and highlighted.

Develop event-based solutions

Events are lighter weight than messages, and are most often used for broadcast communications. The components sending the event are known as publishers, and receivers are known as subscribers.

  • implement solutions that use Azure Event Grid

    Azure Event Grid is a fully-managed event routing service running on top of Azure Service Fabric. Event Grid distributes events from different sources, such as Azure Blob storage accounts or Azure Media Services, to different handlers, such as Azure Functions or Webhooks. Event Grid was created to make it easier to build event-based and serverless applications on Azure.

    Event Grid supports most Azure services as a publisher or subscriber and can be used with third-party services. It provides a dynamically scalable, low-cost messaging system that allows publishers to notify subscribers about a status change. It has the following characteristics:

    • dynamically scalable
    • low cost
    • serverless
    • at least once delivery

    There are several concepts in Azure Event Grid that connect a source to a subscriber:

    • Events: What happened. Events are the data messages passing through Event Grid that describe what has taken place. Each event is self-contained and can be up to 64 KB.
    • Event sources: Where the event took place. Event sources are responsible for sending events to Event Grid. Each event source is related to one or more event types.
    • Topics: The endpoint where publishers send events. Event topics categorize events into groups. Topics are divided into system topics, and custom topics.
    • Event subscriptions: The endpoint or built-in mechanism to route events, sometimes to multiple handlers. Subscriptions are also used by handlers to filter incoming events intelligently.
    • Event handlers: The app or service reacting to the event.

    Event source disambiguation: Event Grid has the concept of an event publisher, which is often confused with the event source. A publisher to Event Grid is the user or organization that decides to send events to Event Grid. For example, Microsoft publishes events for several Azure services.

    Use Event Grid when you need these features:

    • Simplicity: It is straightforward to connect sources to subscribers in Event Grid.
    • Advanced filtering: Subscriptions have close control over the events they receive from a topic.
    • Fan-out: You can subscribe to an unlimited number of endpoints to the same events and topics.
    • Reliability: Event Grid retries event delivery for up to 24 hours for each subscription.
    • Pay-per-event: Pay only for the number of events that you transmit.
    • High throughput: Build high-volume workloads on Event Grid.
    • Built-in Events: Get up and running quickly with resource-defined built-in events.
    • Custom Events: Use Event Grid to route, filter, and reliably deliver custom events in your app.
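
    A minimal sketch of publishing a custom event to an Event Grid custom topic with the Azure.Messaging.EventGrid client; the topic endpoint, key, and event details are placeholders:

          using Azure;
          using Azure.Messaging.EventGrid;

          var client = new EventGridPublisherClient(
              new Uri("https://<topic-name>.<region>-1.eventgrid.azure.net/api/events"),
              new AzureKeyCredential("<topic-access-key>"));

          // Arguments: subject, event type, data version, data payload (all app-defined).
          var gridEvent = new EventGridEvent(
              "orders/1234",
              "Contoso.Orders.OrderCreated",
              "1.0",
              new { OrderId = 1234, Total = 99.95 });

          await client.SendEventAsync(gridEvent);
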
  • implement solutions that use Azure Notification Hubs

    Azure Notification Hubs provide an easy-to-use and scaled-out push engine that enables you to send notifications to any platform (iOS, Android, Windows, etc.) from any back-end (cloud or on-premises). Notification Hubs works great for both enterprise and consumer scenarios.

    Notification Hubs eliminates all complexities associated with sending push notifications on your own from your app backend. Its multi-platform, scaled-out push notification infrastructure reduces push-related coding and simplifies your backend. With Notification Hubs, devices are merely responsible for registering their PNS handles with a hub, while the backend sends messages to users or interest groups.

    To facilitate a seamless and unifying experience across Azure services, App Service Mobile Apps has built-in support for notifications using Azure Notification Hubs. App Service Mobile Apps offers a highly scalable, globally available mobile application development platform for enterprise developers and systems integrators that brings a rich set of capabilities to mobile developers.

  • implement solutions that use Azure Event Hub

    Azure Event Hubs is a cloud-based, event-processing service that can receive and process millions of events per second. Event Hubs acts as a front door for an event pipeline: it receives incoming data and stores it until processing resources are available. Unlike Event Grid, however, it is optimized for extremely high throughput, a large number of publishers, security, and resiliency. It has the following characteristics:

    • low latency
    • capable of receiving and processing millions of events per second
    • at least once delivery

    Events. An event is a small packet of information (a datagram) that contains a notification. Events can be published individually, or in batches, but a single publication (individual or batch) can’t exceed 1 MB.

    Publishers. Event publishers are any app or device that can send out events using HTTPS, Advanced Message Queuing Protocol (AMQP) 1.0, or the Kafka protocol. For publishers that send data frequently, AMQP has better performance. However, it has a higher initial session overhead, because a persistent bidirectional socket and transport-level security (TLS) have to be set up first. Event publishers use Azure Active Directory based authorization with OAuth2-issued JWT tokens or an Event Hub-specific Shared Access Signature (SAS) token to gain publishing access.

    Event Hubs enables granular control over event publishers through publisher policies. Publisher policies are run-time features designed to facilitate large numbers of independent event publishers.

    Subscribers. Event subscribers are apps that use one of two supported programmatic methods to receive and process events from an Event Hub.

    Consumer groups. An Event Hub consumer group represents a specific view of an Event Hub data stream. By using separate consumer groups, multiple subscriber apps can process an event stream independently, and without affecting other apps. However, the use of many consumer groups isn’t a requirement, and for many apps, the single default consumer group is sufficient.

    Partitions. As Event Hubs receives communications, it divides them into partitions. Partitions are buffers into which the communications are saved. Because of the event buffers, events are not completely ephemeral, and an event isn’t missed just because a subscriber is busy or even offline. The subscriber can always use the buffer to “catch up.” By default, events stay in the buffer for 24 hours before they automatically expire. The buffers are called partitions because the data is divided amongst them. Every event hub has at least two partitions, and each partition has a separate set of subscribers.

    Capture. Event Hubs can send all your events immediately to Azure Data Lake or Azure Blob storage for inexpensive, permanent persistence. Captured data is written in the Apache Avro format.

    Authentication. All publishers are authenticated and issued a token. This means Event Hubs can accept events from external devices and mobile apps, without worrying that fraudulent data from pranksters could ruin our analysis.

    Event Hubs has support for pipelining event streams to other Azure services. Using it with Azure Stream Analytics, for instance, allows complex analysis of data in near real time, with the ability to correlate multiple events and look for patterns. In this case, Stream Analytics would be considered a subscriber.

    Choose Event Hubs if:

    • You need to support authenticating a large number of publishers.
    • You need to save a stream of events to Data Lake or Blob storage.
    • You need aggregation or analytics on your event stream.
    • You need reliable messaging or resiliency.
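
    A minimal sketch of publishing a batch of events with the Azure.Messaging.EventHubs producer client; the connection string, hub name, and payloads are placeholders:

          using System.Text;
          using Azure.Messaging.EventHubs;
          using Azure.Messaging.EventHubs.Producer;

          await using var producer = new EventHubProducerClient(
              "<event-hubs-namespace-connection-string>", "<event-hub-name>");

          // Events that don't fit within the batch size limit are rejected by TryAdd.
          using EventDataBatch batch = await producer.CreateBatchAsync();
          batch.TryAdd(new EventData(Encoding.UTF8.GetBytes("{\"deviceId\":\"sensor-1\",\"temp\":21.5}")));
          batch.TryAdd(new EventData(Encoding.UTF8.GetBytes("{\"deviceId\":\"sensor-2\",\"temp\":19.8}")));

          await producer.SendAsync(batch);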

Develop message-based solutions

In the terminology of distributed applications, a message has the following characteristics: it contains raw data, produced by one component, that will be consumed by another component; it contains the data itself, not just a reference to that data; and the sending component expects the message content to be processed in a certain way by the destination component. The integrity of the overall system may depend on both sender and receiver doing a specific job.

Benefits of queues: increased reliability, message delivery guarantees, transactional support

  • implement solutions that use Azure Service Bus

    Service Bus is a message broker system intended for enterprise applications. These apps often utilize multiple communication protocols, have different data contracts, higher security requirements, and can include both cloud and on-premises services. Service Bus is built on top of a dedicated messaging infrastructure designed for exactly these scenarios. It has the following characteristics:

    • reliable asynchronous message delivery (enterprise messaging as a service) that requires polling
    • advanced messaging features like FIFO, batching/sessions, transactions, dead-lettering, temporal control, routing and filtering, and duplicate detection
    • at least once delivery
    • optional in-order delivery

    A queue is a simple temporary storage location for messages. A sending component adds a message to the queue. A destination component picks up the message at the front of the queue. Under ordinary circumstances, each message is received by only one receiver.

    Topics are like queues, but can have multiple subscribers. When a message is sent to a topic instead of a queue, multiple components can be triggered to do their work. Internally, topics use queues. When you post to a topic, the message is copied and dropped into the queue for each subscription. The queue means that the message copy will stay around to be processed by each subscription branch even if the component processing that subscription is too busy to keep up.

    A relay is an object that performs synchronous, two-way communication between applications. Unlike queues and topics, it is not a temporary storage location for messages. Instead, it provides bidirectional, unbuffered connections across network boundaries such as firewalls. Use a relay when you want direct communications between components as if they were located on the same network segment but separated by network security devices.

    Use Service Bus topics if you:

    • Need multiple receivers to handle each message

    Use Service Bus queues if you:

    • Need an At-Most-Once delivery guarantee.
    • Need a FIFO guarantee.
    • Need to group messages into transactions.
    • Want to receive messages without polling the queue.
    • Need to provide a role-based access model to the queues.
    • Need to handle messages larger than 64 KB but less than 256 KB.
    • Queue size will not grow larger than 80 GB.
    • Want to publish and consume batches of messages.
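
    A minimal sketch of sending and receiving a queue message with the Azure.Messaging.ServiceBus client; the connection string and queue name are placeholders:

          using Azure.Messaging.ServiceBus;

          await using var client = new ServiceBusClient("<service-bus-connection-string>");

          // Send a message to the queue.
          ServiceBusSender sender = client.CreateSender("orders");
          await sender.SendMessageAsync(new ServiceBusMessage("Order 1234 created"));

          // Receive and settle a message (PeekLock mode by default).
          ServiceBusReceiver receiver = client.CreateReceiver("orders");
          ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
          Console.WriteLine(message.Body.ToString());
          await receiver.CompleteMessageAsync(message);
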
  • implement solutions that use Azure Queue Storage queues

    Queue storage is a service that uses Azure Storage to store large numbers of messages that can be securely accessed from anywhere in the world using a simple REST-based interface. Queues can contain millions of messages, limited only by the capacity of the storage account that owns it.

    Use Queue storage if you:

    • Need an audit trail of all messages that pass through the queue.
    • Expect the queue to exceed 80 GB in size.
    • Want to track progress for processing a message inside of the queue.

    Every request to a queue must be authorized and there are several options to choose from:

    • Azure Active Directory: You can use role-based authentication and identify specific clients based on AAD credentials.
    • Shared Key: Sometimes referred to as an account key, this is an encrypted key signature associated with the storage account. Every storage account has two of these keys that can be passed with each request to authenticate access. Using this approach is like using a root password - it provides full access to the storage account.
    • Shared access signature - A shared access signature (SAS) is a generated URI that grants limited access to objects in your storage account to clients. You can restrict access to specific resources, permissions, and scope to a data range to automatically turn off access after a period of time.
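
    A minimal sketch of working with a Storage queue using the Azure.Storage.Queues client and a connection string (Shared Key authorization); the connection string and queue name are placeholders:

          using Azure.Storage.Queues;
          using Azure.Storage.Queues.Models;

          var queue = new QueueClient("<storage-connection-string>", "orders");
          await queue.CreateIfNotExistsAsync();

          // Enqueue a message.
          await queue.SendMessageAsync("Order 1234 created");

          // Dequeue, process, then delete so the message is not redelivered after the visibility timeout.
          QueueMessage[] messages = await queue.ReceiveMessagesAsync(maxMessages: 1);
          foreach (QueueMessage message in messages)
          {
              Console.WriteLine(message.MessageText);
              await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
          }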

Broken URLs for Thomas Maurer:

- Extend Azure Resource Manager template functionality
- Custom configuration and application settings in Azure Web Sites (8 years old - looks goofy)
- Develop line-of-business apps for Azure Active Directory - could be replaced with: https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/add-application-portal or https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/five-steps-to-full-application-integration-with-azure-ad

Broken tests for Pluralsight:

AKS is out of scope! (from 22 IaaS questions almost half are about AKS)

Updated: