Application monitoring

Once you’ve built and deployed an application, users will start to interact with it. Application operations teams usually want to keep track of how the application is being used, so that when the application is not behaving or performing as it should, additional insight into application usage helps speed up troubleshooting.

Azure Monitor is a platform of tools that helps with application monitoring and provides the necessary insight and operational data that is required to ensure that production systems are running efficiently. It is a centralized solution for collecting, analyzing and acting on the telemetry data that is fed into it.

Where does telemetry data come from?

Telemetry data is collected from multiple sources. It can come from the application itself (such as data about the application’s functionality or performance), from the Operating System that the application is running on, and from Azure itself. Monitoring data from Azure resources, the subscription or the tenant can include information about the operations of those resources and services.

Once telemetry data is collected, it is sorted into metrics and logs.

  • Metrics
  • Logs

Metrics are values used to describe a part of a system at a point in time.

Logs are records of various types of data over a period of time and are much more verbose than metrics.
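
As an illustrative sketch, a metric can be read back with the Azure CLI (the resource ID and metric name below are assumptions):

# average CPU for a VM, sampled per minute over the default time window
az monitor metrics list \
  --resource /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm \
  --metric "Percentage CPU" \
  --aggregation Average --interval PT1M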

Application Insights

Application Insights, a feature of Azure Monitor, watches a range of things about the application, such as:

  • How long the application has been available, or how long there has been an application outage
  • The number of client request errors in the last hour
  • The server responses in the last hour

Alerts

You can specify the criteria for alerts using alert rules and when these rules are triggered, a specific action to take can be configured.

For example, the database administrator should be alerted when the production database server CPU spikes over 80%.

To set up the alert rule for this scenario:

  1. Choose the production database as the target resource
  2. Set the criteria to CPU usage being greater than 80%
  3. Specify the action to take as a text message sent immediately to the database admin
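
A minimal sketch of this with the Azure CLI; the resource names, phone number and metric name are placeholder assumptions (the exact metric depends on the database service used):

# action group that sends an SMS to the database admin
az monitor action-group create --name dba-sms --resource-group my-rg \
  --action sms dba 27 821231234

# alert rule: average CPU above 80% on the production database server
az monitor metrics alert create --name cpu-over-80 --resource-group my-rg \
  --scopes <production-db-resource-id> \
  --condition "avg Percentage CPU > 80" \
  --action dba-sms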

Data formats

Data represents a variety of useful information that often needs to be stored, sorted, categorized and analyzed to inform decision-making. Data is organized in data structures which represent the data as entities with attributes or characteristics.

Data can be classified as structured, semi-structured or unstructured.

Structured Data

Structured data has a fixed schema where all the data share the same fields and data type for each field. The schema for structured data is usually tabular with columns for the fields and rows for each entity. Structured data is often stored in databases with multiple tables that can reference each other with key values in a relational model.

ID | Name      | Surname  | Email
1  | Naiomi    | Naidoo   | Naiomi.Naidoo@technology.online
2  | Firstname | Lastname | Firstname@yahoo.com
Structured data in a table

Semi-structured data

Semi-structured data is information that has some structure but there is variation between the entity instances.

Scenario: Some customers may have an email address while others may have multiple email addresses or no email address at all.

JavaScript Object Notation (JSON) is a common data format used for representing semi-structured data because of its flexible nature.

//Customer 1
{
  "id": "1",
  "name": "Naiomi",
  "surname": "Naidoo",
  "contact":
  {
    "email": "naiomi@naidoo.com",
    "phone": "+27121231234"
  }
}
//Customer 2
{
  "id": "2",
  "name": "Firstname",
  "surname": "Lastname",
  "contact":
  {
    "email": "firstname@yahoo.com",
    "phone": "+27987654321"
  },
  "location":
  {
    "city": "Sandton"
  } 
}

Unstructured data

Documents, images, audio, video and binary files can be considered unstructured data.

Types of unstructured data

Azure Cosmos DB

Cosmos DB is a globally distributed, multi-model database engine whose core features are available for any type of implementation model.

Features of Cosmos DB

  • Turnkey global distribution

Cosmos DB enables global data distribution and availability as a configuration setting in the portal, via the command line or an ARM template, making data replication to a newly added region as seamless as possible. Both manual and automatic failover are supported, as well as multi-read and multi-write from primary and replica databases.
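
For example, a minimal sketch of adding a read region from the command line (the account and region names are assumptions; note that --locations replaces the full set of regions, so every desired region must be listed):

az cosmosdb update --name my-cosmos-account --resource-group my-rg \
  --locations regionName=eastus failoverPriority=0 \
  --locations regionName=westeurope failoverPriority=1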

  • Elastic storage and throughput

Cosmos DB will automatically scale database storage and throughput in a pay-per-consumption model. There is no need to pre-provision resources to account for future growth. Cosmos DB measures throughput in a standardized unit referred to as Request Units (RUs), which can be considered an abstraction of physical resources. RUs are provisioned per second, e.g. 2000 RU/s.

Throughput is provisioned at a database or container level.

Container Level     | Database Level
Isolated throughput | Containers share throughput
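
As a sketch under assumed names, throughput can be provisioned at either level with the Azure CLI:

# database-level throughput, shared by the containers in the database
az cosmosdb sql database create --account-name my-cosmos-account \
  --resource-group my-rg --name shared-db --throughput 400

# container-level throughput, isolated to this container
az cosmosdb sql container create --account-name my-cosmos-account \
  --resource-group my-rg --database-name shared-db --name orders \
  --partition-key-path "/customerId" --throughput 400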
  • Low latency

Microsoft’s financially backed SLA guarantees latency for read and write requests of under 10 ms at the 99th percentile.

  • Flexible consistency model

Data replication behaviour can be tuned across 5 sliding-scale consistency levels (strong, bounded staleness, session, consistent prefix and eventual) to optimize the database for a specific workload. Consistency is configured globally for the account and can be overridden per request.

Credit: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels
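
A minimal sketch of setting the account-wide default level (the account name is an assumption):

# set the default consistency level for the account to Session
az cosmosdb update --name my-cosmos-account --resource-group my-rg \
  --default-consistency-level Session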
  • Enterprise-grade security

A unified security model exists across all APIs, providing built-in encryption at rest and in-transit. IP-based access control is supported.

To connect to a Cosmos DB account, two pairs of keys (read-write and read-only) are managed by the service and used to control access to the account and its data.
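
As a sketch (the account and resource group names are assumptions), the keys can be listed and rotated from the command line:

# list the read-write keys for the account
az cosmosdb keys list --name my-cosmos-account --resource-group my-rg --type keys

# rotate the secondary read-write key
az cosmosdb keys regenerate --name my-cosmos-account --resource-group my-rg \
  --key-kind secondary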

APIs

Cosmos DB exposes data through a variety of models and APIs. When you request data using a specific API, Cosmos DB will automatically handle the translation of data from the underlying data format to the data model required for the API.

API           | Description
SQL API       | Core API with many unique features. Supports JavaScript logic and SQL queries.
MongoDB API   | Compatible with the MongoDB v3.2 protocol. Supports the aggregation pipeline.
Gremlin API   | Compatible with the Apache TinkerPop graph traversal language (Gremlin). Returns results in GraphSON (extended JSON) format.
Table API     | Service-level compatibility with Azure Storage Tables. Migrate applications with no code changes.
Cassandra API | Supports Cassandra Query Language (CQL) v4 protocol. Works out of the box with CQL shell.
etcd API      | Implements the etcd wire protocol. Can be used as a backing store for Azure Kubernetes Service.

Resource Model

Data in Azure Cosmos DB is stored in a hierarchy of resources: an account contains one or more databases, a database contains containers, and a container holds the items themselves.

Indexing

Cosmos DB automatically indexes all fields within all items or documents by default. While indexing can be useful for many workloads, indexing all fields and items can have a performance impact on more complex data sets.

To balance the trade-off between write and query performance, indexing can be controlled and tuned.

Index policies can be created to configure indexes by specifying the following (a sketch follows the list):

  • List of paths to index
  • Different types of indexing to perform
  • List of paths to exclude
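
A sketch of applying a custom indexing policy when creating a container; the policy below (index only /name and exclude everything else) and all resource names are assumptions:

# write the indexing policy to a file
cat > index-policy.json <<'EOF'
{
  "indexingMode": "consistent",
  "includedPaths": [ { "path": "/name/?" } ],
  "excludedPaths": [ { "path": "/*" } ]
}
EOF

# create a container that uses the policy
az cosmosdb sql container create --account-name my-cosmos-account \
  --resource-group my-rg --database-name shared-db --name customers \
  --partition-key-path "/id" --idx @index-policy.json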

Types of indexes

Range                             | Hash                                      | Spatial
Provides comparison functionality | Quick lookup for exact match information  | Used for geographical information

Create a Function App in Azure Portal

Pre-requisites:

  • An Azure subscription
  • An existing resource group
  • An existing storage account

1) From the Azure portal Home page, select Create a resource

2) On the New page, select Compute > Function App

3) On the Basics tab, populate the following fields and then click on Next: Hosting

  • Subscription
  • Resource Group
  • Function App Name
  • Publish
  • Runtime Stack

Note: Runtime stack is not required if you’ve selected to publish a Docker container.

  • Version
  • Region

4) On the Hosting tab, populate the following fields and click on Next: Networking

  • Storage account
  • Operating System
  • Plan type

5) On the Networking tab, select the appropriate option for network injection and click on Next: Monitoring

6) On the Monitoring tab, Enable or Disable Application Insights and click on Next: Tags

7) On the Tags tab, create tags to categorize your function app and click on Next: Review + Create

8) Review the configuration details of the Function App and click on Create
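
The same function app can also be created from the command line. A minimal sketch, assuming an existing resource group and storage account and placeholder names throughout:

# create a .NET function app on a consumption plan (all names are assumptions)
az functionapp create --name my-func-app --resource-group my-rg \
  --storage-account mystorageacct --consumption-plan-location eastus \
  --runtime dotnet --functions-version 4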

Containerization with Docker

Containers are a powerful tool for isolating, packaging and shipping applications.

What are containers?

To understand what containers are, it is important to first understand virtualization and virtual machines (VMs). VMs enable multiple operating systems to run on a single set of hardware, which provides benefits such as:

  • Effective resource allocation

If two VMs share the same hardware, each VM can take advantage of any under-utilized resources in the hardware.

  • Application isolation

Applications running on different VMs do not have access to each other’s data.

Containers take the idea of virtualization further: while VMs virtualize the hardware, containers virtualize the operating system. The unit of isolation is a container image, which is smaller than a VM image.

A container is a self-contained unit of software that contains everything required to execute the software. Containers are portable and resource-efficient. Multiple containers can run on the same operating system while still running as separate isolated processes. Containers are run by a container engine, e.g. Docker.

A container image will behave the same on any container engine and will enable you to build applications locally and deploy the same container image to a test or production environment.

Docker

In the context of Docker, a container is a runtime instance of a Docker image, which consists of the following:

  • A Docker image
  • An execution environment
  • A standard set of instructions

Core elements of the Docker ecosystem

The differences between a VM and a container

Virtual Machine (VM)                                    | Container
Hosts one or more applications                          | Hosts the application and its dependencies
Contains the necessary binaries and libraries           | Shares the OS kernel with other containers
Exposes the guest OS to interact with the applications  | Not coupled to infrastructure and only requires the Docker Engine to be installed on the host
                                                        | Executes isolated processes in the user’s workspace on the host OS

Benefits for developers

  1. Applications are portable and packaged in a standard way which makes deployment easier and repeatable.
  2. Tests, packaging and integration can be automated in a consistent application lifecycle.
  3. Supports microservice architectures.
  4. Alleviates platform compatibility issues.
  5. Simplifies release management with reliable deployments that improve the speed and frequency of releases.
  6. Enables consistency between development, testing and production environments.
  7. Supports scalability for workloads on-demand for different use cases.
  8. Any issues or bugs can be isolated for debugging at a container level.
  9. Supports continuous integration in a deployment pipeline.

Docker build process

Dockerfile

A Dockerfile starts from a base image; you can find a repository of base images on Docker Hub that already contain the latest updates and security configurations. A Dockerfile contains a set of instructions for building the Docker image.

//Example of a Dockerfile that will execute on Ubuntu

FROM ubuntu
CMD echo "Hello Naiomi!"

Build a Dockerfile into an image

Build the Dockerfile into a Docker image within your preferred IDE, or run the following command:

docker image build [OPTIONS] PATH | URL | -
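
For example, assuming the Dockerfile above is saved in the current directory, the image could be built and tagged like this (the tag hello-naiomi is an arbitrary assumption):

# build an image from the Dockerfile in the current directory and tag it
docker image build -t hello-naiomi .

The tag can then be passed to docker run, shown next.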

Run the Docker image

docker run {image-name}

View all Docker images

docker images

Azure Event Grid

Azure Event Grid is a managed event routing service that enables standardized event consumption using a publish-subscribe model.

An event is something that has occurred and is limited to 64KB in Azure. Some examples include:

  1. A new client has signed up with your organization
  2. A client has initiated a payment that needs to take a specific clearing route

Azure Event Grid supports a number of event sources; an event source is where the event has taken place. Looking at the examples above, these events could have taken place in:

  1. A Customer Relationship Management system
  2. A digital banking channel

Generally, publishers send events to a specific endpoint, called a topic, and may choose to have an individual topic or multiple topics.

An event subscription is the mechanism that routes events to multiple handlers and subscribers. Subscriptions are also used by handlers to intelligently filter incoming events.

An event handler is the application or service that processes the event, e.g. Azure Functions, Event Hubs, Azure Logic Apps or Webhooks.

Various authentication types are supported by Event Grid, such as Webhook event delivery, event subscriptions and custom topic publishing. RBAC and various action types are also supported to manage and control authorization.

With Webhooks, you can include additional parameters for security such as a secret or an access token that is passed in as a query string. Only HTTPS endpoints are supported.

Custom topics support two types of authentication mechanisms, either a secret key or a shared access signature.
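
As a sketch with the Azure CLI (the topic name, resource group and endpoint URL are assumptions), a custom topic and a webhook subscription could be created like this:

# create a custom topic
az eventgrid topic create --name my-topic --resource-group my-rg --location eastus

# subscribe an HTTPS webhook endpoint to the topic
topic_id=$(az eventgrid topic show --name my-topic --resource-group my-rg \
  --query id --output tsv)
az eventgrid event-subscription create --name my-sub \
  --source-resource-id "$topic_id" \
  --endpoint https://example.com/api/events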

Benefits

  1. Simple and powerful with easy configuration
  2. The ability to filter on event types or event publish paths
  3. A single endpoint can subscribe to many events
  4. A single endpoint can publish multiple copies to many subscribers
  5. Can accommodate high throughput (millions per second)
  6. Consumption based model – pay per event
  7. Reliable with 24-hour retry capability and exponential backoff
  8. Many built-in event types
  9. The flexibility to create custom events

Potential Architectural Patterns

Comparison of messaging services

Event Grid                    | Event Hub                               | Service Bus
Reactive programming          | Big data pipeline                       | High-value enterprise messaging
Event distribution (discrete) | Event streaming (series)                | Message
React to changes              | Telemetry & distributed data streaming  | Order processing, financial transactions

Delivery Caveats

  1. Each event is delivered at least once for each subscription
  2. Events are sent to the registered endpoint of each subscription immediately
  3. If an endpoint does not acknowledge receipt of an event, Event Grid retries delivery of the event
  4. You can customize the retry schedule (see the sketch after this list)
  5. If an event is undeliverable, it can be dead-lettered to a storage account, which is itself an event source that you can act on
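
A sketch of tuning the retry schedule and dead-lettering when creating a subscription; every name and the storage container resource ID are placeholder assumptions:

# retry up to 10 times, expire events after 24 hours (1440 minutes),
# and dead-letter undeliverable events to a blob container
az eventgrid event-subscription create --name my-sub \
  --source-resource-id "$topic_id" \
  --endpoint https://example.com/api/events \
  --max-delivery-attempts 10 --event-ttl 1440 \
  --deadletter-endpoint <storage-container-resource-id>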

Introduction to Azure Functions

What are Azure Functions?

Azure Functions is a serverless, cross-platform and open source solution that enables a developer to implement functionality with minimal code on managed infrastructure.

Azure Functions scales dynamically and supports several programming languages such as .NET, Java, JavaScript, Python, etc.

An Azure Function must be triggered to execute, and can be connected to different sources and targets through bindings.

To host a Function in Azure, a function app is required which will logically group together functions for easier management, deployment, scaling and sharing of resources.
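
For local development, the Azure Functions Core Tools (the brew install appears in the command table later in this post) can scaffold and run a function app. A sketch with assumed names:

# scaffold a new function app project for the .NET runtime
func init MyFunctionApp --worker-runtime dotnet
cd MyFunctionApp

# add an HTTP-triggered function and run the app locally
func new --name HttpExample --template "HTTP trigger"
func start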

Various hosting plans are available to host a function app:

Plan             | Advantages                                                                                                       | Disadvantages
Consumption Plan | 1. Pay only when your functions are executed. 2. Dynamically scale for usage and demand.                         | 1. Cold starts – a brief delay when the function starts executing.
Premium Plan     | 1. Perpetually warm instances. 2. VNet connectivity. 3. Unlimited execution duration. 4. Premium instance sizes. | 1. Price.
Dedicated Plan   | 1. Dedicated VMs. 2. Reuse your existing app services.                                                           | 1. Not serverless.

Anatomy of a Function App

Core Files

  • host.json

The host.json file contains global configuration options and will impact all functions within the function app.

{
     "version": "2.0",
     "logging": {
         "applicationInsights": {
              "samplingExcludedTypes": "Request",
              "samplingSettings": {
                   "isEnabled": true
              }
         }
     }
}
  • function.json

The function.json file contains the configuration metadata such as related triggers and bindings for an individual function.

To learn more about the JSON schema for Azure Functions function.json files, check out http://json.schemastore.org/function.
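
A minimal sketch of a function.json for an HTTP-triggered function with an HTTP output binding (the binding names, methods and auth level are illustrative assumptions):

{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "get", "post" ],
      "authLevel": "function"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}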

  • local.settings.json

The local.settings.json file contains local configuration settings for your application. Local configuration settings are ignored by Git and Azure. If you’re developing a function app with .NET Core, you can use the IConfiguration infrastructure to easily read environment variables, user secrets and other configuration providers in addition to the local.settings.json file.

{
    "Values": {
        "AzureWebJobsStorage": UseDevelopmentStorage-true",
        "FUNCTIONS_WORKER_RUNTIME": "dotnet"
    }
}

Language and runtime support

Azure Functions can run on both Linux and Windows, depending on the runtime stack you choose when creating your Function.

Language         | Runtime Stack   | Linux | Windows | Portal editing
C# class library | .NET            | ✔️    | ✔️      |
C#               | .NET            | ✔️    | ✔️      | ✔️
JavaScript       | Node.js         | ✔️    | ✔️      | ✔️
Python           | Python          | ✔️    |         |
Java             | Java            | ✔️    | ✔️      |
PowerShell       | PowerShell Core | ✔️    | ✔️      | ✔️
TypeScript       | Node.js         | ✔️    | ✔️      |
Go/Rust/Other    | Custom Handlers | ✔️    | ✔️      |

Exam Tips for AZ-204 Developing Solutions in Azure

The AZ-204 certification can be a great way to consolidate and demonstrate your skills as a developer and stand out from the crowd. Microsoft exams have moved from technology-based to role-based exams, covering a more comprehensive set of skills and technologies that a specific role may use regularly.

Why get certified?

  1. Career progression
  2. The knowledge!
  3. Organizational credibility

What you should already know before starting your journey to getting AZ-204 certified

Be familiar with Azure

If you haven’t yet taken the AZ-900 Azure Fundamentals exam, it is a great place to start your journey: not just getting familiar with Azure, but being able to:

  • Create a subscription
  • Create a resource group
  • Deploy a resource
  • Recognize features in Azure Portal

Understand how Azure solutions are constructed

Be able to use and recognize different services (some, not all) provided by the platform to meet different requirements and needs.

What is covered in the AZ-204?

Common Azure Cloud Shell Commands

Command | Description
az group create --name {$name} --location {$location} | Create a new resource group
az group delete --name {$resourceGroup} | Delete a resource group
az webapp list-runtimes --linux | Retrieve the list of Linux runtimes for App Service
az functionapp create | Create a new function app
az storage account create --name {$name} --location {$location} --resource-group {$rg} --sku {$sku} --kind {$blobStorage} --access-tier {$hot} | Create a new storage account
brew tap azure/functions && brew install azure-functions-core-tools@4 (if upgrading on a machine that has 2.x or 3.x installed: brew link --overwrite azure-functions-core-tools@4) | Install Azure Functions Core Tools on macOS
az role definition create --role-definition @<file path> | Create a new role
az role assignment create --assignee {$userName} --role "{$nameOfRole}" | Assign a role
az storage account show-connection-string --name {$storageAccount} --resource-group {$rg} --query connectionString --output tsv | Retrieve the connection string for a storage account

What is the “CLOUD”?

The “CLOUD” once meant anything available over an internet connection. These days, the cloud is a new way of thinking about system architecture. The definition of the cloud is refined regularly, and an enormous amount of thought is spent on cloud patterns and the best ways to build distributed systems across multiple data centers and resources.

The cloud is a new way of thinking about:

  1. A massive scale of servers
  2. Enormous network bandwidth
  3. Buying and selling hardware and software

Cloud services can be categorized as:

SaaS (Software as a Service)

  • Users subscribe to the software and only pay for what is used (eg. monthly subscription).
  • Allows users to connect to and use cloud-based apps over the internet.
  • Applications are run through the browser and do not require any downloads or installations on the client side.
  • Applications can have native clients too that will sync with the cloud (eg. OneDrive).
  • Vendors manage all the potential technical issues such as data and servers.
  • Hardware and software updates are automatic.
  • Cross-device compatibility.
  • Applications are accessible from any location.

PaaS (Platform as a Service)

  • Includes infrastructure (servers, storage & networking), however these components are configured and managed by the PaaS provider.
  • Can also include middleware, development tools, business intelligence services and database systems.
  • Designed to support the complete web application lifecycle (building, testing, deploying, managing and updating).
  • Cloud providers manage the servers, hard drives, networking, virtualization and storage.

IaaS (Infrastructure as a Service)

  • Allows customers to instantly provision servers, network, switches, firewalls and other physical devices.
  • Ability to outsource your infrastructure to the cloud.
  • Each resource is offered as a separate service component.
  • Infrastructure is provisioned and managed over the internet.

NIST defines 5 essential characteristics of the cloud:

  1. On-demand self-service
  2. Broad network access
  3. Resource pooling
  4. Rapid elasticity
  5. Measured service

Deployment models available from cloud providers include:

  1. Private
  2. Public
  3. Hybrid

Benefits of using the cloud

  1. Reduces the cost of managing a data center
  2. Pay for the services used with no long-term contracts
  3. Automatic software or infrastructure updates
  4. 99.9% availability agreements
  5. Professional troubleshooting
  6. Dynamic scalability
  7. Dynamic elasticity
  8. Global datacenters in various regions