Tuesday, December 24, 2024

Mocking Custom Responses with Azure API Management – Custom Mock Response

This article is the second part of a three-part series. In this part, we will discuss how to render custom mock responses using an APIM policy.

Custom mock responses

Often, we need more than just a 200 OK response without a body. Instead, we require comprehensive responses formatted as JSON messages. Adding to the complexity, the response often needs to vary based on specific query string parameters.

Here’s the approach I used to generate custom mock responses in Azure API Management based on varying query string parameters.

Navigate to your API Management instance and select the specific API operation you want to configure.

Click on the Inbound Processing section and open the Policy Code Editor.

We will modify the inbound section of the policy, adding a segment that dynamically generates the response body based on the specified query string parameter.

Here is the policy code I used:

<inbound>
        <base />
        <choose>
            <when condition="@(context.Request.Url.Query.GetValueOrDefault("id") == "1")">
                <return-response>
                    <set-status code="200" reason="OK" />
                    <set-header name="Content-Type" exists-action="override">
                        <value>application/json</value>
                    </set-header>
                    <set-body>{
                        "studentId": "1",
                        "name": "Jane Smith",
                        "grade": "B"
                    }</set-body>
                </return-response>
            </when>
            <when condition="@(context.Request.Url.Query.GetValueOrDefault("id") == "2")">
                <return-response>
                    <set-status code="200" reason="OK" />
                    <set-header name="Content-Type" exists-action="override">
                        <value>application/json</value>
                    </set-header>
                    <set-body>{
                        "studentId": "2",
                        "name": "Jane Smith",
                        "grade": "B"
                    }</set-body>
                </return-response>
            </when>
            <otherwise>
                <return-response>
                    <set-status code="404" reason="Not Found" />
                    <set-header name="Content-Type" exists-action="override">
                        <value>application/json</value>
                    </set-header>
                    <set-body>{
                            "error": "Student ID not found"
                        }</set-body>
                </return-response>
            </otherwise>
        </choose>
    </inbound>

You can test the outcome by navigating to the Test console and specifying the appropriate query string parameter, as shown below.

Then, submit the request to verify that the appropriate response is returned based on the specified query string parameter.
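
To verify the behavior outside the portal, you can also call the operation directly. Below is a minimal C# sketch; the gateway URL and subscription key are placeholders you would replace with your own values:

using System;
using System.Net.Http;

// Hypothetical values - replace with your APIM gateway URL and subscription key
const string baseUrl = "https://<your-apim-instance>.azure-api.net/students";
const string subscriptionKey = "<your-subscription-key>";

using var client = new HttpClient();
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

// id=1 and id=2 return the mocked student records; any other id falls through to the 404 branch
foreach (var id in new[] { "1", "2", "99" })
{
    var response = await client.GetAsync($"{baseUrl}?id={id}");
    var body = await response.Content.ReadAsStringAsync();
    Console.WriteLine($"id={id} -> {(int)response.StatusCode}: {body}");
}
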
Tuesday, December 17, 2024

Ensuring Static IP for Azure App Service When Accessing External APIs Over the Internet

Azure App Service assigns a range of outbound IPs when accessing external resources. However, if the external resource requires IP whitelisting, the default configuration may not be practical. This article outlines the steps to ensure that the external API is accessed using a static public IP.

Following is the default configuration. With this setting, the external API may be called from any of the IP addresses within the range.

To achieve this objective, I chose to use a NAT Gateway and related components. We need to complete the following tasks to implement the solution:
  • Integrate the Web App with a Subnet in a Virtual Network
  • Create a Public IP
  • Create and Configure a NAT Gateway
  • Associate the Web App with the NAT Gateway
  • Test

If the external API or resource were within a private network (e.g., on-premises), we could have used Hybrid Connections instead.

Let's discuss the implementation of each item.

1. Integrate the Web App with a Subnet in a Virtual Network

Create a virtual network and a subnet, or use an existing virtual network.

Next, navigate to your Web App, go to the Networking section, and enable Virtual Network Integration to connect it to the designated subnet.

2. Create a Public IP

3. Create and Configure a NAT Gateway

Specify the Outbound IP. Once the NAT Gateway is configured, we can associate multiple public IP addresses if required.

Specify the Subnet

4. Associate the Web App with the NAT Gateway

Since the integration subnet and the NAT Gateway reside in the same virtual network, the NAT Gateway is automatically applied to your Web App's outbound traffic.

5. Test

Let's test the solution. To validate the setup, I deployed a sample .NET API that calls an external service which returns the caller's IP address.

    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;

    [ApiController]
    [Route("api/[controller]")]
    public class GatewayTestController : ControllerBase
    {
        private readonly HttpClient _httpClient;
        public GatewayTestController(HttpClient httpClient)
        {
            _httpClient = httpClient;
        }

        [HttpGet]
        public async Task<IActionResult> GetOutboundIp()
        {
            // Call external API over internet
            var response = await _httpClient.GetAsync("https://httpbin.org/ip");
            if (!response.IsSuccessStatusCode)
            {
                return StatusCode((int)response.StatusCode, "Failed to get outbound IP");
            }

            var callerIP = await response.Content.ReadAsStringAsync();
            return Ok(callerIP);
        }
    }
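
Note that injecting HttpClient into the controller's constructor assumes it has been registered with the dependency injection container. A minimal sketch of one way to do this in Program.cs (assuming the minimal hosting model):

// Program.cs - register IHttpClientFactory, then expose a plain HttpClient
// so it can be injected into controller constructors
builder.Services.AddHttpClient();
builder.Services.AddScoped(sp => sp.GetRequiredService<IHttpClientFactory>().CreateClient());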

Following is the response I get. This matches exactly the public IP I provisioned.

Monday, December 9, 2024

Structured Approach to a Successful Azure Integration Services Implementation

Implementation of integration solutions using Azure Integration Services (AIS) components is quite common, as it allows organizations to harness the benefits of an Integration Platform as a Service (iPaaS). While the integration process may appear straightforward, it requires careful governance. Typically, the implementation timeline is tight, and the presence of multiple moving parts adds to the complexity of the process.

The following activities are recommended for a successful AIS engagement:

Planning & Discovery
  • Project Plan Development: Establish the project’s scope, objectives, timelines, budget, and deliverables. This foundational task involves identifying stakeholder requirements and defining a roadmap for successful implementation.
  • Discovery Workshop(s): Engage stakeholders in collaborative sessions to gather requirements, align expectations, and document business processes to ensure the integration aligns with organizational goals.
Deliverables: Project Plan Document, Discovery Workshop Artifacts, Open Communication Channel (e.g., MS Teams)

Design
  • Develop High-level Architecture: Define the overall architectural vision, including system components and their interactions, to serve as the blueprint for detailed designs.
  • Develop Low-level Architecture: Provide a granular view of individual components, interfaces, and data flows to guide development and implementation teams.
  • Design Documentation: Compile detailed design specifications, architectural diagrams, and decision logs to serve as reference material for all stakeholders.
  • Design Walk-Through Workshop(s): Present the architecture and design plans to stakeholders for validation, feedback, and alignment before moving to implementation.
  • Proof of Concept: Build a small-scale prototype to validate critical components, test assumptions, and mitigate risks before full-scale development begins.
Deliverables: High-level Architecture Document, Low-level Architecture Document, Consolidated Design Document, Workshop Notes, Proof of Concept

Implementation
  • Integration Environment Readiness: Set up environments, such as staging and testing platforms, to support development and ensure readiness for system integration. It is essential to ensure that both source and target environments are prepared for implementation. If the environments are in different locations (e.g., cloud vs. on-premises), the team must exercise extra caution regarding security and access management.
  • Landing Zone Configuration: Establish a secure and scalable Azure foundation, ensuring compliance with best practices for networking, identity, and governance.
  • DevOps Environment Configuration: Set up CI/CD pipelines, repositories, and automated workflows to streamline development and deployment processes.
  • Specific AIS Component Configuration: Configure Azure Integration Services components like Logic Apps, Service Bus, and API Management as per the design.
  • Development of Components: Build custom integrations, workflows, and connectors required for the solution based on detailed designs. Most of the development will be conducted using Azure Functions and Logic App workflows. It is crucial to follow a well-structured solution design that incorporates appropriate cloud components. Additionally, adopting a "shift-left" approach to security is highly recommended to address potential risks early in the development lifecycle.
  • Implement Disaster Recovery & High Availability: Establish redundancy, failover mechanisms, and recovery processes to ensure resilience and minimal downtime.
  • Data Migration: Plan, map, and execute the migration of legacy data to the new system, ensuring data integrity and minimal disruption.
Deliverables: Integration Environment Readiness Confirmation, Landing Zone Configuration Document, DevOps Environment Configuration Document, AIS Component Configuration as Infrastructure as Code (IaC) in the repository, Developed Components in the repository, Disaster Recovery & High Availability Plans, Data Migration Report

Testing
  • Unit Testing: Validate individual components or modules to ensure they meet functional requirements and perform as expected.
  • Integration Testing: Assess the system as a whole to verify that all components communicate and function seamlessly together.
  • Vulnerability Assessment & Penetration Testing: Identify security vulnerabilities and test the system’s ability to withstand potential threats or breaches.
  • User Acceptance Testing: Engage end-users to validate the system against business requirements and ensure it meets their needs.
Deliverables: Unit Testing Report, Integration Testing Report, Security Testing Report, User Acceptance Testing Deliverables

Deployment
  • Configure CI/CD Pipelines: Finalize and optimize pipelines to automate deployment processes, enhancing efficiency and consistency.
  • Deployment to Non-Prod Environments: Deploy the solution to development and staging environments for final validation and testing.
  • Deployment to Prod Environments: Execute the live deployment of the solution in production, ensuring minimal downtime and proper configuration.
  • Post-Deployment Verification: Validate the deployment success by verifying all functionalities, data, and integrations are working as intended.
Deliverables: CI/CD Pipeline Configuration Files, Non-Production Deployment Validation, Production Deployment Plan, Post-Deployment Verification Report

Closure
  • Establish Monitoring & Logging: Configure monitoring tools and logging mechanisms to ensure ongoing visibility into system performance and health.
  • Issue Resolution: Address any post-deployment issues or bugs, ensuring the system operates smoothly and meets expectations.
  • Project Closure Documentation: Prepare final documentation, including lessons learned, handover details, and user manuals, to conclude the project formally.
Deliverables: Monitoring & Logging Configurations, Issue Resolution Logs, Project Closure Documentation

Friday, November 29, 2024

Presentation - Enterprise Integration Solutions with Azure Integration Services

I had the privilege of delivering a lightning talk at the Perth Azure Group on Enterprise Integration Solutions using Azure Integration Services.

Following is the presentation I delivered.

Following are a few snaps from the event.

Wednesday, November 27, 2024

Mocking Custom Responses with Azure API Management – Simple Mock Response

Mocking API responses is often essential to support the quality assurance team and enable development testing activities. Azure API Management (APIM) provides a comprehensive solution for this, allowing us to mock responses with specific status codes and even craft custom response messages. This is achieved by leveraging custom policies within APIM, enabling greater flexibility and control over simulated API behavior.

This article is the first part of a three-part series. In this article, I will cover simple mock responses, while the rest of the series will discuss crafting custom response messages.

Simple mock responses

Let's assume we need to send a 200 OK response to an API that has no backend service connected. Here are the steps you need to follow:

Navigate to the API operation and select "Add Policy" in the Inbound Processing section.

Select the "Mock responses" policy.

Next, select the desired response. In this case, choose "200 OK."

That's it! When you navigate to the test console and execute the API, you will receive a 200 OK response, even though no backend service is configured.
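
To double-check from code rather than the portal, here is a minimal C# sketch; the URL and key below are placeholders:

using System;
using System.Net.Http;

// Hypothetical values - replace with your APIM gateway URL, operation path and subscription key
const string url = "https://<your-apim-instance>.azure-api.net/<your-api>/<your-operation>";

using var client = new HttpClient();
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-subscription-key>");

var response = await client.GetAsync(url);
// Expect 200 OK even though no backend service is configured
Console.WriteLine($"Status: {(int)response.StatusCode}");
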
Wednesday, November 20, 2024

Securely Access Azure Key Vault Secrets from an On-Premises Application: A First Step in Cloud Migration

Cloud adoption and modernization are often complex processes. Consequently, organizations typically migrate their workloads to the cloud in phases. To maximize business value, it is crucial to identify the most suitable use case. One promising candidate is migrating valuable secrets to the cloud, where robust security measures have been proven effective. This can be positioned as both a security enhancement and an improvement in compliance adherence.

In this article, I’ll explain how to keep your applications within your on-premises environment while securely migrating credentials, such as database connection strings and encryption keys, to an Azure Key Vault instance.

Following is the design we will use.

For this example, I will use a simple C# console application to represent an enterprise application. Additionally, I will use a self-signed certificate to illustrate the process. However, when implementing this in your organization, you should use a properly issued certificate to ensure security and compliance.

Following are the steps I followed:

Navigate to your Entra ID instance and create a new App registration. Provide default values for the parameters.

Next, generate a self-signed certificate for this example. If your organization already has an issued certificate, you may reuse that. We will generate both a .pfx file (containing the private key) and a .cer file (containing the public key). The .pfx file can be securely stored in Azure Key Vault.

# Generate the self-signed certificate
$cert = New-SelfSignedCertificate -CertStoreLocation Cert:\CurrentUser\My -Subject "CN=ConsoleKeyVault" -KeySpec KeyExchange

# Export the certificate with private key to a PFX file
$certPath = "C:\Cert\AppCertificate.pfx"
$certPassword = ConvertTo-SecureString -String "Password" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath $certPath -Password $certPassword

# Export the public key in CER format
Export-Certificate -Cert $cert -FilePath "C:\Cert\AppCertificate.cer"

Once that is done, you can see the certificate is configured in your development environment.

Next, navigate to your App registration in the Azure portal and go to the Certificates & secrets section. There, upload the .cer certificate to associate it with your application.


You need to obtain the Tenant ID and Client ID of your App registration. You can find both in the Overview tab of the App registration.

Then we need to navigate to the Key Vault instance and provide appropriate permissions. Since our application needs to read secrets from Azure Key Vault, the appropriate role to assign is Key Vault Secrets User.

That completes the required configuration. Now, let's move to our console application and set up the connection to Azure Key Vault.

We need to consume the following NuGet packages. Let's install them first:

dotnet add package Azure.Identity
dotnet add package Azure.Security.KeyVault.Secrets

Below is a sample code snippet to retrieve a secret from Azure Key Vault. In my Key Vault, I have a secret named "food-auth-client-id", and the following program demonstrates how to access this credential securely from an on-premises environment.

using System.Security.Cryptography.X509Certificates;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;


string keyVaultUrl = "https://test-fedora-01.vault.azure.net/";
string clientId = "xxxx-cab5-4b32-8380-a9e76c063677";
string tenantId = "xxxx-xxx-xxx-xxx-xxxx";
string certificateThumbprint = "xxxxA03B2697CDA8D58ABB32DCB48B6995F7994D";

// Retrieve the certificate from the local certificate store
var store = new X509Store(StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
var certificate = store.Certificates.Find(X509FindType.FindByThumbprint, certificateThumbprint, validOnly: false)[0];

// Authenticate using ClientCertificateCredential
var credential = new ClientCertificateCredential(tenantId, clientId, certificate);
var client = new SecretClient(new Uri(keyVaultUrl), credential);

// Retrieve the secret
KeyVaultSecret secret = await client.GetSecretAsync("food-auth-client-id");
string foodAuthClientId = secret.Value;

// Use the secret in your application
Console.WriteLine($"Retrieved the key: {foodAuthClientId}");
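
As a small variation, if you prefer not to depend on the certificate store, the certificate can be loaded straight from the .pfx file exported earlier. This sketch reuses tenantId, clientId, and keyVaultUrl from above; the path and password come from the PowerShell step:

// Alternative: load the certificate directly from the exported .pfx file
var pfxCertificate = new X509Certificate2(@"C:\Cert\AppCertificate.pfx", "Password");
var pfxCredential = new ClientCertificateCredential(tenantId, clientId, pfxCertificate);
var pfxClient = new SecretClient(new Uri(keyVaultUrl), pfxCredential);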

I was able to retrieve the secret as shown below.

Wednesday, October 16, 2024

Optimizing Cost Management for Azure Functions: Planning, Control, and Best Practices

Azure Functions is a serverless, fully managed compute platform that enables seamless process automation. However, its rapid feature deployment can sometimes lead to governance misalignment, which may impact organizations in several ways, including increased costs.

In this article, I will share a few strategies to help you effectively govern and optimize your costs.

1. Select the right hosting plan

There are many hosting plans you can select from.

Following is a very high-level analysis.

  • Flex Consumption – Event-driven workloads, rapid scaling, VNET integration (Type: Shared, Cost: Medium)
  • Consumption – Cost-effective serverless apps, infrequent & unpredictable workloads (Type: Shared, Cost: Low)
  • Functions Premium – High performance, longer execution times, VNET integration (Type: Dedicated, Cost: High)
  • App Service – Dedicated hosting, integration with existing App Service plans, VNET integration (Type: Dedicated, Cost: High)
  • Container Apps Environment – Containerized workloads, VNET integration (Type: Dedicated, Cost: High)

2. Optimize your code to minimize function execution time

Two major factors influencing your Azure Functions' costs are execution time and memory usage. Inefficient code can cause delays, leading to higher-than-expected expenses.

It is always recommended to test performance in your local development environment to identify inefficiencies early. Application Insights is a powerful tool for monitoring and analyzing performance, helping to pinpoint and optimize non-performing segments.
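
As a concrete example, a frequent source of wasted execution time is creating a new HttpClient on every invocation, which opens fresh connections each call. Below is a minimal sketch of reusing a single client across invocations, assuming the .NET isolated worker model and a hypothetical downstream URL:

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class PriceLookup
{
    // Shared across invocations - avoids per-call connection setup and socket exhaustion
    private static readonly HttpClient Client = new HttpClient();

    [Function("PriceLookup")]
    public async Task<HttpResponseData> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req)
    {
        // Hypothetical downstream API
        var prices = await Client.GetStringAsync("https://api.example.com/prices");

        var response = req.CreateResponse(HttpStatusCode.OK);
        await response.WriteStringAsync(prices);
        return response;
    }
}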


3. Optimize network traffic

It is highly recommended to monitor and manage outbound (egress) traffic from your Azure Functions. Here are some effective strategies to reduce egress costs:

  • If possible, consume resources within the same region.
  • Cache & compress external API calls
  • Aggregate & batch data processing where possible

4. Monitor & analyze cost using Azure Cost Management

Regularly monitor Azure Cost Management & Billing to track and optimize your spending. Here are some effective strategies you can implement:

  • Create budgets in Azure Cost Management
  • Set up alerts

Tuesday, October 15, 2024

How to debug issues easily with Application Insights - How to see only my code

Let's assume we have a complex solution with multiple external integrations. Identifying issues can be challenging, as error messages are often intricate and difficult to interpret.

Consider the following example, where multiple integrations are in place.

How can I check only my code to focus on what I can control? It’s simple: just select the "Just My Code" checkbox.

Once I do that, I can identify the specific points in my code that triggered the error.

Now, I understand the root cause of the problem and can explore possible solutions.

Dynamically Modify API Responses Based on Subscription Tier with Azure API Management Policies

Azure API Management (APIM) provides more than just API Gateway features; it offers a comprehensive suite of capabilities that strengthen the entire API ecosystem.

Here are some key features provided by Azure API Management:
  • API Gateway
  • Developer Portal
  • Policy Management
  • Analytics and Monitoring
  • Security Features
  • Multi-Cloud and Hybrid Support
  • Versioning and Revisioning
  • Scalability
  • Integration
  • Custom Domains and Branding
  • Products & Subscriptions
We can combine multiple features from these categories based on our specific requirements.

In this article, I will illustrate how to combine Policy Management features with Product & Subscription features to implement a specific use case. 

Following is my business case:

I want to offer my COVID data API in two product tiers. Users subscribed to the premium product would see the entire response, including the death count, while users subscribed to the basic product would see the response without the death count.

Here is the approach I used to implement the solution:

First we need to create two products within our Azure API Management instance:
  • Starter – A basic plan where customers would see only a subset of the response
  • Unlimited – A premium plan where the customers would see the full response

Then, let's create two subscriptions for those products using the Subscriptions section.

Next, navigate to your API and enable access for both the Starter and Unlimited products.

Next, navigate to the Outbound Processing section of the API operation and open the Policy Editor to enforce a conditional response based on the subscribed product.

Here is the policy snippet I used:
    <outbound>
        <base />
        <choose>
            <when condition="@(context.Response.StatusCode == 200 && context.Product?.Name != "Unlimited")">
                <set-body>@{
                        var response = context.Response.Body.As<JObject>();
                        var rawData = response["rawData"] as JArray;
                        if (rawData != null) {
                            foreach (var record in rawData) {
                                record["Deaths"]?.Parent.Remove();
                            }
                        }

                        return response.ToString();
                    }</set-body>
            </when>
        </choose>
        <cache-store duration="15" />
    </outbound>
Let's test this with the Postman client. 

Let's first try with the Starter product. We will use the subscription key associated with the Starter product. As you can see, the Deaths property is not included within the rawData collection.

Now, let's check the same request using the Unlimited product. This time, the Deaths property is included in the response.
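
If you prefer to script this check instead of using Postman, here is a minimal C# sketch; the gateway URL and subscription keys are placeholders:

using System;
using System.Net.Http;

// Hypothetical values - replace with your gateway URL and the two subscription keys
const string url = "https://<your-apim-instance>.azure-api.net/<your-covid-api>";

foreach (var (tier, key) in new[] { ("Starter", "<starter-key>"), ("Unlimited", "<unlimited-key>") })
{
    using var client = new HttpClient();
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
    var body = await client.GetStringAsync(url);
    var hasDeaths = body.Contains("\"Deaths\"");
    Console.WriteLine($"{tier}: response contains Deaths = {hasDeaths}");
}
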
Thursday, September 19, 2024

Performing URL-Based Load Testing with Azure Load Testing

Load testing is a crucial practice for enterprise APIs to ensure optimal performance under varying traffic conditions. Azure Load Testing, a fully managed service, helps evaluate the performance, scalability, and capacity of your applications, particularly APIs. It enables you to simulate high-scale traffic, uncover performance bottlenecks, and optimize system resilience.

Azure Load Testing allows you to create sophisticated load tests using tools like Apache JMeter while also offering the flexibility to perform URL-based load testing without the need for external tools.

In this article, I will demonstrate how to perform URL-based load testing for an API exposed through the Azure API Management service using Azure Load Testing. 

Configuration of Load Test

Go to your Azure Load Testing instance, click on "Test", and then select "Create a URL-based test" to begin setting up your load test.

Specify the Test details

In the Test plan section, click on Add request.

Enter the details of your API, including the endpoint URL and request method. Since my API retrieves Star Wars characters by their ID, I configured the id parameter as a variable, allowing the test to simulate different requests dynamically.

Next, I created a CSV file containing multiple rows with different id values to be injected into the id parameter of my API. To simulate an error scenario, I intentionally included an id value of 1500, which does not exist in my API. This helps evaluate how the system handles unexpected inputs and errors under load.
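
For illustration, the CSV might look like the following (hypothetical values; I set the first row as the variable name referenced by the request, so adjust it to match how you configured the file):

id
1
2
3
4
1500
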
In the Load section, I configured key parameters such as the number of engine instances, concurrent users, and test duration. Since this test is for demonstration purposes only, I selected the minimum values to keep the load minimal while still assessing the API’s response behavior.

For monitoring purposes, I added relevant resources to my load test, allowing me to analyze their performance directly from the load test dashboard. Since my API Management instance and Application Insights are crucial for this test, I included them to gain deeper insights into API performance, request handling, and potential bottlenecks.

Next, I navigated to the Test Criteria section, where I defined success and failure conditions based on key performance metrics. I set the test to fail if:
  • The 90th percentile response time exceeds 1000ms
  • The error percentage is greater than 5%

Execution of Load Test

With the Load Test configuration complete, it's time to execute the test and analyze the results.

Here are the results I received from the load test.

I got the following key statistics:
  • Load
  • Duration
  • Response time
  • Error percentage
  • Throughput

The Load Test has failed because it didn’t meet the test criteria I set. Specifically, the error percentage surpassed the 5% limit.

Analysis of failed requests using related resources

Since we've integrated Application Insights as a related resource, we can dive deeper into the analysis. Let's focus on the Failed Requests chart, which will provide insights into the specific requests that failed during the load test, helping us identify potential issues or bottlenecks in the API.

To further investigate, click on the Failed Requests chart in Application Insights, then select Drill into Logs and navigate to Failures.

The failure dashboard reveals that there were 9 "Not Found" (404) errors during the load test.

This tells me that there is no Star Wars character with 1500 as the id.