Monday, February 26, 2024

Understand your Azure spending: Harnessing Power BI to analyze monthly expenditure

Cloud cost management, a core component of FinOps, is a complex and challenging exercise. Azure, being a public cloud, hosts diverse workloads across different service tiers and regions, which makes keeping track of costs difficult.

In this article, I will demonstrate how I developed a Power BI dashboard to explore and analyze the costs associated with my Azure usage.

There are several methods to access usage and associated costs. While utilizing the Azure cost management API is one approach, for this article, I will opt for the monthly usage file, which offers a more convenient solution.
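For reference, here is a rough sketch of what the API route could look like, using the Cost Management query endpoint from Python. Treat it as an assumption-laden illustration: the bearer token and subscription ID are placeholders, and the api-version and exact request shape may vary between versions.

import requests

# Hypothetical placeholders - substitute your own values.
SUBSCRIPTION_ID = "<subscription-id>"
TOKEN = "<azure-ad-bearer-token>"  # for example, obtained via azure-identity

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.CostManagement/query?api-version=2023-03-01"
)

# Ask for actual cost, aggregated daily and grouped by meter category.
body = {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "Daily",
        "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
        "grouping": [{"type": "Dimension", "name": "MeterCategory"}],
    },
}

response = requests.post(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
response.raise_for_status()
print(response.json()["properties"]["rows"])

With that aside, let's continue with the monthly usage file.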

As the first step, navigate to the subscription and open the Invoices section.




Afterward, proceed to the 'More Options' section and download the usage file in CSV format for the designated billing period.

Next, upload the CSV file to your Power BI environment. Once uploaded, you'll be able to view the schema in the data pane.











Let's begin creating our dashboard. First, we'll analyze the cost by resource type. To do this, drag the 'Cost' and 'MeterCategory' columns onto the canvas. Then, change the visualization to a pie chart.


Now, let's proceed to our second visualization. This visualization will enable us to analyze the cost of each service based on the plan or tier. To achieve this, we will create a table displaying the 'Cost', 'MeterSubCategory', and 'MeterName'.
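As a side note, you can sanity-check these aggregations outside Power BI as well. The following is a minimal sketch using pandas, assuming the downloaded usage file is saved as usage.csv (a placeholder name) and contains the 'Cost', 'MeterCategory', 'MeterSubCategory', and 'MeterName' columns shown above.

import pandas as pd

# Load the monthly usage file downloaded from the Invoices section.
usage = pd.read_csv("usage.csv")

# Equivalent of the pie chart: total cost per resource type.
cost_by_category = usage.groupby("MeterCategory")["Cost"].sum().sort_values(ascending=False)
print(cost_by_category)

# Equivalent of the table: cost per plan/tier and meter.
cost_by_tier = (
    usage.groupby(["MeterSubCategory", "MeterName"])["Cost"]
    .sum()
    .sort_values(ascending=False)
)
print(cost_by_tier.head(20))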
















We've already gained some valuable insights! However, let's continue to delve deeper. We can examine the daily costs for specific resources. To do this, add another table displaying 'Cost', 'Date', and 'MeterName' as indicated below.
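If you want the same daily view outside Power BI, a pivot table in pandas gives a comparable result; again, the column names come from the usage CSV and usage.csv is a placeholder file name.

import pandas as pd

usage = pd.read_csv("usage.csv", parse_dates=["Date"])

# Daily cost per meter, mirroring the table described above.
daily = usage.pivot_table(index="Date", columns="MeterName", values="Cost", aggfunc="sum").fillna(0)
print(daily.tail())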
















Moving on to our final chart, let's visualize the usage pattern for specific services over time. To accomplish this, we'll create a line chart incorporating the 'Cost', 'Date', and 'MeterName' fields.



















Now, let's enhance the visual appeal by adding titles and formatting.






















Let's analyze a specific resource. If I want to analyze my costs and usage specifically on Cognitive Services, I can select that service from the first chart (Cost by Resource Type), and the other charts will filter accordingly.






















I notice that I've spent more on GPT-4 prompt tokens. To analyze this further, I can navigate to my second chart (Cost by Resource Plan/Tier) and click on the relevant item.






















This process helps me identify cost patterns and gain a better understanding of how Azure costing operates.

Sunday, February 18, 2024

Optimize Your Azure Spending: How to visualize expenditure across services with Azure Cost Management - Cost Analysis

Managing our cloud expenditure can present a formidable challenge. The presence of multiple tiers and a diverse range of services further complicates the task. However, cost management is essential to building a well-architected cloud.

Azure provides a feature called Cost Management, which is the central place to monitor and govern our cloud expenditure.

The landing page of Cost Analysis will give you a basic idea of your cloud costs. Additionally, it will show which categories to keep an eye on in order to control and govern your spending.



However, you can't drill down into each service to further analyze which products/services caused the expenditure. Yet this analysis is essential for implementing the Cost Optimization pillar of the Azure Well-Architected Framework.

Fortunately, the Cost Analysis tool offers distinct dashboards tailored for examining expenses at the service level.

To access these dashboards, simply select the "Services" option from the Cost by Resource menu as shown in the diagram below.


Now you will be navigated to a different dashboard with interesting insights.











If you have multiple workloads originating from a particular service, you can expand that service to view the expenditure breakdown for each individual product or component.




Clicking on a specific service, such as Azure App Service, will navigate you to another dashboard. Notably, this dashboard provides a suggested monthly budget value to assist with financial planning.



With insights gleaned from the aforementioned dashboards, we can make informed decisions regarding our cloud expenditure. By creating tailored budgets for specific services and setting up alerts to notify us when these services are nearing predefined thresholds, we can effectively manage our spending and optimize resource utilization.
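As a rough sketch of what such a budget could look like when created programmatically, the example below uses the azure-mgmt-consumption package. Treat it as an assumption-heavy illustration: the subscription ID, budget name, amount, dates, and e-mail address are all placeholders, and the exact model fields may differ between SDK versions.

from azure.identity import DefaultAzureCredential
from azure.mgmt.consumption import ConsumptionManagementClient

subscription_id = "<subscription-id>"  # placeholder
scope = f"/subscriptions/{subscription_id}"

client = ConsumptionManagementClient(DefaultAzureCredential(), subscription_id)

# A monthly cost budget with an alert when actual spend passes 80% of the amount.
budget = {
    "category": "Cost",
    "amount": 100,
    "time_grain": "Monthly",
    "time_period": {"start_date": "2024-03-01T00:00:00Z", "end_date": "2025-03-01T00:00:00Z"},
    "notifications": {
        "actual_greater_than_80_percent": {
            "enabled": True,
            "operator": "GreaterThan",
            "threshold": 80,
            "contact_emails": ["owner@example.com"],  # placeholder
        }
    },
}

client.budgets.create_or_update(scope, "monthly-app-service-budget", budget)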

Wednesday, February 7, 2024

Interact with the cache using Azure Cache for Redis - Redis Console

Azure Cache for Redis proves immensely valuable for optimizing response times by caching data, thereby mitigating latency. Utilizing this service enables significant enhancements in performance.

You can interact with your Redis cluster using redis-cli by installing the required tools on your client workstation.

Alternatively, Azure offers a convenient solution directly within the Azure portal: the Redis Console. Accessible through the Console menu, this integrated feature provides a user-friendly interface for streamlined management of your Redis cache.







Within the Redis Console, you can interact with your Redis cluster using redis-cli commands. Following are some examples:

scan 0 //iterate over the keys in the cache, starting from cursor 0

GET hello //get the value stored for the key 'hello'

HGETALL userprofile //get all fields and values of the hash stored at 'userprofile'
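If you prefer to run the same checks from your own workstation instead of the portal, here is a minimal sketch using the redis Python package. The host name and access key are placeholders; Azure Cache for Redis accepts TLS connections on port 6380.

import redis

# Placeholders - use your cache host name and an access key from the portal.
cache = redis.StrictRedis(
    host="<your-cache-name>.redis.cache.windows.net",
    port=6380,
    password="<access-key>",
    ssl=True,
)

print(cache.set("hello", "world"))   # store a simple key/value pair
print(cache.get("hello"))            # read it back
print(cache.hgetall("userprofile"))  # read a hash, as with HGETALL in the console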





Overall, the experience with Redis console is seamless.

Monday, January 29, 2024

Optimizing Static File Performance: Implementing Caching and Compression with Azure Front Door

Azure Front Door is a global CDN service that enables you to securely expose your web artifacts to the external world. In this short article, I will demonstrate the process of caching and compressing responses by leveraging the caching and compression features provided by Azure Front Door.

It is advisable to apply caching and compression to static files such as CSS, images, JSON files, CSV, etc., as opposed to dynamic content. Therefore, careful route planning is imperative before embarking on the implementation of caching and compression strategies.

Following is an example.

  • route 1 - /api/*
  • route 2 - /assets/*
Following the example mentioned above, we'll designate the /api route for dynamic API content and the /assets route for static content. Let's proceed with the implementation, focusing on the /assets route.

Let's start by navigating to the Front Door manager and selecting the desired endpoint.

Click on "Add a route" to begin configuring the routes for your Front Door setup.

















Next, specify the path of the route to match.











Let's explore how to define cache and compression settings.









I prefer selecting the Use Query String option, as it allows Front Door to cache a separate response for each unique query string. However, there are other options available for you to choose from.

This completes the necessary steps to optimize your responses for static content.
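To confirm the behaviour from a client, you can inspect the response headers returned by your Front Door endpoint. The sketch below uses Python's requests library; the URL is a placeholder, and the exact cache-related headers you see (for example X-Cache) can vary by Front Door tier and configuration.

import requests

# Placeholder URL for a static asset behind the /assets route.
url = "https://<your-frontdoor-endpoint>.azurefd.net/assets/site.css"

response = requests.get(url, headers={"Accept-Encoding": "gzip"})

print(response.status_code)
print(response.headers.get("Content-Encoding"))  # expect 'gzip' when compression applies
print(response.headers.get("X-Cache"))           # cache hit/miss indicator, if present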

Tuesday, January 9, 2024

Investigate the root cause for latency with Azure Application Insights

In this article, I will demonstrate how to pinpoint the root cause when end users experience general latency. Azure Monitor - Application Insights will be instrumental in this process.

Firstly, we need to navigate to the Performance blade of the Application Insights instance.
















Following that, it's better to apply filters to refine the dataset.








As we aim to identify the worst-performing requests, it is advisable to conduct the investigation using the 99th percentile.
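The same 99th-percentile view can also be pulled out programmatically. The sketch below uses the azure-monitor-query package against a workspace-based Application Insights resource; the workspace ID is a placeholder, and the table and column names (AppRequests, DurationMs, OperationName) assume the workspace-based schema, so adjust them if your resource differs.

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# 99th percentile of request duration per operation over the last 24 hours.
query = """
AppRequests
| summarize p99_ms = percentile(DurationMs, 99) by OperationName
| order by p99_ms desc
"""

result = client.query_workspace("<workspace-id>", query, timespan=timedelta(hours=24))

for row in result.tables[0].rows:
    print(row)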










There is a distinct outlier present. Let's delve deeper into the investigation by narrowing down the time range to examine the specific slow transactions.




















The DELTA indicates the extent to which the selected data points differ from the rest of the transactions within the chosen timeframe.

The Insights tile lets you identify the specific method, representing the most likely cause, that contributes to the majority of the latency, as illustrated below.

















Utilize the Distribution of Durations tile to narrow down and select the worst-performing requests, then check the samples for further analysis.























This analysis is very important in pinpointing the root cause of latency and facilitating the implementation of necessary corrective actions.

Thursday, December 21, 2023

Simulating Azure Event Hubs functionality end to end with Azure Data Explorer and the Generate Data feature

To evaluate Azure Event Hubs functionality, we typically need to develop one application for data ingestion and another for data consumption.

In my previous blog post, I outlined the process of discovering ingested data in Event Hubs using Azure Data Explorer. In this article, I will demonstrate how to ingest data into Event Hubs without writing a single line of code.

Our first step is to navigate to the Event Hubs instance and access the Generate data (preview) feature.














Multiple options are provided to ingest payloads, either from pre-canned datasets or as custom payloads based on a given schema.
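If you later want to script the same kind of ingestion instead of using the portal, a few lines with the azure-eventhub package are enough. This is a sketch only; the connection string and event hub name are placeholders, and the payload is an arbitrary example rather than one of the pre-canned datasets.

import json

from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="<event-hubs-namespace-connection-string>",  # placeholder
    eventhub_name="<event-hub-name>",                     # placeholder
)

with producer:
    batch = producer.create_batch()
    batch.add(EventData(json.dumps({"deviceId": "sensor-01", "temperature": 21.5})))
    producer.send_batch(batch)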















Once you click on the Send button, the data will be ingested into our Event Hubs instance.

Then, you can easily discover this data in your pre-configured Azure Data Explorer. I have detailed the steps involved in configuring Azure Data Explorer in my previous blog post.











This approach allows you to explore the functionality of Event Hubs without writing any code!

Tuesday, December 12, 2023

Visualizing ingested events in Azure Event Hub with Azure Data Explorer

In modern cloud-based solutions, event-driven architectures are very common. Microsoft Azure facilitates event processing through Azure Event Hubs, offering essential building blocks to implement scalable solutions capable of processing large volumes of events and data with low latency and high reliability.

Debugging or testing event-based solutions can be challenging due to the nature of their architecture. It typically requires implementing ingestion applications and consumers just to verify that an event-based solution works.

In this article, I will demonstrate how to leverage Azure Data Explorer to visualize ingested data without any delay. With this solution, there is no need to build any custom applications to view the contents of our Event Hubs instance.

First, we need to create our Azure Data Explorer cluster.


Next, we will create a database in Data Explorer.


To connect Data Explorer with Event Hubs, let's enable a managed identity on the Data Explorer instance.


After enabling the managed identity on the Data Explorer instance, we need to assign it the required permissions on the Event Hub.


Now, let's configure Data Explorer for our Event Hub. Navigate to the Event Hub and select the "Analyze data" option.














Let's link our Data Explorer instance.















Now that our configuration is complete, let's ingest some data and explore it in Data Explorer.

To explore our data, navigate to Data Explorer and go to the Query section. Then, select the table and run your query to explore the ingested data.
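If you want to run the same query outside the portal, the azure-kusto-data package can be used from Python as well. In this sketch the cluster URL, database name, and table name are placeholders for whatever you configured in the steps above.

from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Placeholders for the cluster, database, and table created earlier.
cluster = "https://<your-cluster>.<region>.kusto.windows.net"
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)

client = KustoClient(kcsb)
response = client.execute("<database-name>", "<table-name> | take 10")

for row in response.primary_results[0]:
    print(row)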