How to Monitor Azure VM Memory Usage for Performance Optimization in DevOps Environments



When working with Azure VMs as build servers (like ADO/VSTS build agents), tracking memory usage is crucial for right-sizing your infrastructure. While CPU metrics are readily available in Azure Monitor, memory metrics aren't collected by default: the hypervisor can't see inside the guest OS, so memory counters have to be gathered by an in-guest agent or extension.
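
To confirm which metrics your VM already exposes, you can list its metric definitions from PowerShell. A minimal sketch using Get-AzVM and Get-AzMetricDefinition (the resource group and VM names are placeholders):


# PowerShell sketch: list the platform metrics Azure exposes for a VM by default
# "YourRG" and "YourVM" are placeholders
$vm = Get-AzVM -ResourceGroupName "YourRG" -Name "YourVM"
Get-AzMetricDefinition -ResourceId $vm.Id | ForEach-Object { $_.Name.Value }

If memory counters are missing from that output, one of the in-guest options below is needed.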

You have several built-in options without needing third-party tools:


# PowerShell snippet to enable memory diagnostics
Set-AzVMDiagnosticsExtension -ResourceGroupName "YourRG" `
    -VMName "YourVM" `
    -DiagnosticsConfigurationPath "diag_config.json" `
    -StorageAccountName "yourstorageaccount"

Create a diag_config.json file with memory counters:


{
    "PerformanceCounters": {
        "scheduledTransferPeriod": "PT1M",
        "PerformanceCounterConfiguration": [
            {
                "counterSpecifier": "\\Memory\\Available MBytes",
                "sampleRate": "PT1M",
                "unit": "Count",
                "annotation": []
            },
            {
                "counterSpecifier": "\\Memory\\% Committed Bytes In Use",
                "sampleRate": "PT1M",
                "unit": "Percent",
                "annotation": []
            }
        ]
    }
}

Once enabled, you can chart the data with a custom workbook in Azure Monitor. Note that the Perf table used below is populated by the Log Analytics agent or Azure Monitor Agent, so the VM also needs to be connected to a Log Analytics workspace; the WAD extension itself writes the counters to the WADPerformanceCountersTable in the configured storage account.


// KQL query for memory usage
Perf
| where ObjectName == "Memory" 
| where CounterName == "Available MBytes" or CounterName == "% Committed Bytes In Use"
| summarize avg(CounterValue) by bin(TimeGenerated, 15m), CounterName, Computer
| render timechart

To bake this into your infrastructure-as-code, you can also deploy the diagnostics extension directly from an ARM template:


# ARM template snippet for deploying the Azure Diagnostics (WAD) extension
{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "[concat(parameters('vmName'), '/Microsoft.Insights.VMDiagnosticsSettings')]",
    "apiVersion": "2018-06-01",
    "location": "[parameters('location')]",
    "properties": {
        "publisher": "Microsoft.Azure.Diagnostics",
        "type": "IaaSDiagnostics",
        "typeHandlerVersion": "1.5",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "WadCfg": {
                "DiagnosticMonitorConfiguration": {
                    "PerformanceCounters": {
                        "scheduledTransferPeriod": "PT1M",
                        "PerformanceCounterConfiguration": [
                            {
                                "counterSpecifier": "\\Memory\\Available MBytes",
                                "sampleRate": "PT1M"
                            }
                        ]
                    }
                }
            },
            "StorageAccount": "[parameters('storageAccountName')]"
        },
        "protectedSettings": {
            "storageAccountName": "[parameters('storageAccountName')]",
            "storageAccountKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value]",
            "storageAccountEndPoint": "https://core.windows.net"
        }
    }
}

While Datadog offers richer visualization, native solutions provide sufficient memory tracking for most DevOps scenarios. The key advantage of Azure-native methods is tight integration with other Azure services and automation capabilities.
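
Because the collected counters end up in Log Analytics, you can also pull them from scripts and feed them into right-sizing or reporting jobs. A minimal sketch, assuming the VM is connected to a Log Analytics workspace and using Invoke-AzOperationalInsightsQuery (the workspace ID below is a placeholder):


# PowerShell sketch: pull average available memory per computer from Log Analytics
# Requires the Az.OperationalInsights module; the workspace ID is a placeholder
$query = @"
Perf
| where ObjectName == "Memory" and CounterName == "Available MBytes"
| summarize AvgAvailableMB = avg(CounterValue) by Computer
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "00000000-0000-0000-0000-000000000000" -Query $query
$result.Results | Format-Table Computer, AvgAvailableMB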


When analyzing performance metrics for Azure VMs serving as ADO (VSTS) build servers, many developers encounter a puzzling limitation: while CPU metrics are readily available in Azure Monitor, memory usage data isn't displayed by default. This creates significant challenges when trying to right-size VMs for optimal cost-performance balance.

Azure provides several native ways to access memory metrics:

  • Diagnostics Extension: The Windows Azure Diagnostics (WAD) extension can collect memory counters, but requires configuration
  • Azure Monitor Agent: The newer AMA can collect performance counters including memory metrics
  • Log Analytics: Memory data can be queried through KQL when properly configured
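
Before configuring anything new, it can help to check which of these agents is already on the VM. A minimal sketch using Get-AzVMExtension (the resource group and VM names are placeholders):


# PowerShell sketch: list monitoring-related extensions already installed on a VM
# "YourRG" and "YourVM" are placeholders
Get-AzVMExtension -ResourceGroupName "YourRG" -VMName "YourVM" |
    Where-Object { $_.Publisher -in @("Microsoft.Azure.Diagnostics",
                                      "Microsoft.Azure.Monitor",
                                      "Microsoft.EnterpriseCloud.Monitoring") } |
    Select-Object Name, Publisher, ExtensionType, ProvisioningState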

To configure WAD to collect memory metrics, add this to your diagnostics configuration:


"PerformanceCounters": {
    "scheduledTransferPeriod": "PT1M",
    "PerformanceCounterConfiguration": [
        {
            "counterSpecifier": "\\Memory\\Available MBytes",
            "sampleRate": "PT1M"
        },
        {
            "counterSpecifier": "\\Memory\\% Committed Bytes In Use",
            "sampleRate": "PT1M"
        }
    ]
}

Once configured, you can query memory usage with KQL:


Perf
| where ObjectName == "Memory" 
| where CounterName == "Available MBytes"
| summarize avg(CounterValue) by bin(TimeGenerated, 1h), Computer
| render timechart

For teams needing richer visualization without extensive configuration:

  • Azure Monitor Workbooks: Create custom dashboards with memory metrics
  • Azure Monitor Insights: The VM Insights solution provides memory charts (see the query sketch after this list)
  • Third-party tools: Datadog, New Relic, or Dynatrace offer deeper visibility
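
If you go the VM Insights route, the memory data lands in the InsightsMetrics table rather than Perf, so it can be queried from scripts in much the same way. A minimal sketch, assuming VM Insights is connected to a Log Analytics workspace (the workspace ID is a placeholder):


# PowerShell sketch: query available memory collected by VM Insights (InsightsMetrics table)
# Requires the Az.OperationalInsights module; the workspace ID is a placeholder
$query = @"
InsightsMetrics
| where Namespace == "Memory" and Name == "AvailableMB"
| summarize AvgAvailableMB = avg(Val) by bin(TimeGenerated, 15m), Computer
"@

Invoke-AzOperationalInsightsQuery -WorkspaceId "00000000-0000-0000-0000-000000000000" -Query $query |
    Select-Object -ExpandProperty Results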

When analyzing memory patterns for build servers, focus on:

  1. Peak usage during parallel builds
  2. Memory pressure during large solution compilations
  3. Idle state memory consumption

Consider this PowerShell snippet to log memory usage during builds:


$buildMemoryLog = @()
# MSBuild may spawn several worker nodes during parallel builds, so collect them all
$msbuildProcesses = Get-Process -Name msbuild -ErrorAction SilentlyContinue

if ($msbuildProcesses) {
    # Sum the working sets across all MSBuild processes and convert to MB
    $totalWorkingSet = ($msbuildProcesses | Measure-Object -Property WorkingSet64 -Sum).Sum
    $buildMemoryLog += [PSCustomObject]@{
        Timestamp       = Get-Date
        MemoryMB        = [math]::Round($totalWorkingSet / 1MB, 2)
        BuildDefinition = $env:BUILD_DEFINITIONNAME
    }
}

# Run this periodically (e.g., from a pipeline step or scheduled task) to build up a history
$buildMemoryLog | Export-Csv -Path "D:\build_memory_log.csv" -Append -NoTypeInformation