When managing Azure subscriptions, discovering unaccounted-for resources is common, especially when they're older than the 90-day Activity Log retention period. Application Insights resources are particularly prone to this issue as they're often created automatically during deployments.
While the Azure Portal's Activity Log only maintains 90 days of history, several methods exist to retrieve older creation data:
1. Azure Resource Graph Query
This provides immediate visibility across your entire Azure estate:
resources
| where type == "microsoft.insights/components"
| extend createdTime = todatetime(properties.CreationDate)
| where createdTime < ago(90d)
| project name, resourceGroup, subscriptionId, createdTime, tags
// Note: property names are case-sensitive for this type (CreationDate), and the creator's
// identity is generally not exposed in properties; a createdBy tag, if applied, shows up in tags.
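The query can be run straight from the CLI as well; a minimal sketch, assuming the resource-graph extension is installed:
# Sketch: run the KQL above via the Azure CLI (requires: az extension add --name resource-graph)
az graph query \
  -q "resources | where type == 'microsoft.insights/components' | extend createdTime = todatetime(properties.CreationDate) | where createdTime < ago(90d) | project name, resourceGroup, subscriptionId, createdTime" \
  --query "data" --output table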
2. ARM API Historical Data
Azure Resource Manager maintains creation metadata beyond the UI visibility:
GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.insights/components/{resourceName}?api-version=2020-02-02
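A quick way to issue this call without crafting tokens yourself is az rest; a sketch (az rest substitutes {subscriptionId} from the current account context, the other placeholders need real values):
# Sketch: query the ARM API directly; {subscriptionId} is filled in automatically,
# replace {resourceGroupName} and {resourceName} with real values
az rest --method get \
  --url "https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.insights/components/{resourceName}?api-version=2020-02-02" \
  --query "{created: properties.CreationDate, tags: tags}"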
3. Cross-Reference with Deployment History
Check historical deployments even if specific resource logs are gone:
az deployment operation group list \
  --resource-group myResourceGroup \
  --name myDeployment \
  --query "[?properties.provisioningState=='Succeeded']"
For an Application Insights resource named "myAppInsights":
# PowerShell approach
$ai = Get-AzApplicationInsights -ResourceGroupName "myRG" -Name "myAppInsights"
$ai.CreationDate          # creation timestamp recorded by the resource provider
$ai.Tags["createdBy"]     # only populated if a createdBy tag was applied (the property may be named Tag in newer Az.ApplicationInsights versions)
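A roughly equivalent CLI lookup uses the generic az resource show command; a sketch, noting that the property casing (CreationDate) follows the provider's response and may vary:
# Sketch: read the same metadata generically with az resource show
az resource show \
  --resource-group "myRG" \
  --name "myAppInsights" \
  --resource-type "microsoft.insights/components" \
  --query "{created: properties.CreationDate, createdByTag: tags.createdBy}"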
Remember that some automated processes (like DevOps pipelines) create resources without clear user attribution. In such cases, checking the resource tags or associated automation accounts might yield clues.
Implement these practices to avoid similar situations:
- Enable Azure Diagnostic Settings to stream activity logs to a Storage account (see the CLI sketch after this list)
- Mandate resource tagging policies including creator information
- Set up Azure Policy to enforce owner annotations
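The sketch below covers the first item: subscription-level diagnostic settings export the Activity Log before the 90-day window lapses (resource IDs and names are placeholders):
# Sketch: export the subscription Activity Log to a storage account
az monitor diagnostic-settings subscription create \
  --name "export-activity-log" \
  --location "westeurope" \
  --storage-account "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
  --logs '[{"category": "Administrative", "enabled": true}]'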
When troubleshooting mysterious Azure resources (like these Application Insights instances), the 90-day Activity Log retention is the main obstacle. The platform keeps Activity Log data for only 90 days unless you export it elsewhere, which leaves forensic gaps for resource lifecycle management.
Try these approaches when standard logs are unavailable:
// Check Azure Resource Graph (limited but useful)
resources
| where type == "microsoft.insights/components"
| where resourceGroup == "YOUR_RG_NAME"
| project name, subscriptionId, resourceGroup, location, tags
Resource group deployment history is retained separately from the Activity Log (up to 800 deployments per group) and often contains forensic traces:
# PowerShell snippet to check deployment history
Get-AzResourceGroupDeployment -ResourceGroupName "TARGET_RG" |
    Sort-Object -Property Timestamp -Descending |
    Select-Object -First 10 |
    Format-Table -Property Timestamp, DeploymentName, Mode, ProvisioningState
# Note: deployment history records what was deployed, not who deployed it;
# the caller's identity is only captured in the Activity Log while that is retained.
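Once a suspect deployment is identified, the submitted template can be pulled back for inspection; a sketch (the deployment name is illustrative):
# Sketch: export the template recorded for a specific historical deployment
az deployment group export \
  --resource-group TARGET_RG \
  --name suspectDeployment > suspectDeployment.json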
A policy definition along these lines surfaces Application Insights resources created without creator tagging (switch the effect to "deny" if you want to mandate it):
{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "microsoft.insights/components"
      },
      {
        "field": "tags['createdBy']",
        "exists": "false"
      }
    ]
  },
  "then": {
    "effect": "audit"
  }
}
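To put the definition into effect, register and assign it with the CLI; a sketch, assuming the rule above is saved as require-createdby-tag.rules.json (names and scope are illustrative):
# Sketch: create the policy definition and assign it at subscription scope
az policy definition create \
  --name "audit-appinsights-createdby-tag" \
  --display-name "Audit App Insights components missing a createdBy tag" \
  --rules require-createdby-tag.rules.json \
  --mode Indexed

az policy assignment create \
  --name "audit-appinsights-createdby-tag" \
  --policy "audit-appinsights-createdby-tag" \
  --scope "/subscriptions/<subscriptionId>"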
For critical environments, implement log forwarding to:
- Azure Log Analytics workspace with extended retention
- SIEM solutions like Splunk or Sentinel
- Storage account with lifecycle management
The ARM template below automates the diagnostic-settings piece for an Application Insights component, routing its logs to a Log Analytics workspace; retention is then governed by the workspace rather than by the setting itself:
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "resourceName": {
      "type": "string"
    },
    "workspaceId": {
      "type": "string"
    }
  },
  "resources": [
    {
      "type": "microsoft.insights/components/providers/diagnosticSettings",
      "apiVersion": "2021-05-01-preview",
      "name": "[concat(parameters('resourceName'), '/Microsoft.Insights/service')]",
      "properties": {
        "workspaceId": "[parameters('workspaceId')]",
        "logs": [
          {
            "categoryGroup": "allLogs",
            "enabled": true
          }
        ]
      }
    }
  ]
}
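Deploying it is a single command; a sketch assuming the template is saved as diag-settings.json and the workspace resource ID is known:
# Sketch: deploy the diagnostic-settings template (file name and IDs are placeholders)
az deployment group create \
  --resource-group myRG \
  --template-file diag-settings.json \
  --parameters resourceName=myAppInsights workspaceId="/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"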
When all else fails, try these advanced methods:
- Check associated storage accounts for deployment artifacts
- Review service principal audit logs in Azure AD
- Search Azure DevOps/GitHub repos for matching resource definitions
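For the last item, a plain text search across cloned infrastructure repositories is often enough; a sketch using the resource name from the earlier example:
# Sketch: look for the resource name in cloned IaC repositories
grep -rni "myAppInsights" --include="*.json" --include="*.bicep" --include="*.tf" .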