How to Diagnose and Resolve Excessive tempdb.ldf Growth in SQL Server 2005


When working with SQL Server 2005, encountering a bloated tempdb.ldf file (45GB in this case) typically indicates one of two scenarios:

  • Long-running transactions that aren't properly committed/rolled back
  • Improperly configured tempdb transaction log settings

From your description, several critical factors stand out:

1. Recovery Model: SIMPLE (confirmed)
2. Peak Usage: Morning reporting workloads
3. Growth Pattern: Correlates with concurrent report execution
4. Affected Component: Transaction log (not data files)

Even under the SIMPLE recovery model, the transaction log must retain:

  • All active transactions (open but not committed)
  • Transaction details until the next checkpoint
  • Rollback information for in-flight operations
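A quick way to confirm which of these is pinning the log is the log_reuse_wait_desc column in sys.databases (available in SQL Server 2005):

```sql
-- Check what is preventing log truncation in tempdb;
-- ACTIVE_TRANSACTION here confirms an open transaction is holding the log
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'tempdb';
```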

Immediate Mitigation

To reclaim space immediately:

USE tempdb;
GO
DBCC SHRINKFILE ('templog', TRUNCATEONLY);
GO
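To verify how much of the log is actually in use before and after the shrink, DBCC SQLPERF(LOGSPACE) reports per-database log size and percent used:

```sql
-- Report log file size and percentage used for every database, including
-- tempdb; run before and after the shrink to confirm the space was reclaimed
DBCC SQLPERF(LOGSPACE);
```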

Long-term Configuration Changes

Add these settings to your tempdb configuration:

ALTER DATABASE tempdb 
MODIFY FILE (NAME = 'templog', 
             SIZE = 4GB, 
             FILEGROWTH = 1GB)
GO

Use this query to identify problematic sessions during your reporting window:

SELECT 
    s.session_id,
    s.login_name,
    s.status,
    t.transaction_id,
    t.name,
    t.transaction_begin_time,
    DATEDIFF(MINUTE, t.transaction_begin_time, GETDATE()) AS duration_minutes,
    CAST(s.memory_usage * 8.0 / 1024 AS DECIMAL(10,2)) AS memory_mb
FROM sys.dm_tran_active_transactions t
JOIN sys.dm_tran_session_transactions st ON t.transaction_id = st.transaction_id
JOIN sys.dm_exec_sessions s ON st.session_id = s.session_id
WHERE s.is_user_process = 1
ORDER BY duration_minutes DESC;

For applications that create many temporary tables or table-valued functions (TVFs):

-- Instead of:
SELECT * INTO #temp FROM large_table

-- Use explicit column selection:
SELECT key_columns INTO #temp FROM large_table

For reporting workloads, consider implementing:

  • Dedicated reporting replica
  • Table variable instead of temp tables where appropriate
  • Batch processing instead of large atomic operations
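As a sketch of the batching idea (the table, column, and retention window below are placeholders, not from the original), deleting in capped chunks keeps each transaction, and therefore its log reservation, small:

```sql
-- Hypothetical example: purge rows in 10,000-row batches instead of one
-- large atomic DELETE, so each transaction's log footprint stays small.
-- DELETE TOP (n) is supported from SQL Server 2005 onward.
DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.ReportStaging   -- placeholder table name
    WHERE processed_date < DATEADD(DAY, -30, GETDATE());

    SET @rows = @@ROWCOUNT;   -- 0 when nothing is left to delete
END
```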

Deeper Diagnosis

When SQL Server 2005's tempdb log file (templog.ldf) grows uncontrollably to 45GB+, the usual root causes are:

  • Long-running transactions in tempdb
  • Improper transaction isolation levels
  • Memory pressure forcing excessive tempdb usage
  • Unoptimized temp table/TVF usage patterns

First, let's verify the actual space usage pattern:


-- Check tempdb file sizes and space usage
-- (run in the tempdb context; FILEPROPERTY only resolves files of the current database)
USE tempdb;
GO
SELECT 
    name AS [File Name], 
    physical_name AS [Physical Path],
    size/128.0 AS [Total Size in MB],
    size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT)/128.0 AS [Available Space in MB],
    growth AS [Growth Value],
    is_percent_growth
FROM tempdb.sys.database_files;

-- Monitor active transactions in tempdb
SELECT 
    session_id, 
    transaction_id,
    database_transaction_log_bytes_used,
    database_transaction_log_bytes_reserved,
    database_transaction_begin_time,
    CASE database_transaction_type
        WHEN 1 THEN 'Read/write'
        WHEN 2 THEN 'Read-only'
        WHEN 3 THEN 'System'
    END AS [Transaction Type]
FROM sys.dm_tran_database_transactions
WHERE database_id = DB_ID('tempdb');

Immediate Action: Controlled Shrinking

Instead of repeated ad-hoc shrinking (which can fragment tempdb's data files), use a controlled shrink to a sensible target size:


USE tempdb;
GO
CHECKPOINT; -- In SIMPLE recovery, flush dirty pages so the inactive log can be truncated
GO
DBCC SHRINKFILE (templog, 1024); -- Target size in MB (here 1GB)
GO

Configuration Changes


-- Set proper growth parameters for production
ALTER DATABASE tempdb 
MODIFY FILE (NAME = templog, FILEGROWTH = 1024MB, MAXSIZE = 8192MB);

Code-Level Optimizations

For heavy reporting workloads using temporary tables and TVFs:


-- Instead of:
SELECT * INTO #TempTable FROM LargeTable WHERE...

-- Use explicit creation with proper indexing:
CREATE TABLE #OptimizedTemp (
    ID INT PRIMARY KEY,
    DataValue NVARCHAR(100)
)

INSERT INTO #OptimizedTemp (ID, DataValue)
SELECT ID, DataValue FROM LargeTable WHERE...

-- Always clean up explicitly
DROP TABLE #OptimizedTemp

For reporting workloads:

  • Implement snapshot isolation for read-heavy operations (note that the version store it relies on lives in tempdb's data files, so monitor their usage)
  • Consider using table variables (@var) instead of #temp tables for small datasets
  • Implement proper connection pooling to prevent transaction leakage
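Snapshot isolation is enabled per database; the database name below is a placeholder for your reporting database. The table-variable alternative for small result sets is also sketched:

```sql
-- Enable snapshot isolation on the reporting database (name is hypothetical);
-- readers then see a consistent snapshot without holding shared locks
ALTER DATABASE ReportingDB SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Each session opts in per transaction:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

-- Table variable for a small lookup set: scoped to the batch,
-- no explicit DROP required
DECLARE @SmallLookup TABLE (ID INT PRIMARY KEY, DataValue NVARCHAR(100));
```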

Create a SQL Agent job to monitor tempdb growth:


-- Log tempdb size hourly
-- (set the job step's database context to tempdb so FILEPROPERTY resolves)
INSERT INTO DBA_Monitoring.dbo.TempDBGrowthLog
SELECT 
    GETDATE() AS LogTime,
    SUM(size/128.0) AS TotalSizeMB,
    SUM(size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT)/128.0) AS FreeSpaceMB
FROM tempdb.sys.database_files;
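The DBA_Monitoring database and TempDBGrowthLog table referenced by the job are assumed to exist; a minimal one-time setup, with column types chosen to match the query above, might look like:

```sql
-- One-time setup: logging table assumed by the hourly job.
-- DBA_Monitoring is a dedicated admin database; create it first if absent.
CREATE TABLE DBA_Monitoring.dbo.TempDBGrowthLog (
    LogTime      DATETIME      NOT NULL,
    TotalSizeMB  DECIMAL(12,2) NOT NULL,
    FreeSpaceMB  DECIMAL(12,2) NULL   -- NULL if FILEPROPERTY could not resolve
);
```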