A production Oracle 19c environment suffered a critical performance degradation that impacted business operations during peak hours. Although the database was on a modern version, Diagnostic Pack was not licensed, so AWR was unavailable. This case study demonstrates how Statspack was implemented and leveraged to identify the root cause and deliver a ~40% performance improvement using fundamental performance analysis techniques.
Environment Overview
| Component | Details |
|---|---|
| Database Version | Oracle 19c |
| Edition | Standard Edition (No AWR) |
| Application | Oracle E-Business Suite |
| Workload Type | OLTP (High concurrent users) |
| Peak Users | ~1200 concurrent sessions |
Problem Statement
During peak business hours, users experienced intermittent slowness with transactions taking ~3x longer than baseline. There was no historical performance repository due to AWR unavailability. Key constraints included:
- Diagnostic Pack not licensed — AWR disabled
- No ADDM recommendations available
- Production-critical issue requiring immediate resolution
Strategy and Approach
Even on Oracle Database 19c, we relied on core DBA principles: enable Statspack, capture snapshots at 30-minute intervals, and compare peak against non-peak workloads as well as snapshots taken before and during the degradation window.
Implementation Steps
Step 1: Install Statspack
Connect as SYSDBA and run the Statspack creation script. This creates the PERFSTAT schema and all required objects:
sqlplus / as sysdba
@?/rdbms/admin/spcreate.sql
You will be prompted for the PERFSTAT password, default tablespace, and temporary tablespace. Choose an appropriate tablespace with sufficient space (minimum 100MB recommended for active environments).
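The default snapshot level is 5, which already captures high-load SQL; level 6 adds execution plans and level 7 adds segment statistics. If you want richer detail captured on every snapshot, the default level can be raised once as PERFSTAT; a one-line sketch:
-- Raise the default snapshot level (level 7 adds plans and segment statistics)
EXECUTE statspack.modify_statspack_parameter(i_snap_level => 7);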
Step 2: Automate Snapshot Collection
Schedule automatic snapshots every 30 minutes using the built-in Statspack job:
-- Connect as PERFSTAT user
sqlplus perfstat/<password>
-- Take a manual snapshot
EXECUTE statspack.snap;
-- Schedule automatic snapshots using spauto.sql (hourly by default;
-- edit the job interval inside the script for 30-minute snapshots)
@?/rdbms/admin/spauto.sql
-- Or schedule directly with DBMS_JOB (deprecated in 19c; see the
-- DBMS_SCHEDULER alternative below)
VARIABLE jobno NUMBER;
BEGIN
  DBMS_JOB.SUBMIT(:jobno,
                  'statspack.snap;',
                  SYSDATE,
                  'SYSDATE + 30/1440');  -- 30/1440 of a day = every 30 minutes
  COMMIT;
END;
/
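Since DBMS_JOB is deprecated in 19c (submissions are converted to Scheduler jobs under the covers), DBMS_SCHEDULER is the more idiomatic choice. A minimal sketch; the job name is an arbitrary choice, not a Statspack-provided object:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'STATSPACK_SNAP_JOB',          -- arbitrary name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN statspack.snap; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=30',   -- every 30 minutes
    enabled         => TRUE);
END;
/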
Step 3: Verify Snapshots
-- Check available snapshots
SELECT snap_id, snap_time, snap_level
FROM stats$snapshot
ORDER BY snap_time DESC;
-- Count total snapshots
SELECT COUNT(*) FROM stats$snapshot;
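It is also worth confirming that the snapshots actually arrive at the intended cadence; a small sketch using LAG (assumes a single instance):
-- Minutes elapsed between consecutive snapshots
SELECT snap_id,
       snap_time,
       ROUND((snap_time - LAG(snap_time) OVER (ORDER BY snap_time)) * 1440)
         AS mins_since_prev
FROM   stats$snapshot
ORDER  BY snap_time;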
Step 4: Generate Statspack Report
Generate a report between two snapshot IDs covering the peak period:
sqlplus perfstat/<password>
@?/rdbms/admin/spreport.sql
-- Enter begin and end snapshot IDs when prompted
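To run the report non-interactively (useful when scripting repeated comparisons), spreport.sql honors predefined substitution variables; the snapshot IDs below are illustrative:
-- Pick the IDs from STATS$SNAPSHOT
DEFINE begin_snap=120
DEFINE end_snap=124
DEFINE report_name=sp_peak_window.lst
@?/rdbms/admin/spreport.sql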
Step 5: Query Top Wait Events Directly
-- Values in STATS$SYSTEM_EVENT are cumulative since instance startup,
-- so compute the delta between the begin and end snapshots
SELECT e.event,
       e.total_waits - b.total_waits AS waits,
       (e.time_waited_micro - b.time_waited_micro)/1000000 AS time_waited_secs,
       (e.time_waited_micro - b.time_waited_micro)/1000/
         NULLIF(e.total_waits - b.total_waits, 0) AS avg_wait_ms
FROM   stats$system_event b,
       stats$system_event e
WHERE  b.snap_id = &begin_snap
AND    e.snap_id = &end_snap
AND    b.event = e.event
AND    e.event NOT LIKE 'SQL*Net%'
ORDER  BY time_waited_secs DESC
FETCH FIRST 10 ROWS ONLY;
Analysis Findings
Key finding: log file sync dominated database time — a clear indication of commit-related contention. No high-cost SQL was identified, ruling out query inefficiency as the primary cause. The load profile showed elevated user commits per second and an increased redo generation rate.
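The load-profile numbers can be verified directly against STATS$SYSSTAT (also cumulative counters, hence the deltas); a minimal sketch reusing the same substitution variables:
SELECT e.name,
       e.value - b.value AS delta
FROM   stats$sysstat b,
       stats$sysstat e
WHERE  b.snap_id = &begin_snap
AND    e.snap_id = &end_snap
AND    b.statistic# = e.statistic#
AND    e.name IN ('user commits', 'redo size');
Dividing each delta by the elapsed seconds between the two snapshots gives the per-second rates.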
Root Cause
Frequent commits inside application loops (row-by-row processing) caused high log file sync waits, redo log contention, increased I/O latency, and reduced transaction throughput.
Resolution
The application team introduced commit batching to significantly reduce commit frequency, modifying the loop logic to commit every N rows rather than after every single row.
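The pattern, in simplified form (the table name, loop body, and batch size are illustrative, not the actual application code):
DECLARE
  c_batch_size CONSTANT PLS_INTEGER := 500;          -- illustrative batch size
  l_rows       PLS_INTEGER := 0;
BEGIN
  FOR rec IN (SELECT id FROM staging_orders) LOOP    -- hypothetical driving query
    -- ... per-row processing elided ...
    l_rows := l_rows + 1;
    IF MOD(l_rows, c_batch_size) = 0 THEN
      COMMIT;                                        -- one commit per batch, not per row
    END IF;
  END LOOP;
  COMMIT;                                            -- pick up the final partial batch
END;
/
Minor redo log tuning was also performed, starting with these checks: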
-- Check redo log switch frequency (should not exceed 4-5 per hour ideally)
SELECT TO_CHAR(first_time,'YYYY-MM-DD HH24') hour,
COUNT(*) switches
FROM v$log_history
WHERE first_time > SYSDATE - 1
GROUP BY TO_CHAR(first_time,'YYYY-MM-DD HH24')
ORDER BY 1;
-- Check redo log sizes
SELECT group#, members, bytes/1024/1024 size_mb, status
FROM v$log
ORDER BY group#;
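Where the switch-frequency check shows sustained heavy switching, the standard remedy is larger log groups; an illustrative sequence (group numbers and the 2G size are placeholders, and OMF is assumed so no file paths are given):
-- Add larger groups, then retire the old ones
ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 2G;
ALTER DATABASE ADD LOGFILE GROUP 5 SIZE 2G;
-- Cycle the logs until the old groups show INACTIVE in V$LOG, then drop them
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;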
Results
| Metric | Before | After |
|---|---|---|
| Transaction Time | High (3x baseline) | Down ~40% |
| log file sync waits | Very High | Minimal |
| User Experience | Poor | Stable |
Purging Old Statspack Data
Statspack data grows quickly in busy environments. Purge old snapshots regularly:
-- Purge a range of snapshots (EXECUTE cannot span lines, so use a block)
BEGIN
  statspack.purge(i_begin_snap => &oldest_snap_id,
                  i_end_snap   => &newest_snap_id,
                  i_snap_range => TRUE);
END;
/
-- Or use the provided script
@?/rdbms/admin/sppurge.sql
Key Takeaways
- Modern DB does not mean automatic performance — even in Oracle 19c, performance issues require structured analysis
- Wait events reveal the truth — hit ratios appeared normal, but wait events exposed the actual bottleneck
- Application design matters — commit strategy directly impacts performance; database tuning alone cannot fix design inefficiencies
- Statspack still matters — highly effective in modern 19c environments, especially without AWR licensing
- Always capture snapshots during peak workload with intervals of 30 minutes or less
Why Not AWR?
| Reason | Explanation |
|---|---|
| Licensing | Diagnostic Pack not licensed (and unavailable on Standard Edition) |
| Cost Constraints | Avoid additional licensing overhead |
| Practicality | Statspack provided sufficient insight for this issue |
This case reflects real-world experience in Oracle E-Business Suite environments, high-concurrency OLTP systems, and performance tuning under licensing constraints. Have questions? Reach out at sdanwarahmed@gmail.com.