Collect.sh and utlstat.sh provide essentially the same information as the utlbstat.sql and utlestat.sql scripts. The bstat/estat scripts generate a "large granularity" report, that is, one usually taken over a long period of time, say between 5 minutes and 12 hours. Often we would like to know what the highs and lows were between the start and stop times. One solution would be to run bstat/estat numerous times over the period. This would be very costly in database resources, since bstat/estat does a lot of object creation and deletion as well as calculations. Also, if one wanted a summary report.txt covering all the short-period report.txt files generated over the full period, producing that report would itself take a lot of calculation.
The scripts collect.sh and utlstat.sh are one solution to this problem.
The script collect.sh takes all the data used by bstat/estat and spools it into files. This operation is very efficient since it performs no calculations and creates no objects in the database.
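The idea behind collect.sh can be sketched as a simple snapshot loop. The following is a rough illustration only, not the real script: the MON_LOG variable matches the usage below, but the *.end flag-file mechanics, the v_sysstat.sql query name, and the sampling details are assumptions, and `date` stands in for the sqlplus call.

```shell
#!/bin/sh
# A minimal sketch of a collect.sh-style snapshot loop -- an illustration,
# not the actual script.  Assumptions: MON_LOG is the output directory, a
# *.end flag file keeps the loop alive, and v_sysstat.sql is a spooling
# query; here `date` stands in for sqlplus, and only 3 short samples are
# taken instead of 12 hours of periodic ones.

MON_LOG=${MON_LOG:-`pwd`/log}
mkdir -p "$MON_LOG"
touch "$MON_LOG/$$.end"            # flag file; removing it stops the loop

i=0
while [ -f "$MON_LOG/$$.end" ] && [ $i -lt 3 ]; do
    ts=`date +%H%M%S`_$i           # timestamp (plus counter) names the sample
    # The real collector would spool raw V$ rows straight to a file, e.g.:
    #   sqlplus -s "/ as sysdba" @v_sysstat.sql > "$MON_LOG/sysstat_$ts.dat"
    date > "$MON_LOG/sysstat_$ts.dat"   # stand-in for the spooled snapshot
    i=`expr $i + 1`
    sleep 1                        # the real interval would be much longer
done
rm -f "$MON_LOG/$$.end"
```

Because each sample is a plain spool of raw counters, the database does no delta arithmetic and creates no temporary objects; all of that work is deferred to utlstat.sh at report time.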
The script utlstat.sh reads the files generated by collect.sh and generates a report.txt file for any period within the collection period.
To use:

1) Choose a directory for the output ( ./log is the default ):

   $ export MON_LOG=`pwd`/log

2) Run collect.sh to collect performance data:

   $ collect.sh &

3) Run utlstat.sh to report on the collected performance data. Examples:

   statistics    $ utlstat.sh systat
   wait events   $ utlstat.sh sevt
   file io       $ utlstat.sh fstat
                 $ utlstat.sh

   You can also specify the interval of time:

   $ utlstat.sh all 0_60          # first 60 seconds
   $ utlstat.sh all end           # last sample period
   $ utlstat.sh all -600_end      # last 10 minutes
   $ utlstat.sh all 0_600         # first 10 minutes
   $ utlstat.sh all 10:15_10:20   # 10:15 to 10:20
   $ utlstat.sh all 10:15_+300    # 10:15 to 10:20

4) To stop the collection, remove the *.end file in the log directory:

   $ collect.sh end

5) Otherwise the collection stops automatically after 12 hours.
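The interval arguments above (0_60, -600_end, 10:15_+300, ...) follow a start_end pattern. Such a spec could be split and interpreted along these lines; the parse_period helper below is hypothetical, not utlstat.sh's actual parser:

```shell
#!/bin/sh
# Sketch only: split a period spec such as "0_600", "-600_end", or
# "10:15_+300" into its start and end fields.  parse_period is a
# hypothetical helper, not part of the real utlstat.sh.

parse_period() {
    spec=$1
    case $spec in
        end) echo "last sample period"; return ;;   # bare "end" shortcut
    esac
    start=`printf '%s' "$spec" | cut -d_ -f1`
    end=`printf '%s' "$spec" | cut -d_ -f2`
    case $end in
        +*)  echo "from $start for ${end#+} seconds" ;;  # "+N" = N s after start
        end) echo "from $start to end of collection" ;;
        *)   echo "from $start to $end" ;;
    esac
}

parse_period 0_600        # -> from 0 to 600
parse_period -600_end     # -> from -600 to end of collection
parse_period 10:15_+300   # -> from 10:15 for 300 seconds
```

Negative offsets count back from the end of the collection, which is how "-600_end" selects the last 10 minutes in the examples above.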