Tuesday, September 17, 2013

Selecting a range of data for the confidence calculation in Tivoli Performance Analyzer

The confidence that the Performance Analyzer reports for a task's attributes is based on the data it reads for that domain from the Tivoli Data Warehouse.

This confidence is listed in the TEP GUI whenever the user opts to view the analysis of an attribute computed by the Performance Analyzer.
For example, say the user wants to view the CPU performance of a certain configured agent (assuming historical collection has been set up): the data shows up as graphs and charts along with forecast and status details.

The user would like to see the confidence in the data as well. This is calculated with a least-squares regression algorithm (which I will try not to get into here).

The confidence is high when there are no breaks in the data, but low when there are disruptions in the data, and so on.
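PA's exact internal formula is not shown here, but a goodness-of-fit measure such as R² from a linear least-squares fit illustrates the idea: continuous, well-behaved data fits the regression line closely (R² near 1), while gaps and disruptions pull it down. A minimal sketch, where `linear_r2` is a hypothetical helper and not a PA API:

```python
def linear_r2(xs, ys):
    """Fit y = slope*x + intercept by least squares and return R^2.

    R^2 close to 1 means the data follows the trend line closely;
    noisy or disrupted data yields a lower value.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

print(linear_r2([0, 1, 2, 3], [1, 3, 5, 7]))  # perfect line -> 1.0
```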

Here I will walk through how to change this calculation in the Tivoli Performance Analyzer configuration window, specifically how to select a set of data for the calculation.
Open the PA configuration window; I have picked CPU Utilization for Linux as my demo.

Click on "Add Constraint" in the PA configuration window's Input tab.




Click on "Add Constraint" and in this example I am tweaking the CPU Utilization ( Linux ) and you can change or choose and pick the attributes based on your business needs. I chose the  "Linux CPU Averages" -> and  "Time Stamp" and chose the range I wanted to start computing the data collection.


I suggest a couple of tries; some trial and error is needed to get the confidence level matching your business needs.

The first option is to add just one constraint; once you are familiar with it, you can add multiple constraints and set a date range.

Here is a sample that I tried (you can experiment based on this and then use a scheme that fits):
- Set the Granularity to "Hourly".
- Click on Add Constraint -> choose "Linux CPU Averages" and "Time Stamp".
- Click on the field just below the Time Stamp and choose the date (in my case: get all data from Sep 16th 12:00).
- Click on Apply. When the task runs, it should now pick data from Sep 16th onwards.
- Let the task run.


Some more tips to see the results faster (these are optional, just suggestions):
1. Set the task to run more often (I tried running it every minute) just to see the reaction.





2. Turn the logging up from Info to Debug, so that you can see the logs when the task runs.
If you do this, remember to stop and start the PA agent afterwards (itmcmd agent stop pa, then itmcmd agent start pa). The relevant agent settings looked like this:
AgentIdUniqueId__file=local.cfg
ConsolePassword__file=local.cfg
DisableVACM=true
EngineBoots__file=local.cfg
LogCount=3
LogFile=/opt/IBM/ITM/logs/kpacma.log
LogLevel=Debug  <==============
LogSize=10000000
NotificationQueueLimit=50
UdpPort=-1


3. View the trace file, kpacma.log.
This is what shows up when the task runs. Here I set two constraints from the front end to watch for everything from 7:00 PM to 8:00 PM on Sep 16th, 2013.
The timestamp '1130916190000000'
= 1-13-09-16-19-00-00
= ignoring the first digit, 2013 Sep 16, 7:00 PM.
Likewise, '1130916200000000'
= 1-13-09-16-20-00-00, essentially 8 PM.

(There will be just one timestamp if only one constraint is configured.)
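The decoding above can be automated. The sketch below assumes the 16-digit "candle" timestamp layout implied by the breakdown above: a leading century digit (1 = 20xx), then two digits each for year, month, day, hour, minute, second, and three for milliseconds; `decode_candle_timestamp` is a hypothetical helper, not an ITM API:

```python
from datetime import datetime

def decode_candle_timestamp(ts: str) -> datetime:
    """Decode a 16-digit ITM 'candle' timestamp such as '1130916190000000'.

    Layout assumed: C YY MM DD HH MM SS mmm, where the leading digit C
    is a century marker (1 = 2000s).
    """
    century = 2000 if ts[0] == "1" else 1900
    return datetime(century + int(ts[1:3]),      # year
                    int(ts[3:5]), int(ts[5:7]),  # month, day
                    int(ts[7:9]), int(ts[9:11]), # hour, minute
                    int(ts[11:13]),              # second
                    int(ts[13:16]) * 1000)       # milliseconds -> microseconds

print(decode_candle_timestamp("1130916190000000"))  # 2013-09-16 19:00:00
```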



Another example: 2013 Sep 16, 12:00 AM to 7 PM.

2013-09-17 03:12:09: SELECT  "Linux_CPU_Averages_HV"."AVG_CPU_Usage_Moving_Average" , "Linux_CPU_Averages_HV"."Timestamp" , "Linux_CPU_Averages_HV"."TMZDIFF" , "Linux_CPU_Averages_HV"."Timestamp" , "Linux_CPU_Averages_HV"."TMZDIFF" , "Linux_CPU_Averages_HV"."System_Name" , "Linux_CPU_Averages_HV"."System_Name" , "Linux_CPU_Averages_HV".WRITETIME , "Linux_CPU_Averages_HV"."TMZDIFF"  FROM itmuser."Linux_CPU_Averages_HV" "Linux_CPU_Averages_HV" WHERE "Linux_CPU_Averages_HV"."System_Name" IN ( 'nc9118041057:LZ') AND ("Linux_CPU_Averages_HV"."Timestamp" > '1130916000000000')  AND ("Linux_CPU_Averages_HV"."Timestamp" < '1130916190000000')  ORDER BY  "Linux_CPU_Averages_HV"."System_Name"  , "Linux_CPU_Averages_HV".WRITETIME
2013-09-17 03:12:09: All 19 measurement(s) processed and found 1 result(s)






Things to watch:
- Make sure you hit the Apply button after making changes to the constraints so that the rule gets applied.
- The "Forecast Overlay" panel in the GUI does not change the graphs to the "From date" and "End date"; only the calculated confidence, strength and number of records change.
- Check that the task interval is set to something reasonable so that you can watch the effects of the changes (I tried every minute; once I had it running, I changed the run interval back to the original setting).
- To check whether the data is being pulled, enable the logging, stop and start the agent, and trace kpacma.log. The logs can get big and hard to watch if you have a lot of tasks running, so try grepping for the appropriate task and server.
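That last filtering step can be done with grep or with a few lines of script. A small sketch of the idea; the log lines and the `filter_log` helper are illustrative, and the real trace would live at a path like /opt/IBM/ITM/logs/kpacma.log:

```python
import os
import tempfile

def filter_log(path, needle):
    """Return only the trace lines mentioning the given task/agent string."""
    with open(path) as f:
        return [line.rstrip("\n") for line in f if needle in line]

# Demo against a tiny sample file standing in for kpacma.log:
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as tmp:
    tmp.write("2013-09-17 03:12:09: SELECT ... IN ( 'nc9118041057:LZ')\n"
              "2013-09-17 03:12:10: SELECT ... IN ( 'otherhost:LZ')\n")

for line in filter_log(tmp.name, "nc9118041057:LZ"):
    print(line)
os.remove(tmp.name)
```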

Hope this small tutorial helped.
