To get the desktop session when you bring up a VNC session on AIX, do the following.
Open the ~/.vnc/xstartup file.
Add these lines:
--------X--------- Cut and Paste ---------X--------
#!/bin/sh
/usr/dt/bin/Xsession
--------X-------------- Done --------------X-------
Check that execute permission is set on the xstartup file.
Then execute the vncserver command.
From there on you should be able to bring up the X session on the AIX platform.
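The steps above can be sketched as a short shell sequence. The Xsession path is the one from the snippet; starting vncserver itself is left as a comment, since it needs a VNC install on the host.

```shell
# Create ~/.vnc/xstartup so the VNC session launches the desktop session,
# then give it execute permission -- the two steps described above.
mkdir -p "$HOME/.vnc"
cat > "$HOME/.vnc/xstartup" <<'EOF'
#!/bin/sh
/usr/dt/bin/Xsession
EOF
chmod u+x "$HOME/.vnc/xstartup"
ls -l "$HOME/.vnc/xstartup"
# vncserver    # finally, start the server on the AIX host
```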
Thursday, September 27, 2012
Tuesday, September 25, 2012
ITPA : Short-term historical directory size, KHD_HISTSIZE_EVAL_INTERVAL
On ITM 6.2.3 FP1 / 6.3.0, in a Linux environment...
Symptom:
If the TEPS GUI does not show any data after the ITPA agent is brought up, it could be that the data flow is clogged.
You have checked that the 'itpa' agent is running; the Warehouse Proxy, Summarization and Pruning (S&P), hd, sy and the other components are all running; and there is disk space.
BUT the ITPA GUI for Performance Analyzer is not rendering the data or the statistics.
Look at the log files ITM_FOLDER/logs/*_pa*. In the most recent file, this error shows up in the _pa_*.log file
(not in the kpacma.log file):
(5061DAD9.000A-B:khdxhwst.cpp,96,"CTHistWriteStatus") KHD_HISTSIZE_EVAL_INTERVAL (20) < 60, defaulting to 900
(5061DAD9.000B-B:khdxhwst.cpp,263,"setHistWriteStatus") ATTENTION: Stopped writing short-term historical data to files in directory /opt/IBM/ITM630/lx8266/pa/hist//.
(5061DAD9.000C-B:khdxhwst.cpp,265,"setHistWriteStatus") Total size of historical files 64284KB exceeded the maximum of 2048KB.
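A quick, self-contained way to check for this condition is to grep the pa logs for the stop message. The sketch below uses a temporary directory and a copied log line as a stand-in for the real ITM_FOLDER/logs:

```shell
# Demo: search *_pa_*.log files for the "stopped writing" message.
# The directory and file name here are stand-ins for ITM_FOLDER/logs.
logdir=$(mktemp -d)
printf '%s\n' 'ATTENTION: Stopped writing short-term historical data to files' \
  > "$logdir/lin02_pa_1.log"
hits=$(grep -c 'Stopped writing short-term historical data' "$logdir"/*_pa_*.log)
echo "matching lines: $hits"
rm -rf "$logdir"
```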
This is because of the following lines in ITM_FOLDER/config/pa.ini:
KHD_HISTSIZE_EVAL_INTERVAL=20 <== Incorrect.
The interval cannot be set to 20 seconds.
It has to be 60 or above; otherwise it defaults to 15 minutes (15 * 60 = 900 seconds).
2. (The sample I am showing here is on a Linux server.)
The maximum size of the historical folder (directory size, "du -s") is set to 2 MB; the WPA is expected to move the data to the warehouse DB before that limit is hit:
KHD_TOTAL_HIST_MAXSIZE=2
du -s on the hist folder shows
[root@lin02-tfam config]# du -s
6560 .    <== this is why data collection to the short-term historical folder was stopped.
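The check behind that message can be sketched as follows: compare the hist directory's size from du -sk (in KB) against a KHD_TOTAL_HIST_MAXSIZE-style limit (in MB). The sketch uses a throwaway directory instead of .../pa/hist:

```shell
# Compare a directory's size against a KHD_TOTAL_HIST_MAXSIZE-style limit.
MAX_MB=2                                   # as in pa.ini above
hist=$(mktemp -d)                          # stand-in for the hist folder
dd if=/dev/urandom of="$hist/demo.hist" bs=1024 count=4096 2>/dev/null   # ~4 MB
used_kb=$(du -sk "$hist" | awk '{print $1}')
if [ "$used_kb" -gt $((MAX_MB * 1024)) ]; then
  status="stopped"                         # over the limit: writes stop
else
  status="writing"
fi
echo "${used_kb} KB used, limit $((MAX_MB * 1024)) KB -> $status"
rm -rf "$hist"
```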
Resolution:
http://pic.dhe.ibm.com/infocenter/tivihelp/v15r1/index.jsp?topic=%2Fcom.ibm.itm.doc_6.2.2fp1%2Fitm_historyshortterm_limitgrowth.htm
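After applying the resolution, the relevant pa.ini lines would look something like this (a sketch: 900 seconds is the default evaluation interval the log falls back to, and the 100 MB cap is only an illustrative value; size the limit for your own retention needs):

```
KHD_HISTSIZE_EVAL_INTERVAL=900
KHD_TOTAL_HIST_MAXSIZE=100
```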
3. Tip: to see what the Warehouse Proxy is doing, set this:
In ITPA you can set a trace unit of (unit:khdx all) to see the history-related (KHD) processing; maybe that will give more information.
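Assuming the usual ITM RAS1 trace mechanism, that translates to a line like the following in ITM_FOLDER/config/pa.ini, followed by an agent restart (this is an assumption; verify the exact syntax against your release's documentation):

```
KBB_RAS1=ERROR (UNIT:khdx ALL)
```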
Tuesday, September 11, 2012
Installation of SPSS on ITM for predicting the non-linear trending of computer resources
Note : Views are my own and not those of my employer.
Here I am giving some tips on installing the SPSS component on ITM (IBM Tivoli Monitoring).
ITM supports both linear trending and non-linear trending of data.
For more on the product : refer to
http://www-01.ibm.com/software/analytics/spss/support/spss_license.html
(SPSS = Statistical Package for the Social Sciences)
Below you will see how to install this, along with some graphs of the resources from a beginner's standpoint.
(The SPSS component is an add-on license that has to be bought and installed on top of ITM; it provides the non-linear trending.)
Linear trending, however, does not require any special license and is installed as part of ITM.
SPSS gives the ability to do non-linear predictive analysis, which is more accurate than linear prediction for resources that do not grow linearly.
Licensing:
http://www-01.ibm.com/software/analytics/spss/support/spss_license.html
Product Downloads for evaluation :
http://www-01.ibm.com/software/analytics/spss/downloads/
Installing the SPSS component on a Linux server (RHEL, 64-bit):
Download the SPSS package, unzip it, and start the installer (an X session is required).
Some installation snapshots are put up here to make the installation process easier.
Go to the Manage Tivoli Enterprise Monitoring Services tool (or use itmcmd manage) and configure the ITPA agent.
Go to the configuration page and validate the SPSS path.
Click on Validate; if the default path was set during the installation, it will report a valid response.
Save and restart the agent.
Now, on the TEPS GUI, open the Performance Analyzer configuration window
and drill down to the Linux OS; you will see the new non-linear analytic tasks being configured.
In this example I concentrate on the Linux OS only.
The new tasks end with the NLT suffix, indicating that the non-linear tasks have been created.
Click on a task and check that the 'Run at startup' option is set. (If the Active option is not set, enable it to activate the task.)
Set the task interval to the desired time (in my case I set it to 1 hour, so the task runs every hour).
(Start with one of the NLT tasks and then replicate the settings to the other NLT tasks.)
Click on the Distribution tab, select the managed servers, and move them under Assigned, so that the collections happen for those managed servers.
Next,
(my choice) set the granularity to Hourly
and compute the trend on 'all available data'.
Click 'Apply'.
Now, assuming that the historical collections are configured, check the MOS WOS panel.
This lists the different states the analytical tasks are in.
The possible messages are COMPUTED or FAILED.
COMPUTED means that the resource was computed, though it may still report insufficient data points (more on that below).
To speed things up, we can stop and start the PA agent (since we have configured the NLT tasks to 'Run at startup').
Let's take just Disk Utilization as an example.
Go to the Performance Analyzer Configuration window and click on Disk Utilization; the input data selection window pops up.
Check whether history collection has been set up for this attribute.
(If not, go to History Collection and check that Linux Disk (Superseded) is set to collect history data.)
If the data collection is in progress and the Disk Utilization NLT task has computed, this is the message that should show up when your cursor is on the
Performance Analyzer Warehouse Agent.
If the state is computed, click on the 'Workspace Gallery'.
This pops up a new window. In it, select the
'Disk Utilization Non Linear Trending' button.
This should render the Disk Utilization non-linear report.
This is for the Space Used Percent input attribute.
A sample graph for "Space Used Percent" would look like the one shown,
indicating that the disk forecast based on the past data will be at 10%.
Now, I change the input attribute from Space Used Percent to 'Space Available'.
History Collection Configuration :
Tips to check whether the NLT historical collection is configured.
Go to History Collection, click on the Tivoli Performance Analyzer domain for the OS Agent,
and verify:
a) the Collection Interval (by default the collection is done once a day and also saved in the agent once a day).
Just to speed things up,
set the collection interval to 5 minutes and move the data to the Warehouse DB once every 15 minutes.
Click on the Distribution tab and remember to assign the server under "Start Collection on".
Repeat this for all the resource NLT tasks: CPU, Disk, Memory, etc.
Monday, September 10, 2012
ITPA : Not sufficient data points
Note : This scenario was tested on ITM 6.2.3 FP1; the example here is for "CPU Utilization".
One may use a similar configuration for the other domains that ITPA supports, such as ITCAM.
If this message appears on the TEPS GUI for a certain task, it means that either a) historical collection was not configured, or b) there was insufficient historical data for ITPA to do its computations.
E.g.:
This means that CPU Utilization for the OS agent was not configured. (In this case I am testing it on RHEL 5.8 Linux.)
The message can be either "FAILED - Not Sufficient Data Points"
or "COMPUTED - Not Sufficient Data Points".
The "FAILED" message (as seen above) means that the historical configuration was not done.
The second message is "COMPUTED"; however, it still shows "Not Sufficient Data Points", as seen below. This means there are not enough collections for the Performance Analyzer to do its computations on.
To fix this:
Click on the History Collection button.
Go to Linux and create a new collection for Linux CPU (Superseded).
Right-click on "Linux" and create a new collection (if not done already).
Basic tab:
Collection configuration:
Distribution tab: move your server or the group under "Start Collection on".
Set up the configuration for the Warehouse and the Summarization and Pruning Agent.
Enable Daily, Weekly, Hourly and also Detailed data on the Summarization and Pruning Agent.
Now set up the history collection on the ITPA domain similarly (in this case the OS Agent):
Click on the Tivoli Performance Analyzer Domain for OS Agent.
Check that the managed server/group is configured for collection under the Distribution tab.
Set up the history collection settings.
Now, let the analytical task for this agent run.
After the task runs, the TEPS GUI -> Performance Analyzer -> OS will show a different message.
It did compute; however,
"Not sufficient data points" is still showing, because ITPA needs a certain amount of data before it can analyze and predict a trend for this metric.
Once the data is there, this message should go away.