WebSphere And Tivoli Tricks: April 2011

Saturday, April 23, 2011

How to delete a profile from a WebSphere 6.1 application server

Step 1.

Location of tool:

\AppServer\bin\manageprofiles.bat

syntax:

manageprofiles.bat -delete -profileName <profile_name> (commands are case sensitive)

Note: This used to be wasprofile.bat in version 6.0.x and is deprecated in 6.1.

The Windows service for the profile will now be set to disabled.

Step 2.

Delete the profile folder, e.g. \ApplicationServer\profiles\<profile_name>

Step 3.

manageprofiles.bat -validateAndUpdateRegistry

This process validates the profile registry and lists the non-valid profiles that it purges.

Step 4.

Remove the Windows service that is set to run the profile.

WASService.exe -remove service_name

e.g. If the service name is "IBM WebSphere Application Server V6.1 - Node"

Then the command will be:

WASService.exe -remove "Node"
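Putting the four steps together, a minimal end-to-end sketch might look like the following. The profile name AppSrv01, the install path and the service name are illustrative only, not taken from the steps above:

cd C:\IBM\WebSphere\AppServer\bin
manageprofiles.bat -delete -profileName AppSrv01
rmdir /S /Q C:\IBM\WebSphere\AppServer\profiles\AppSrv01
manageprofiles.bat -validateAndUpdateRegistry
WASService.exe -remove "AppSrv01Node"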

Friday, April 22, 2011

Performance tuning WebSphere Application Server 7 on AIX 6.1

This week I've been working with AIX 6.1.0.0 and WAS 7.0.7 Network Deployment. The cluster topology is as follows:

  • 1 Power6 blade running the Deployment Manager and a managed Web Server (IHS 7.0.7 + Plugin)
  • 4 LPARs with 8 x 3.5 GHz and 16 GB RAM (on a Power 750) as nodes, with 1 Application Server instance per node
Before I get into the detail, I'd like to point out that I hadn't used AIX for a while, and found this developerWorks article really useful to remind me of all the commands that I'd forgotten.

The aim of the game this week has been throughput; the test involves a 25-step HTTP interaction delivered by JMeter with no pauses and very little ramp-up. It's not a "real world" test at all, and the goal was to identify the optimum
throughput point for the application beyond which latency becomes unacceptable.

Before we started tuning, nmon was showing the CPUs as busy, but with a higher than expected proportion of system time and noticeable context switching, suggesting that threads were spinning waiting for resources. I must stress that the tuning below is not a replacement for performance analysis of your application to understand where programmatic improvements can be introduced to minimise locking (see page 16 of this for some app dev guidance). Further, as with any performance tuning, applying it may deliver a throughput enhancement, but equally it may move the point of contention elsewhere in the stack. In short, I'm not claiming to have a one-size-fits-all set of magic settings that will act as the silver bullet for all your performance problems, but I wanted to share what's worked for me this week along with why I used the settings that I did. I'll also collate the information that is documented in a variety of places into a single resource for you and cover:

  • AIX environment variables
  • IBM HTTP Server configuration
  • WebSphere web server plug-in configuration
AIX environment variables
The WAS Info Center has plenty of information on AIX OS setup, but not much that seems to be pertinent to our issue. There is good information however, in both the AIX 6.1 Info Center and chapter 4.5 of this Redbook - you should bookmark both. The AIX Info Center has a set of recommended environment variables, and the Redbook builds on that with network settings which I confess I didn't use this week as we've not been network bound. Below is the list of environment variables from the AIX Info Center plus a few extras from some expert colleagues:

  • AIXTHREAD_COND_DEBUG = OFF
  • AIXTHREAD_RWLOCK_DEBUG = OFF
  • AIXTHREAD_MUTEX_DEBUG = OFF
  • AIXTHREAD_MUTEX_FAST = ON
  • AIXTHREAD_SCOPE = S
  • SPINLOOPTIME = 500
  • YIELDLOOPTIME = 32
Chapter 4.7 of the Redbook details creating a shared rc.was file so that you can apply these settings to all of your servers, but I simply applied them in Servers -> Server Types -> Application Servers -> <server name> -> Java and Process Management -> Process definition -> Environment Entries as shown below:

[Image: environment entries in the WebSphere administrative console]
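For reference, here is a minimal sketch of what a shared file might look like if you go the rc.was route; the file name and contents are illustrative, and chapter 4.7 of the Redbook covers the full approach:

#!/bin/sh
# rc.was - common AIX tuning environment for WebSphere JVMs
export AIXTHREAD_COND_DEBUG=OFF
export AIXTHREAD_RWLOCK_DEBUG=OFF
export AIXTHREAD_MUTEX_DEBUG=OFF
export AIXTHREAD_MUTEX_FAST=ON
export AIXTHREAD_SCOPE=S
export SPINLOOPTIME=500
export YIELDLOOPTIME=32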
IBM HTTP Server configuration
After reading a number of Technotes and support articles, I finally decided that this Technote was the most useful resource for improving the out-of-the-box configuration of the multi-processing module in IBM HTTP Server. As each IHS server process has its own copy of the WebSphere plugin, under the relentless load we were generating this seemed like a potential area of weakness whereby different plugin instances could have different views of the availability of the Application Servers, so we settled on the configuration used in example 1 of the Technote, which has a single server process:

httpd.conf
ThreadLimit 2000
ServerLimit 1
StartServers 1
MaxClients 2000
MinSpareThreads 2000
MaxSpareThreads 2000
ThreadsPerChild  2000
MaxRequestsPerChild  0

If you're running on a UNIX-based platform, you may need to have ulimit -s 512 in the session which starts IHS.
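For example, a tiny wrapper that sets the stack ulimit before starting IHS might look like this; the install path is an assumption, so adjust it for your environment:

#!/bin/sh
# start-ihs.sh - set the stack size limit, then start IBM HTTP Server
ulimit -s 512
/opt/IBM/HTTPServer/bin/apachectl start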

Note that using a single server in this way means that if that process exits unexpectedly you potentially lose 2000 in-flight connections.

Web server plug-in configuration
Combining the wisdom from these two Technotes (Understanding plugin load balancing and Recommended configuration values), I ended up making the following changes to the default plugin configuration:

  • Set all but one of your servers' LoadBalanceWeight to 20 and the remaining server to 19. This prevents the plugin from reducing the weights by finding a common denominator and results in the weights getting reset less frequently.
  • If you're using session affinity (who isn't?) then ensure that you set IgnoreAffinityRequests=false on your ServerCluster entry. This works around a known limitation of the plugin which can result in skewed weighting when using round robin distribution and session affinity.
Prior to making these changes we were seeing servers marked down by the plugin, which consequently resulted in the other servers bearing more load. These changes prevented servers from being marked down and gave a smoother workload distribution across the cluster.

plugin-cfg.xml
<ServerCluster CloneSeparatorChange="false" GetDWLMTable="true" IgnoreAffinityRequests="false" LoadBalance="Round Robin" Name="myCluster" PostBufferSize="64" PostSizeLimit="-1" RemoveSpecialHeaders="true" RetryInterval="60">
<Server CloneID="14uddvvpt" ConnectTimeout="5" ExtendedHandshake="false" LoadBalanceWeight="20" MaxConnections="-1" Name="node01Server01" ServerIOTimeout="60" WaitForContinue="false">
...
<Server CloneID="14udeeloa" ConnectTimeout="5" ExtendedHandshake="false" LoadBalanceWeight="19" MaxConnections="-1" Name="node01Server02" ServerIOTimeout="60" WaitForContinue="false">

Basic Unix start-up scripts to start WebSphere Deployment Manager & Nodes.

In this example the scripts live in the folder scripts/admin under the WebSphere bin directory (i.e. <was_root>/bin/scripts/admin).
websphere.sh
#!/bin/sh
#
# WebSphere This shell script takes care of starting and stopping
# the WebSphere services during server reboot.
#

# See how we were called.
case "$1" in
start)
# Start websphere.
su - wasadm -c /<was_root>/bin/scripts/admin/startup.sh
;;
stop)
# Stop websphere.
su - wasadm -c /<was_root>/bin/scripts/admin/shutdown.sh
;;
restart)
$0 stop
$0 start
;;
*)
echo $"Usage: websphere.sh {start|stop}"
exit 1
esac
exit 0
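On many System V-style UNIX or Linux systems you would then install the script along these lines; the paths and run levels vary by platform, so treat this as a sketch:

cp websphere.sh /etc/init.d/websphere
chmod 755 /etc/init.d/websphere
# start at boot / stop at shutdown - adjust the run levels for your OS
ln -s /etc/init.d/websphere /etc/rc2.d/S99websphere
ln -s /etc/init.d/websphere /etc/rc0.d/K01websphere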

----------------------------------------------------------------------
startup.sh
#!/bin/sh
/<was_root>/profiles/dmgr/bin/startManager.sh -nowait
/<was_root>/profiles/<node_nn>/bin/startNode.sh -nowait
----------------------------------------------------------------------
shutdown.sh
#!/bin/sh
/<was_root>/profiles/<node_nn>/bin/stopServer.sh clustera_clone01
/<was_root>/profiles/<node_nn>/bin/stopServer.sh clustera_clone02
/<was_root>/profiles/<node_nn>/bin/stopServer.sh clustera_clone03
/<was_root>/profiles/<node_nn>/bin/stopServer.sh clustera_clone04
/<was_root>/profiles/<node_nn>/bin/stopServer.sh clusterb_clone01
/<was_root>/profiles/<node_nn>/bin/stopServer.sh clusterb_clone02
/<was_root>/profiles/<node_nn>bin/stopNode.sh
/<was_root>/profiles/dmgr/bin/stopManager.sh
---------------------------------------------------------------------

Stop script for multiple WebSphere cell instances (Deployment Managers)

#!/usr/bin/ksh

export Dmgr1=<your Deployment Manager profile location>
export Dmgr2=<your Deployment Manager profile location>

for script in $Dmgr1/bin/stopManager.sh $Dmgr2/bin/stopManager.sh
do
  if [[ -f $script ]]
  then
    echo "Running $script"
    $script
  fi
done


Stop all WebSphere node agents with shell script


#!/usr/bin/ksh

export Profile1=<your node 01 profile location>
export Profile2=<your node 02 profile location>

for script in $Profile1/bin/stopNode.sh $Profile2/bin/stopNode.sh
do
  if [[ -f $script ]]
  then
    echo "Running $script"
    $script
  fi
done


Stop all Application Servers with shell script (Single profile - change for multiple profiles)


#!/usr/bin/ksh


export Profile1=<your node 01 profile location>


echo "Stopping all Application Servers in $Profile1"


for server in `$Profile1/bin/serverStatus.sh -all | grep "Server name" | awk '{print $4}' | grep -v nodeagent`
do
out "Stopping $server"
$Profile1/bin/stopServer.sh $server >/dev/null &
done


echo "All servers are being stopped"

Thursday, April 21, 2011

Self-signed certificates using OpenSSL

The OpenSSL toolkit is used to generate an RSA private key and a CSR (Certificate Signing Request). It can also be used to generate self-signed certificates, which can be used for testing purposes or internal usage.
Step 1: Generate a Private Key
————————
The first step is to create your RSA Private Key. This key is a 1024 bit RSA key which is encrypted using Triple-DES and stored in a PEM format so that it is readable as ASCII text.
openssl genrsa -des3 -out websphere-tivoli.blogspot.com.key 1024
Step 2: Generate a CSR (Certificate Signing Request)
—————————————–
Once the private key is generated, a Certificate Signing Request can be generated. The CSR is then used in one of two ways: 1. the CSR is sent to a Certificate Authority, such as Thawte or VeriSign, who will verify the identity of the requestor and issue a signed certificate; or 2. the CSR is self-signed.
During the generation of the CSR, you will be prompted for several pieces of information. These are the X.509 attributes of the certificate. One of the prompts will be for “Common Name (e.g., www.websphere-tivoli.blogspot.com)”. It is important that this field be filled in with the fully qualified domain name of the server to be protected by SSL. If the website to be protected will be https://www.websphere-tivoli.blogspot.com, then enter www.websphere-tivoli.blogspot.com at this prompt. If you want to create a so-called “wildcard” certificate, which means the same certificate can be used on an unlimited number of subdomains, just enter an asterisk as the hostname; in our example that would be *.blogspot.com. The command to generate the CSR is as follows:
openssl req -new -key websphere-tivoli.blogspot.com.key -out websphere-tivoli.blogspot.com.csr
Step 3: Remove Passphrase from Key
—————————–
One unfortunate side-effect of the pass-phrased private key is that Apache will ask for the pass-phrase each time the web server is started. Obviously this is not necessarily convenient as someone will not always be around to type in the pass-phrase, such as after a reboot or crash. mod_ssl includes the ability to use an external program in place of the built-in pass-phrase dialog, however, this is not necessarily the most secure option either. It is possible to remove the Triple-DES encryption from the key, thereby no longer needing to type in a pass-phrase. If the private key is no longer encrypted, it is critical that this file only be readable by the root user! If your system is ever compromised and a third party obtains your unencrypted private key, the corresponding certificate will need to be revoked. With that being said, use the following command to remove the pass-phrase from the key:
cp websphere-tivoli.blogspot.com.key websphere-tivoli.blogspot.com.key.temp
openssl rsa -in websphere-tivoli.blogspot.com.key.temp -out websphere-tivoli.blogspot.com.key
The newly created websphere-tivoli.blogspot.com.key file no longer has a passphrase in it.
-rw-r--r-- 1 root root 745 Apr 20 22:19 websphere-tivoli.blogspot.com.csr
-rw-r--r-- 1 root root 891 Apr 20 23:22 websphere-tivoli.blogspot.com.key
-rw-r--r-- 1 root root 963 Apr 20 23:22 websphere-tivoli.blogspot.com.key.temp
Step 4: Generating a Self-Signed Certificate
———————————-
At this point you will need to generate a self-signed certificate because you either don’t plan on having your certificate signed by a CA, or you wish to test your new SSL implementation while the CA is signing your certificate. This temporary certificate will generate an error in the client browser to the effect that the signing certificate authority is unknown and not trusted.
To generate a temporary certificate which is good for 365 days, issue the following command:
openssl x509 -req -days 365 -in websphere-tivoli.blogspot.com.csr -signkey websphere-tivoli.blogspot.com.key -out websphere-tivoli.blogspot.com.crt
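If you want to sanity-check the certificate you just generated (an optional verification step, not part of the original procedure), you can dump its contents:

openssl x509 -noout -text -in websphere-tivoli.blogspot.com.crt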
Step 5: Installing the Private Key and Certificate
————————————–
I generally create an SSL directory under Apache and move these certs there.
cp websphere-tivoli.blogspot.com.crt /usr/local/apache/conf/SSL/
cp websphere-tivoli.blogspot.com.key /usr/local/apache/conf/SSL/
Step 6: Configuring SSL Enabled Virtual Hosts
————————————
<VirtualHost www.websphere-tivoli.blogspot.com:443>
SSLEngine on
SSLCertificateFile /usr/local/apache/conf/SSL/websphere-tivoli.blogspot.com.crt
SSLCertificateKeyFile /usr/local/apache/conf/SSL/websphere-tivoli.blogspot.com.key
SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
</VirtualHost>
If you want to redirect connections from the standard, unencrypted port 80 to the SSL-enabled site, simply use the following lines:
<VirtualHost www.websphere-tivoli.blogspot.com:80>
RedirectPermanent / https://www.websphere-tivoli.blogspot.com
</VirtualHost>
Step 7: Restart Apache and Test
/etc/init.d/apache2 restart
or
/usr/local/apache/bin/apachectl stop
/usr/local/apache/bin/apachectl start
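Before restarting, it is also worth asking Apache to validate the configuration; this is an optional check, using the same apachectl path as above:

/usr/local/apache/bin/apachectl configtest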

Certificate management instructions using OpenSSL for WebSphere MQ v5.3.1 on the HP NonStop Server

How do I use the HP OpenSSL utility to manage SSL certificates for my Queue Manager?

Below are instructions for processing a certificate signed by a Certificate Authority using the HP NSS OpenSSL utility for a WebSphere MQ v5.3.1 queue manager. The example commands will need to be altered with the file names you have created. However, sample scripts have been provided to assist you and can be found in the /opt_installation_path/opt/mqm/samp/ssl directory.
1. Create a private key.

openssl genrsa -des3 -out "server_key.pem" 1024

2. Generate a certificate request.

openssl req -new -days 365 -key "server_key.pem" -out "server_request.pem"

3. Once the request is generated send the certificate to a Certificate Authority (such as VeriSign, Global Sign, etc.) for signature.

4. When the CA provides a signed certificate, use the cat command to add the signed request to the private key.

cat server_signed_request.pem server_key.pem > cert.pem

Files:
- The server_signed_request.pem file is the name of the signed certificate request.
- The server_key.pem is the file that contains the private key.

Note:
The procedure for step 4 can be found in the create_ALICE_cert.sh script provided with the WebSphere MQ 5.3.x product.

5. Add the Signer certificate to the trust certificate(s) file.

cat rootcert.pem > trust.pem

Notes:
If a trust.pem file is already present, remove or rename the file prior to issuing the "cat" command.

If the certificate request is signed by an intermediate certificate, the certificate chain for the signed personal certificate will need to be added to the trust.pem file. You need to add both the root certificate and the intermediate certificate to the trust.pem file. Review the create_trust_file.sh script for the syntax.
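For example, assuming your intermediate CA certificate is in a file called intermediate.pem (the file name here is illustrative), the chain could be built like this:

cat intermediate.pem rootcert.pem > trust.pem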

6. Create a stashed password for the personal certificate file that contains the private key.

a. Export the personal certificate into a PKCS #12 format

openssl pkcs12 -export -in cert.pem -inkey server_key.pem -out personal_cert.p12 -passin pass:certkey -password pass:certkey -chain -CAfile trust.pem

b. Rename the resulting stash file to a name that describes its function.

mv Stash.sth QMName_Stash.sth

Instructions for creating a stashed password file are included in the WebSphere MQ v5.3 System Administration manual, and the exportcerts.sh script includes an example.
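To confirm the PKCS #12 file was built correctly, you can list its contents with OpenSSL; this is an optional check using the same password as above:

openssl pkcs12 -info -in personal_cert.p12 -passin pass:certkey -noout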

7. Make sure the trust certificate file, the personal certificate file, and the stashed password file are in the queue manager's ssl directory.

$MQNSKVARPATH/qmgrs/<Queue Manager Name>/ssl/

Review the installcerts.sh script for an example of using the cp (copy) command to place the pertinent files in the appropriate directory.

Note:
If you need to delete a CA certificate, then simply edit the trust.pem file and delete the certificate. After any operation on the certificate files always perform a verify to check the changes are correct.

The procedure to build and verify the sample configuration (setup.sh) uses the sample shell scripts and MQSC command files in the directory /opt_installation_path/opt/mqm/samp/ssl.

Wednesday, April 20, 2011

Configuring SSL from the Web server to the Application Server

Extract the default Personal Certificate
1. Log in to the WebSphere Application Server Administrative Console
2. Select Security > SSL certificate and key management > Key Stores and certificates
3. Select NodeDefaultKeyStore for a stand-alone deployment or
CellDefaultKeyStore for a network deployment.
4. Click Personal Certificates, select the default check box, and then click Extract.
5. Give the extracted file a path and name, such as: /root/defaultCert.ARM.
Note: The convention is to give the file a .ARM extension.
6. Leave encoding set to Base64.
7. Click OK.


Locate your *.kdb file
1. In the httpd.conf file, find the directory in which the plugin-cfg.xml file is
stored by searching for the WebSpherePluginConfig line. It should look something like this:
WebSpherePluginConfig "/opt/IBM/HTTPServer/Plugins1/config/webserver1/plugin-cfg.xml"
2. Find the directory in which the key database file (*.kdb) is stored by searching
for the term "keyring" in the plugin-cfg.xml file. For example:
<Property Name="keyring" Value="/opt/IBM/HTTPServer/Plugins1/config/webserver1/plugin-key.kdb"/>
Note this location as you will need to use it later.


Add the extracted certificate to your key database file
1. Go to the directory for ikeyman and start it:
cd /opt/IBM/HTTPServer/bin
./ikeyman
2. Click Key Database File > Open, and then select a key database type of CMS.
3. Specify the filename and location you found above. For example: plugin-key.kdb and
/opt/IBM/HTTPServer/Plugins1/config/webserver1/plugin-key.kdb
4. Click OK, and then enter the password. Note: If you have not given this file another password,
the default password from WebSphere Application Server is WebAS (case sensitive).
5. Click the Personal Certificates drop-down and then select Signer Certificates.
6. Click Add.
7. Browse to the file you exported with the extension *.ARM, select it, click Open, and then click OK. Supply a name if prompted.
8. Select Key Database File > Save As and save to the original location.
9. Select Key Database File > Exit.
10. Restart the IBM HTTP Server.
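If you prefer a command line over the ikeyman GUI, the GSKit command-line tool shipped with IHS can add the signer certificate directly. The tool name and its location vary by GSKit release (gskcapicmd on newer levels, gsk7capicmd on older ones) and the label is your choice, so treat this as a sketch and check your installation rather than taking it as the definitive syntax:

gskcapicmd -cert -add -db /opt/IBM/HTTPServer/Plugins1/config/webserver1/plugin-key.kdb -pw WebAS -label defaultCert -file /root/defaultCert.ARM -format ascii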

Understanding HTTP plug-in failover in a clustered environment

After setting up the HTTP plug-in for load balancing in a clustered IBM® WebSphere® environment, the HTTP plug-in is not performing failover in a timely manner or at all when a cluster member becomes unavailable.

Cause

In most cases, the preceding behavior is observed because of a misunderstanding of how HTTP plug-in failover works, or it might be due to an improper configuration. Also, the type of Web server (multi-threaded versus single-threaded) being used can affect this behavior.

Resolving the problem

The following document is designed to assist you in understanding how HTTP plug-in failover works, along with providing some helpful tuning parameters and suggestions to better maximize the ability of the HTTP plug-in to fail over effectively and in a timely manner.

Note: The following information is written specifically for the IBM HTTP Server; however, it is generally applicable to other Web servers which currently support the HTTP plug-in (for example: IIS, SunOne, Domino®, and so on).

Failover

Related information

HTTP plug-in Load Balancing in a clustered environment

Understanding IBM HTTP Server plug-in Load Balancing in a clustered environment

After setting up the HTTP plug-in for load balancing in a clustered IBM WebSphere environment, the request load is not evenly distributed among back-end WebSphere Application Servers.

Cause
In most cases, the preceding behavior is observed because of a misunderstanding of how the HTTP plug-in load balancing algorithms work, or it might be due to an improper configuration. Also, the type of Web server (multi-threaded versus single-threaded) being used can affect this behavior.

Resolving the problem
The following document is designed to assist you in understanding how HTTP plug-in load balancing works along with providing you some helpful tuning parameters and suggestions to better maximize the ability of the HTTP plug-in to distribute load evenly.

Note: The following information is written specifically for the IBM HTTP Server; however, it is generally applicable to other Web servers which currently support the HTTP plug-in (for example: IIS, SunOne, Domino, and so on).

Also, the WebSphere plug-in versions 6.1 and later offer the property "IgnoreAffinityRequests" to address the limitation outlined in this technote. In addition, WebSphere versions 6.1 and later offer better facilities for updating the configuration through the administrative panels without manual editing.

For additional information regarding this plug-in property, visit IgnoreAffinityRequests


Load Balancing
  • Background
    In clustered Application Server environments, IBM HTTP Servers spray Web requests to the cluster members to balance the workload among the relevant application servers. The strategy for load balancing and the necessary parameters can be specified in the plugin-cfg.xml file. The default and most commonly used strategy for workload balancing is ‘Weighted Round Robin’. For details refer to the IBM Redbooks technote, Workload Management Policies.

    Most commercial Web applications use HTTP sessions for holding some kind of state information while using the stateless HTTP protocol. The IBM HTTP Server attempts to ensure that all the Web requests associated with an HTTP session are directed to the application server that is the primary owner of the session. These requests are called session-ed requests, session-affinity-requests, and so on. In this document the term ‘sticky requests’ or ‘sticky routing’ will be used to refer to Web requests associated with HTTP sessions and their routing to a cluster member.

    The round robin algorithm used by the HTTP plug-in in releases of V5.0, V5.1 and V6.0 can be roughly described as follows:
    • While setting up its internal routing table, the HTTP plug-in component eliminates the non-trivial greatest common divisor (GCD) from the set of cluster member weights specified in the plugin-cfg.xml file.

      For example, if we have three cluster members with specified static weights of 8, 6, and 18, the internal routing table will have 4, 3, and 9 as the starting dynamic weights of the cluster members after factoring out 2 = GCD(8, 6, 18).

      <ServerCluster CloneSeparatorChange="false" LoadBalance="Round Robin"
      Name="Server_WebSphere_Cluster" PostSizeLimit="10000000" RemoveSpecialHeaders="true" RetryInterval="60">

      <Server CloneID="10k66djk2" ConnectTimeout="0" ExtendedHandshake="false" LoadBalanceWeight="8" MaxConnections="0" Name="Server1_WebSphere_Appserver" WaitForContinue="false">
      <Transport Hostname="server1.domain.com" Port="9091" Protocol="http"/>
      </Server>

      <Server CloneID="10k67eta9" ConnectTimeout="0" ExtendedHandshake="false"
      LoadBalanceWeight="6" MaxConnections="0" Name="Server2_WebSphere_Appserver" WaitForContinue="false">
      <Transport Hostname="server2.domain.com" Port="9091" Protocol="http"/>
      </Server>

      <Server CloneID="10k68xtw10" ConnectTimeout="0" ExtendedHandshake="false" LoadBalanceWeight="18" MaxConnections="0" Name="Server3_WebSphere_Appserver" WaitForContinue="false">
      <Transport Hostname="server3.domain.com" Port="9091" Protocol="http"/>
      </Server>

      <PrimaryServers>
      <Server Name="Server1_WebSphere_Appserver"/>
      <Server Name="Server2_WebSphere_Appserver"/>
      <Server Name="Server3_WebSphere_Appserver"/>
      </PrimaryServers>
      </ServerCluster>

    • The very first request goes to a randomly selected application server. For non-sticky requests, the HTTP plug-in component attempts to distribute the load in a strict round robin fashion to all the eligible cluster members—the cluster members whose internal dynamic weight in the routing table > 0.

    • On each sticky or non-sticky request which gets routed to a cluster member, the internal weight of the cluster member in the routing table will get decremented by 1.

    • Non-sticky Web requests will never get routed to any cluster member whose present dynamic weight in the routing table is 0. However, a sticky request can get routed to a cluster member whose dynamic weight in the routing table is 0, and can potentially decrease the cluster member weight to a negative value.

    • When the internal weights of all the cluster members are 0, the plug-in will fail to route any non-sticky requests. When this happens, the plug-in component resets the cluster member internal weights in its routing table.

    • The resetting may not take the internal weights to their original starting values!

    • The present version of the resetting process attempts to find the minimal number m (m > 0) which will make (w + m * s) > 0 for all cluster members, where w is the internal weight immediately before reset and s is the starting weight in the routing table.

      In our example, we have the starting weights as <4, 3, 9>. Assume that just before getting reset, the weights in the routing table were <-20, -40, 0> -- the negative numbers are due to the routing of a number of sticky requests to the first two cluster members.

      The value of m is 14 in this hypothetical instance and the dynamic weights immediately after reset in the routing table will be:

      < (-20 + 14 * 4),  (-40 + 14 * 3), (0 + 14 * 9)> = <36, 2, 126>

  • Analysis
    HTTP sticky requests (for example: session-affinity-requests) can skew the load balancing as explained and illustrated below. This is a known limitation of HTTP plug-in load balancing. The imbalanced load distribution is caused by the sticky requests. The HTTP plug-in routes sticky requests to the affinity cluster member directly without doing Round Robin load balancing. However, the sticky requests do change the cluster members' weights and affect the Round Robin load balancing for new sessions. The HTTP plug-in resets the weights when it cannot find a cluster member with a positive weight. The effect can be illustrated with the following example:

    For example, assume you have two cluster members (Server1_WebSphere_Appserver and Server2_WebSphere_Appserver) to serve an application, each cluster member has the same weight of 1, and Round Robin load balancing is being used:

    <ServerCluster CloneSeparatorChange="false" LoadBalance="Round Robin"
    Name="Server_WebSphere_Cluster" PostSizeLimit="10000000" RemoveSpecialHeaders="true" RetryInterval="60">

    <Server CloneID="10k66djk2" ConnectTimeout="0" ExtendedHandshake="false" LoadBalanceWeight="1" MaxConnections="0" Name="Server1_WebSphere_Appserver" WaitForContinue="false">
    <Transport Hostname="server1.domain.com" Port="9091" Protocol="http"/>
    </Server>

    <Server CloneID="10k67eta9" ConnectTimeout="0" ExtendedHandshake="false"
    LoadBalanceWeight="1" MaxConnections="0" Name="Server2_WebSphere_Appserver" WaitForContinue="false">
    <Transport Hostname="server2.domain.com" Port="9091" Protocol="http"/>
    </Server>

    <PrimaryServers>
    <Server Name="Server1_WebSphere_Appserver"/>
    <Server Name="Server2_WebSphere_Appserver"/>
    </PrimaryServers>
    </ServerCluster>

    • When the first request is sent to Server1_WebSphere_Appserver, Server1_WebSphere_Appserver will have a weight of 0.

    • If the next 10 requests all have sticky routing (for example: session affinity) with Server1_WebSphere_Appserver, they will all get routed to Server1_WebSphere_Appserver. Server1_WebSphere_Appserver will now have a weight of -10, and have handled 11 requests while Server2_WebSphere_Appserver has not received any requests yet.

    • The second request without sticky routing will be sent to Server2_WebSphere_Appserver; Server2_WebSphere_Appserver will now have a weight of 0.

    • When the third request without sticky routing is received, since neither Server1_WebSphere_Appserver nor Server2_WebSphere_Appserver has a positive weight, the HTTP plug-in will reset the weights to 1 and 11 for the two servers respectively.

    • The third request without sticky routing will then go to Server1_WebSphere_Appserver, and changes its weight to 0.

    • The next 11 requests without sticky routing will all go to Server2_WebSphere_Appserver, until its weight reaches 0.

    • So far, a total of 24 requests have been sent to the servers, creating 14 new sessions. The load distribution would be balanced as:

      Server1_WebSphere_Appserver:
      total-requests 12, session-affinity-requests 10, sessions 2

      Server2_WebSphere_Appserver:
      total-requests 12, session-affinity-requests 0, sessions 12

    • Now if each session creates 10 session-affinity-requests, the load distribution would look imbalanced:

      Server1_WebSphere_Appserver:
      total-requests 22, session-affinity-requests 20

      Server2_WebSphere_Appserver:
      total-requests 132, session-affinity-requests 120

    Note: The HTTP sessions on Server2_WebSphere_Appserver started concurrently, not sequentially one after another.

    From the preceding example, you can see how sticky requests can affect the load distribution.

    As explained by the preceding example, in the presence of sticky requests, the load distribution between cluster members can potentially get skewed. The amount of unevenness in load distribution depends on the traffic patterns. The amount of skew in load distribution among cluster members is directly dependent on:
    • The number of sticky requests received by a cluster member, thereby making its dynamic weight in the routing table a large negative number.

    • The concurrent use of multiple HTTP sessions and the corresponding sticky requests for a cluster member again contributing towards a large negative number for the dynamic weight.

    Note: The presence of a large number of concurrent users increases the probability of traffic patterns which may cause distortions in load distribution. In fact, in the presence of HTTP session sticky routing, there are no perfect solutions to the potential problem of uneven load distribution. However, the following two configuration strategies, especially the first one, can be applied to minimize the effect.
    • For each cluster member, provide relatively large starting weights, which do not have any non-trivial GCD, in the plugin-cfg.xml file. In real life situations, handling somewhat uniform internet traffic, this should prevent the following:
      • Frequent resets
      • The dynamic weight of any cluster member from reaching a high negative number before getting reset
      • A high value of the dynamic weight of any cluster member after a reset.

    • Use a multi-threaded Web server (for example: Releases of IBM HTTP Server V2.0, V6.0, UNIX) versus a single-threaded Web server (for example: Releases of IBM HTTP Server V1.3, UNIX) and keep the number of IBM HTTP Server processes to a low value while specifying a high value of the number of threads per IBM HTTP Server process in the httpd.conf configuration file.

    The plug-in component performs load balancing within an IBM HTTP Server process. Individual instances of IBM HTTP Server processes do not share any global information regarding load balancing. Thus a low number of Web server processes should smooth out somewhat the unevenness in load balancing. A higher value of threads per IBM HTTP Server process will provide the Web server the ability to handle peak loads.


  • Suggested Configurations
    For all clustered WebSphere installations we suggest the following configurations. All the file changes should be done manually. It should be noted that perfectly even load distribution may never happen in the presence of sticky routing. However, in real life, in a stable situation, one should see a fairly uniform load distribution among cluster members.

    Also, ideally speaking, after making the desired configuration changes, simulation, performance, and soak tests should be executed before final acceptance. The results of the tests and also real life application deployment experience may necessitate some amount of fine tuning of relevant parameters.
    1. In the plugin-cfg.xml file, set the load balancing algorithm to "Random" as follows. Example of a two member cluster:

      <ServerCluster CloneSeparatorChange="false" LoadBalance="Random"
      Name="Server_WebSphere_Cluster" PostSizeLimit="10000000" RemoveSpecialHeaders="true" RetryInterval="60">

      <Server CloneID="10k66djk2" ConnectTimeout="0" ExtendedHandshake="false" LoadBalanceWeight="2" MaxConnections="0" Name="Server1_WebSphere_Appserver" WaitForContinue="false">
      <Transport Hostname="server1.domain.com" Port="9091" Protocol="http"/>
      </Server>

      <Server CloneID="10k67eta9" ConnectTimeout="0" ExtendedHandshake="false"
      LoadBalanceWeight="2" MaxConnections="0" Name="Server2_WebSphere_Appserver" WaitForContinue="false">
      <Transport Hostname="server2.domain.com" Port="9091" Protocol="http"/>
      </Server>

      <PrimaryServers>
      <Server Name="Server1_WebSphere_Appserver"/>
      <Server Name="Server2_WebSphere_Appserver"/>
      </PrimaryServers>
      </ServerCluster>

      The preceding configuration should have the maximum beneficial effect on the uniformity of load distribution. When the "Random" algorithm is selected for load balancing, the plug-in does not include the number of affinity requests when selecting a server to handle a new request.

    2. For releases of IBM HTTP Server V2.0 and V6.0 executing on UNIX boxes, manually alter the <IfModule worker.c> paragraph to look like the following. The number of processes is specified as 2, which should result in decent load balancing among cluster members. The number of threads per process is specified as 250, which should be adequate to handle the expected peak load in most clustered environments.

      UNIX:
      <IfModule worker.c>
      ThreadLimit 250
      ServerLimit 2
      StartServers 2
      MaxClients 500
      MinSpareThreads 2
      MaxSpareThreads 325

      ThreadsPerChild 250
      MaxRequestsPerChild 10000
      </IfModule> 
Related information HTTP plug-in Failover in a clustered environment

WebSphere Self-signed certificates

WebSphere v6.1 automatically replaces expiring self-signed certificates by default. If dates are put forward on a server for testing purposes, the certificates will be regenerated and expiring certificates and signers will be deleted. This can be turned off by going to: Security > SSL certificate and key management > Manage certificate expiration and un-ticking the appropriate options.
If the certificates become invalid you may receive one of the following exceptions:
CWPKI0311E: The certificate with subject {0} has a start date {1} 
which is valid after the current date/time.  This can happen 
if the client's clock is set earlier than the server's clock. 
Please verify the clocks are in sync between this client and server 
and retry the request.
or
Exception stack trace: javax.naming.NamingException: Error during 
resolve [Root exception is org.omg.CORBA.COMM_FAILURE: 
CAUGHT_EXCEPTION_WHILE_CONFIGURING_SSL_CLIENT_SOCKET: JSSL0080E: 
javax.net.ssl.SSLHandshakeException - The client and server could 
not negotiate the desired level of security.  Reason: com.ibm.jsse2
.util.h: No trusted certificate found  vmcid: 0x49421000  minor code: 70 
completed: No]
In order to increase the lifetime of the certificates and resolve the issue the following steps were taken:
  1. Locate the key.p12 and trust.p12 files under the dmgr profile, i.e.:
    <profile_root>\config\cells\<cellname>\key.p12
  2. Open the key.p12 file with the IKEYMAN tool (<was_root>\bin\ikeyman.bat). You must select PKCS12 from the key database type drop-down in order to open the file.
  3. The default password for WebSphere Application Server certificate stores is: WebAS
  4. Select Personal Certificates from the key database content area drop down
  5. Delete the existing default certificate
  6. Create a new self signed certificate with the following details:
    Key Label: default
       Version X509 V3
       Key Size: 1024
       Common Name: <fullyQualifiedHostname>
       Organization: IBM
       Country or region:US
       ValidityPeriod: 3650 (We selected 10 years for the length of the certificate)
  7. Extract the certificate you just created with the following settings:
    Data Type: Base64-encoded ASCII data
       Certificate file name: newDefault.arm
       Location: D:\temp\
  8. Open the <profile_root>\config\cells\<cellname>\trust.p12 file
  9. Again, the password is WebAS
  10. In the Key Database Content area select signer certificates from the large drop down.
  11. Delete any existing default or default_x certificates
  12. Click Add and browse to the extracted certificate from the key.p12 file, D:\temp\newDefault.arm
  13. Enter a label of default
  14. close the IBM Key Management tool
  15. Copy the key.p12 and trust.p12 files to the following locations:
    Deployment Manager:
       <profilehome_dmgr>\config\cells\<cellname>\nodes\<nodename>
    All nodes:
    <profilehome_nodex>\config\cells\<cellname>
       <profilehome_dmgr>\config\cells\<cellname>\nodes\<nodename>
  16. Restart the DMGR, all node agents, and servers
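As a quick sanity check (not part of the original steps), if you have OpenSSL available you can confirm the validity period of the certificate extracted in step 7, since the Base64 .arm file is readable as PEM:

openssl x509 -in D:\temp\newDefault.arm -noout -dates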

Tuesday, April 19, 2011

Create a Highly Available Dispatcher

NOTE: Before beginning,  configure the two load balancers (LB) exactly the same in order to allow failover and continued service.
From the Primary Load Balancer gui (Start > Program Files > IBM WebSphere > Edge Components > Load Balancer for IPv6 > Load Balancer for IPv6 ):
  1. Right click on the Dispatcher and click Connect to host…
  2. Connect to the <hostname>:10099
  3. Right click on the High Availability icon and select Add Heartbeat…
  4. Leave the local machine’s IP address in the first textbox, enter the secondary LB’s IP address in the second, and click OK.
  5. Right click on the High Availability icon and select Add High Availability Backup….
  6. Set the role as Primary, enter the IP address of the secondary server, and the port number 10099.
  7. Repeat the same process on the secondary LB, except select Backup for the server's role.
  8. Once complete, click the refresh statistics button and confirm the state changes to: Synchronized.
  9. Once the process is complete it is important to edit the goActive and goStandby scripts.
    1. These files can be found in the <edge_home>/lb/servers/samples directory.
    2. Follow the instructions within each script file, editing the CLUSTER, INTERFACE, and NETMASK values.
    3. Copy both scripts into the lb bin directory at: <edge_home>/lb/servers/bin/
    4. Ensure that you remove the .sample suffix from the filenames so they read goActive.bat and goStandby.bat
If you wish to test your configuration, disable the network adapter on the primary LB and watch the secondary change into the active state. Did your service remain available?
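You can also check the high availability state from the command line. The dscontrol command is the Dispatcher's command-line interface, but the exact subcommand wording can differ between Load Balancer releases, so verify against your version's documentation before relying on it:

dscontrol highavailability status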

How to: Easily setup WebSphere Edge Components Load Balancer for webservers

Dispatcher provides the ability to spray requests between multiple servers. In the WebSphere stack it allows load balancing between multiple webservers, which in turn can relay requests to multiple application servers. This provides high availability and scalability. These instructions describe how to set up a configuration similar to the following image:
Load Balancer Diagram: not so complicated!
From the following article in the WebSphere Infocenter: http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.edge.doc/welcome.html
To set up a simple IP spraying with WebSphere Edge Components Dispatcher 6.1 on Windows 2003 Server, follow these instructions:
  1. Install the Dispatcher from the WebSphere Edge Components CD or archive file (from experience: don’t install the first release 6.1.0.0; IBM is up to release 6.1.042 at the time of writing. NOTE: You will need the .lic licence file from the original install)
  2. Once installed (simple point and next scenario) open the Start > IBM WebSphere > Edge Components > Load Balancer > Load Balancer.
  3. Expand the Load Balancer in the tree hierarchy.
  4. Right click on the Dispatcher and select Start Configuration Wizard.
  5. Click Next on the Dispatcher Configuration Wizard welcome screen.
  6. Click Next again on the What to expect… page
  7. Read the What Must I Do Before I Begin list and confirm access from the Load Balancer to the webservers on the desired ports (i.e. https (443) / http (80) to hostname:port)
  8. Click Create Configuration.
  9. Select the host you wish to configure (the default is the local machine’s hostname on port 10099) and click Update configuration and Continue
  10. Enter the desired domain to balance and Click Update configuration and Continue.
  11. The wizard will confirm the cluster has been added, click next.
  12. Enter the desired port number (i.e. 443 for https, 80 for http)
  13. The wizard will confirm the port has been added, click next.
  14. Add the IP address of the desired servers to be load balanced (your webservers).
  15. Once all the servers in the cluster have been added click next
  16. Leave the default of Yes for the advisor creation and add a name (e.g. HTTPS)
  17. Open the loop back instructions for Windows 2000/2003
  18. On each of the webservers being balanced (NOT the load balancer itself!) follow these instructions:
    1. Open the control panel and click Add Hardware Wizard
    2. Click next on the Welcome screen
    3. Let the wizard search for new hardware
    4. On the Is the hardware connected? screen select the Yes radio button.
    5. At the bottom of the installed hardware list select Add a new hardware device and click next
    6. Select Install the hardware that I manually select from a list.
    7. Select Network adapters
    8. Select Microsoft and the Microsoft Loopback Adapter
    9. Click next to install the adapter.
    10. click finish when complete.
    11. Open Network Connections and right click on the new Microsoft Loopback Adapter and select properties
    12. Select Internet Protocol (TCP/IP) and click Properties
    13. Select Use the following IP address and enter the IP address for the cluster, the proper subnet mask for the server, and leave the default gateway empty (a netsh equivalent is sketched after this list).
    14. Enter the loopback address for the Preferred DNS server (i.e. 127.0.0.1) and leave the alternate empty.
    15. Click OK and OK again.
    16. Repeat on all web servers in the cluster.
  19. Click Exit at the end of the Wizard
  20. Save the configuration (no spaces in the name) and restart the IBM Dispatcher service. (run > services.msc)
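For reference, the loopback cluster address from step 18.13 can also be set from a Windows command prompt instead of the GUI. The connection name, cluster IP, and netmask below are placeholders; on Windows 2003 the new adapter usually appears with a connection name such as "Local Area Connection 2":

netsh interface ip set address "Local Area Connection 2" static <cluster IP> <netmask>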
Want a highly available load balancer? Configuring high availability for IBM Load Balancer