6.1.0.4: Readme for IBM WebSphere Portal Enable for z/OS 6.1 fix pack 4 (6.1.0.4) - cluster

Product readme


Abstract

IBM WebSphere® Portal Enable for z/OS fix pack installation instructions - cluster.

Content

6.1.0.4: Readme for IBM WebSphere Portal Enable for z/OS 6.1 fix pack 4 (6.1.0.4) - cluster



Table of Contents

  • What is new with Fix Pack 6.1.0.4
  • About Fix Pack 6.1.0.4
  • Space requirements
  • Cluster upgrade planning
  • Steps for installing Fix Pack 6.1.0.4 (single-cluster procedure)
  • Steps for uninstalling Fix Pack 6.1.0.4 (single-cluster procedure)
  • Known issues
  • Change History
  • Additional information
  • Trademarks and service marks

What is new with Fix Pack 6.1.0.4
  • This fix pack updates previous levels of WebSphere Portal 6.1.0.x for z/OS or IBM Workplace® Web Content Management 6.1.0.x for z/OS to the 6.1.0.4 service release level.
  • If the 6.1.5 feature pack was enabled on 6.1.0.3, the 6.1.0.4 fix pack upgrades the installation to 6.1.0.4 service release level and the 6.1.5 features to the 6.1.5.1 level.
  • If the 6.1.5 feature pack was not enabled on 6.1.0.3 and you want to enable it, first upgrade to 6.1.0.4 and then perform the steps to install the Feature Pack 6.1.5 fix pack 1.
  • If you are upgrading to 6.1.0.4 from version 6.1.0, 6.1.0.1, or 6.1.0.2 and you want to enable the feature pack, first upgrade to 6.1.0.4 and then perform the steps to install the Feature Pack 6.1.5 fix pack 1.
  • The following items are included in this fix pack:

Back to top


About Fix Pack 6.1.0.4

Installing this Fix Pack raises the fix level of your product to version 6.1.0.4.

Important: WebSphere Portal Enable for z/OS Version 6.1.0.4 can also be used for a fresh installation. When installing WebSphere Portal Enable for z/OS Version 6.1.0.4 from scratch, first put the Portal Version 6.1 GA code into SMP/E, then apply the PTFs of the fix pack.

PTF ordering information:
There are two SMP/E PTF packagings for Fix Pack 6.1.0.4. Based on your current system's FMID, order and apply one of the following sets of PTFs. You only have to apply one set of PTFs because either packaging provides the same function: upgrading Portal to service release level 6.1.0.4, with optional enablement of Feature Pack 6.1.5.1.

For z/OS systems with Portal 6.1.0.1, 6.1.0.2, or 6.1.0.3, based on FMID HPTL610, use the following PTFs to upgrade: UA54459, UA54460, UA54461, UA54504, UA54579, UA54580, UA54581, UA54582, UA54600, UA54601, UA54602, UA54603, UA54604, UA54605, UA54606, UA54607, UA54608, UA54609.

For z/OS systems with Portal 6.1.0.3 that was previously installed using the Portal 6.1.5 full release packaging, based on FMID HPTL615, use the following PTFs to upgrade: UA54617, UA54739, UA54743, UA54812, UA54813, UA54814, UA54833, UA54834, UA54876, UA54877, UA54878, UA54879, UA54880, UA54881, UA54882, UA54883, UA54884, UA54885.

Use the following link to order PTFs: https://www14.software.ibm.com/webapp/ShopzSeries/ShopzSeries.jsp

Back to top




Space requirements


The WebSphere Portal Enable for z/OS fix pack Version 6.1.0.4 requires updated storage allocations for the data sets, as follows:


Library DDNAME   Primary 3390 tracks   Additional (secondary) 3390 tracks   DIR Blks
AEJPCLIB         150                   25                                   30
AEJPEXEC         26                    15                                   1
AEJPHFSA         120                   100                                  100
AEJPHFSB         33450                 450                                  300
AEJPHFSC         150                   450                                  100
AEJPHFSD         19500                 450                                  100
AEJPHFSE         6150                  100                                  100
AEJPMLIB         13                    7                                    2
AEJPPLIB         150                   80                                   100
AEJPSAMP         12                    6                                    5
AEJPSLB2         22                    10                                   8
AEJPSLIB         253                   50                                   100
SEJPCLIB         168                   25                                   30
SEJPEXEC         26                    15                                   3
SEJPMLIB         13                    7                                    2
SEJPPLIB         203                   40                                   100
SEJPSAMP         12                    6                                    5
SEJPSLB2         22                    10                                   8
SEJPSLIB         503                   50                                   100
SEJPHFS          72150                 1350                                 NOLIMIT
SMPPTS           39750                 1800                                 NOLIMIT
SMPWRK6          15000                 1500                                 750


Verify that this free space is available before beginning the installation. For the WebSphere Application Server configuration HFS, ensure that at least 1000 cylinders of disk space are available.
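A quick way to display the free space in the WebSphere Application Server configuration file system is to run df from the z/OS UNIX shell before you begin; the mount point shown is illustrative only and will differ on your system:

  df -kP /WebSphere/V6R1/AppServer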

Back to top




Cluster upgrade planning
There are two options for performing an upgrade in a clustered environment. One option is to upgrade the cluster while the entire cluster has been taken offline and is not receiving user traffic. The upgrade is performed on every node in the cluster before the cluster is brought back online to receive user traffic. This is the recommended approach for an environment with multiple Portal clusters, since 24x7 availability can be maintained by the remaining clusters. See the following document for details: Multiple Cluster Setup with WebSphere Portal. It is also the simplest approach to use in a single-cluster environment if maintenance windows allow for the Portal cluster to be taken offline.

For single-cluster environments that cannot tolerate the outage required to take the cluster offline and perform the upgrade, you can use the single-cluster 24x7 availability process. Review the following requirements and limitations for performing product upgrades while maintaining 24x7 availability in a single cluster (NOTE: Ensure that you understand this information before upgrading your cluster):

Assumptions for maintaining 24x7 operation during the upgrade process:

  • If you want to preserve current user sessions during the upgrade process, make sure that WebSphere Application Server distributed session support is enabled to recover user session information when a cluster node is stopped for maintenance. Alternatively, use monitoring to determine when all (or most) user sessions on a cluster node have completed before stopping the cluster node for upgrade to minimize the disruption to existing user sessions.
  • Load balancing must be enabled in the clustered environment.
  • The cluster has at least two horizontal cluster members.
  • Limitations on 24x7 maintenance:
    • If you have not implemented horizontal scaling and have implemented only vertical scaling in your environment such that all cluster members reside on the same node, the fix pack installation process will result in a temporary outage for your end users due to a required restart. In this case, you will be unable to upgrade while maintaining 24x7 availability.
    • If you have a single local Web server in your environment, maintaining 24x7 availability during the cluster upgrade may not be possible since you might be required to stop the Web server while applying corrective service to the local WebSphere Application Server installation.
    • When installing the fix pack in a clustered environment, the portlets are only deployed when installing the fix pack on the primary node. The fix pack installation on additional (also known as secondary) nodes simply synchronizes the node with the deployment manager to receive updated portlets. During the portlet deployment on the primary node, the database will be updated with the portlet configuration. This updated database, which is shared between all nodes, would be available to additional nodes before the additional nodes receive the updated portlet binary files. It is possible that the new portlet configuration will not be compatible with the previous portlet binary files, and in a 24x7 production environment problems may arise with anyone attempting to use a portlet that is not compatible with the new portlet configuration. Therefore it is recommended that you test your portlets before upgrading the production system in a 24x7 environment to determine if any portlets will become temporarily unavailable on additional nodes during the time between the completion of the fix pack installation on the primary node and the installation of the fix pack on the additional node.
    • In order to maintain 24x7 operations in a clustered environment, it is required that you stop WebSphere Portal on one node at a time and upgrade it. It is also required that during the upgrade of the primary node, you manually stop node agents on all other cluster nodes that continue to service user requests. Failure to do so may result in portlets being shown as unavailable on nodes having the node agent running.
    • When uninstalling the fix pack in a clustered environment, the portlets are only redeployed when uninstalling the fix pack on the primary node. The fix pack uninstall on additional nodes simply synchronizes the node with the deployment manager to receive updated portlets. During the portlet redeployment on the primary node, the database will be updated with the portlet configuration, which would be available to additional nodes before the additional nodes receive the updated binary files, since all nodes share the same database. It is recommended that you test your portlets before uninstalling on the production system in a 24x7 environment because the possibility of such incompatibility might arise if the previous portlet configuration is not compatible with the new portlet binary files.

Back to top




Steps for installing Fix Pack 6.1.0.4 (single-cluster procedure)

Before you begin:

Familiarize yourself with the Portal Upgrade Best Practices available from IBM Remote Technical Support for WebSphere Portal.

  • Portal Upgrades: Best Practices
  • For instructions on how to validate your upgrade environment prior to the upgrade, see the instructions for running the Health Checker tool for WebSphere Portal at:

  • Health Checker tool for WebSphere Portal V6.1


  • NOTE: When instructed to stop or start the WebSphere_Portal server, stop or start all Portal server instances including vertical cluster members on the node.

    1. Perform the following steps before upgrading to Version 6.1.0.4:


      a. Review the supported software requirements for this fix pack. If necessary, upgrade all software before applying this fix pack. If updates are required to the WebSphere Application Server level, update WebSphere Application Server before upgrading Portal, and perform that update first on the Deployment Manager. Instructions are also provided to install WebSphere Application Server updates on each node in the cluster during the time that node is taken offline from receiving user traffic. NOTE: You can download the latest WebSphere Application Server interim fixes from http://www.ibm.com/software/webservers/appserv/was/support/.

      b. If neither a file nor a symbolic link exists for <DeploymentManager_root>/plugins/wp.base.jar, perform the following steps:

        1. Stop Deployment Manager.
        2. Create a symbolic link to the product HFS/zFS:
          ln -s <portal_install_root>/PortalServer/base/wp.base/shared/app/wp.base.jar
          <DeploymentManager_root>/plugins/wp.base.jar

          for example:

          ln -s /usr/lpp/zPortalServer/V6R1/PortalServer/base/wp.base/shared/app/wp.base.jar
          /WebSphere/V6R1/DeploymentManager/plugins/wp.base.jar
        3. Start Deployment Manager.

      c. Verify that the information in the wkplc.properties, wkplc_dbtype.properties, and wkplc_comp.properties files is correct on each node in the cluster (a sample of these entries is sketched after this list):

        • Enter a value for the PortalAdminPwd and WasPassword parameters in the wkplc.properties file.
        • Ensure that the DbUser (database user) and DbPassword (database password) parameters are defined correctly for all database domains in the wkplc_comp.properties file.
        • Ensure that the value of the XmlAccessPort property in wkplc_comp.properties matches the value of the port used for HTTP connections to the WebSphere Portal server.
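        For example, a minimal sketch of the entries described above; the property names come from this readme, while the values and the "release" database domain prefix are illustrative only and must match your environment:

          In wkplc.properties:
            PortalAdminPwd=myPortalAdminPwd
            WasPassword=myWasPwd

          In wkplc_comp.properties (repeat the DbUser/DbPassword pair for each database domain; "release" is shown as an example domain):
            release.DbUser=wpdbusr
            release.DbPassword=wpdbpwd
            XmlAccessPort=10040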
      d. Order the PTFs using the PTF ordering information in the About Fix Pack 6.1.0.4 section.

      e. The WebSphere Portal 6.1.0.4 fix pack performs index (schema) modifications to the Portal databases during the fix pack application. See the following documentation for details: http://www.ibm.com/support/docview.wss?uid=swg27018892. If you are using DB2 on z/OS, it is possible to disable the automatic application and instead apply the indexes manually. To do so, make sure the following property is set on the primary node in wkplc_dbtype.properties before starting the fix pack application: DbSafeMode=true. If DbSafeMode is not set to true, ensure that the DbUser in the wkplc_comp.properties file for all domains on DB2 has explicit DBADM authority.


    2. Ensure that automatic synchronization is disabled on all nodes to be upgraded. When the automatic synchronization is enabled, the node agent on each node automatically contacts the deployment manager at startup and then every synchronization interval to attempt to synchronize the node's configuration repository with the master repository managed by the deployment manager.
      1. In the administrative console for the deployment manager, select System Administration > Node agents in the navigation tree.
      2. Click nodeagent for the required node.
      3. Click File Synchronization Service.
      4. Uncheck the Automatic Synchronization check box on the File Synchronization Service page to disable the automatic synchronization feature and then click OK.
      5. Repeat these steps for all other nodes to be upgraded.
      6. Click Save to save the configuration changes to the master repository.
      7. Select System Administration > Nodes in the navigation tree
      8. Select all nodes that are not synchronized, and click on Synchronize
      9. Select System Administration > Node agents in the navigation tree
      10. For the primary node, select the nodeagent and click Restart
      11. Select the nodeagents of all additional nodes and click Stop

    NOTE: Do not attempt to combine steps 3 and 4. The update must be performed first on the primary node and then on the additional nodes, in accordance with the instructions below.

    3. Perform the following steps to upgrade WebSphere Portal on the primary node:


      a. Only Required if following the 24x7 single cluster upgrade: Stop IP traffic to the node you are upgrading:
        • If you are using Sysplex Distributor, then
          VARY TCPIP,,SYSPLEX,QUIesce,JOBNAME=<WP_Controler> or
          VARY TCPIP,,SYSPLEX,QUIesce,POrt=<portnum>.
          Reactivate it by replacing the QUIesce with RESUME keyword.
        • If you are using IP sprayers for load balancing to the cluster members, reconfigure the IP sprayers to stop routing new requests to the Portal cluster member(s) on this node.
        • If you are using the Web server plug-in for load balancing, perform the following steps to stop traffic to the node:
          • In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members to obtain a view of the collection of cluster members.
          • Locate the cluster member you are upgrading and change the value in the Configured weight column to zero. NOTE: Record the previous value to restore the setting when the upgrade is complete.
          • Click Update to apply the change.
          • If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.
          • Note that the web server plug-in will check periodically for configuration updates based on the value of Refresh Configuration Interval property for the Web server plug-in (default value is 60 seconds). You can check this value on the Deployment Manager administrative console by selecting Servers>Web Servers>web_server_name>Plug-in Properties.
          • If automatic propagation of the plug-in configuration file is enabled on the web server(s), disable it from the Deployment Manager administrative console by going to Servers>Web Servers>web_server_name>Plug-in Properties and unchecking Automatically propagate plug-in configuration file.
      b. Perform the following steps to run the installation program:
        1. Check the status of all active application servers and stop any active application servers on the node.
        2. If you have installed WebSphere Portal or WCM APAR interim fixes, use the Portal Update Installer to uninstall them before applying the 6.1.0.4 PTFs in SMP/E. For information on the Portal Update Installer for z/OS, see the following link: http://www.ibm.com/support/docview.wss?rs=688&uid=swg21326670
        3. Apply the PTFs for the 6.1.0.4 cumulative fix pack using SMP/E. Make sure your primary portal system is mounted with the newly maintained 6104 product HFS/zFS.
        4. Run './applyPTF.sh' from the <AppServer_root>/bin directory with the following user ID requirements:

          The user ID needs to have either UID = 0 or all of the following five UNIXPRIV class profile privileges assigned to it:

          CONTROL access to SUPERUSER.FILESYS
          UPDATE access to SUPERUSER.FILESYS.MOUNT
          READ access to SUPERUSER.FILESYS.CHOWN
          READ access to SUPERUSER.FILESYS.CHANGEPERMS
          READ access to SUPERUSER.FILESYS.PFSCTL

          The user ID needs to be a member of the WebSphere Configuration Group (WSCFG1).

          Instead of running the applyPTF.sh script with a user ID that meets both of the requirements mentioned above, you can run the script with WSADMIN, if the WSADMIN user ID was created in your installation and owns all WebSphere product-related configuration file systems (including those that are not part of WebSphere Portal). In such an environment, make sure not to intermix the usage of the WSADMIN user ID and other user IDs when making file system updates.
        5. Run the ./ConfigEngine.sh CONFIG-WP-PTF-6104 command from the <wp_profile_root>/ConfigEngine/ directory with a user ID that has sufficient authority to install Portal. If DbSafeMode (see step 1.e above) is not set to true, the user ID must also have JDBC access settings in its profile. A combined sketch of sub-steps 4 and 5 follows this list.
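        The following is a combined, minimal sketch of sub-steps 4 and 5, assuming RACF is the security product and that the passwords ConfigEngine needs are already set in wkplc.properties (see step 1.c); the user ID PRTLADM is illustrative only:

          Granting the UNIXPRIV authorizations and group membership (RACF TSO commands, issued by a security administrator):

            PERMIT SUPERUSER.FILESYS CLASS(UNIXPRIV) ID(PRTLADM) ACCESS(CONTROL)
            PERMIT SUPERUSER.FILESYS.MOUNT CLASS(UNIXPRIV) ID(PRTLADM) ACCESS(UPDATE)
            PERMIT SUPERUSER.FILESYS.CHOWN CLASS(UNIXPRIV) ID(PRTLADM) ACCESS(READ)
            PERMIT SUPERUSER.FILESYS.CHANGEPERMS CLASS(UNIXPRIV) ID(PRTLADM) ACCESS(READ)
            PERMIT SUPERUSER.FILESYS.PFSCTL CLASS(UNIXPRIV) ID(PRTLADM) ACCESS(READ)
            SETROPTS RACLIST(UNIXPRIV) REFRESH
            CONNECT PRTLADM GROUP(WSCFG1)

          Running the fix pack scripts from the z/OS UNIX shell:

            cd <AppServer_root>/bin
            ./applyPTF.sh
            cd <wp_profile_root>/ConfigEngine
            ./ConfigEngine.sh CONFIG-WP-PTF-6104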
      c. After the fix pack is installed, check the status of the primary node in the Deployment Manager administrative console. Perform the following steps:

        a. In the Deployment Manager administrative console, click System Administration>Nodes.

        b. If the primary node has a status of Not Synchronized or Unknown, ensure that the node agent is running on the node, then click Synchronize and wait for the synchronization to complete. The end of synchronization is indicated by the message "BBOO0222I: ADMS0003I: The configuration synchronization completed" in the system log.

        c. Wait at least 20 minutes before performing the next step to ensure that the node agent EAR expansion process completes.

        d. Restart the WebSphere_Portal server on the primary node.


      d. Run the following task to activate the portlets:
      • ./ConfigEngine.sh activate-portlets -DPortalAdminPwd=password -DWasPassword=password

      e. Verify that your system is operational by entering the server's URL in a browser and logging in to browse the content.

      f. If you upgraded from a prior release with additional interim fixes installed, but uninstalled them in step 3.b.2 above, you must reinstall interim fixes that were not integrated into the version 6.1.0.4 installation. Before reinstalling an interim fix, go to the WebSphere Portal product support page to see if there is a newer version of the interim fix because these are often specific to a version and release of the product. Search on the APAR number to find more information.

      g. Install any APARs that must be applied. Refer to the Recommended fixes and updates for WebSphere Portal and Web Content Management page.

      h. Only Required if following the 24x7 single cluster upgrade: Restore IP traffic to the node you upgraded:


        a. If you are using Sysplex Distributor, then
        VARY TCPIP,,SYSPLEX,RESUME,JOBNAME=<WP_Controler> or
        VARY TCPIP,,SYSPLEX,RESUME,POrt=<portnum>.

        b. If you are using IP sprayers for load balancing, reconfigure the IP sprayers to restore traffic to the upgraded node.

        c. If you are using the Web server plug-in for load balancing, perform the following steps to restore traffic to the upgraded node:


          i. If you previously disabled automatic propagation of the Web server(s), re-enable it now using the Deployment Manager administration console by going to Servers>Web Servers>web_server_name>Plug-in Properties and checking Automatically propagate plug-in configuration file .

          ii. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members to obtain a view of the collection of cluster members.

          iii. Locate the cluster member you upgraded and change the value in the Configured weight column back to the original value.

          iv. Click Update to apply the change.

          v. If you are not using automatic generation and propagation for the Web server plug-in, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.



      NOTE: Do not attempt to upgrade additional nodes until after completing Step 3 (primary node). The update on the primary node must be performed and completed first; then upgrade any additional nodes. Additional node upgrades can be performed sequentially or in parallel. Update the additional nodes in accordance with the instructions below.

    4. Perform the following steps to upgrade WebSphere Portal on each additional node after completing the upgrade on the primary node:

      a. Only Required if following the 24x7 single cluster upgrade: Stop IP traffic to the node you are upgrading:
      • If you are using Sysplex Distributor, then
        VARY TCPIP,,SYSPLEX,QUIesce,JOBNAME=<WP_Controler> or
        VARY TCPIP,,SYSPLEX,QUIesce,POrt=<portnum>.
      • If you are using IP sprayers for load balancing, reconfigure the IP sprayers to stop routing new requests to the node.
      • If you are using the Web server plug-in for load balancing, perform the following steps to stop traffic to the node:

        i. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members to obtain a view of the collection of cluster members.

        ii. Locate the cluster member you are upgrading and change the value in the Configured weight column to zero. NOTE: Record the previous value to restore the setting when the upgrade is complete.

        iii. Click Update to apply the change.

        iv. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.

        v. Note that the web server plug-in will check periodically for configuration updates based on the value of Refresh Configuration Interval property for the Web server plug-in (default value is 60 seconds). You can check this value on the Deployment Manager administrative console by selecting Servers>Web Servers>web_server_name>Plug-in Properties.


      b. Perform the following steps to run the installation program:
        1. Check the status of all active application servers and stop any active application servers on the node.
        2. If you have installed WebSphere Portal or WCM APAR interim fixes, use the Portal Update Installer to uninstall them before applying the 6.1.0.4 PTFs in SMP/E.
        3. If the node agent on the additional node is not started, start it.
        4. Apply the PTFs for the 6.1.0.4 cumulative fix pack using SMP/E. Make sure your additional portal system is mounted with the newly maintained 6104 product HFS/zFS.
        5. Run './applyPTF.sh' from the <AppServer_root>/bin directory with the following user ID requirements:

          The user ID needs to have either UID = 0 or all of the following five UNIXPRIV class profile privileges assigned to it:

          CONTROL access to SUPERUSER.FILESYS
          UPDATE access to SUPERUSER.FILESYS.MOUNT
          READ access to SUPERUSER.FILESYS.CHOWN
          READ access to SUPERUSER.FILESYS.CHANGEPERMS
          READ access to SUPERUSER.FILESYS.PFSCTL

          The user ID needs to be a member of the WebSphere Configuration Group (WSCFG1).

          Instead of running the applyPTF.sh script with a user ID that meets both of the requirements mentioned above, you can run the script with WSADMIN, if the WSADMIN user ID was created in your installation and owns all WebSphere product-related configuration file systems (including those that are not part of WebSphere Portal). In such an environment, make sure not to intermix the usage of the WSADMIN user ID and other user IDs when making file system updates.
        6. Run the ./ConfigEngine.sh CONFIG-WP-PTF-6104 command from the <wp_profile_root>/ConfigEngine/ directory with a user ID that has sufficient authority to install Portal. If DbSafeMode (see step 1.e above) is not set to true, the user ID must also have JDBC access settings in its profile.
      c. After the fix pack is installed, check the status of the current node in the Deployment Manager administrative console. Perform the following steps:
      • In the Deployment Manager administrative console, click System Administration>Nodes.

        i. If the current node has a status of Not Synchronized or Unknown, ensure that the node agent is running on the node, then click Synchronize and wait for the synchronization to complete. The end of synchronization is indicated by the message "BBOO0222I: ADMS0003I: The configuration synchronization completed" in the system log.

        ii. Wait at least 20 minutes before performing the next step to ensure that the node agent EAR expansion process completes.

        iii. Restart the WebSphere_Portal server on the currently updated node.


      d. Verify that your system is operational by entering the server's URL in a browser and logging in to browse the content.

      e. If you upgraded from a prior release with additional interim fixes installed, but uninstalled them in step 4.b.2 above, you must reinstall interim fixes that were not integrated into the version 6.1.0.4 installation. Before reinstalling an interim fix, go to the WebSphere Portal product support page to see if there is a newer version of the interim fix because these are often specific to a version and release of the product. Search on the APAR number to find more information.

      f. Install any APARs that must be applied. Refer to the Recommended fixes and updates for WebSphere Portal and Web Content Management page.

      g. Only Required if following the 24x7 single cluster upgrade: Restore IP traffic to the node you upgraded:

      • If you are using Sysplex Distributor, then
        VARY TCPIP,,SYSPLEX,RESUME,JOBNAME=<WP_Controler> or
        VARY TCPIP,,SYSPLEX,RESUME,POrt=<portnum>.
      • If you are using IP sprayers for load balancing, reconfigure the IP sprayers to restore traffic to the upgraded node.

      • If you are using the Web server plug-in for load balancing, perform the following steps to restore traffic to the upgraded node:

        i. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members to obtain a view of the collection of cluster members.

        ii. Locate the cluster member you upgraded and change the value in the Configured weight column back to the original value.

        iii. Click Update to apply the change.

        iv. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.



    5. Perform the following post-cluster installation upgrade steps:

      a. Re-enable automatic synchronization on all nodes in the cluster if you disabled it earlier.
        1. In the administrative console for the deployment manager, select System Administration > Node agents in the navigation tree.
        2. Click nodeagent for the required node.
        3. Click File Synchronization Service.
        4. Check the Automatic Synchronization check box on the File Synchronization Service page to enable the automatic synchronization feature and then click OK.
        5. Repeat these steps for all remaining nodes.
        6. Click Save to save the configuration changes to the master repository.
        7. Select System Administration > Nodes in the navigation tree
        8. Select all nodes that are not synchronized, and click on Synchronize
        9. Select System Administration > Node Agents in the navigation tree
        10. Select all node agents where automatic synchronization has been re-enabled and click Restart
      b. Perform the following steps if you are using IBM Workplace Web Content Management and you created content on the release you upgraded from:

        i. Redeploy your customization, including JSPs, to the Web Content Management enterprise application and the local rendering portlet.

        ii. Run ./ConfigEngine.sh update-properties on the primary node.

        iii. Restart all Portal Servers in the cluster.


      c. Optional: The WebSphere Portal 6.1.0.4 fix pack does not update any of the business portlets or Web Clipping portlet as these are served from the IBM WebSphere Portal Business Solutions catalog. If this fix pack is updating a fresh installation of WebSphere Portal, you should download the latest available portlets and portlet applications from the Portal Catalog. If you already have the version of the business portlets or Web Clipping portlet you need or if you are not using these functions at all, no additional steps are necessary.

      d. If you disabled the automatic database index application before the fix pack installation, restore the following property to its original value: edit wkplc_dbtype.properties and set DbSafeMode=false. You must also manually apply all database changes as explained here: http://www.ibm.com/support/docview.wss?uid=swg27018892.

      e. Optional: When using the Web Content Management JSR 286 portlet and using the default Portal theme for Web Content Management Rendering, see the following document for improving the rendering performance: WebSphere Portal 6.1.0.4 Base Tag change.

      f. If upgrading from WebSphere Portal 6.1.5 to 6.1.5.1 and using the share pages feature or the creation of templates in mashups: With WebSphere Portal 6.1.5.1, the default access control behavior was changed to prevent possible security exposures from malicious users. With the changed behavior, certain functions in WebSphere Portal can no longer be used by non-administrative users. See the following document for details on those changes and how to enable the functionality again for certain users: WebSphere Portal Access Control default behavior changes to prevent security exposure.

      g. Optional: If you are using Web Content Management, you can apply indexes to the JCR database that will help with performance: Run the JCR database schema migration to update the database indexes on the primary node (there is no need to run the task on the secondary nodes). Run the following task, replacing <previous version> with 6.1.0.0, 6.1.0.1, 6.1.0.2, or 6.1.0.3, depending on which release you are upgrading from. If you are upgrading from 6.1.5, use 6.1.0.3 for <previous version>. A filled-in example follows the command:

      • ./ConfigEngine.sh upgrade-jcr-database-601x-to-6100 -DPreviousPortalVersion=<previous version> -DWasPassword=<WasPassword> -Djcr.DbPassword=<jcrdb password>
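      For example, when upgrading from 6.1.0.3 (the password values shown are placeholders only):

      • ./ConfigEngine.sh upgrade-jcr-database-601x-to-6100 -DPreviousPortalVersion=6.1.0.3 -DWasPassword=myWasPwd -Djcr.DbPassword=myJcrDbPwd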

      h. After upgrading to 6.1.0.4, refer to your database product documentation and ensure that a reorg/runstats has been run so that proper database performance is maintained.




    Back to top




    Steps for uninstalling Fix Pack 6.1.0.4 (single-cluster procedure)

    Perform the following steps to uninstall a fix pack from a clustered environment:

    NOTE: If the WebSphere Portal feature pack 6.1.5.x has been enabled after a fix pack upgrade, the feature pack must be uninstalled before the fix pack can be uninstalled. For example: if the WebSphere Portal feature pack 6.1.5.1 is enabled after upgrading to 6.1.0.4, then 6.1.5.1 must be uninstalled before uninstalling 6.1.0.4. See 6.1.5.1: Readme for IBM WebSphere Portal for z/OS 6.1.5 fix pack 1 (6.1.5.1) - cluster for details on uninstalling the feature pack.

    NOTE: Changing the server context root after upgrading is an unsupported uninstall path. To uninstall after changing the context root, you must first change the server context root back to the values of the previous version.

    An example of an unsupported uninstall path is: Install Portal --> upgrade Portal --> change server context root --> uninstall upgrade.
    An example of a supported uninstall path is: Install Portal --> change server context root --> upgrade Portal --> uninstall upgrade.

    NOTE: Configuring Portal Server from a stand-alone environment to a cluster environment after upgrading is an unsupported uninstall path.


    NOTE: When instructed to stop or start the WebSphere_Portal server, stop or start all server instances on the node.

    1. Perform the following steps before you uninstall the Version 6.1.0.4 fix pack:

    NOTE: The steps listed in this point must be performed on all nodes.


      a. Ensure that you have enough free space allocated for your operating system in the appropriate directories. See supported software requirements for information.

      b. If you installed any WebSphere Portal or WCM APARs on the current release of Portal you must uninstall those prior to uninstalling the fix pack.

      c. Verify that the information in the wkplc.properties, wkplc_dbtype.properties, and wkplc_comp.properties files is correct on each node in the cluster.

      • Enter a value for the PortalAdminPwd and WasPassword parameters in the wkplc.properties file.
      • Ensure that the value of the XmlAccessPort property in wkplc_comp.properties matches the value of the port used for HTTP connections to the WebSphere Portal server.
      • Ensure that the DbUser (database user) and DbPassword (database password) properties are defined correctly for all database domains in the wkplc_comp.properties file.

      d. The WebSphere Portal 6.1.0.4 fix pack performs index (schema) modifications to the Portal databases. See the following documentation for details: http://www.ibm.com/support/docview.wss?uid=swg27018892. If you are using DB2 on z/OS, it is possible to disable the automatic application and instead apply the database changes manually. To do so, make sure the following property is set on the primary node in wkplc_dbtype.properties before starting the fix pack uninstallation: DbSafeMode=true. If DbSafeMode is not set to true, ensure that the DbUser in the wkplc_comp.properties file for all domains on DB2 has explicit DBADM authority.

    2. Perform the following steps to ensure that automatic synchronization is disabled on all nodes from which the fix pack will be uninstalled, and stop the node agents on all Portal nodes in the cell except the primary node.
      1. In the administrative console for the deployment manager, select System Administration > Node agents in the navigation tree.
      2. Click nodeagent for the required node.
      3. Click File Synchronization Service.
      4. Uncheck the Automatic Synchronization check box on the File Synchronization Service page to disable the automatic synchronization feature and then click OK.
      5. Repeat these steps for all other nodes to be uninstalled.
      6. Click Save to save the configuration changes to the master repository.
      7. Select System Administration > Nodes in the navigation tree
      8. Select all nodes that are not synchronized, and click on Synchronize
      9. Select System Administration > Node agents in the navigation tree
      10. For the primary node, select the nodeagent and click Restart
      11. Select the nodeagents of all additional nodes and click Stop


      3. Perform the following steps to uninstall the fix pack on the primary node:

        a. Only Required if following the 24x7 single cluster uninstall: Stop IP traffic to the node where you are uninstalling:

          If you are using Sysplex Distributor, then

          VARY TCPIP,,SYSPLEX,QUIesce,JOBNAME=<WP_Controler> or

          VARY TCPIP,,SYSPLEX,QUIesce,POrt=<portnum>.


        If you are using IP sprayers for load balancing, reconfigure the IP sprayers to stop routing new requests to the node.

        If you are using the Web server plug-in for load balancing, perform the following steps to stop traffic to the node:


          i. In the Deployment Manager administrative console, click Servers>Clusters> cluster_name >Cluster members to obtain a view of the collection of cluster members.

          ii. Locate the cluster member you are upgrading and change the value in the Configured weight column to zero. NOTE: Record the previous value to restore the setting when the upgrade is complete.

          iii. Click Update to apply the change.

          iv. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.

          v. Note that the web server plug-in will check periodically for configuration updates based on the value of Refresh Configuration Interval property for the Web server plug-in (default value is 60 seconds). You can check this value on the Deployment Manager administrative console by selecting Servers>Web Servers>web_server_name>Plug-in Properties.

          vi. If automatic propagation of the plug-in configuration file is enabled on the web server(s) disable it from the Deployment Manager administrative console by going to Servers>Web Servers>web_server_name>Plug-in Properties and unchecking Automatically propagate plug-in configuration file


        b. Perform the following steps to run the uninstallation program:
          1. Check the status of all active application servers and stop any active application servers.
          2. If you have installed WebSphere Portal or WCM APAR interim fixes, use the Portal Update Installer to uninstall them before removing the 6.1.0.4 PTFs in SMP/E.
          3. From the <AppServer_root>/bin directory, run the ./backoutPTF.sh Portals <wp610x_01> command, where <wp610x_01> is wp6101_01 for 6.1.0.1, wp6102_01 for 6.1.0.2 or wp6103_01 for 6.1.0.3, to uninstall 6.1.0.4 service level and return to the prior service level.

            The user ID used to execute the command needs to have either UID = 0 or all of the following five UNIXPRIV class profile privileges assigned to it:

            CONTROL access to SUPERUSER.FILESYS
            UPDATE access to SUPERUSER.FILESYS.MOUNT
            READ access to SUPERUSER.FILESYS.CHOWN
            READ access to SUPERUSER.FILESYS.CHANGEPERMS
            READ access to SUPERUSER.FILESYS.PFSCTL

            The user ID needs to be a member of the WebSphere Configuration Group (WSCFG1).

            Instead of running the backoutPTF.sh script with a user ID that meets both of the requirements mentioned above, you can run the script with WSADMIN, if the WSADMIN user ID was created in your installation and owns all WebSphere product-related configuration file systems (including those that are not part of WebSphere Portal). In such an environment, make sure not to intermix the usage of the WSADMIN user ID and other user IDs when making file system updates.
          4. Do a SMP/E Restore of the 6.1.0.4 fix pack PTFs to remove the fix pack. Make sure your primary portal system is mounted with the maintained 610x product HFS/zFS.
          5. Run the ./ConfigEngine.sh UNCONFIG-WP-PTF-6104 command from the <wp_profile_root>/ConfigEngine/ directory with a user ID that has sufficient authority to uninstall Portal. A minimal shell sketch of steps 3 and 5 follows this list.
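          A minimal shell sketch of steps 3 and 5 above, assuming the cluster was at the 6.1.0.3 service level before the fix pack was applied; perform the SMP/E RESTORE of the PTFs (step 4) between the two commands:

            cd <AppServer_root>/bin
            ./backoutPTF.sh Portals wp6103_01
            cd <wp_profile_root>/ConfigEngine
            ./ConfigEngine.sh UNCONFIG-WP-PTF-6104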
        c. If you previously customized any configuration files in the PortalServer_root/config directory, check to see if uninstalling the fix pack affected those files by restoring a version of the files that was saved when the cumulative fix was originally installed. If it did affect the files, you must perform the same customization on the restored version of each file.

        d. After the fix pack is uninstalled, check the status of the node where you are uninstalling in the Deployment Manager administrative console. Perform the following steps:

        • In the Deployment Manager administrative console, click System Administration>Nodes.
        • If the primary node has a status of Not Synchronized or Unknown, ensure that the node agent is running on the node, then click Synchronize and wait for the synchronization to complete. The end of synchronization is indicated by the message "BBOO0222I: ADMS0003I: The configuration synchronization completed" in the system log.
        • Wait at least 20 minutes before performing the next step to ensure that the node agent EAR expansion process completes.
        • Restart the WebSphere_Portal server on the primary node.

        e. Run the following task to activate the portlets:
        • ./ConfigEngine.sh activate-portlets -DPortalAdminPwd=password -DWasPassword=password

        f. Verify that your system is operational by entering the server's URL in a browser and logging in to browse the content.

        g. Only Required if following the 24x7 single cluster uninstall: Restore IP traffic to the node where you uninstalled:

        • If you are using Sysplex Distributor, then
          VARY TCPIP,,SYSPLEX,RESUME,JOBNAME=<WP_Controler> or
          VARY TCPIP,,SYSPLEX,RESUME,POrt=<portnum>.
        • If you are using the IP sprayers for load balancing, reconfigure the IP sprayers to restore traffic to the upgraded node.
        • If you are using the Web server plug-in for load balancing, perform the following steps to restore traffic to the upgraded node:

          i. If you previously disabled automatic propagation on the Web server(s), re-enable it now using the Deployment Manager administration console by going to Servers>Web Servers>web_server_name>Plug-in Properties and checking Automatically propagate plug-in configuration file.

          ii. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members to obtain a view of the collection of cluster members.

          iii. Locate the cluster member you upgraded and change the value in the Configured weight column back to the original value.

          iv. Click Update to apply the change.

          v. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.

      4. Perform the following steps to uninstall the fix pack on the additional node; NOTE: Repeat this step for each additional node:

        a. Only Required if following the 24x7 single cluster uninstall: Stop IP traffic to the node where you are uninstalling:
        • If you are using Sysplex Distributor, then

          VARY TCPIP,,SYSPLEX,QUIesce,JOBNAME=<WP_Controler> or

          VARY TCPIP,,SYSPLEX,QUIesce,POrt=<portnum>.

        • If you are using IP sprayers for load balancing, reconfigure the IP sprayers to stop routing new requests to the node.
        • If you are using the Web server plug-in for load balancing, perform the following steps to stop traffic to the node:

          i. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members to obtain a view of the collection of cluster members.

          ii. Locate the cluster member you are upgrading and change the value in the Configured weight column to zero. NOTE: Record the previous value to restore the setting when the upgrade is complete.

          iii. Click Update to apply the change.

          iv. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.

          v. Note that the web server plug-in will check periodically for configuration updates based on the value of the Refresh Configuration Interval property for the Web server plug-in (default value is 60 seconds). You can check this value in the Deployment Manager administrative console by selecting Servers>Web Servers>web_server_name>Plug-in Properties.


        b. If the node agent on the additional node is not started, start it.

        c. Perform the following steps to run the uninstallation program:

          1. Check the status of all active application servers and stop any active application servers on the node.
          2. If you have installed WebSphere Portal or WCM APAR interim fixes, use the Portal Update Installer to uninstall them before removing the 6.1.0.4 PTFs in SMP/E.
          3. From the <AppServer_root>/bin directory, run the ./backoutPTF.sh Portals <wp610x_01> command, where <wp610x_01> is wp6101_01 for 6.1.0.1, wp6102_01 for 6.1.0.2 or wp6103_01 for 6.1.0.3, to uninstall 6.1.0.4 service level and return to the prior service level.

            The user ID used to execute the command needs to have either UID = 0 or all of the following five UNIXPRIV class profile privileges assigned to it:

            CONTROL access to SUPERUSER.FILESYS
            UPDATE access to SUPERUSER.FILESYS.MOUNT
            READ access to SUPERUSER.FILESYS.CHOWN
            READ access to SUPERUSER.FILESYS.CHANGEPERMS
            READ access to SUPERUSER.FILESYS.PFSCTL

            The user ID needs to be a member of the WebSphere Configuration Group (WSCFG1).

            Instead of running the backoutPTF.sh script with a user ID that meets both of the requirements mentioned above, you can run the script with WSADMIN, if the WSADMIN user ID was created in your installation and owns all WebSphere product-related configuration file systems (including those that are not part of WebSphere Portal). In such an environment, make sure not to intermix the usage of the WSADMIN user ID and other user IDs when making file system updates.
          4. Do a SMP/E Restore of the 6.1.0.4 fix pack PTFs to remove the fix pack. Make sure your additional portal system is mounted with the maintained 610x product HFS/zFS.
          5. Run the ./ConfigEngine.sh UNCONFIG-WP-PTF-6104 command from the <wp_profile_root>/ConfigEngine/ directory with a user ID that has sufficient authority to uninstall Portal.

        d. If you previously customized any configuration files in the PortalServer_root/config directory, check to see if uninstalling the fix pack affected those files by restoring a version of the files that was saved when the cumulative fix was originally installed. If it did affect the files, you must perform the same customization on the restored version of each file.

        e. After the fix pack is uninstalled, check the status of the node where you are uninstalling in the Deployment Manager administrative console. Perform the following steps:

        • In the Deployment Manager administrative console, click System Administration>Nodes.
        • If the current node has a status of Not Synchronized or Unknown, ensure that the node agent is running on the node, then click Synchronize and wait for the synchronization to complete. The end of synchronization is indicated by the message "BBOO0222I: ADMS0003I: The configuration synchronization completed" in the system log.
        • Wait at least 20 minutes before performing the next step to ensure that the node agent EAR expansion process completes.
        • Restart the WebSphere_Portal server on the additional node.

        f. Verify that your system is operational by entering the server's URL in a browser and logging in to browse the content.

        g. Only Required if following the 24x7 single cluster uninstall: Restore IP traffic to the node where you uninstalled:

        • If you are using Sysplex Distributor, then
          VARY TCPIP,,SYSPLEX,RESUME,JOBNAME=<WP_Controler> or
          VARY TCPIP,,SYSPLEX,RESUME,POrt=<portnum>.
        • If you are using the IP sprayers for load balancing, reconfigure the IP sprayers to restore traffic to the upgraded node.
        • If you are using the Web server plug-in for load balancing, perform the following steps to restore traffic to the upgraded node:

          i. In the Deployment Manager administrative console, click Servers > Clusters > cluster_name > Cluster members to obtain a view of the collection of cluster members.

          ii. Locate the cluster member you upgraded and change the value in the Configured weight column back to the original value.

          iii. Click Update to apply the change.

          iv. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.

      5. Perform the following post-uninstallation steps:

        a. Re-enable automatic synchronization on all nodes in the cluster if you disabled it earlier.
          1. In the administrative console for the deployment manager, select System Administration > Node agents in the navigation tree.
          2. Click nodeagent for the required node.
          3. Click File Synchronization Service.
          4. Check the Automatic Synchronization check box on the File Synchronization Service page to enable the automatic synchronization feature and then click OK.
          5. Repeat these steps for all remaining nodes.
          6. Click Save to save the configuration changes to the master repository.
          7. Select System Administration > Nodes in the navigation tree
          8. Select all nodes that are not synchronized, and click on Synchronize
          9. Select System Administration > Node Agents in the navigation tree
          10. Select all node agents where automatic synchronization has been re-enabled and click Restart
        b. Perform the following steps if you are using IBM Workplace Web Content Management and you created content on the release you upgraded from:
        1. Redeploy your customization, including JSPs, to the Web Content Management enterprise application and the local rendering portlet.
        c. If you disabled database updates during the uninstall of 6.1.0.4 by setting DbSafeMode=true, restore the property to its original value (edit wkplc_dbtype.properties and set DbSafeMode=false) and manually apply the required database changes as explained here: http://www.ibm.com/support/docview.wss?uid=swg27018892.


      Back to top




      Known issues

      For a list of known runtime issues for WebSphere Portal 6.1.0.4/6.1.5.1, see IBM WebSphere Portal 6.1.0.4/6.1.5.1 Known Runtime Issues for z/OS for details.

      For a list of known install and uninstall issues for WebSphere Portal 6.1.0.4/6.1.5.1, see IBM WebSphere Portal 6.1.0.4/6.1.5.1 Known Install/Uninstall Issues for z/OS for details.

      Back to top




      Change History
      Initial Release

      Back to top



      Additional information

      You can find additional information on the WebSphere Portal support page.

      Back to top



      Trademarks and service marks

      For trademark attribution, visit the IBM Terms of Use Web site.

      Back to top


      Document information

      More support for: WebSphere Portal, Installation
      Software version: 6.1.0.4, 6.1.5.1
      Operating system(s): z/OS
      Software edition: Enable
      Reference #: 7017950
      Modified date: 2010-06-21
