6.0.1.7: Readme for IBM WebSphere Portal Enable for z/OS 6.0.1 fix pack 7 (6.0.1.7) - cluster

Product readme


Abstract

IBM WebSphere Portal for z/OS fix pack installation instructions - cluster

Content

What is new with Fix Pack 6.0.1.7
This fix pack updates the IBM WebSphere Portal Enable for z/OS 6.0 (6.0.0.1), 6.0.1.3, 6.0.1.4, 6.0.1.5, 6.0.1.6, and all other intermediate levels to the 6.0.1.7 service release level.

The following items are included in this fix pack:





About Fix Pack 6.0.1.7


Installing this fix pack on a Version 6.0 installation raises the fix level of your product to Version 6.0.1.7.

Important: WebSphere Portal Enable for z/OS Version 6.0.1.7 can be used for a fresh installation. To install WebSphere Portal Enable for z/OS Version 6.0.1.7 from scratch, put the Portal Version 6.0 GA code into SMP/E, apply the generally available PTFs (UA52250, UA52251, UA52252, UA52258, UA52259, UA52260, UA52261, UA52287, UA52304, UA52305, UA52306, UA52307, UA52308, UA52309), and then run the Portal installation and configuration as described in the WebSphere Portal Information Center.

See the installation instructions in the Steps for installing Fix Pack 6.0.1.7 section for information.





Space requirements


The WebSphere Portal Enable for z/OS fix pack Version 6.0.1.7 requires 32000 tracks (2134 cylinders) in the Portal installation file system and 30000 tracks (2000 cylinders) in the SMPPTS data set.
The PTFs are listed below:
    UA52250
    UA52251
    UA52252
    UA52258
    UA52259
    UA52260
    UA52261
    UA52287
    UA52304
    UA52305
    UA52306
    UA52307
    UA52308
    UA52309

Verify that the free space is available before beginning the installation.
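As a quick sanity check, the track figures above can be converted to 3390 cylinders (15 tracks per cylinder, rounding up) with a few lines of shell:

```shell
# 3390 DASD geometry: 15 tracks per cylinder; round up when converting.
to_cyl() {
  echo $(( ($1 + 14) / 15 ))
}

echo "portal filesystem: $(to_cyl 32000) cylinders"   # 2134
echo "SMPPTS dataset:    $(to_cyl 30000) cylinders"   # 2000
```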





Cluster upgrade planning

Before you begin:

Familiarize yourself with the Portal Upgrade Best Practices available from IBM Remote Technical Support for WebSphere Portal.

  • Portal Upgrades: Best Practices


  • Review the following requirements and limitations for performing product upgrades while maintaining 24x7 availability (NOTE: Ensure that you understand this information before upgrading your cluster.):
    • There are two approaches to applying maintenance in a 24x7 high availability operation mode: using a single cluster of WebSphere Portal or using multiple clusters (at least two). We recommend the multiple cluster approach because of the high complexity and limitations of the single cluster approach. See the following document for details:
      Multiple Cluster Setup with WebSphere Portal
    • The following assumptions and limitations apply to the single cluster approach discussed here:
      • Assumptions for maintaining 24x7 operation during the upgrade process
      • If you want to preserve current user sessions during the upgrade process, make sure that WebSphere Application Server distributed session support is enabled to recover user session information when a cluster node is stopped for maintenance. Alternatively, use monitoring to determine when all (or most) user sessions on a cluster node have completed before stopping the cluster node for upgrade to minimize the disruption to existing user sessions.
      • Load balancing must be enabled in the clustered environment, and multiple HTTP servers must be available to provide Web server failover support.
      • The cluster has at least two horizontal cluster members.
      • Limitations on 24x7 maintenance
      • If you have not implemented horizontal scaling and have implemented only vertical scaling in your environment such that all cluster members reside on the same node, the fix pack installation process will result in a temporary outage for your end users due to a required restart. In this case, you will be unable to upgrade while maintaining 24x7 availability.
      • If you have a single remote Web server in your environment, 24x7 availability is not possible during the maintenance of that Web server. If the Web server is installed on the same machine as one of the cluster nodes, you might be required to stop the Web server while applying corrective service to the local WebSphere Application Server installation.
      • When installing the fix pack in a clustered environment, the portlets are deployed only when installing the fix pack on the primary node. The fix pack installation on secondary nodes simply synchronizes the node with the deployment manager to receive the updated portlets. During the portlet deployment on the primary node, the database is updated with the new portlet configuration. Because the database is shared between all nodes, the updated configuration is visible to secondary nodes before those nodes receive the updated portlet binary files. The new portlet configuration might not be compatible with the previous portlet binary files, so in a 24x7 production environment, users of an affected portlet may encounter problems during this window. Therefore, test your portlets before upgrading the production system in a 24x7 environment.
      • To maintain 24x7 operations in a clustered environment, you must stop WebSphere Portal on one node at a time and upgrade it. During the upgrade of the primary node, you must also manually stop the node agents on all other cluster nodes that continue to service user requests. Failure to do so may result in portlets being shown as unavailable on nodes where the node agent is running.
      • When uninstalling the fix pack in a clustered environment, the portlets are redeployed only when uninstalling the fix pack on the primary node. The fix pack uninstall on secondary nodes simply synchronizes the node with the deployment manager to receive the updated portlets. During the portlet redeployment on the primary node, the database is updated with the portlet configuration, which becomes visible to secondary nodes before they receive the updated binary files, since all nodes share the same database. Because the previous portlet configuration might not be compatible with the new portlet binary files, test your portlets before uninstalling on the production system in a 24x7 environment.






    Steps for installing Fix Pack 6.0.1.7

    Before you begin:

    Make sure you read the earlier section in this readme, Cluster upgrade planning. Also, familiarize yourself with the Portal Upgrade Best Practices available from IBM Remote Technical Support for WebSphere Portal.

  • Portal Upgrades: Best Practices
  • For instructions on how to validate your upgrade environment prior to the upgrade, see the instructions for running the Health Checker tool for WebSphere Portal at:

  • Health Checker tool for WebSphere Portal V6.0


  • Perform the following steps to upgrade to Version 6.0.1.7:
    1. Perform the following steps before upgrading to Version 6.0.1.7:
      1. Review the supported hardware and software requirements for this cumulative fix. Upgrade all hardware and software if necessary before applying this cumulative fix.
      2. Verify that the information in the wpconfig.properties, wpconfig_dbdomain.properties, wpconfig_dbtype.properties, and wpconfig_sourceDB.properties files is correct on each node in the cluster.
        • Enter a value for the PortalAdminPwd and WasPassword parameters in the wpconfig.properties file.
        • In the wpconfig_dbdomain.properties file, set the passwords for the database entries.
        • Set WpsHostPort and XmlAccessPort to the same value in the wpconfig.properties file. In a clustered environment, make sure they are the same value on all nodes. NOTE: If you are using Internet Protocol Version 6 (IPv6) and you have specified the WpsHostName property as an IP address, normalize the address by placing square brackets around it, as follows: WpsHostName=[my.IPV6.IP.address].
        • If you want the fix pack to update the screens in the wps.ear file, add the "CopyWpsEarScreens=true" line to the wpconfig.properties file.
        • If installing an empty portal, include the following line in the wpconfig.properties file: EmptyPortal=true.
        • If using a database other than the default, grant permissions to databases within the framework by setting the DbUser (database user ID) and DbPassword (database password) parameters in the wpconfig_dbdomain.properties file. On the secondary node, also provide the name of the remote database server.
      3. If you restrict access to Portal to SSL connections, make sure that the property sslEnabledForJcr=true is set in the wpconfig.properties file.
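A small pre-flight check of the properties discussed above can catch empty values before the upgrade starts. This is a sketch only: it builds a sample wpconfig.properties in a temporary file for illustration (the property values shown are placeholders); in practice, point WP_PROPS at the real file on each node.

```shell
# Sketch: verify required entries are present and non-empty, and that
# WpsHostPort and XmlAccessPort match. The sample file below is for
# illustration only; set WP_PROPS to your real wpconfig.properties.
WP_PROPS=$(mktemp)
cat > "$WP_PROPS" <<'EOF'
PortalAdminPwd=secret
WasPassword=secret
WpsHostPort=10040
XmlAccessPort=10040
EOF

for key in PortalAdminPwd WasPassword WpsHostPort XmlAccessPort; do
  val=$(grep "^${key}=" "$WP_PROPS" | cut -d= -f2)
  [ -n "$val" ] && echo "OK: $key" || echo "MISSING: $key"
done

# WpsHostPort and XmlAccessPort must be the same value:
p1=$(grep '^WpsHostPort=' "$WP_PROPS" | cut -d= -f2)
p2=$(grep '^XmlAccessPort=' "$WP_PROPS" | cut -d= -f2)
[ "$p1" = "$p2" ] && echo "ports match" || echo "ports differ"
rm -f "$WP_PROPS"
```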
      4. Use the link provided below to order the PTFs (UA52250, UA52251, UA52252, UA52258, UA52259, UA52260, UA52261, UA52287, UA52304, UA52305, UA52306, UA52307, UA52308, UA52309) for the 6.0.1.7 cumulative fix pack:

        https://www14.software.ibm.com/webapp/ShopzSeries/ShopzSeries.jsp

      5. If you plan to configure Computer Associates eTrust SiteMinder as your external security manager to handle authorization and authentication, the XML configuration interface may not be able to access WebSphere Portal through eTrust SiteMinder. To enable the XML configuration interface to access WebSphere Portal, use eTrust SiteMinder to define the configuration URL (/wps/config) as unprotected. Refer to the eTrust SiteMinder documentation for specific instructions. After the configuration URL is defined as unprotected, only WebSphere Portal enforces access control to this URL. Other resources, such as the /wps/myportal URL, are still protected by eTrust SiteMinder. If you have already set up eTrust SiteMinder for external authorization and you want to use XML Configuration Interface (xmlaccess), make sure you have followed the procedure to allow for xmlaccess execution.
      6. Required when upgrading from 6.0, 6.0.0.1, 6.0.1, 6.0.1.1, but not required when upgrading from 6.0.1.3, or later:
        • Preserve the default search collections and any search collections that you created in the previous version because the index structure of Search is not backward compatible between versions. See Migrating your search collections between versions for information on exporting and importing search collections.
      7. Check and record the value of the current Portal service level, which you will need if you decide to back out (uninstall) the 6.0.1.7 PTFs in the future.
        • Display the content of the properties file <was_root>/properties/service/product/Portals/service-level.properties.
        • The last entry contains the current service level value. Record the <service level> value from the last entry, which has the format <service level>={defect},{defect2}; for example, 'wp6016_01'.
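The check in step 7 can also be scripted. The sketch below uses a stand-in copy of service-level.properties that mirrors the <service level>={defect},{defect2} layout described above (the entries and defect numbers are illustrative only); in practice, read <was_root>/properties/service/product/Portals/service-level.properties.

```shell
# Sketch: extract the current service level (the key of the last entry).
# Stand-in file; entries and defect numbers are illustrative only.
SVC_PROPS=$(mktemp)
cat > "$SVC_PROPS" <<'EOF'
wp6015_01=PK00001,PK00002
wp6016_01=PK00003,PK00004
EOF

# The last entry's key is the current service level.
current=$(tail -1 "$SVC_PROPS" | cut -d= -f1)
echo "current service level: $current"
rm -f "$SVC_PROPS"
```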
    2. Perform the following steps to disable automatic synchronization on all nodes:
      1. In the Deployment Manager administrative console, click System Administration>Node Agents.
      2. Click nodeagent on the required node.
      3. Click File Synchronization Service on the Configuration tab.
      4. Uncheck the Automatic Synchronization check box to disable the automatic synchronization feature and then click OK.
      5. Repeat these steps for all other nodes to be upgraded.
      6. Click Save to save the configuration changes to the master repository.
      7. Select System Administration > Nodes in the navigation tree.
      8. Select all nodes that are not synchronized, and click Synchronize.
      9. Select System Administration > Node agents in the navigation tree.
      10. For the primary node, select the nodeagent and click Restart.
      11. Select the nodeagents of all secondary nodes and click Stop.

        NOTE: Do not combine steps 3 and 4! The update must be performed sequentially, not in parallel, on the server nodes in the cluster. Update the primary node first, then the secondary node, and then any subsequent nodes, one at a time, in accordance with the instructions below.
    3. Perform the following steps to upgrade WebSphere Portal on the primary node:
      1. Stop IP traffic to the node you are upgrading:
        • If you are using the IP sprayers for load balancing, reconfigure the IP sprayers to stop routing new requests to the node.
        • If you are using the Web server plug-in for load balancing, perform the following steps to stop traffic to the node:
          1. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members to obtain a view of the collection of cluster members.
          2. Locate the cluster member you are upgrading and change the value in the Configured weight column to zero. NOTE: Record the previous value to restore the setting when the upgrade is complete.
          3. Click Apply to apply the change.
          4. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.
      2. If you have a local Web Server on the node you are upgrading, stop the local Web server.
      3. Perform the following steps to run the installation:
        1. Log on with the WebSphere Administrative user ID, open a command prompt in the <WAS_profile_root>/bin directory and enter the command ./setupCmdLine.sh to set up the Java environment.
        2. Enter the command ./serverStatus.sh -all -user username -password password to check the status of all active application servers; NOTE: If security is not enabled, exclude the -user and -password parameters from the command.
        3. Enter the command ./stopServer.sh servername -user username -password password to stop any active application servers; NOTE: If security is not enabled, exclude the -user and -password parameters from the command.
        4. If you have installed WebSphere Portal or WCM APARs interim fixes, use Portal Update Installer to uninstall them before applying the 6.0.1.7 PTFs. For information on the Portal Update Installer for z/OS see the following link: http://www.ibm.com/support/docview.wss?rs=688&uid=swg21326670
        5. Apply the following fourteen PTFs for the 6.0.1.7 cumulative fix pack in SMP/E:

          UA52250
          UA52251
          UA52252
          UA52258
          UA52259
          UA52260
          UA52261
          UA52287
          UA52304
          UA52305
          UA52306
          UA52307
          UA52308
          UA52309

        6. Verify that Owner and Permissions for the service-level.properties and zPortalsPostInstall.properties files in the was_root/properties/service/product/Portals/ directory are set correctly (User ID requirement: UID=0):
          • The Owner:Group must be set to WSADMIN:WSCFG1; if this is not set correctly, run chown WSADMIN:WSCFG1 *.properties in the was_root/properties/service/product/Portals/ directory, where WSADMIN is the WebSphere Administrative user ID and WSCFG1 is the WebSphere Administrative group.
          • The permission bits must be set for both properties files to rwxrwxr-x (775); if this is not set correctly, run the following commands:

            chmod 775 service-level.properties
            chmod 775 zPortalsPostInstall.properties
        7. Log on to a telnet session with the WebSphere Administrative user ID (for example, WSADMIN) and change to the was_root/bin directory. Run ./applyPTF.sh to apply changes to your configuration file system.
        8. Start the WebSphere_Portal server.
        9. Change to the portal_server_root/config directory and run the ./WPSconfig.sh CONFIG-WP-PTF-6017 command.
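The permission check in step 6 can be done from the shell as well. The sketch below uses stand-in files in a temporary directory; in practice, run the same chmod and ls against was_root/properties/service/product/Portals/.

```shell
# Sketch: set and verify 775 on the two properties files (stand-in copies
# created in a temporary directory for illustration).
dir=$(mktemp -d)
touch "$dir/service-level.properties" "$dir/zPortalsPostInstall.properties"

chmod 775 "$dir"/*.properties

for f in "$dir"/*.properties; do
  # Columns 2-10 of ls -l are the permission bits; expect rwxrwxr-x.
  perms=$(ls -l "$f" | cut -c2-10)
  echo "$(basename "$f"): $perms"
done
rm -rf "$dir"
```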
      4. Verify that the external web server mapping was not lost during the upgrade using the Deployment Manager Administrative Console:
        1. Navigate to Applications > Enterprise Applications.
        2. Click the name of the wps enterprise application.
        3. Click on Map modules to servers.
        4. Select WebSphere Portal Server (wps.war) module.
        5. In the server column in the table, ensure that the web servers and the cluster are still mapped for the wps application.
        6. If Web server entries are not in the Server column, add them back: in the Clusters and Servers list, highlight all the entries currently in the Server column plus the Web server entries, and click Apply. Then save the changes.
      5. If you are running an external Web server, such as IBM HTTP server, and you are using the WebSphere Application Server automatic generation and propagation of the plug-in, then just restart the Web server. If you are not using the automatic generation and propagation, then perform the following steps:
        1. Regenerate the Web server plug-in.
        2. Copy the plugin-cfg.xml file to the Plugin directory.
        3. Restart the Web server.
      6. After the fix pack is installed, check the status of the node you are upgrading in the Deployment Manager administrative console. If the status is Not Synchronized, ensure that the node agent is running on the node and then perform the following steps:
        1. In the Deployment Manager administrative console, click System Administration>Nodes.
        2. For the node with a status of Not Synchronized, click Synchronize.
        3. After the synchronization is complete, wait at least 30 minutes before performing the next step because the ear expander is still running.
      7. Restart WebSphere_Portal on the primary node.
      8. If you stopped a local Web server, restart it now.
      9. Run the ./WPSconfig.sh finalize-portlets-fixpack -DPortalAdminPwd=password task to activate the portlets.
      10. Verify that your system is operational by entering the server's URL in a browser and logging in to browse the content.
      11. Restore IP traffic to the node you upgraded:
        • If you are using the IP sprayers for load balancing, reconfigure the IP sprayers to restore traffic to the upgraded node.
        • If you are using the Web server plug-in for load balancing, perform the following steps to restore traffic to the upgraded node:
          1. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members to obtain a view of the collection of cluster members.
          2. Locate the cluster member you upgraded and change the value in the Configured weight column back to the original value.
          3. Click Apply to apply the change.
          4. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.

            NOTE: Do not upgrade secondary or subsequent nodes until you have completed Step 3 (primary node). The update must be performed sequentially, not in parallel, on the server nodes in the cluster. Update the primary node first, then the secondary node, and then any subsequent nodes, one at a time, in accordance with the instructions below.
    4. Perform the following steps to upgrade WebSphere Portal on each secondary node:
      Note: If you have not done so previously, review the steps in 1.b for the secondary node you are about to upgrade.
      1. Stop IP traffic to the node you are upgrading:
        • If you are using the IP sprayers for load balancing, reconfigure the IP sprayers to stop routing new requests to the node.
        • If you are using the Web server plug-in for load balancing, perform the following steps to stop traffic to the node:
          1. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members to obtain a view of the collection of cluster members.
          2. Locate the cluster member you are upgrading and change the value in the Configured weight column to zero. NOTE: Record the previous value to restore the setting when the upgrade is complete.
          3. Click Apply to apply the change.
          4. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.
      2. If you have a local Web Server on the node you are upgrading, stop the local Web server.
      3. Perform the following steps to run the installation:
        1. Log on with the WebSphere Administrative user ID, open a command prompt in the <WAS_profile_root>/bin directory and enter the command ./setupCmdLine.sh to set up the Java environment.
        2. Enter the ./serverStatus.sh -all -user username -password password command to check the status of all active application servers; NOTE: If security is not enabled, exclude the -user and -password parameters from the command.
        3. Enter the ./stopServer.sh servername -user username -password password command to stop any active application servers; NOTE: If security is not enabled, exclude the -user and -password parameters from the command.
        4. If you have installed WebSphere Portal or WCM APARs interim fixes, use Portal Update Installer to uninstall them before applying the 6.0.1.7 PTFs. For information on the Portal Update Installer for z/OS see the following link: http://www.ibm.com/support/docview.wss?rs=688&uid=swg21326670
        5. If the node agent on the secondary node is not started, start it.
        6. Verify that Owner and Permissions for the service-level.properties and zPortalsPostInstall.properties files in the was_root/properties/service/product/Portals/ directory are set correctly (User ID requirement: UID=0):
          • The Owner:Group must be set to WSADMIN:WSCFG1; if this is not set correctly, run chown WSADMIN:WSCFG1 *.properties in the was_root/properties/service/product/Portals/ directory, where WSADMIN is the WebSphere Administrative user ID and WSCFG1 is the WebSphere Administrative group.
          • The permission bits must be set for both properties files to rwxrwxr-x (775); if this is not set correctly, run the following commands:

            chmod 775 service-level.properties
            chmod 775 zPortalsPostInstall.properties
        7. Log on to a telnet session with the WebSphere Administrative user ID (for example, WSADMIN) and change to the was_root/bin directory. Run ./applyPTF.sh to apply changes to your configuration file system.
        8. Start the WebSphere_Portal server.
        9. Change to the portal_server_root/config directory and run the ./WPSconfig.sh CONFIG-WP-PTF-6017 command.
      4. After the fix pack is installed, check the status of the node you are upgrading in the Deployment Manager administrative console. If the status is Not Synchronized, ensure that the node agent is running on the node and then perform the following steps:
        1. In the Deployment Manager administrative console, click System Administration>Nodes.
        2. For the node with a status of Not Synchronized, click Synchronize.
        3. After the synchronization is complete, wait at least 30 minutes before performing the next step because the ear expander is still running.
      5. Restart the WebSphere_Portal server on the secondary node.
      6. If you stopped a local Web server, restart it now.
      7. Verify that your system is operational by entering the server's URL in a browser and logging in to browse the content.
      8. Restore IP traffic to the node you upgraded:
        • If you are using the IP sprayers for load balancing, reconfigure the IP sprayers to restore traffic to the upgraded node.
        • If you are using the Web server plug-in for load balancing, perform the following steps to restore traffic to the upgraded node:
          1. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members to obtain a view of the collection of cluster members.
          2. Locate the cluster member you upgraded and change the value in the Configured weight column back to the original value.
          3. Click Apply to apply the change.
          4. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.
      9. Perform the following steps on every secondary node to create the URL provider for the IBM Eclipse Help System (IEHS) handler:
        1. Open the Administrative Console.
        2. Select Resources > URL Providers.
        3. Select the correct secondary node.
        4. Click New.
        5. Enter the following values:
          • Name: IEHS_Handler
          • Class path: <empty>
          • Stream handler class name: org.eclipse.osgi.framework.internal.protocol.reference.Handler
          • Protocol: reference
        6. Save your changes.
    5. Perform the following post-cluster install upgrade steps:
      1. The fix pack includes updates to screens originally shipped with Version 6.0. These updates are placed in the portal_server_root/fixes directory during the fix pack upgrade process to avoid replacing any custom changes. Manually merge the updated files into the wps.ear file. The following screens were changed:

        NOTE: You do not have to manually merge the files if you added the CopyWpsEarScreens=true line to the wpconfig.properties file.

        /screens/chtml/Error.jsp
        /screens/chtml/ErrorLoginRetreiveUser.jsp
        /screens/chtml/ErrorNotAuthorized.jsp
        /screens/chtml/ErrorNoteLoggedIn.jsp
        /screens/chtml/Help.jsp
        /screens/chtml/Home.jsp
        /screens/chtml/Login.jsp
        /screens/chtml/SelectPage.jsp
        /screens/html/BidiInclude.jsp
        /screens/html/Congrats.jsp
        /screens/html/Error.jsp
        /screens/html/ErrorLoginRetrieveUser.jsp
        /screens/html/ErrorNotAuthorized.jsp
        /screens/html/ErrorNotLoggedIn.jsp
        /screens/html/ErrorSessionTimeout.jsp
        /screens/html/ForgetPassword.jsp
        /screens/html/Home.jsp
        /screens/html/Login.jsp
        /screens/html/RegistrationError.jsp
        /screens/html/UserProfileConf.jsp
        /screens/html/UserProfileForm.jsp
        /screens/wml/Error.jsp
        /screens/wml/ErrorLoginRetrieveUser.jsp
        /screens/wml/ErrorNotAuthorized.jsp
        /screens/wml/ErrorNotLoggedIn.jsp
        /screens/wml/Home.jsp
        /screens/wml/Login.jsp
        /screens/wml/SelectPage.jsp

        The original shipped screens are contained in the wps.ear enterprise application. To merge the changes, perform the first two steps in the Deploying customized themes and skins topic. Then merge the supplied changes by manual inspection or with a third-party tool that can compare and merge files. After the changes are merged, repackage the updated files into the wps.ear file and redeploy it.

        NOTE: If you did not customize your screens, you can replace each original file with the new one supplied in the fixes directory.
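Deciding which screens actually need a merge can be scripted with diff. A minimal sketch with stand-in files is below; in practice, compare each JSP extracted from wps.ear against its counterpart in portal_server_root/fixes.

```shell
# Sketch: flag screens that differ between the deployed copy and the fix
# pack copy (stand-in files here; real paths are wps.ear contents vs. fixes/).
old=$(mktemp); new=$(mktemp)
printf 'line1\nline2\n' > "$old"
printf 'line1\nline2-updated\n' > "$new"

if diff -q "$old" "$new" >/dev/null; then
  result="unchanged: keep existing file"
else
  result="changed: merge required"
fi
echo "$result"
rm -f "$old" "$new"
```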
      2. Perform the following steps if you are using IBM Web Content Manager and you created content on a 6.0.0.0 server:
        1. Create a backup of your 6.0 database.
        2. Redeploy your customization, including JSPs, to the Web Content Manager enterprise application and the local rendering portlet.
        3. Refresh all existing 6.0 items by opening and saving each item; enter the following URL: http://hostname.yourcompany.com:port_number/wps/wcm/connect?MOD=RefreshAllItems&library=libraryname
        4. Enter the following URL to preserve the last saved date of each item: http://hostname.yourcompany.com:port_number/wps/wcm/connect?MOD=RefreshAllItems&library=libraryname&preserve_dates=true
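The two refresh URLs in steps 3 and 4 differ only in the preserve_dates parameter. A small sketch that builds them (the hostname, port, and library name are placeholders; substitute your own values and issue the URLs with a browser or an HTTP client):

```shell
# Sketch: construct the RefreshAllItems URLs; all values are placeholders.
host=hostname.yourcompany.com
port=10040
library=libraryname

base="http://${host}:${port}/wps/wcm/connect?MOD=RefreshAllItems&library=${library}"
echo "$base"
echo "${base}&preserve_dates=true"
```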
      3. Optional: You can follow the instructions on the Administrator's Self-Help pages for IBM WebSphere Portal version 6.0 to download, extract, and configure the Administrator self help pages.
      4. Optional: If you are upgrading from a release prior to 6.0.1.4 and you use XMLAccess remotely, replace the wp.xml.client.jar and wp.base.jar files on the client machine with the versions from your Portal installation after 6.0.1.7 is applied, because the protocol between the XMLAccess client and the server has changed.
      5. Optional: The WebSphere Portal 6.0.1.7 fix pack does not update any of the business portlets or Web Clipping portlet as these are served from the IBM WebSphere Portal Business Solutions catalog. If this fix pack is updating a fresh installation of WebSphere Portal, you should download the latest available portlets and portlet applications from the Portal Catalog. If you already have the version of the business portlets or Web Clipping portlet you need or if you are not using these functions at all, no additional steps are necessary. Note: APARs installed as part of Web Clipping are uninstalled and need to be reinstalled after installing the fix pack.
      6. If you are using WCM and have configured the variables (for example, WCM_HOST and WCM_PORT for an external Web server), you need to redo this configuration for the involved servers in the cluster. See the following link for details on the variables: http://publib.boulder.ibm.com/infocenter/wpdoc/v6r0/index.jsp?topic=/com.ibm.wp.zos.doc/wcm/wcm_config_wasvariables.html
      7. Perform the following steps to re-enable automatic synchronization on all nodes:
        1. In the Deployment Manager administrative console, click System Administration>Node Agents.
        2. Click nodeagent on the required node.
        3. Click File Synchronization Service on the Configuration tab.
        4. Check the Automatic Synchronization check box to enable the automatic synchronization feature and then click OK.
        5. Repeat these steps for all other nodes.
        6. Click Save to save the configuration changes to the master repository.
        7. Select System Administration > Node agents in the navigation tree.
        8. Select all nodeagents and click Restart.
      8. If you upgraded from a level below 6.0.1.3, perform a reorg/runstats check per the instructions for your database system. See Database performance for information.







    Steps for uninstalling Fix Pack 6.0.1.7
    Perform the following steps to uninstall a fix pack from a clustered environment:

    IMPORTANT: Uninstalling the cumulative fix might result in an unsupported configuration. If uninstalling because of a failed upgrade, fix what is causing the problem and rerun the installation task.

    NOTE: Changing the server context root after upgrading is an unsupported uninstall path. To uninstall after changing the context root, you must first change the server context root back to the values of the previous version.

    NOTE 1: When instructed to stop or start the portal server, stop or start all server instances on the node.

    NOTE 2: After uninstalling this fix pack, two defined resource environment providers remain. These services cannot be uninstalled; they remain in the node-scope resources.xml:
    WP DeploymentService
    WP PortletContainerService
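Whether the two providers are still defined can be confirmed by searching the node-scope resources.xml. The sketch below runs against a stand-in file (its element layout is an assumption for illustration); in practice, point it at the resources.xml in the node's configuration directory.

```shell
# Sketch: confirm the two leftover providers in a stand-in resources.xml.
# The XML below is illustrative only, not the file's exact layout.
res=$(mktemp)
cat > "$res" <<'EOF'
<resources.env:ResourceEnvironmentProvider name="WP DeploymentService"/>
<resources.env:ResourceEnvironmentProvider name="WP PortletContainerService"/>
EOF

found=0
for svc in "WP DeploymentService" "WP PortletContainerService"; do
  grep -q "name=\"$svc\"" "$res" && found=$((found + 1)) && echo "present: $svc"
done
echo "providers found: $found"
rm -f "$res"
```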

    1. Perform the following steps on all nodes before you uninstall the Version 6.0.1.7 cumulative fix:
      1. Ensure that you have enough free space allocated for your operating system in the appropriate directories. See supported hardware and software requirements for information.
      2. If you have installed WebSphere Portal or WCM APARs interim fixes, use Portal Update Installer to uninstall them before uninstalling the 6.0.1.7 PTFs. For information on the Portal Update Installer for z/OS see the following link: http://www.ibm.com/support/docview.wss?rs=688&uid=swg21326670
      3. Verify that the information in the wpconfig.properties, wpconfig_dbdomain.properties, wpconfig_dbtype.properties, and wpconfig_sourceDB.properties files is correct on each node in the cluster.
        • Enter a value for the PortalAdminPwd and WasPassword parameters in the wpconfig.properties file.
        • Set WpsHostPort and XmlAccessPort to the same value in the wpconfig.properties file. NOTE: If you are using Internet Protocol Version 6 (IPv6) and you have specified the WpsHostName property as an IP address, normalize the address by placing square brackets around it, as follows: WpsHostName=[my.IPV6.IP.address].
        • If using a database other than the default, grant permissions to databases within the framework by setting the DbUser (database user ID) and DbPassword (database password) parameters in the wpconfig_dbdomain.properties file.
      4. If you removed manually created indexes as indicated by the fix pack installation section above you will need to recreate them.
    2. Perform the following steps to disable automatic synchronization on all nodes:
      1. In the Deployment Manager administrative console, click System Administration>Node Agents.
      2. Click nodeagent on the required node.
      3. Click File Synchronization Service on the Configuration tab.
      4. Uncheck the Automatic Synchronization check box to disable the automatic synchronization feature and then click OK.
      5. Repeat these steps for all other nodes to be uninstalled.
      6. Click Save to save the configuration changes to the master repository.
      7. Select System Administration > Nodes in the navigation tree.
      8. Select all nodes that are not synchronized, and click Synchronize.
      9. Select System Administration > Node agents in the navigation tree.
      10. For the primary node, select the nodeagent and click Restart.
      11. Select the nodeagents of all secondary nodes and click Stop.
    3. Perform the following steps to uninstall the fix pack on the primary node:
      1. Stop IP traffic to the node where you are uninstalling:
        • If you are using the IP sprayers for load balancing, reconfigure the IP sprayers to stop routing new requests to the node.
        • If you are using the Web server plug-in for load balancing, perform the following steps to stop traffic to the node:
          1. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members to obtain a view of the collection of cluster members.
          2. Locate the cluster member you are uninstalling and change the value in the Configured weight column to zero. NOTE: Record the previous value to restore the setting when the upgrade is complete.
          3. Click Apply to apply the change.
          4. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.
      2. If you have a local Web Server on the node where you are uninstalling, stop the local Web server.
      3. Perform the following steps to run the uninstallation:
        1. Log on with the WebSphere Administrative user ID, open a command prompt in the <WAS_profile_root>/bin directory and enter the command ./setupCmdLine.sh to set up the Java environment.
        2. Enter the ./serverStatus.sh -all -user username -password password command to check the status of all active application servers; NOTE: If security is not enabled, exclude the -user and -password parameters from the command.
        3. Enter the ./stopServer.sh servername -user username -password password command to stop any active application servers; NOTE: If security is not enabled, exclude the -user and -password parameters from the command.
        4. Change to the <was_root>/bin directory and run the backoutPTF.sh Portals <service level> command, where <service level> is the value saved from step 1.g of the section 'Steps for installing Fix Pack 6.0.1.7'. For example: backoutPTF.sh Portals wp6016_01.

          This will back out changes made to your configuration file system by ./applyPTF.sh when the fix pack was installed.
        5. Remove the 6.0.1.7 PTFs from SMP/E:

          UA52250
          UA52251
          UA52252
          UA52258
          UA52259
          UA52260
          UA52261
          UA52287
          UA52304
          UA52305
          UA52306
          UA52307
          UA52308
          UA52309

        6. Change to the portal_server_root/config directory and run the ./WPSconfig.sh UNCONFIG-WP-PTF-6017 command.
      4. If you are running an external Web server, such as IBM HTTP server, and you are using the WebSphere Application Server automatic generation and propagation of the plugin, then just restart the Web server. If you are not using the automatic generation and propagation, then perform the following steps:
        1. Regenerate the Web server plugin.
        2. Copy the plugin-cfg.xml file to the Plugin directory.
        3. Restart the Web server.
      5. If you previously customized any configuration files in the portal_server_root/config directory, restore the versions of those files that you saved when the cumulative fix was originally installed, and check whether uninstalling the fix pack affected them. If it did, perform the same customization again on the restored version of each file.
      6. After the fix pack is uninstalled, check the status of the node where you are uninstalling in the Deployment Manager administrative console. If the status is Not Synchronized, ensure that the node agent is running on the node and then perform the following steps:
        1. In the Deployment Manager administrative console, click System Administration>Nodes.
        2. For the node with a status of Not Synchronized, click Synchronize.
        3. After the synchronization is complete, wait at least 30 minutes before performing the next step because the ear expander is still running.
      7. Restart the WebSphere_Portal server on the primary node.
      8. Delete the following two XML files, if present, after the uninstall:
        • <wp_home>/config/templates/createCompositeAppNode.xml
        • <wp_home>/config/work/createCompositeAppNode.xml
      9. Run the ./WPSconfig.sh finalize-portlets-fixpack -DPortalAdminPwd=password task to activate the portlets from the previous release.
      10. If you stopped a local Web server, restart it now.
      11. Verify that your system is operational by entering the server's URL in a browser and logging in to browse the content.
      12. Restore IP traffic to the node where you uninstalled:
        • If you are using the IP sprayers for load balancing, reconfigure the IP sprayers to restore traffic to the upgraded node.
        • If you are using the Web server plug-in for load balancing, perform the following steps to restore traffic to the upgraded node:
          1. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members to obtain a view of the collection of cluster members.
          2. Locate the cluster member you upgraded and change the value in the Configured weight column back to the original value.
          3. Click Apply to apply the change.
          4. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.
    4. Perform the following steps to uninstall the fix pack on the secondary node; NOTE: Repeat this step for each secondary node, one at a time:
      1. Stop IP traffic to the node where you are uninstalling:
        • If you are using the IP sprayers for load balancing, reconfigure the IP sprayers to stop routing new requests to the node.
        • If you are using the Web server plug-in for load balancing, perform the following steps to stop traffic to the node:
          1. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members to obtain a view of the collection of cluster members.
          2. Locate the cluster member you are uninstalling and change the value in the Configured weight column to zero. NOTE: Record the previous value to restore the setting when the upgrade is complete.
          3. Click Apply to apply the change.
          4. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.
      2. If you have a local Web Server on the node where you are uninstalling, stop the local Web server.
      3. Perform the following steps to run the uninstallation:
        1. Log on with the WebSphere Administrative user ID, open a command prompt in the <WAS_profile_root>/bin directory and enter the command ./setupCmdLine.sh to set up the Java environment.
        2. Enter the ./serverStatus.sh -all -user username -password password command to check the status of all active application servers; NOTE: If security is not enabled, exclude the -user and -password parameters from the command.
        3. Enter the ./stopServer.sh servername -user username -password password command to stop any active application servers; NOTE: If security is not enabled, exclude the -user and -password parameters from the command.
        4. If the node agent on the secondary node is not started, start it.
        5. Change to the <was_root>/bin directory and run the backoutPTF.sh Portals <service level> command, where <service level> is the value saved from step 1.g of the section 'Steps for installing Fix Pack 6.0.1.7'. For example: backoutPTF.sh Portals wp6016_01.

          This will back out changes made to your configuration file system by ./applyPTF.sh when the fix pack was installed.
        6. Remove the 6.0.1.7 PTFs from SMP/E:

          UA52250
          UA52251
          UA52252
          UA52258
          UA52259
          UA52260
          UA52261
          UA52287
          UA52304
          UA52305
          UA52306
          UA52307
          UA52308
          UA52309

        7. Change to the portal_server_root/config directory and run the ./WPSconfig.sh UNCONFIG-WP-PTF-6017 command.
      4. If you previously customized any configuration files in the portal_server_root/config directory, restore the versions of those files that you saved when the cumulative fix was originally installed, and check whether uninstalling the fix pack affected them. If it did, perform the same customization again on the restored version of each file.
      5. After the fix pack is uninstalled, check the status of the node where you are uninstalling in the Deployment Manager administrative console. If the status is Not Synchronized, ensure that the node agent is running on the node and then perform the following steps:
        1. In the Deployment Manager administrative console, click System Administration>Nodes.
        2. For the node with a status of Not Synchronized, click Synchronize.
        3. After the synchronization is complete, wait at least 30 minutes before performing the next step because the ear expander is still running.
        4. Restart the WebSphere_Portal server on the secondary node.
        5. If you stopped a local Web server, restart it now.
        6. Verify that your system is operational by entering the server's URL in a browser and logging in to browse the content.
      6. Delete the following two XML files, if present, after the uninstall:
        • <wp_home>/config/templates/createCompositeAppNode.xml
        • <wp_home>/config/work/createCompositeAppNode.xml
      7. Restore IP traffic to the node where you uninstalled:
        • If you are using the IP sprayers for load balancing, reconfigure the IP sprayers to restore traffic to the upgraded node.
        • If you are using the Web server plug-in for load balancing, perform the following steps to restore traffic to the upgraded node:
          1. In the Deployment Manager administrative console, click Servers>Clusters>cluster_name>Cluster members to obtain a view of the collection of cluster members.
          2. Locate the cluster member you upgraded and change the value in the Configured weight column back to the original value.
          3. Click Apply to apply the change.
          4. If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.
    5. Perform the following steps to re-enable automatic synchronization on all nodes:
      1. In the Deployment Manager administrative console, click System Administration>Node Agents.
      2. Click nodeagent on the required node.
      3. Click File Synchronization Service on the Configuration tab.
      4. Check the Automatic Synchronization check box to enable the automatic synchronization feature and then click OK.
      5. Repeat these steps for all other nodes.
      6. Click Save to save the configuration changes to the master repository.
      7. Select System Administration > Node agents in the navigation tree.
      8. Select all nodeagents and click Restart.
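The serverStatus.sh and stopServer.sh invocations in the uninstall steps above include -user and -password only when security is enabled. A minimal sketch of that branching logic, using a hypothetical wrapper function (the variable names below are illustrative, not product settings):

```shell
#!/bin/sh
# Hypothetical helper: build the argument list for serverStatus.sh or
# stopServer.sh, omitting -user/-password when security is disabled.
# SECURITY_ENABLED, WAS_USER, and WAS_PASSWORD are illustrative variables.
build_was_args() {
  server="$1"
  if [ "$SECURITY_ENABLED" = "true" ]; then
    echo "$server -user $WAS_USER -password $WAS_PASSWORD"
  else
    echo "$server"
  fi
}

SECURITY_ENABLED=true
WAS_USER=wsadmin
WAS_PASSWORD=secret
build_was_args WebSphere_Portal
```

With security enabled this prints the server name followed by the credential flags; with SECURITY_ENABLED=false it prints only the server name, matching the NOTE in the steps above.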


    Back to top




    Known issues

    Problem: Incorrect keywords and categories returned for a multi-realm user. This problem is intermittent.
    Solution: This is a current limitation.

    Problem: Remote Search does not work if security is enabled and the remote search is done via EJB (with WebScannerEjbEar.ear).
    Solution: This is a known limitation and a fix is being investigated. As a workaround, you can use remote search via SOAP (with WebScannerSoap.ear) in security-enabled setups.

    Problem: A ClassCastException occurs when creating content based on an authoring template that contains a number component with "Allow decimal places" selected.
    A number component is created on an authoring template with a minimum and maximum value defined. The "Allow decimal places" radio button is selected, but no value is specified for "Decimal places:". Creating content using this template with a value in this component results in a ClassCastException.
    Solution: Resave the authoring template.

    Problem: You encounter Personalization issues when security is disabled.
    Solution: This is working as designed.

    Problem: When installing the WebSphere Process Server Client Version 6.0.2.2 or later, Process Integration with Portal might fail.
    Solution: When installing the WebSphere Process Server Client Version 6.0.2.2 or later, make sure to follow these steps to make the Client work with Portal 6.0.1.7:
    1. Copy the file WBI.product located at $WPS_INST/properties/version to AppServer/properties/version.
    2. Edit file at $AppServer/properties/version/WBI.product.
    3. Replace the line <id>WBI</id> with this line: <id>WPSCLIENT</id>.
    4. Save your changes and restart WebSphere Portal.
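The edit in steps 2-3 above can be sketched with sed. This is a minimal illustration against a generated sample file; the real file lives at $AppServer/properties/version/WBI.product, and the path used here is a stand-in:

```shell
#!/bin/sh
# Sketch of the WBI.product edit: replace <id>WBI</id> with <id>WPSCLIENT</id>.
# WBI_FILE is an illustrative stand-in path, not the real location.
WBI_FILE=${WBI_FILE:-/tmp/WBI.product}

# Create a minimal sample file for illustration.
cat > "$WBI_FILE" <<'EOF'
<product name="WebSphere Process Server Client">
  <id>WBI</id>
</product>
EOF

# In-place replacement via a temporary file (portable to sed variants
# that lack the -i option, such as z/OS USS sed).
sed 's|<id>WBI</id>|<id>WPSCLIENT</id>|' "$WBI_FILE" > "$WBI_FILE.tmp" &&
  mv "$WBI_FILE.tmp" "$WBI_FILE"

grep '<id>' "$WBI_FILE"
```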

    Problem: After a successful upgrade to 6.0.1.7, you cannot export a template, download a file, or launch the people picker panel for "send link" from Document Manager.
    Solution: Verify, and reconfigure as necessary, the Web mappings as documented in technote #1292379.

    Problem: After upgrading to the 6.0.1.7 fix pack, the following message appears in the Portal Server job log:
    [10/31/08 12:34:56:789 EDT] 0000002c lightpersist  W   CLYAF0055W: Could not locate schema value for datasource workplace.sync.schema in JNDI.
    Solution: This message is simply a warning and causes no functional problems. It can be safely ignored.

    Problem: After a user is created, added to a group, deleted, and then recreated (but not added to the group again), the user should no longer appear in the group. However, the user does appear in the group again on a federated (clustered) node.
    Solution:
    1. Stop the portal server.
    2. Run the job EJPSCHOU to check out the WMM configuration; the checked-out files will be located under <WP_root>/wmm.
    3. Update wmm.xml by adding the updateGroupMembership="true" parameter to the ldapRepository stanza. For example:
    <ldapRepository name="wmmLDAP" UUID="LDAP1" adapterClassName="com.ibm.ws.wmm.ldap.ibmdir.IBMDirectoryAdapterImpl" supportDynamicAttributes="false" configurationFile="wmmLDAPServerAttributes.xml" wmmGenerateExtId="false" supportGetPersonByAccountName="true" profileRepositoryForGroups="LDAP1" supportTransactions="false" adminId="cn=root" adminPassword="fk1IMnQh5jM=" ldapHost="ldaphostname.yourcompany.com " ldapPort="2389" ldapTimeOut="6000" ldapAuthentication="SIMPLE" ldapType="0" updateGroupMembership="true" sslEnabled="false" sslTrustStore="<WAS_root>\etc\DummyServerTrustFile.jks" dirContextsMaxSize="20" dirContextsMinSize="5" dirContextTimeToLive="-1" cacheGroups="false" groupsCacheTimeOut="600" cacheAttributes="true" attributesCacheSize="2000" attributesCacheTimeOut="600" cacheNames="true" namesCacheSize="2000" namesCacheTimeOut="600">
    4. Run the job EJPSCHIN to check in the WMM configuration.
    5. Start the portal server.
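Assuming the ldapRepository stanza opens on a single line as in the example above, the wmm.xml edit can be sketched with sed. The file path and sample stanza below are illustrative stand-ins:

```shell
#!/bin/sh
# Sketch: add updateGroupMembership="true" to the ldapRepository stanza.
# WMM_FILE is an illustrative stand-in; the checked-out file lives under
# <WP_root>/wmm on z/OS.
WMM_FILE=${WMM_FILE:-/tmp/wmm.xml}

# Minimal sample stanza for illustration.
cat > "$WMM_FILE" <<'EOF'
<ldapRepository name="wmmLDAP" UUID="LDAP1" ldapPort="2389">
EOF

# Insert the attribute right after the opening tag name, unless it is
# already present.
if ! grep -q 'updateGroupMembership=' "$WMM_FILE"; then
  sed 's|<ldapRepository |<ldapRepository updateGroupMembership="true" |' \
    "$WMM_FILE" > "$WMM_FILE.tmp" && mv "$WMM_FILE.tmp" "$WMM_FILE"
fi

cat "$WMM_FILE"
```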

    Problem: Even if the JCR index exists under <WP_root>/jcr/search, documents in Document Manager (PDM) cannot be searched.
    Solution: To get PDM search working correctly, define the following environment variables in the Application Server console:
    1. Log in to the Application Server administrative console.
    2. Go to "Environment->WebSphere Variables" and select the cell scope.
    3. Define the variables as shown below, then save and exit:
    ----------------------------------------------------------------------------
    | Variable Name   | Variable Value                   | Comment             |
    ----------------------------------------------------------------------------
    | LD_LIBRARY_PATH | <WP_root>/shared/app/oiexport:   | /usr/lpp/tcpip/     |
    |                 | /usr/lpp/tcpip/X11R6/lib:        | X11R6/lib:/usr/lpp/ |
    |                 | /usr/lpp/tcpip/X11R66/lib        | tcpip/X11R66/lib is |
    |                 |                                  | the X library path; |
    |                 |                                  | it may differ on    |
    |                 |                                  | other machines.     |
    ----------------------------------------------------------------------------
    | LIBPATH         | <WP_root>/shared/app/oiexport:   | Same X library path |
    |                 | /usr/lpp/tcpip/X11R6/lib:        | note as above.      |
    |                 | /usr/lpp/tcpip/X11R66/lib        |                     |
    ----------------------------------------------------------------------------
    | DISPLAY         | Output of the command            | See step 5 below    |
    |                 | echo $DISPLAY                    | for an additional   |
    |                 |                                  | setting.            |
    ----------------------------------------------------------------------------

    4. Connect to a z/OS terminal and log in with the WAS user.
    5. Define DISPLAY in this terminal, then restart the portal server.
    • Before you define DISPLAY, connect to a non-Windows machine and run the command "echo $DISPLAY"; the output of this command is the value of DISPLAY on z/OS. Then run the command "xhost +" on the non-Windows machine.
    6. After defining DISPLAY, run "ikeyman.sh" to verify that it works properly.
    7. Stop the portal server, delete the JCR index, and start the portal server again.
    8. Create or import a new document to cause the index to be created.
    9. Search for this document; the search should now work.
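For reference, equivalent values can be exported in a USS shell session, for example before verifying with ikeyman.sh. WP_ROOT and the DISPLAY host below are placeholders; substitute your installation's values:

```shell
#!/bin/sh
# Sketch: export the variables from the table above in a z/OS USS shell.
# WP_ROOT and the DISPLAY default are illustrative placeholders.
WP_ROOT=${WP_ROOT:-/usr/lpp/PortalServer}
X11_LIBS=/usr/lpp/tcpip/X11R6/lib:/usr/lpp/tcpip/X11R66/lib

LD_LIBRARY_PATH=$WP_ROOT/shared/app/oiexport:$X11_LIBS
LIBPATH=$LD_LIBRARY_PATH
# Set DISPLAY to the output of "echo $DISPLAY" on the non-Windows machine.
DISPLAY=${DISPLAY:-mydesktop.example.com:0.0}
export LD_LIBRARY_PATH LIBPATH DISPLAY

echo "$LIBPATH"
```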

    Problem: Documents with pictures in Document Manager (PDM) cannot be previewed.
    Solution: To view an image or presentation file, you must configure the remote Document Conversion Server (DCS) on a Microsoft Windows server. If remote DCS is configured on a non-Windows platform, or if only the local DCS is used, presentation files and other documents containing images cannot be previewed; clicking the document title instead shows a message such as:
    "The document contents could not be previewed. You must download the document to view the content."

    Problem:
    1. Follow the readme of 6.0.1.7 for z/OS to apply the PTFs.
    2. When running ./WPSconfig.sh CONFIG-WP-PTF-6017, this step might fail because the was.notification.timeout value is too low; the default is 300.
    ExtendedMessage: EJPPH0044E: Redeploy of Web Module wps.tpl.transformation.webmod from WAR file /PortalServer/V6R0M1/Portal/deployed/tpl_transformation.war failed (display name: TplApp_PA_ij28w3z).com.ibm.wps.pe.mgr.exceptions.AppServerWarUpdateException: EJPPH0056E: The installation of portlet application /PortalServer/V6R0M1/Portal/deployed/tpl_transformation.war did not complete successfully. Please check the WAS log files for a possible explanation.
    at com.ibm.wps.pe.mgr.appserveradmin.WAS5Admin.redeployImpl(WAS5Admin.java:1597)
    at com.ibm.wps.pe.mgr.appserveradmin.WAS5Admin.access$500(WAS5Admin.java:77)
    at com.ibm.wps.pe.mgr.appserveradmin.WAS5Admin$6.run(WAS5Admin.java:1473)
    at com.ibm.ws.security.auth.zOSContextManagerImpl.runAs(zOSContextManagerImpl.java:3323)
    at com.ibm.ws.security.auth.zOSContextManagerImpl.runAsSystem(zOSContextManagerImpl.java:3216)
    at com.ibm.wps.pe.mgr.appserveradmin.WAS5Admin.redeploy(WAS5Admin.java:1471)
    ...
    WebSphere/V6R0M1/AppServer/profiles/default/logs/ffdc/cl239dm_nd239_WebSphere_Portal_BBOS002S_STC00058_0000011400000001_08.11.24_10.01.10_0.txt
    Trace: 2008/11/24 10:01:10.623 01 t=8C6AD0 c=3.1 key=P8 (13007002)
    ThreadId: 000000a3
    FunctionName: XMLEngine
    SourceId: com.ibm.wps.command.xml.Engine
    Category: SEVERE
    ExtendedMessage: EJPFB0002E: Exception occurred.com.ibm.wps.command.xml.XmlCommandException: EJPXA0043E: An error occurred while creating or updating the resource. [web-app 1_3A82P3S11034502U0VGTOM3OI4]
    at com.ibm.wps.command.xml.UpdateEngine.execItem(UpdateEngine.java:260)
    at com.ibm.wps.command.xml.UpdateEngine.processItem(UpdateEngine.java:189)
    ...
    at com.ibm.xmem.channel.ws390.XMemConnLink.ready(XMemConnLink.java:585)
    at com.ibm.xmem.ws390.XMemSRBridge.httpinvoke(XMemSRBridge.java:105)
    at com.ibm.ws390.orb.ServerRegionBridge.httpinvoke(Unknown Source)
    at com.ibm.ws390.orb.ORBEJSBridge.httpinvoke(ORBEJSBridge.java:287)
    at com.ibm.ws390.orb.parameters.HTTPInvoke.HTTPInvokeParmSetter(HTTPInvoke.java:75)
    at com.ibm.ws390.orb.CommonBridge.nativeRunApplicationThread(Native Method)
    at com.ibm.ws390.orb.CommonBridge.runApplicationThread(Unknown Source)
    at com.ibm.ws.util.ThreadPool$ZOSWorker.run(ThreadPool.java:1652)
    Caused by: com.ibm.wps.command.CommandFailedException: EJPPD0015E: Portlet application manager failed when user xmlaccess scripting user executed command UpdateWebApplication.
    WrappedException is: com.ibm.wps.pe.mgr.exceptions.AppServerWarUpdateException: EJPPH0044E: Redeploy of Web Module wps.tpl.transformation.webmod from WAR file /PortalServer/V6R0M1/Portal/deployed/tpl_transformation.war failed (display name: TplApp_PA_ij28w3z).
    at com.ibm.wps.command.applications.AbstractApplicationsCommand.throwAppMgrException(AbstractApplicationsCommand.java:584)
    at com.ibm.wps.command.applications.UpdateWebApplicationCommand.execute(UpdateWebApplicationComman
    Solution:
    1. Set the was.notification.timeout value to 3000 or more. was.notification.timeout is defined in $Portal_Root/config/properties/DeploymentService.properties.
    2. Run ./WPSconfig.sh update-properties.
    3. If the portal is running, restart it to pick up the new value of was.notification.timeout.
    4. Re-run ./WPSconfig.sh CONFIG-WP-PTF-6017.
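The property change can be sketched as follows; the file path below is an illustrative stand-in for $Portal_Root/config/properties/DeploymentService.properties:

```shell
#!/bin/sh
# Sketch: raise was.notification.timeout to 3000 in a properties file.
# PROPS is an illustrative stand-in path.
PROPS=${PROPS:-/tmp/DeploymentService.properties}

# Sample file with the default value, for illustration.
cat > "$PROPS" <<'EOF'
was.notification.timeout=300
EOF

# Replace the value via a temporary file (no sed -i on z/OS USS).
sed 's|^was.notification.timeout=.*|was.notification.timeout=3000|' \
  "$PROPS" > "$PROPS.tmp" && mv "$PROPS.tmp" "$PROPS"

grep was.notification.timeout "$PROPS"
```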

    Problem:
    The Help page cannot be opened on a secondary node.

    Solution:
    Perform the following steps on every secondary node to create the URL provider for the IBM Eclipse Help System (IEHS) handler.
    1. Open the Administrative Console.
    2. Select Resources > URL Providers.
    3. Select the correct secondary node.
    4. Click New.
    5. Enter the following values:
    Name: IEHS_Handler
    Class path: <empty>
    Stream handler class name: org.eclipse.osgi.framework.internal.protocol.reference.Handler
    Protocol: reference
    6. Save your changes.
    7. Restart the IEHS application.

    Problem: Duplicate files are generated when saving a private draft in Portal Document Manager with the Firefox 3 browser.
    Solution: The problem is caused by a bug in Firefox 3. Upgrading to Firefox 3.0.6 or later fixes it.

    Problem: When Java 2 Security is enabled, the "Web Clipping" portlet and the "User and Group" portlet do not work correctly.
    Solution: To resolve this issue, use the following steps:
    1) Access the IBM WebSphere Portal Business Solutions catalog at the following link:
    http://www.ibm.com/software/brandcatalog/portal/portal
    2) Search the catalog for 1wp10003p, which is the IBM Web Clipping Portlet.
    3) Download the package and follow the installation instructions in the package's readme file.

    Problem: After uninstalling the 6.0.1.7 fix pack, the following message appears in the Portal Server job log:
    [6/4/09 3:52:21:507 EDT] 0000008f JCRCFLLoggerI E com.ibm.icm.ts.tss.JCRCFLLoggerImpl com.ibm.icm.ts.tss.app.IndexMaintainer.processPendingUpdates [java.lang.ThreadGroup[name=icmciWorkManager: icmjcrear,maxpri=10]]: ** ABORTING processing of events for workspace 1 due to fatal text engine exception com.ibm.icm.ts.tss.FatalTextEngineException: Error opening Juru index: 1.
    at com.ibm.icm.ts.tss.JuruIndexImpl$Manager.loadIndex(JuruIndexImpl.java:896)
    at com.ibm.icm.ts.tss.JuruIndexImpl$Manager.index(JuruIndexImpl.java:803)
    at com.ibm.icm.ts.tss.app.IndexMaintainer.processPendingUpdates(IndexMaintainer.java:238)
    at com.ibm.icm.ts.tss.app.IndexMaintainer.runIndexMaintenance(IndexMaintainer.java:149)
    at com.ibm.icm.ts.tss.app.IndexMaintainer.checkForUpdates(IndexMaintainer.java:118)
    Solution: You will need to rebuild your search indexes after uninstalling the 6.0.1.7 fix pack:
    1. Stop your server.
    2. Delete all the index directories under PortalServer/jcr/search.
    3. Restart the portal server.
    4. The search indexes should be rebuilt the next time the index maintenance interval is reached.
    5. If the index directory is not rebuilt, edit a document, save it, and try manually rebuilding the search index.
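The index cleanup can be sketched as a small script. JCR_SEARCH below is an illustrative stand-in for the PortalServer/jcr/search directory, and the demo creates throwaway directories to delete; stop the portal server before doing this for real:

```shell
#!/bin/sh
# Sketch: remove all JCR search index directories so they are rebuilt at
# the next maintenance interval. JCR_SEARCH is an illustrative stand-in.
JCR_SEARCH=${JCR_SEARCH:-/tmp/jcr/search}

# Demo setup: create two fake index directories.
mkdir -p "$JCR_SEARCH/index1" "$JCR_SEARCH/index2"

# Delete every directory directly under the search root.
for dir in "$JCR_SEARCH"/*/; do
  [ -d "$dir" ] && rm -rf "$dir"
done

ls "$JCR_SEARCH"
```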

    Problem: After installing IBM® WebSphere® Portal, Enable edition for z/OS™, version 6.0.1.7, you may encounter NameNotFound exceptions in the Portal Server job log, and some aspects of Personalization (PZN) functionality may not work as expected.
    Solution: To resolve this issue, install fix OA31759 and follow the readme instructions that come with OA31759.

    Problem: Receiving SQLCODE = -204 and/or -514 for the LikeMinds database schema on the secondary node of a cluster.
    Solution: Copy the primary node's LikeMinds schema value into the secondary node's configuration. The property name is 'likeminds.schema'. It resides in <wps_home>/shared/app/config/services/LikeMindsService.properties.
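Copying the property value can be sketched with sed; the file paths below are illustrative stand-ins for each node's LikeMindsService.properties:

```shell
#!/bin/sh
# Sketch: copy the likeminds.schema value from the primary node's
# properties file into the secondary node's. Paths and schema names are
# illustrative stand-ins.
PRIMARY=${PRIMARY:-/tmp/primary_LikeMindsService.properties}
SECONDARY=${SECONDARY:-/tmp/secondary_LikeMindsService.properties}

# Demo setup: sample files with mismatched schema values.
echo 'likeminds.schema=LMPRIM' > "$PRIMARY"
echo 'likeminds.schema=WRONG'  > "$SECONDARY"

# Read the value on the primary and write it into the secondary.
value=$(sed -n 's|^likeminds.schema=||p' "$PRIMARY")
sed "s|^likeminds.schema=.*|likeminds.schema=$value|" "$SECONDARY" \
  > "$SECONDARY.tmp" && mv "$SECONDARY.tmp" "$SECONDARY"

cat "$SECONDARY"
```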

    Problem: After changing the context root, exporting a page returns a 403 error when using the Manage Pages portlet.
    Solution: To resolve this issue, install fix PM06169 and follow the readme instructions that come with PM06169.

    Problem: After uninstalling the 6.0.1.7 fix pack, users cannot create a Virtual Portal successfully.
    Solution: Before creating a Virtual Portal, run the following command from the <portal server root>/config directory: ./WPSconfig.sh init create-virtual-portal


    Back to top





    Change History
    Initial Release 19 September 2008
    Back to top




    Additional information

    You can find additional information on the WebSphere Portal for z/OS support page.

    Back to top




    Trademarks and service marks

    For trademark attribution, visit the IBM Terms of Use Web site.

    Back to top




    Document information


    More support for:

    WebSphere Portal End of Support Products
    WebSphere Portal

    Software version:

    6.0.1.7

    Operating system(s):

    z/OS

    Software edition:

    Enable

    Reference #:

    7017529

    Modified date:

    2010-02-08
