Abstract
IBM SmartCloud Orchestrator 2.3.0.1_iFix007 has been made generally available and contains fixes to version 2.3.0.1 including all predecessor fixes.
Download Description
This document contains the following sections:
- Change history: an overview of what is new in this release, with a description of any new functions or enhancements, when applicable.
- How critical is this fix: information about the impact of this release, to help you assess how your environment may be affected.
- Prerequisites: important information to review before installing this release.
- Download package: the direct link to the download package for installation in your environment.
- Installation instructions: the instructions necessary to apply this release to your environment.
- Known side effects: a link to the known problems (open defects) identified at the time of this release.
IBM SmartCloud Orchestrator Fix Pack 1 (2.3.0.1) for 2.3 must be installed before applying this iFix. IMPORTANT: Before applying this iFix, ensure that DRS is enabled on all vCenter clusters in the VMware regions and that the DRS automation level is set to fully automated; otherwise, IBM SmartCloud Orchestrator will fail to manage the VMs previously deployed in those clusters.
Review the Software prerequisites page in the IBM Knowledge Center to ensure your environment meets the minimum hypervisor and operating system requirements, especially if you are upgrading from a previous release of IBM Cloud Orchestrator. |
Installation Instructions
Tab navigation:
- Introduction (this tab)
- Standard upgrade
- Manual upgrade
Select the Standard tab for upgrade instructions using the iFix installation script.
Select the Manual tab for the instructions on how to manually upgrade each component.
Important Notice: Ensure your environment has met all the requirements in the Prerequisites section above before applying this iFix.
1) Stop IBM SmartCloud Orchestrator on Central Server 1
Change to directory: /iaas/scorchestrator/
Stop SmartCloud Orchestrator: ./SCOrchestrator.py --stop
===>>> Stopping Smart Cloud Orchestrator
...
===>>> Stopping Smart Cloud Orchestrator complete
Wait until all components have stopped.
2) Back up your SCO installation
Back up the Central Servers and Region Server virtual machines.
For VMware hosted virtual machines, review the section Taking Snapshots in the VMware vSphere 5.1 Documentation for information about taking snapshots of the virtual machines.
3) Copy and extract 2.3.0.1-CSI-ISCO-IF0007.tar on Central Server 1
Extract 2.3.0.1-CSI-ISCO-IF0007.tar in a directory of your choice <your_dir> by running the following command:
Note: Do not place the package under the /root path.
tar -xvf 2.3.0.1-CSI-ISCO-IF0007.tar
The following files are extracted:
|-- helper.py
|-- ifix7.py
|-- SmartCloud_Orchestrator_2.3.0.1_IF0007.fxtag
|-- installfiles
| |-- iwd
| | |-- disable-ssl3-2301.odt
| | |-- helper.py -> ../../helper.py
| | |-- iwd-ifix.py
| | |-- iwd-ifix-template.config
| | |-- iwd-node-ifix-install.tar
| | |-- WorkloadDeployer-2.3.0.1-IFIX7-efixes.md5
| | |-- WorkloadDeployer-2.3.0.1-IFIX7-efixes.tgz
| | |-- WorkloadDeployer-2.3.0.1-IFIX7-Install.README
| | |-- WorkloadDeployer-2.3.0.1-IFIX7.md5
| | `-- WorkloadDeployer-2.3.0.1-IFIX7.tar
| |-- java
| | |-- F20150520-2124_sce310_jre_linux_installer.bin
| | |-- helper.py -> ../../helper.py
| | |-- ibm-java-i386-sdk-6.0-16.3.i386.rpm
| | |-- ibm-java-x86_64-sdk-6.0-16.3.x86_64.rpm
| | |-- java664sr16fp3.py
| | `-- sce310java664sr16fp3.py
| |-- openstack
| | |-- CN_backup_list
| | |-- CS2_backup_list
| | |-- IFIX7_rpm_list
| | |-- nova_IFIX7_pyt.patch
| | |-- nova_IFIX7_usr.patch
| | |-- nova_patch_backup_list
| | |-- openstack.tar
| | |-- RSKV_backup_list
| | `-- RSVM_backup_list
| |-- sce
| | |-- 3.1.0.4-IBM-SCE-IF008-201505150421.zip
| | |-- helper.py -> ../../helper.py
| | `-- sce3104if8.py
| |-- scotoolkits
| | |-- 00_SCOrchestrator_Toolkit_2301_20150305.twx
| | |-- 10_SCOrchestrator_Scripting_Utilities_Toolkit_2301_20150305.twx
| | |-- 20_SCOrchestrator_Support_IaaS_Toolkit_2301_20150305.twx
| | |-- 30_SCOrchestrator_Support_vSys_Toolkit_2301_20150305.twx
| | |-- 80_TivSAM_Integration_Toolkit_2301_20150305.twx
| | |-- 99_Sample_Support_IaaS_ProcessApp_2301_20150305.twx
| | |-- 99_Sample_Support_vSys_ProcessApp_2301_20150305.twx
| | |-- 99_TSAMITK_SampleApp_2301_20150305.twx
| | |-- com.ibm.orchestrator.vmm.adapter.keystone.jar
| | |-- helper.py -> ../../helper.py
| | |-- importSCOToolkitsForZZ00253.py
| | `-- import_toolkits.py
| |-- smartcloud
| | |-- helper.py -> ../../helper.py
| | |-- smartcloud-2013.1-1.1.4.ibm.201506160318.noarch.rpm
| | `-- smartcloud-201506160318.py
| |-- vil
| | |-- ImageLibraryIaaSTAI.jar
| | |-- URLConnectionUtility$1.class
| | |-- URLConnectionUtility$2.class
| | |-- URLConnectionUtility.class
| | |-- VILClient$1.class
| | |-- VILClient$2.class
| | |-- VILClient.class
| | |-- VMControlRestClient$1.class
| | |-- VMControlRestClient$2.class
| | `-- VMControlRestClient.class
| |-- was80
| | |-- 8.0.0.0-WS-WASJavaSDK-LinuxX32-IFPI38186.zip
| | |-- 8.0.0.9-ws-was-ifpi36563.zip
| | |-- 8.0.0-WS-WAS-FP0000009-part1.zip
| | |-- 8.0.0-WS-WAS-FP0000009-part2.zip
| | |-- helper.py -> ../../helper.py
| | `-- was80-for-vil-upgrade.py
| `-- was85
| |-- 8.5.0.0-WS-WASJavaSDK-LinuxX64-IFPI37009.zip
| |-- helper.py -> ../../helper.py
| `-- was85-for-bpm-upgrade.py
|
|-- 2.3.0.1-CSI-ISCO-IF0006
| |-- 2.3.0.1-CSI-ISCO-IF0005
| | |-- 0001-enable-multiple-keystone-all-worker-processes.patch
| | |-- 2.3.0.1-CSI-ISCO-IF0004
| | | |-- 2.3.0.1-CSI-ISCO-IF0003
| | | | |-- helper.py
| | | | |-- ifix3.py
| | | | |-- scp-pdcollect
| | | | | |-- Components_nonroot.xml
| | | | | |-- Components.xml
| | | | | |-- Environment.xml
| | | | | `-- PDCOLLECT_Central_Server_Template.xml
| | | | |-- SE59389
| | | | | |-- se59389_fix.py
| | | | | `-- se59389_fix.tar.gz
| | | | |-- SmartCloud_Orchestrator_2.3.0.1_IF0003.fxtag
| | | | |-- ZZ00201
| | | | | |-- iwd0002_0006checksum.txt
| | | | | |-- iwd0002_0006.py
| | | | | `-- opt
| | | | | `-- ibm
| | | | | `-- rainmaker
| | | | | `-- purescale.app
| | | | | `-- private
| | | | | `-- expanded
| | | | | `-- ibm
| | | | | `-- maestro.util-4.0.0.1
| | | | | `-- lib
| | | | | `-- maestro.util.jar
| | | | |-- ZZ00234
| | | | | `-- automation.groovy
| | | | |-- ZZ00240
| | | | | `-- 00_SCOrchestrator_Toolkit_2301_20140403_ifix1.twx
| | | | |-- ZZ00242
| | | | | `-- config_network.sh
| | | | `-- ZZ00244_ZZ00246
| | | | `-- plugin.com.ibm.orchestrator.rest-1.0.1.1.jar
| | | |-- 3.1.0.4-IBM-SCE-FP004-201407010432.zip
| | | |-- 3.1.0.4-IBM-SCE-IF001-201407170051.zip
| | | |-- helper.py
| | | |-- ibm-java-i386-sdk-6.0-16.0.i386.rpm
| | | |-- ibm-java-x86_64-sdk-6.0-16.0.x86_64.rpm
| | | |-- ICCT_Install_2.3.0.1-17.zip
| | | |-- ifix4.py
| | | |-- il_install_package_23025.zip
| | | |-- il_proxy_install_package_23025.zip
| | | |-- install_script
| | | | |-- create_new_repo.py
| | | | |-- helper.py
| | | | |-- iwd_fix.py
| | | | |-- java632sr16fp1.py
| | | | |-- java664sr16fp1.py
| | | | |-- sce310fp4fix.py
| | | | |-- sce310fp4.py
| | | | |-- sce310java664sr16fp1.py
| | | | |-- update_pack.py
| | | | |-- vilp23025.py
| | | | |-- vils23025.py
| | | | |-- was80javasdk32-ifpi19109.py
| | | | `-- was85javasdk64-ifpi19108.py
| | | |-- openstack_noarch
| | | | |-- babel-0.9.6-3.001.ibm.noarch.rpm
| | | | |-- boost-build-1.47.0-7.ibm.noarch.rpm
| | | | |-- ibm-simpletoken-authenticator-middleware-2013.1.5.1-201406190413.ibm.7.noarch.rpm
| | | | |-- new_noarch.txt
| | | | |-- novnc-0.4-6.002.ibm.noarch.rpm
| | | | |-- openstack-cinder-2013.1.5.1-201406190415.ibm.11.noarch.rpm
| | | | |-- openstack-cinder-doc-2013.1.5.1-201406190415.ibm.11.noarch.rpm
| | | | |-- openstack-glance-2013.1.5.1-201406190417.ibm.11.noarch.rpm
| | | | |-- openstack-glance-doc-2013.1.5.1-201406190417.ibm.11.noarch.rpm
| | | | |-- openstack-keystone-2013.1.5.1-201406190418.ibm.12.noarch.rpm
| | | | |-- openstack-keystone-doc-2013.1.5.1-201406190418.ibm.12.noarch.rpm
| | | | |-- openstack-nova-2013.1.5.1-201406190429.ibm.21.noarch.rpm
| | | | |-- openstack-nova-api-2013.1.5.1-201406190429.ibm.21.noarch.rpm
| | | | |-- openstack-nova-cells-2013.1.5.1-201406190429.ibm.21.noarch.rpm
| | | | |-- openstack-nova-cert-2013.1.5.1-201406190429.ibm.21.noarch.rpm
| | | | |-- openstack-nova-common-2013.1.5.1-201406190429.ibm.21.noarch.rpm
| | | | |-- openstack-nova-compute-2013.1.5.1-201406190429.ibm.21.noarch.rpm
| | | | |-- openstack-nova-conductor-2013.1.5.1-201406190429.ibm.21.noarch.rpm
| | | | |-- openstack-nova-console-2013.1.5.1-201406190429.ibm.21.noarch.rpm
| | | | |-- openstack-nova-doc-2013.1.5.1-201406190429.ibm.21.noarch.rpm
| | | | |-- openstack-nova-ibm-ego-resource-optimization-2013.1.4.1-201311072204.ibm.el6.1.noarch.rpm
| | | | |-- openstack-nova-ibm-ego-resource-optimization-common-2013.1.4.1-201311072204.ibm.el6.1.noarch.rpm
| | | | |-- openstack-nova-ibm-notification-2013.1.4.1-201311072204.ibm.el6.1.noarch.rpm
| | | | |-- openstack-nova-network-2013.1.5.1-201406190429.ibm.21.noarch.rpm
| | | | |-- openstack-nova-novncproxy-0.4-6.002.ibm.noarch.rpm
| | | | |-- openstack-nova-objectstore-2013.1.5.1-201406190429.ibm.21.noarch.rpm
| | | | |-- openstack-nova-scheduler-2013.1.5.1-201406190429.ibm.21.noarch.rpm
| | | | |-- openstack-quantum-2013.1.5.1-201406190431.ibm.9.noarch.rpm
| | | | |-- openstack-quantum-bigswitch-2013.1.5.1-201406190431.ibm.9.noarch.rpm
| | | | |-- openstack-quantum-brocade-2013.1.5.1-201406190431.ibm.9.noarch.rpm
| | | | |-- openstack-quantum-cisco-2013.1.5.1-201406190431.ibm.9.noarch.rpm
| | | | |-- openstack-quantum-hyperv-2013.1.5.1-201406190431.ibm.9.noarch.rpm
| | | | |-- openstack-quantum-linuxbridge-2013.1.5.1-201406190431.ibm.9.noarch.rpm
| | | | |-- openstack-quantum-metaplugin-2013.1.5.1-201406190431.ibm.9.noarch.rpm
| | | | |-- openstack-quantum-midonet-2013.1.5.1-201406190431.ibm.9.noarch.rpm
| | | | |-- openstack-quantum-nec-2013.1.5.1-201406190431.ibm.9.noarch.rpm
| | | | |-- openstack-quantum-nicira-2013.1.5.1-201406190431.ibm.9.noarch.rpm
| | | | |-- openstack-quantum-openvswitch-2013.1.5.1-201406190431.ibm.9.noarch.rpm
| | | | |-- openstack-quantum-plumgrid-2013.1.5.1-201406190431.ibm.9.noarch.rpm
| | | | |-- openstack-quantum-ryu-2013.1.5.1-201406190431.ibm.9.noarch.rpm
| | | | |-- openstack-utils-2013.1.5.1-201406242221.ibm.4.noarch.rpm
| | | | |-- pyparsing-1.5.7-2.ibm.noarch.rpm
| | | | |-- pyparsing-doc-1.5.7-2.ibm.noarch.rpm
| | | | |-- python-alembic-0.4.2-4.ibm.noarch.rpm
| | | | |-- python-amqplib-0.6.1-3.002.ibm.noarch.rpm
| | | | |-- python-anyjson-0.2.4-4.003.ibm.noarch.rpm
| | | | |-- python-argparse-1.2.1-3.ibm.noarch.rpm
| | | | |-- python-babel-0.9.6-3.001.ibm.noarch.rpm
| | | | |-- python-boto-2.5.2-2.ibm.noarch.rpm
| | | | |-- python-cinder-2013.1.5.1-201406190415.ibm.11.noarch.rpm
| | | | |-- python-cinderclient-1.0.3-2.ibm.noarch.rpm
| | | | |-- python-cliff-1.3.2-1.ibm.noarch.rpm
| | | | |-- python-cmd2-0.6.4-3.ibm.noarch.rpm
| | | | |-- python-daemon-1.5.2-2.ibm.001.noarch.rpm
| | | | |-- python-eventlet-0.9.17-2.ibm.noarch.rpm
| | | | |-- python-eventlet-doc-0.9.17-2.ibm.noarch.rpm
| | | | |-- python-gflags-1.4-4.003.ibm.noarch.rpm
| | | | |-- python-glance-2013.1.5.1-201406190417.ibm.11.noarch.rpm
| | | | |-- python-glanceclient-0.9.0-4.ibm.el6.noarch.rpm
| | | | |-- python-httplib2-0.7.4-7.ibm.noarch.rpm
| | | | |-- python-ibm-db-sa-0.3.0-6.ibm.el6.noarch.rpm
| | | | |-- python-iso8601-0.1.4-3.ibm.noarch.rpm
| | | | |-- python-jsonpatch-0.10-2.ibm.noarch.rpm
| | | | |-- python-jsonpointer-0.5-2.ibm.noarch.rpm
| | | | |-- python-jsonschema-0.7-2.ibm.noarch.rpm
| | | | |-- python-keystone-2013.1.5.1-201406190418.ibm.12.noarch.rpm
| | | | |-- python-keystoneclient-0.2.3-75.ibm.noarch.rpm
| | | | |-- python-keystoneclient-doc-0.2.3-75.ibm.noarch.rpm
| | | | |-- python-kombu-1.0.4-2.002.ibm.noarch.rpm
| | | | |-- python-lockfile-0.8-4.ibm.002.noarch.rpm
| | | | |-- python-migrate-0.7.2-9.ibm.noarch.rpm
| | | | |-- python-nova-2013.1.5.1-201406190429.ibm.21.noarch.rpm
| | | | |-- python-novaclient-2.13-2.ibm.noarch.rpm
| | | | |-- python-novaclient-doc-2.13-2.ibm.noarch.rpm
| | | | |-- python-novaclient-ibm-ego-resource-optimization-2013.1.4.1-201311072204.ibm.el6.1.noarch.rpm
| | | | |-- python-nova-ibm-ego-resource-optimization-2013.1.4.1-201311072204.ibm.el6.1.noarch.rpm
| | | | |-- python-ordereddict-1.1-3.001.ibm.noarch.rpm
| | | | |-- python-oslo-config-1.1.1-201308151039.ibm.noarch.rpm
| | | | |-- python-oslo-config-doc-1.1.1-201308151039.ibm.noarch.rpm
| | | | |-- python-paramiko-1.8.0-3.ibm.el6.noarch.rpm
| | | | |-- python-passlib-1.5.3-2.ibm.noarch.rpm
| | | | |-- python-paste-deploy-1.5.0-5.ibm.001.noarch.rpm
| | | | |-- python-prettytable-0.6.1-2.ibm.noarch.rpm
| | | | |-- python-pyasn1-0.0.12a-2.ibm.noarch.rpm
| | | | |-- python-pyudev-0.15-4.001.ibm.noarch.rpm
| | | | |-- python-qpid-0.18-4.ibm.el6.noarch.rpm
| | | | |-- python-quantum-2013.1.5.1-201406190431.ibm.9.noarch.rpm
| | | | |-- python-quantumclient-2.2.1-5.ibm.noarch.rpm
| | | | |-- python-requests-0.14.1-2.002.ibm.noarch.rpm
| | | | |-- python-routes-1.12.3-5.001.ibm.noarch.rpm
| | | | |-- python-stevedore-0.8-2.ibm.noarch.rpm
| | | | |-- python-swiftclient-1.2.0-4.ibm.noarch.rpm
| | | | |-- python-swiftclient-doc-1.2.0-4.ibm.noarch.rpm
| | | | |-- python-tablib-0.9.11.20120702git752443f-6.ibm.noarch.rpm
| | | | |-- python-warlock-0.7.0-2.ibm.noarch.rpm
| | | | |-- python-webob-1.2.3-2.ibm.noarch.rpm
| | | | |-- python-websockify-0.2.0-3.001.ibm.noarch.rpm
| | | | |-- python-wsgiref-0.1.2-10.001.ibm.noarch.rpm
| | | | |-- qpid-cpp-client-devel-docs-0.18-6.001.ibm.noarch.rpm
| | | | `-- qpid-tools-0.18-6.001.ibm.noarch.rpm
| | | |-- openstack.tar
| | | |-- openstack_x86
| | | | |-- boost-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-chrono-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-date-time-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-debuginfo-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-devel-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-doc-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-examples-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-filesystem-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-graph-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-graph-mpich2-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-graph-openmpi-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-iostreams-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-jam-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-math-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-mpich2-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-mpich2-devel-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-mpich2-python-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-openmpi-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-openmpi-devel-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-openmpi-python-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-program-options-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-python-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-random-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-regex-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-serialization-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-signals-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-static-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-system-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-test-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-thread-1.47.0-7.ibm.x86_64.rpm
| | | | |-- boost-wave-1.47.0-7.ibm.x86_64.rpm
| | | | |-- dnsmasq-2.59-5.ibm.003.x86_64.rpm
| | | | |-- dnsmasq-debuginfo-2.59-5.ibm.003.x86_64.rpm
| | | | |-- dnsmasq-utils-2.59-5.ibm.003.x86_64.rpm
| | | | |-- iaasgateway-2013.1-1.1.4.ibm.201406242305.noarch.rpm
| | | | |-- kmod-openvswitch-1.4.2-3.ibm.el6.x86_64.rpm
| | | | |-- libyaml-0.1.4-4.ibm.x86_64.rpm
| | | | |-- libyaml-debuginfo-0.1.4-4.ibm.x86_64.rpm
| | | | |-- libyaml-devel-0.1.4-4.ibm.x86_64.rpm
| | | | |-- new_x86.txt
| | | | |-- openstack-nova-compute-prereqs-2013.1-201404032330.ibm.1.x86_64.rpm
| | | | |-- openvswitch-1.4.2-2.x86_64.rpm
| | | | |-- openvswitch-debuginfo-1.4.2-2.x86_64.rpm
| | | | |-- pysendfile-2.0.0-4.ibm.x86_64.rpm
| | | | |-- pysendfile-debuginfo-2.0.0-4.ibm.x86_64.rpm
| | | | |-- python-cheetah-2.4.4-4.001.ibm.x86_64.rpm
| | | | |-- python-cheetah-debuginfo-2.4.4-4.001.ibm.x86_64.rpm
| | | | |-- python-crypto-2.3-7.ibm.002.x86_64.rpm
| | | | |-- python-crypto-debuginfo-2.3-7.ibm.002.x86_64.rpm
| | | | |-- python-greenlet-0.3.4-11.001.ibm.x86_64.rpm
| | | | |-- python-greenlet-debuginfo-0.3.4-11.001.ibm.x86_64.rpm
| | | | |-- python-greenlet-devel-0.3.4-11.001.ibm.x86_64.rpm
| | | | |-- python-ibm-db-2.0.4.1-2.ibm.el6.x86_64.rpm
| | | | |-- python-ibm-db-debuginfo-2.0.4.1-2.ibm.el6.x86_64.rpm
| | | | |-- python-msgpack-0.1.13-3.ibm.x86_64.rpm
| | | | |-- python-msgpack-debuginfo-0.1.13-3.ibm.x86_64.rpm
| | | | |-- python-netifaces-0.5-2.ibm.x86_64.rpm
| | | | |-- python-netifaces-debuginfo-0.5-2.ibm.x86_64.rpm
| | | | |-- python-qpid-qmf-0.18-6.001.ibm.x86_64.rpm
| | | | |-- python-sqlalchemy-0.7.9-4.ibm.el6.x86_64.rpm
| | | | |-- python-sqlalchemy-debuginfo-0.7.9-4.ibm.el6.x86_64.rpm
| | | | |-- pyxattr-0.5.0-2.002.ibm.x86_64.rpm
| | | | |-- pyxattr-debuginfo-0.5.0-2.002.ibm.x86_64.rpm
| | | | |-- PyYAML-3.10-7.ibm.x86_64.rpm
| | | | |-- PyYAML-debuginfo-3.10-7.ibm.x86_64.rpm
| | | | |-- qpid-cpp-client-0.18-6.001.ibm.x86_64.rpm
| | | | |-- qpid-cpp-client-devel-0.18-6.001.ibm.x86_64.rpm
| | | | |-- qpid-cpp-client-rdma-0.18-6.001.ibm.x86_64.rpm
| | | | |-- qpid-cpp-client-ssl-0.18-6.001.ibm.x86_64.rpm
| | | | |-- qpid-cpp-debuginfo-0.18-6.001.ibm.x86_64.rpm
| | | | |-- qpid-cpp-server-0.18-6.001.ibm.x86_64.rpm
| | | | |-- qpid-cpp-server-ha-0.18-6.001.ibm.x86_64.rpm
| | | | |-- qpid-cpp-server-rdma-0.18-6.001.ibm.x86_64.rpm
| | | | |-- qpid-cpp-server-ssl-0.18-6.001.ibm.x86_64.rpm
| | | | |-- qpid-cpp-server-store-0.18-6.001.ibm.x86_64.rpm
| | | | |-- qpid-cpp-server-xml-0.18-6.001.ibm.x86_64.rpm
| | | | |-- qpid-qmf-0.18-6.001.ibm.x86_64.rpm
| | | | |-- qpid-qmf-devel-0.18-6.001.ibm.x86_64.rpm
| | | | `-- smartcloud-2013.1-1.1.4.ibm.201406240450.noarch.rpm
| | | |-- sce310_jre_linux_installer.bin
| | | |-- SmartCloud_Orchestrator_2.3.0.1_IF0004.fxtag
| | | `-- SP7_rpm_list
| | |-- 8.5.0.0-WS-BPMPCPD-IFJR48696.zip
| | |-- helper.py
| | |-- iaasgateway-2013.1-1.1.4.ibm.201409112157.noarch.rpm
| | |-- iaasgateway_http.cfg
| | |-- IFIX005_IWD_SCO2301_20140902-0433-971.tar
| | |-- ifix5.py
| | |-- install_script
| | | |-- 9mjhzl_iaasgateway.py
| | | |-- 9mjhzl_keystone.py
| | | |-- bpm_jr48696.py
| | | |-- helper.py
| | | |-- iwd_fix.py
| | | |-- psirt_1838.py
| | | |-- psirt_1876.py
| | | `-- zz00267.py
| | |-- keystone_zz00267.patch
| | |-- PSIRT1876.patch
| | |-- rtc-184008-keystone-11c387264-ifix-el6.tar.gz
| | |-- smartcloud-2013.1-1.1.3.ibm.201407310025.noarch.rpm
| | |-- smartcloud-2013.1-1.1.4.ibm.201407310043.noarch.rpm
| | `-- SmartCloud_Orchestrator_2.3.0.1_IF0005.fxtag
| |-- backups
| |-- helper.py -> installfiles/helper.py
| |-- ifix6.py
| |-- installfiles
| | |-- 02984.999.000.rtc-190749-nova-147e3d8da-ifix-el6.tgz
| | |-- 3.1.0.4-IBM-SCE-IF003-201410100432.zip
| | |-- 8.0.0.0-WS-WASJavaSDK-LinuxX32-IFPI20798.zip
| | |-- 8.5.0.0-WS-BPM-IFJR47778.zip
| | |-- 8.5.0.0-WS-BPM-IFJR47937.zip
| | |-- 8.5.0.0-WS-BPM-IFJR48541.zip
| | |-- 8.5.0.0-WS-BPM-IFJR48570.zip
| | |-- 8.5.0.0-WS-BPM-IFJR48704.zip
| | |-- 8.5.0.0-WS-BPM-IFJR49864.zip
| | |-- 8.5.0.0-WS-BPM-IFJR51596.zip
| | |-- 8.5.0.0-WS-WASJavaSDK-LinuxX64-IFPI20797.zip
| | |-- bpm8.5.0.0IFJR.py
| | |-- db2v101fp3a.py
| | |-- helper.py
| | |-- HybridCSB-API.war
| | |-- ibm-java-i386-sdk-6.0-16.1.i386.rpm
| | |-- ibm-java-x86_64-sdk-6.0-16.1.x86_64.rpm
| | |-- ICCT_Install_2.3.0.1-20.zip
| | |-- IFIX_IWD_SCO2301_IFIX06_20141119-1243-200.tar
| | |-- il_install_package_23027.zip
| | |-- il_proxy_install_package_23027.zip
| | |-- index.gt
| | |-- iwd_fix.py
| | |-- java632sr16fp1.py
| | |-- java664sr16fp1.py
| | |-- keystone_memcache.zip
| | |-- ldapauth.py
| | |-- n3.app_1.0.0.20141021-1433.jar
| | |-- n3.orchestrator.app_1.0.0.20141021-1433.jar
| | |-- nova-cloud-modify
| | |-- openstack-nova-201411241107.py
| | |-- plugin.com.ibm.orchestrator.BPMInvoker-1.0.1.1.jar
| | |-- plugin.com.ibm.orchestrator.rest-1.0.1.1.jar
| | |-- rtc-190961-nova-283ad24da-ifix-el6.tgz
| | |-- sce310fp4aparse60379.py
| | |-- sce310java664sr16fp1.py
| | |-- sce310_jre_linux_installer.bin
| | |-- smartcloud-2013.1-1.1.4.ibm.201410270110.noarch.rpm
| | |-- smartcloud-2013.1-1.1.4.ibm.201412010336.noarch.rpm
| | |-- smartcloud-201412010336.py
| | |-- vilp23027.py
| | |-- vils23027.py
| | |-- vmware.properties_ifix6
| | |-- was80java32-ifpi20798.py
| | `-- was85javasdk64-ifpi20797.py
| `-- SmartCloud_Orchestrator_2.3.0.1_IF0006.fxtag
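After extracting the archive, it can be useful to verify that the key top-level files shown in the listing are present before proceeding. The sketch below is illustrative only; the helper name `check_extraction` is not part of the iFix package:

```shell
# Illustrative check: confirm the top-level iFix files exist after extraction.
# Pass the directory you extracted the archive into (<your_dir>).
check_extraction() {
  dir="$1"
  for f in helper.py ifix7.py SmartCloud_Orchestrator_2.3.0.1_IF0007.fxtag; do
    if [ ! -e "$dir/$f" ]; then
      echo "missing: $f"
      return 1
    fi
  done
  echo "extraction looks complete"
}
```

For example, `check_extraction /opt/ifix7` (the path is hypothetical).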
4) Apply fix for the DB2 security vulnerabilities CVE-2013-6747 and CVE-2014-0963.
Note: This fix is already included in iFix 3 (2.3.0.1-CSI-ISCO-IF0003). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0003 or a higher iFix version to your SCO environment.
Follow the steps described in "23) Apply fix for the DB2 security vulnerabilities CVE-2013-6747 and CVE-2014-0963" under "(1) Downloading the DB2 fix pack" on the Manual tab (above) to download that fix.
When downloaded, copy v10.1fp3a_linuxx64_server.tar.gz to Central Server 1, in the ifix directory <your_dir>/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles.
Running the iFix installation script will check whether the fix file is present and install it if required.
Follow the steps in "23) Apply fix for the DB2 security vulnerabilities CVE-2013-6747 and CVE-2014-0963" under "(2) Installing the DB2 fix pack on Central Server 1 and each Region Servers (if shared DB is not used)" on the Manual tab (above) if you want to install that fix manually.
5) Download additional BPM fix for APAR JR51814 to be installed when applying fix for APAR ZZ00292 (Exposed Workflows in BPM REST causing 2min response)
Note: This is an optional step for applying the fix for APAR ZZ00292, which is described in "33) Apply fix for APAR ZZ00292 (Exposed Workflows in BPM REST causing 2min response)" on the Manual tab (above).
This fix is already included in iFix6 (2.3.0.1-CSI-ISCO-IF0006). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a higher iFix version to your SCO environment.
Log in to "IBM Support: Fix Central" (FC) and download interim fix "8.5.0.0-WS-BPM-IFJR51814" for APAR JR51814 for BPM 8.5.0.0, if officially available from FC.
Select "IBM Business Process Manager Standard" as the product, "8.5.0.0" as the installed version, and "Linux 64-bit,x86_64" as the platform.
Select "Browse for fixes" and from the list of fixes select/download interim fix: 8.5.0.0-WS-BPM-IFJR51814.
When downloaded, copy 8.5.0.0-WS-BPM-IFJR51814.zip to Central Server 1, in the ifix directory <your_dir>/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles.
Running the iFix installation script will check whether the fix file is present and install it if required.
6) Prepare Workload Deployer InterimFix installation
Fill in all parameters in the Workload Deployer InterimFix installation config file iwd-ifix-template.config, in the ifix directory <your_dir>/2.3.0.1-CSI-ISCO-IF0007/installfiles/iwd.
For details, review the InterimFix installation procedure for the Workload Deployer component in IBM SmartCloud Orchestrator 2.3.0.1 (WorkloadDeployer-2.3.0.1-IFIX7-Install.README).
7) Backup VIL classes shipped with iFix7 on Central Server 2
Perform (2.1), (3.1) and (4.1) of step "Upgrade VIL classes shipped with iFix7 on Central Server 2" described on the Manual tab (above)
8) Ensure the public key from Central Server 1 is available on all KVM Compute nodes
Command to execute as root on Central Server 1 for all KVM Compute nodes:
- ssh-copy-id -i ~/.ssh/smartcloud.pub <IP-Of-KVM-Computenode>
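If you have several KVM Compute nodes, the ssh-copy-id command above can be generated per node with a small loop. A minimal sketch; the helper name and node addresses are illustrative, not part of the product:

```shell
# Illustrative loop: emit one ssh-copy-id command per KVM Compute node.
# Review the output, then pipe it to sh to execute.
keycopy_commands() {
  for node in "$@"; do
    echo "ssh-copy-id -i ~/.ssh/smartcloud.pub $node"
  done
}
```

For example, `keycopy_commands 10.10.0.17 10.10.0.18 | sh` (hypothetical addresses).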
9) Run the iFix installation script on Central Server 1
WARNING: Review the steps on the Manual tab (above) if you do not want to run the iFix installation script on Central Server 1.
- From the directory where you unpacked the iFix package, run the iFix installation script by using the following command:
./ifix7.py --cs2=<cs2> --cs3=<cs3> --cs4=<cs4> --rs=<regionservers> --cn=<compute_node> --wasavil=<wasviladmin> --waspvil=<password> --wasabpm=<wasbpmadmin> --waspbpm=<password>
Example: ./ifix7.py --cs2=10.10.0.12 --cs3=10.10.0.13 --cs4=10.10.0.14 --rs=10.10.0.15,10.10.0.16 --cn=10.10.0.17 --wasavil=wasadmin --waspvil=passw0rd --wasabpm=admin --waspbpm=passw0rd
You can use ./ifix7.py -h to show the usage.
Usage: ifix7.py [options]
Options:
-h, --help show this help message and exit
--cs2=CS2 hostname/ip address of central server 2
--cs3=CS3 hostname/ip address of central server 3
--cs4=CS4 hostname/ip address of central server 4
--rs=RS (optional)list of hostnames/ip addresses of region
servers format server1,server2,server3,...
--cn=CN (optional)list of hostnames/ip addresses of compute
nodes format compute1,compute2,compute3,...
--wasavil=WASADMINVIL
virtual image library WAS administrator ID to be used
during the installation procedure
--waspvil=WASPASSWORDVIL
virtual image library WAS administrator password
--wasabpm=WASADMINBPM
business process manager WAS administrator ID to be
used during the installation procedure
--waspbpm=WASPASSWORDBPM
business process manager WAS administrator password
You can refer to the log file ifix7.log in the script path for more detailed information about the installation process.
Other log information:
VIL install/upgrade log: /tmp/fresh_install_vil.log, upgrade_vil.log on Central Server 2
VIL Proxy install/upgrade log: /tmp/fresh_install_proxy.log, upgrade_proxy.log on Region Server
IWD fix install log: /tmp/iwdifix7.log on Central Server 3
Note: If you have ifix4 already applied manually in your environment, manually create a file '/home/db2inst1/.db2ifix0004' on Central Server 1 before running script ifix7.py.
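The marker file mentioned in the note can be created with a one-line touch; it is wrapped here as a small function only so that the path (taken from the note above) is explicit:

```shell
# Create the marker file that tells ifix7.py that iFix 4 is already applied.
# Default path is the one given in the note; an alternate path can be passed
# (used here only to make the function easy to exercise).
mark_ifix4_applied() {
  marker="${1:-/home/db2inst1/.db2ifix0004}"
  touch "$marker"
}
```

Run it as root on Central Server 1, e.g. `mark_ifix4_applied`.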
10) Finish Workload Deployer InterimFix installation on Central Server 3
Restore the configuration from the backup made by the iFix installation script and restart Workload Deployer.
The iFix installation script creates the backup folders /opt/ibm/maestro_ifix7bkp_<YYYY-MM-DD-HH-MM-SS> and /opt/ibm/rainmaker_ifix7bkp_<YYYY-MM-DD-HH-MM-SS> on Central Server 3.
For details, review the InterimFix installation procedure for the Workload Deployer component in IBM SmartCloud Orchestrator 2.3.0.1 (WorkloadDeployer-2.3.0.1-IFIX7-Install.README).
11) Change protocol to TLSv1.2 for SCUI on Central Server 3
Refer to step 44 on the Manual tab (above) for details on this step.
12) Perform additional steps not covered by the iFix installation script before starting SmartCloud Orchestrator
Note: Review the Manual tab (above) for details on the listed steps.
The listed fixes are already included in 2.3.0.1-CSI-ISCO-IF0006 or a lower iFix version.
Ignore this part if you have already applied these fixes as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a lower iFix version to your SCO environment.
- Apply fix for Iaasgateway cluster support
- Enable IWD notifications
13) Start IBM SmartCloud Orchestrator on Central Server 1
- Stop IBM SmartCloud Orchestrator on Central Server 1 (to enable a clean start in the following step)
Change to directory: /iaas/scorchestrator/
Stop IBM SmartCloud Orchestrator: ./SCOrchestrator.py --stop
===>>> Stopping Smart Cloud Orchestrator
...
===>>> Stopping Smart Cloud Orchestrator complete
Wait until all components have stopped
- Start IBM SmartCloud Orchestrator on Central Server 1
Change to directory: /iaas/scorchestrator/
Start IBM SmartCloud Orchestrator: ./SCOrchestrator.py --start
Wait until all components have started
14) Disable SSLv3 protocol in already deployed instances
To mitigate known security vulnerabilities in already deployed instances, disable the SSLv3 protocol for these instances.
For details, refer to the procedure for disabling the SSLv3 protocol for the Workload Deployer component (disable-ssl3-2301.odt).
15) Perform additional steps not covered by the iFix installation script after having started IBM SmartCloud Orchestrator
Note: Review the Manual tab (above) for details on the listed steps.
The listed fixes are already included in 2.3.0.1-CSI-ISCO-IF0006 or a lower iFix version.
Ignore this part if you have already applied these fixes as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a lower iFix version to your SCO environment.
- Apply fix for APAR ZZ00256
- Apply fix for APAR SE58688 - KVM nodes going offline
- Apply fix for ZZ00242
- Apply fix for APAR SE59801
- Upgrade the Image Construction and Composition Tool (ICCT)
*********************************************************************************************
WARNING: If you ran the script ifix7.py as outlined on the Standard tab and it completed successfully, most of the steps in this section can be skipped. Refer to the instructions on the Standard tab for the additional manual steps that still need to be executed after running the script ifix7.py.
Note: If you are upgrading manually, be sure to execute all the steps described in this section so that all of the cumulative fixes (that is, iFix3, iFix4, iFix5, iFix6, and iFix7) are included.
*********************************************************************************************
1) Inventory Tagging
To make it easier for support to determine which IBM SmartCloud Orchestrator fixes are installed on your system, copy the FXTAG files to /opt/ibm/SmartCloud_Orchestrator/properties/version/ on Central Server 3 (if the directory is not present on Central Server 3, create it).
scp *.fxtag \
<central_server_3>:/opt/ibm/SmartCloud_Orchestrator/properties/version/
Note: For iFix7, copy the provided FXTAG file SmartCloud_Orchestrator_2.3.0.1_IF0007.fxtag
2) Update pdcollect files on Central Server 1.
Note: This fix is already included in iFix3 (2.3.0.1-CSI-ISCO-IF0003). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0003 or a higher iFix version to your SCO environment.
(1) Back up the following files in '/iaas/pdcollect' on Central Server 1 (if you are using non-root pdcollect, find the files in /home/<yourmechid>):
Components_nonroot.xml
Components.xml
Environment.xml
PDCOLLECT_Central_Server_Template.xml
(2) Update the files in /iaas/pdcollect with the ones in folder '2.3.0.1-CSI-ISCO-IF0006/2.3.0.1-CSI-ISCO-IF0005/2.3.0.1-CSI-ISCO-IF0004/2.3.0.1-CSI-ISCO-IF0003/scp-pdcollect/'
3) Upgrade Virtual Image Library (VIL) and VIL distributed proxy components with BL 23027 on Central Server 2 and each Region Server
Note: This fix is already included in iFix6 (2.3.0.1-CSI-ISCO-IF0006). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a higher iFix version to your SCO environment.
- Ensure that the VIL service and the VIL proxy service are both stopped
(1) On Central Server 2, check the vil service status by running the following command:
service vil status
If the vil service is not stopped, run the following command to stop it:
service vil stop
(2) On each Region Server, check the vilProxy service status by running the following command:
service vilProxy status
If the vilProxy service is not stopped, run the following command to stop it:
service vilProxy stop
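The status-then-stop pairs above follow the same pattern, so they can be expressed as one small helper. A sketch, assuming the SysV `service` command shown in the steps; the function name is illustrative:

```shell
# Illustrative helper: stop a SysV service only if it is currently running.
# Mirrors the vil / vilProxy status-then-stop steps above.
stop_if_running() {
  svc="$1"
  if service "$svc" status >/dev/null 2>&1; then
    service "$svc" stop
  fi
}
```

For example, `stop_if_running vil` on Central Server 2 and `stop_if_running vilProxy` on each Region Server.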
- Upgrade VIL proxy on each Region Server where it is installed:
(1) copy il_proxy_install_package_23027.zip to the Region Server, in a directory of your choice.
(2) unpack il_proxy_install_package_23027.zip into a temporary directory <your_path>:
unzip il_proxy_install_package_23027.zip -d <your_path>
(3) navigate to <your_path> and upgrade Virtual Image Library proxy by running the following script:
cd <your_path>
./install_vil.sh -proxy -vilServer <hostnameFqdn>
Where
vilServer: Specifies the fully qualified host name or the IP address of the Virtual
Image Library server (Central Server 2).
If you specify the fully qualified host name of the Virtual Image Library
server, the proxy must be able to resolve that host name. If it cannot,
add the IP address and host name to /etc/hosts before installing the
proxy.
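If the proxy cannot resolve the Virtual Image Library server host name, the /etc/hosts entry mentioned above can be added with a small idempotent helper. This is a sketch with placeholder values; the add_host_entry helper is not part of the official procedure.

```shell
#!/bin/sh
# Hedged sketch: append an IP/host-name mapping only if the host name is
# not already present in the hosts file (third argument defaults to /etc/hosts).
add_host_entry() {
    ip=$1; fqdn=$2; hosts=${3:-/etc/hosts}
    if ! grep -qw "$fqdn" "$hosts"; then
        echo "$ip $fqdn" >> "$hosts"
    fi
}

# Example (placeholders for the Central Server 2 address and FQDN):
# add_host_entry 10.0.0.12 cs2.example.com
```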
- Upgrade VIL server and proxy on Central Server 2:
(1) copy il_install_package_23027.zip to Central Server 2, in a directory of your choice.
(2) unpack il_install_package_23027.zip into a temporary directory <your_path>:
unzip il_install_package_23027.zip -d <your_path>
(3) navigate to <your_path> and upgrade Virtual Image Library by running the following script:
cd <your_path>
./install_vil.sh -u wasadmin -p $smartcloud_password
4) Update the Java 64-bit installation on Central Server 1
- Check the Java current version:
/opt/ibm/java-x86_64-60/bin/java -fullversion
- Update to Java 6 SR16 FP3 if not already installed:
yum localupdate ibm-java-x86_64-sdk-6.0-16.3.x86_64.rpm
...
Updated:
ibm-java-x86_64-sdk.x86_64 0:6.0-16.3
Complete!
- Check that the updated version is Java 6 SR16 FP3:
/opt/ibm/java-x86_64-60/bin/java -fullversion
java full version "JRE 1.6.0 IBM Linux build pxa6460sr16fp3ifx-20150407_01 (SR16 FP3)"
5) Disable RC4 ciphers for IBM Java on Central Server 1
- Change to the JRE security directory:
cd /opt/ibm/java-x86_64-60/jre/lib/security/
- Create a backup copy of the "java.security" file:
cp --preserve java.security java.security_bak
- Edit the "java.security" file, and add or edit the "jdk.tls.disabledAlgorithms" property to disable RC4:
vi java.security
jdk.tls.disabledAlgorithms=SSLv3, RC4
Note: To disable RC4, the text "RC4" must be included in the list of disabled ciphers that is defined by the jdk.tls.disabledAlgorithms property.
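To confirm the edit took effect, you can grep the property. A minimal sketch (the check_rc4_disabled helper name is illustrative):

```shell
#!/bin/sh
# Hedged sketch: succeed only if the jdk.tls.disabledAlgorithms property
# in the given java.security file includes RC4.
check_rc4_disabled() {
    grep '^jdk.tls.disabledAlgorithms' "$1" | grep -q 'RC4'
}

# Example:
# check_rc4_disabled /opt/ibm/java-x86_64-60/jre/lib/security/java.security \
#     && echo "RC4 disabled" || echo "RC4 still enabled"
```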
6) Update the Java 64-bit installation on Central Server 2 (used by Public Cloud Gateway)
- Check the Java current version:
/opt/ibm/java-x86_64-60/bin/java -fullversion
- Copy ibm-java-x86_64-sdk-6.0-16.3.x86_64.rpm to Central Server 2,
in a directory of your choice.
- Update to Java 6 SR16 FP3 if not already installed:
yum localupdate ibm-java-x86_64-sdk-6.0-16.3.x86_64.rpm
...
Updated:
ibm-java-x86_64-sdk.x86_64 0:6.0-16.3
Complete!
- Check that the updated version is Java 6 SR16 FP3:
/opt/ibm/java-x86_64-60/bin/java -fullversion
java full version "JRE 1.6.0 IBM Linux build pxa6460sr16fp3ifx-20150407_01 (SR16 FP3)"
7) Disable RC4 ciphers for IBM Java on Central Server 2
- Change to the JRE security directory:
cd /opt/ibm/java-x86_64-60/jre/lib/security/
- Create a backup copy of the "java.security" file:
cp --preserve java.security java.security_bak
- Edit the "java.security" file, and add or edit the "jdk.tls.disabledAlgorithms" property to disable RC4:
vi java.security
jdk.tls.disabledAlgorithms=SSLv3, RC4
Note: To disable RC4, the text "RC4" must be included in the list of disabled ciphers that is defined by the jdk.tls.disabledAlgorithms property.
8) Install WAS fixpack 8.0.0.9 for VIL on Central Server 2
- Copy 8.0.0-WS-WAS-FP0000009-part1.zip and 8.0.0-WS-WAS-FP0000009-part2.zip to the Central Server 2, in a directory of your choice (<your_dir>).
- Unzip both fixpack parts
unzip 8.0.0-WS-WAS-FP0000009-part1.zip
unzip 8.0.0-WS-WAS-FP0000009-part2.zip
- Stop VIL server if not already stopped, by running the following command:
service vil stop
- Install the fixpack com.ibm.websphere.EXPRESS.v80_8.0.9.20140530_2152, by running the following command:
/opt/IBM/InstallationManager/eclipse/tools/imcl install com.ibm.websphere.EXPRESS.v80_8.0.9.20140530_2152 -installationDirectory /opt/IBM/WebSphere/AppServer/
-repositories <your_dir> -acceptLicense
- If the installation completes successfully, the following message is displayed:
"Installed com.ibm.websphere.EXPRESS.v80_8.0.9.20140530_2152 to the /opt/IBM/WebSphere/AppServer directory."
- Verify that the fix pack has been installed by checking that it is listed in the output of the following command:
/opt/IBM/InstallationManager/eclipse/tools/imcl listInstalledPackages
...
com.ibm.websphere.EXPRESS.v80_8.0.9.20140530_2152
9) Install WAS interim fixes IFPI38186 and IFPI36563 for VIL on Central Server 2
- Copy 8.0.0.0-WS-WASJavaSDK-LinuxX32-IFPI38186.zip and 8.0.0.9-ws-was-ifpi36563.zip to the Central Server 2, in a directory of your choice (<your_dir>).
- Stop VIL server if not already stopped, by running the following command:
service vil stop
- Install the interim fix IFPI38186, by running the following command:
/opt/IBM/InstallationManager/eclipse/tools/imcl install 8.0.0.0-WS-WASJavaSDK-LinuxX32-IFPI38186_8.0.0.20150401_1320 -installationDirectory /opt/IBM/WebSphere/AppServer/
-repositories <your_dir>/8.0.0.0-WS-WASJavaSDK-LinuxX32-IFPI38186.zip
- If the installation completes successfully, the following message is displayed:
"Installed 8.0.0.0-WS-WASJavaSDK-LinuxX32-IFPI38186_8.0.0.20150401_1320 to the /opt/IBM/WebSphere/AppServer directory."
- If the installation fails with the following message:
ERROR: An interim fix for the Java SDK is installed already. Uninstall interim fix 8.0.0.0-WS-WASJavaSDK-LinuxX32-<interim_fix_code> before installing a different Java SDK interim fix."
then the interim fix <interim_fix_code> must first be uninstalled, by running the following command:
/opt/IBM/InstallationManager/eclipse/tools/imcl uninstall 8.0.0.0-WS-WASJavaSDK-LinuxX32-<interim_fix_code> -installationDirectory /opt/IBM/WebSphere/AppServer/
and then the interim fix IFPI38186 can be installed as described above.
- Install the interim fix IFPI36563, by running the following command:
/opt/IBM/InstallationManager/eclipse/tools/imcl install 8.0.0.9-WS-WAS-IFPI36563_8.0.9.20150310_2107 -installationDirectory /opt/IBM/WebSphere/AppServer/
-repositories <your_dir>/8.0.0.9-ws-was-ifpi36563.zip
- If the installation completes successfully, the following message is displayed:
"Installed 8.0.0.9-WS-WAS-IFPI36563_8.0.9.20150310_2107 to the /opt/IBM/WebSphere/AppServer directory."
- Verify that the interim fixes have been installed by checking that they are listed in the output of the following command:
/opt/IBM/InstallationManager/eclipse/tools/imcl listInstalledPackages
10) Update the Java 64-bit installation on Central Server 3 (used by SCUI)
- Check the Java current version:
/opt/ibm/java-x86_64-60/bin/java -fullversion
- Copy ibm-java-x86_64-sdk-6.0-16.3.x86_64.rpm to Central Server 3,
in a directory of your choice.
- Update to Java 6 SR16 FP3 if not already installed:
yum localupdate ibm-java-x86_64-sdk-6.0-16.3.x86_64.rpm
...
Updated:
ibm-java-x86_64-sdk.x86_64 0:6.0-16.3
Complete!
- Check that the updated version is Java 6 SR16 FP3:
/opt/ibm/java-x86_64-60/bin/java -fullversion
java full version "JRE 1.6.0 IBM Linux build pxa6460sr16fp3ifx-20150407_01 (SR16 FP3)"
11) Disable RC4 ciphers for IBM Java on Central Server 3
- Change to the JRE security directory:
cd /opt/ibm/java-x86_64-60/jre/lib/security/
- Create a backup copy of the "java.security" file:
cp --preserve java.security java.security_bak
- Edit the "java.security" file, and add or edit the "jdk.tls.disabledAlgorithms" property to disable RC4:
vi java.security
jdk.tls.disabledAlgorithms=SSLv3, RC4
Note: To disable RC4, the text "RC4" must be included in the list of disabled ciphers that is defined by the jdk.tls.disabledAlgorithms property.
12) Apply WAS interim fix IFPI37009 for BPM on Central Server 4
- Copy 8.5.0.0-WS-WASJavaSDK-LinuxX64-IFPI37009.zip to the Central Server 4, in a directory of your choice (<your_dir>).
- Stop BPM if not already stopped, by running the following command (wait for services to stop and confirm with service bpm status afterwards):
service bpm stop
- Install the interim fix IFPI37009, by running the following command:
/opt/IBM/InstallationManager/eclipse/tools/imcl install 8.5.0.0-WS-WASJavaSDK-LinuxX64-IFPI37009 -installationDirectory /opt/ibm/BPM/v8.5
-repositories <your_dir>/8.5.0.0-WS-WASJavaSDK-LinuxX64-IFPI37009.zip
- If the installation completes successfully, the following message is displayed:
"Installed 8.5.0.0-WS-WASJavaSDK-LinuxX64-IFPI37009 to the /opt/ibm/BPM/v8.5 directory."
Upgrade of WAS85 with fixpack/patches 8.5.0.0-WS-WASJavaSDK-LinuxX64-IFPI37009 for BPM on host: <hostname> succeeded ...
- If the installation fails with the following message:
ERROR: An interim fix for the Java SDK is installed already. Uninstall interim fix 8.5.0.0-WS-WASJavaSDK-LinuxX64-<interim_fix_code> before installing a different Java SDK interim fix.
then the interim fix <interim_fix_code> must first be uninstalled, by running the following command:
/opt/IBM/InstallationManager/eclipse/tools/imcl uninstall 8.5.0.0-WS-WASJavaSDK-LinuxX64-<interim_fix_code> -installationDirectory /opt/ibm/BPM/v8.5
and then the interim fix IFPI37009 can be installed as described above.
- Verify that the interim fix has been installed, checking that it is properly listed among the output of the following command:
/opt/IBM/InstallationManager/eclipse/tools/imcl listInstalledPackages
13) Update the 64-bit Java installation used by SmartCloud Entry on each VMware Region Server
- Ensure SmartCloud Entry is stopped on each VMware Region Server by running the following command:
service sce status
IBM SmartCloud Entry is stopped
- Copy F20150520-2124_sce310_jre_linux_installer.bin to the VMware Region Server, in a directory of your choice.
- Update to Java 6 SR16 FP3 from the F20150520-2124_sce310_jre_linux_installer.bin downloaded directory:
./F20150520-2124_sce310_jre_linux_installer.bin
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
Launching installer...
===============================================================================
Choose Locale...
----------------
1- Deutsch
->2- English
...
CHOOSE LOCALE BY NUMBER:
===============================================================================
(created with InstallAnywhere)
-------------------------------------------------------------------------------
Preparing CONSOLE Mode Installation...
===============================================================================
Introduction
------------
InstallAnywhere will guide you through the JRE (Java Runtime Environment) update for IBM SmartCloud Entry.
The JRE version of this update is 1.6.0-SR16 FP3.
It is strongly recommended that you quit all programs before continuing with this installation.
Respond to each prompt to proceed to the next step in the installation. If you want to change something on a previous step, type 'back'.
You may cancel this installation at any time by typing 'quit'.
PRESS <ENTER> TO CONTINUE:
===============================================================================
License Agreement
-----------------
Installation and use of IBM SmartCloud Entry JRE Update requires acceptance of the following License Agreement:
International Program License Agreement
...
DO YOU ACCEPT THE TERMS OF THIS LICENSE AGREEMENT? (Y/N): Y
===============================================================================
IBM SmartCloud Entry Install Locations
--------------------------------------
The following IBM SmartCloud Entry installation locations require the JRE version to be updated to version 1.6.0-SR16 FP3.
Select the install location to proceed with the JRE update.
->1- /opt/ibm/SCE31
ENTER THE NUMBER FOR YOUR CHOICE, OR PRESS <ENTER> TO ACCEPT THE DEFAULT::
===============================================================================
Pre-Installation Summary
------------------------
Review the following information before continuing:
Installation folder:
/opt/ibm/SCE31
Product selected for update:
IBM SmartCloud Entry 3.1
Current JRE version:
1.6.0-SR16 FP1
Update JRE version:
1.6.0-SR16 FP3
PRESS <ENTER> TO CONTINUE:
A newer file named "copyright" already exists at "/opt/ibm/SCE31".
Do you want to overwrite the existing file?
-> Answer 1
===============================================================================
Ready To Install
----------------
InstallAnywhere is now ready to update the JRE version of IBM SmartCloud Entry 3.1 at the following installation location:
/opt/ibm/SCE31
PRESS <ENTER> TO INSTALL:
===============================================================================
Installing...
-------------
[==================|==================|==================|==================]
[------------------|------------------|------------------|------------------]
===============================================================================
Installation Complete
---------------------
Congratulations. The IBM SmartCloud Entry JRE update installed successfully at the following location:
/opt/ibm/SCE31
PRESS <ENTER> TO EXIT THE INSTALLER:
- Check that the updated Java version used by SmartCloud Entry is Java 6 SR16 FP3:
/opt/ibm/SCE31/jre/bin/java -fullversion
java full version "JRE 1.6.0 IBM Linux build pxa6460sr16fp3ifx-20150407_01 (SR16 FP3)"
14) Disable RC4 ciphers for 64-bit Java installation used by SmartCloud Entry on each VMware Region Server
- Change to the JRE security directory:
cd /opt/ibm/SCE31/jre/lib/security
- Create a backup copy of the "java.security" file:
cp --preserve java.security java.security_bak
- Edit the "java.security" file, and add or edit the "jdk.tls.disabledAlgorithms" property to disable RC4:
vi java.security
jdk.tls.disabledAlgorithms=SSLv3, RC4
Note: To disable RC4, the text "RC4" must be included in the list of disabled ciphers that is defined by the jdk.tls.disabledAlgorithms property.
15) Upgrade OPENSTACK components
Note: This fix is already included in iFix4 (2.3.0.1-CSI-ISCO-IF0004). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0004 or a higher iFix version to your SCO environment.
- Update yum repos with new OPENSTACK packages
(1) On Central Server 1, extract openstack.tar to your iFix path
tar -xf openstack.tar
Two folders are extracted, openstack_noarch and openstack_x86.
Copy the rpm packages in these two folders to the SCO yum repository:
cp -rf openstack_noarch/* /data/repos/scp/ibm-rpms/openstack_noarch/
cp -rf openstack_x86/* /data/repos/scp/ibm-rpms/openstack_x86/
Run the following commands to update the yum repository:
yum clean all
createrepo /data/repos/scp/
(2) On each SCO Region server, perform the same actions as in (1) to update its yum repository.
- Back up configuration files
(1) On Central Server 2, back up the following files/folders:
/root/keystonerc
/etc/keystone
/etc/iaasgateway
/etc/my.cnf
/etc/qpid
(2) On each Region Server, back up the following files/folders:
/root/.SCE31
/root/openrc
/root/keystonerc
/etc/cinder
/etc/nova
/etc/glance
/etc/qpid
/etc/my.cnf
(3) On each Compute node, back up the following files/folders:
/etc/nova
/etc/my.cnf
/root/openrc
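The per-server backups listed above can be collected into a single tar archive. A hedged sketch follows; the backup_configs helper and the archive path are examples, and the file list must be adjusted to the server role.

```shell
#!/bin/sh
# Hedged sketch: append each existing file or directory to one tar archive.
backup_configs() {
    archive=$1; shift
    for p in "$@"; do
        if [ -e "$p" ]; then
            tar -rf "$archive" "$p"
        fi
    done
}

# Example for a Region Server (use the list for your server role):
# backup_configs /root/preupgrade-config.tar /root/.SCE31 /root/openrc \
#     /root/keystonerc /etc/cinder /etc/nova /etc/glance /etc/qpid /etc/my.cnf
```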
- Upgrade OPENSTACK rpm packages
On Central Server 2, each Region Server, and each Compute node, perform the following actions:
(1) Copy SP7_rpm_list to a directory of your choice on the server, <your_path>
(2) Run the following commands to update the OpenStack components:
cd <your_path>
yum clean all
for i in $(cat SP7_rpm_list); do yum update $i;done
- Restore configuration files
(1) On Central Server 2:
/root/keystonerc
/etc/keystone
/etc/iaasgateway
/etc/my.cnf
/etc/qpid
(2) On each Region Server:
/root/.SCE31
/root/openrc
/root/keystonerc
/etc/cinder
/etc/nova
/etc/glance
/etc/qpid
/etc/my.cnf
(3) On each Compute node:
/etc/nova
/etc/my.cnf
/root/openrc
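If the backups were captured as a tar archive, restoring them can be sketched as follows. The restore_configs helper and archive path are examples; extracting to a scratch directory first lets you inspect the contents before a real restore.

```shell
#!/bin/sh
# Hedged sketch: extract a configuration backup archive. The second
# argument (target root) defaults to / for a real restore; pass another
# directory to inspect the contents first.
restore_configs() {
    archive=$1; target=${2:-/}
    tar -C "$target" -xf "$archive"
}

# Example:
# restore_configs /root/preupgrade-config.tar
```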
- Upgrade SCE on each VMware Region Server
(1) Copy 3.1.0.4-IBM-SCE-FP004-201407010432.zip to a directory of your choice on the server, <your_path>
(2) Extract 3.1.0.4-IBM-SCE-FP004-201407010432.zip in the directory where you copied it.
(3) Back up /opt/ibm/SCE31/program/skc.ini
(4) Upgrade SCE:
Log in to the SmartCloud Entry console by running
telnet localhost 7777
You will see a prompt like
osgi>
In this console, type the following commands (shown together with example output):
osgi> showrepos
Metadata repositories:
Artifacts repositories:
file:/C:/Users/IBM_ADMIN/.eclipse/207580638/p2/org.eclipse.equinox.p2.core/cache/
If the repository that is storing the extracted files is not available, use the addrepo command to add that repository.
osgi> addrepo file:<the absolute path when you unpacked the zip file>
SKC update repository added
Install the updates by using the installupdates command.
osgi> installupdates
SKC updates to install:
com.ibm.cfs.product 3.1.0.3-201403100300 ==> com.ibm.cfs.product 3.1.0.4-201407010432
SKC update done
When the update is complete, activate the changes by using the close command to end the OSGi session, then restarting SmartCloud Entry.
osgi> close
(5) Restore the original copy of /opt/ibm/SCE31/program/skc.ini
(6) Restart SmartCloud Entry by running
service sce restart
- Apply patches to OpenStack Components
Note: Use ./SCOrchestrator.py to stop all the services before applying the changes.
(1) Use the following command to find the Python site-packages directory:
$ python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())"
For example:
$ python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())"
/usr/lib/python2.6/site-packages
In this example, '/usr/lib/python2.6/site-packages' is #{your_python_dir}, which is used in the following steps.
(2) Apply the keystone patch on Central Server 2 (CS-2)
Log in to CS-2 and run the following commands:
$ scp root@$cs-1_ip:/data/repos/scp/patch/keystone.patch <your_directory>
$ cd #{your_python_dir}
$ patch -p1 -N -f < <your_directory>/keystone.patch
(3) Apply the OpenStack patches on each Region Server
Log in to the Region Server and run the following commands:
-- apply nova patch
$ cd #{your_python_dir}
$ patch -p1 -N -f < /data/repos/scp/patch/nova.patch
$ cd /usr
$ patch -p1 -N -f < /data/repos/scp/patch/nova.patch
Note: For this part, ignore the error message "can't find file to patch" reported for the following files:
nova/tests/integrated/api_samples/all_extensions/extensions-get-resp.json.tpl
nova/tests/integrated/api_samples/all_extensions/extensions-get-resp.xml.tpl
nova/tests/integrated/test_api_samples.py
nova/tests/test_configdrive2.py
nova/tests/test_nova_manage.py
doc/api_samples/all_extensions/extensions-get-resp.json
doc/api_samples/all_extensions/extensions-get-resp.xml
-- apply glance patch
$ cd #{your_python_dir}
$ patch -p1 -N -f < /data/repos/scp/patch/glance.patch
-- apply cinder patch
$ cd #{your_python_dir}
$ patch -p1 -N -f < /data/repos/scp/patch/cinder.patch
(4) Apply the nova patch on each KVM compute node
Log in to the KVM compute node and run the following commands:
$ scp root@$rs_ip:/data/repos/scp/patch/nova.patch <your_directory>
$ cd #{your_python_dir}
$ patch -p1 -N -f < /data/repos/scp/patch/nova.patch
$ cd /usr
$ patch -p1 -N -f < /data/repos/scp/patch/nova.patch
Note: For this part, ignore the error message "can't find file to patch" reported for the following files:
nova/tests/integrated/api_samples/all_extensions/extensions-get-resp.json.tpl
nova/tests/integrated/api_samples/all_extensions/extensions-get-resp.xml.tpl
nova/tests/integrated/test_api_samples.py
nova/tests/test_configdrive2.py
nova/tests/test_nova_manage.py
doc/api_samples/all_extensions/extensions-get-resp.json
doc/api_samples/all_extensions/extensions-get-resp.xml
16) Apply IWD cumulative fix on Central Server 3.
Note: This fix is already included in iFix6 (2.3.0.1-CSI-ISCO-IF0006). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a higher iFix version to your SCO environment.
(1) Copy IFIX_IWD_SCO2301_IFIX06_20141119-1243-200.tar on Central Server 3.
(2) Extract IFIX_IWD_SCO2301_IFIX06_20141119-1243-200.tar in a directory of your choice by running the following command:
tar -xf IFIX_IWD_SCO2301_IFIX06_20141119-1243-200.tar
(3) Change directory to the one where IFIX_IWD_SCO2301_IFIX06_20141119-1243-200 was extracted.
The following files are extracted:
install_fix.sh
README.txt
version
checksums_20140307-0054-235
checksums_20140411-0853-495
checksums_20140514-0635-815
checksums_20140515-1212-353
checksums_20140602-0115-622
checksums_20140711-0842-753
checksums_20140827-0603-225
checksums_20140902-0433-971
checksums_20141023-0416-459
checksums_20141023-1136-786
checksums_20141104-0907-146
checksums_20141119-1243-200
iwd_db2_sco2301_20140902-0433-971.tar
iwd_sco2301_20141119-1243-200.tar
removefiles.txt
(4) Follow the steps in the README.txt file that you extracted in the previous step.
Note: The README describes two ways to install the IWD cumulative fix: the automated installation using the script install_fix.sh (README section 3.1), or the manual installation (README section 3.2). Follow one of these two procedures.
The README also contains an optional step to enable notifications under "3.3 Enable notification (optional)", which is outlined in more detail in step (6) below.
(5) Workaround for problem reported in PMR 63576,024,677 (Unable to provision VMs if user not in admin project)
Note: This is a mandatory step which is required to circumvent the problem reported in PMR 63576,024,677.
The final fix will be made available as part of the resolution of the related APAR.
(5.1) Open the configuration file /opt/ibm/rainmaker/purescale.app/private/expanded/ibm/scp.ui-1.0.0/config/openstack.config on Central Server 3.
(5.2) Modify the "openstack" section by adding the "cache_service_versions" parameter with the value false. After this change, the "openstack" section should look similar to the following:
"openstack": {
"server_operation_timeout_sec": 3600,
"server_operation_check_interval_sec": 10,
"cache_service_versions": false,
"image_service":"vil"
},
(5.3) Restart the IWD service on Central Server 3:
Run 'service iwd restart' to restart the IWD service and wait about 15 minutes for its initialization.
Note: Restarting the IWD service is also done as part of enabling IWD notifications in the following step.
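Because initialization takes roughly 15 minutes, a small polling helper avoids guessing when the service is ready. This is a sketch only: the wait_for helper, and the assumption that 'service iwd status' prints the word "running", are not from the official procedure.

```shell
#!/bin/sh
# Hedged sketch: run a check command repeatedly until it succeeds or the
# attempts are exhausted. Returns 0 on success, 1 on timeout.
wait_for() {
    tries=$1; delay=$2; shift 2
    i=0
    while [ "$i" -lt "$tries" ]; do
        if "$@"; then
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}

# Example: poll the IWD status every 30 seconds for up to ~15 minutes
# (assumes the status output contains the word "running"):
# wait_for 30 30 sh -c 'service iwd status | grep -q running'
```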
(6) Enable IWD notifications
Note: This is an optional step which is intended for large environments only (thousands of instances).
(6.1) Update the [DEFAULT] section in /etc/nova/nova.conf on each region server.
Put the following four lines under the section #### QPID ####:
notification_driver=nova.openstack.common.notifier.rabbit_notifier
instance_usage_audit=True
notify_on_state_change=vm_and_task_state
instance_usage_audit_period=hour
After the update, restart the openstack-nova-compute or openstack-smartcloud service on the region server.
(6.2) Update /opt/ibm/rainmaker/purescale.app/private/expanded/ibm/rainmaker.openstack.notifications-4.0.0.1/config/zero.config on Central Server 3:
/config/openstack/notifications = {
"general": {
"queue_factory_class": "org.apache.qpid.jndi.PropertiesFileInitialContextFactory",
"notifications_topic": "nova/notifications.info",
"vm_expiration_sec": 6000,
"vm_load_on_init": true,
"resolving_threads": 5,
"resolving_threads_processing_delay_sec": 5,
"consumer_connection_timeout_sec": 7200,
"consumer_error_delay_sec": 60
},
"regions": {
"<region-1-name>" : { <------Note: here is the region server name
"brokerlist" : "tcp://<region-1-ip>:5672" <------Note: here is the region server ip
},
"<region-2-name>" : { <------Note: add a block like this for each region server.
"brokerlist" : "tcp://<region-2-ip>:5672"
}
}
}
Note: The region server name can be obtained either from the /root/openrc file on the region server (check the setting of the environment variable NOVA_REGION_NAME) or by executing the command "keystone endpoint-list" on Central Server 2.
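For example, the region name can be read from an openrc-style file like this (a sketch; it assumes the file exports NOVA_REGION_NAME, as /root/openrc does):

```shell
#!/bin/sh
# Hedged sketch: source an openrc-style file in a subshell (so the caller's
# environment is untouched) and print the region name it exports.
get_region_name() {
    ( . "$1" && printf '%s\n' "$NOVA_REGION_NAME" )
}

# Example:
# get_region_name /root/openrc
```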
** consumer_connection_timeout_sec - Specifies how long IWD waits for any notification from a single region before it tries to re-establish the connection to the notification server.
Normally, OpenStack (with the default configuration) sends an update about each VM at least once an hour.
If IWD does not receive any notification within the configured time, it drops the notification connection and tries to establish a new one.
** consumer_error_delay_sec - The time between attempts to establish a connection to the notification service (in case of connection failure).
(6.3) Ensure that Central Server 3 (CS-3) can connect to each region server with "telnet $region_server_ip 5672".
Each region server must allow access on port 5672 from CS-3.
There are two ways to enable this access:
Method 1:
On the region server:
vim /etc/sysconfig/iptables
Add the following two lines immediately before the existing rules for port 5672:
-A INPUT -s $cs-3_ip/32 -p udp -m udp --dport 5671:5672 -j ACCEPT
-A INPUT -s $cs-3_ip/32 -p tcp -m tcp --dport 5671:5672 -j ACCEPT
Then run 'service iptables restart'.
Try the command "telnet $region_server_ip 5672" on CS-3. If it can connect to the region server, you can restart the IWD service.
Method 2:
On the region server:
iptables-save > /tmp/iptables.save
Then modify the file /tmp/iptables.save and add the following two lines immediately after the line '-A INPUT -j nova-api-INPUT':
-A INPUT -s $cs-3_ip/32 -p udp -m udp --dport 5671:5672 -j ACCEPT
-A INPUT -s $cs-3_ip/32 -p tcp -m tcp --dport 5671:5672 -j ACCEPT
Run the following command to restore iptables:
iptables-restore < /tmp/iptables.save
Then restart iptables.
(6.4) Restart IWD service
On CS-3, run 'service iwd restart' to restart the IWD service and wait about 15 minutes for its initialization.
(6.5) Run "telnet $region_server_ip 5672" on CS-3. If telnet is not installed on CS-3, run 'yum install telnet' to install it.
You should get a message on the shell like this:
Trying $region_server_ip...
Connected to $region_server_ip.
Escape character is '^]'.
17) Install Workload Deployer InterimFix 7 on Central Server 3.
(1) Open WorkloadDeployer-2.3.0.1-IFIX7-Install.README (located in ./installfiles/iwd/) and follow the instructions to install IWD iFix7.
(2) The README references ibm-java-i386-sdk-6.0-16.3.i386.rpm, which is available in installfiles/java.
18) Apply fix for PSIRT 1838
Note: This fix is already included in iFix5 (2.3.0.1-CSI-ISCO-IF0005). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0005 or a higher iFix version to your SCO environment.
(1) copy the file rtc-184008-keystone-11c387264-ifix-el6.tar.gz to a directory of your choice <your_dir> on Central Server 2.
(2) extract rtc-184008-keystone-11c387264-ifix-el6.tar.gz by running the following command:
cd <your_dir>; tar xvf rtc-184008-keystone-11c387264-ifix-el6.tar.gz
(3) back up the keystone configuration directory /etc/keystone
(4) install keystone rpms with command:
cd noarch; rpm -Uvh openstack-keystone-2013.1.5.1-201408010306.ibm.13.noarch.rpm python-keystone-2013.1.5.1-201408010306.ibm.13.noarch.rpm
(5) check installed keystone version:
rpm -qa |grep keystone
19) Apply fix for PSIRT 1876
Note: This fix is already included in iFix5 (2.3.0.1-CSI-ISCO-IF0005). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0005 or a higher iFix version to your SCO environment.
(1) copy PSIRT1876.patch to a directory of your choice <your_dir> on Central Server 2.
(2) get python path with command: python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())"
(3) back up the file $python_path/keystone/token/controllers.py
(4) patch keystone with the following command:
cd $python_path; patch -p1 -N -f < <your_dir>/PSIRT1876.patch
20) Apply fix for APAR ZZ00267
Note: This fix is already included in iFix5 (2.3.0.1-CSI-ISCO-IF0005). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0005 or a higher iFix version to your SCO environment.
(1) copy keystone_zz00267.patch to a directory of your choice <your_dir> on Central Server 2.
(2) get python path with command: python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())"
(3) back up the file: mv $python_path/keystone/middleware/ldapauth.py /tmp
(4) patch keystone with the following command:
cd $python_path; patch -p1 -N -f < <your_dir>/keystone_zz00267.patch
21) Apply fix for keystone multi-worker support
Note: This fix is already included in iFix5 (2.3.0.1-CSI-ISCO-IF0005). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0005 or a higher iFix version to your SCO environment.
(1) copy 0001-enable-multiple-keystone-all-worker-processes.patch to a directory of your choice <your_dir> on Central Server 2.
(2) get python path with command: python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())"
(3) Back up the following files before applying the fix.
/usr/bin/keystone-all
$python_path/keystone/common/config.py
$python_path/keystone/common/wsgi.py
(4) Stop the keystone service using command 'service openstack-keystone stop'.
(5) Apply the fix
cd $python_path
patch -p1 < <your_dir>/0001-enable-multiple-keystone-all-worker-processes.patch
The output is similar to the following:
... ...
|Change-Id: If74f13bc2898e880649ee809967f5b5859b793c6
|---
| bin/keystone-all | 37 ++-
| etc/keystone.conf.sample | 8 +
| keystone/common/config.py | 2 +
| keystone/common/wsgi.py | 31 ++-
| keystone/openstack/common/loopingcall.py | 148 ++++++++++
| keystone/openstack/common/service.py | 446 ++++++++++++++++++++++++++++++
| keystone/openstack/common/threadgroup.py | 122 ++++++++
| 7 files changed, 785 insertions(+), 9 deletions(-)
| create mode 100644 keystone/openstack/common/loopingcall.py
| create mode 100644 keystone/openstack/common/service.py
| create mode 100644 keystone/openstack/common/threadgroup.py
|
|diff --git a/bin/keystone-all b/bin/keystone-all
|index 2fdc8c7..a28f31c 100755
|--- a/bin/keystone-all
|+++ b/bin/keystone-all
--------------------------
File to patch: /usr/bin/keystone-all <======== Note: enter the file path here.
patching file /usr/bin/keystone-all
can't find file to patch at input line 159
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
--------------------------
|diff --git a/etc/keystone.conf.sample b/etc/keystone.conf.sample
|index 9e66eb6..b70edbc 100644
|--- a/etc/keystone.conf.sample
|+++ b/etc/keystone.conf.sample
--------------------------
File to patch: <======== Note: press Enter here.
Skip this patch? [y] y <======== Note: enter 'y' here.
Skipping patch.
1 out of 1 hunk ignored
patching file keystone/common/config.py
patching file keystone/common/wsgi.py
patching file keystone/openstack/common/loopingcall.py
patching file keystone/openstack/common/service.py
patching file keystone/openstack/common/threadgroup.py
(6) Modify the file /etc/keystone/keystone.conf and specify the number of worker processes you want.
Typically, set these to the number of CPUs.
For example:
[DEFAULT]
....
public_workers = 2
admin_workers = 2
.....
(7) start the keystone service with 'service openstack-keystone start'
and ensure that it is up and running with 'service openstack-keystone status'.
You can also check the number of keystone processes with 'ps -ef | grep keystone'.
22) Apply fix for Iaasgateway cluster support
Note: This is an optional step which is intended for large environments.
This fix is already included in iFix5 (2.3.0.1-CSI-ISCO-IF0005). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0005 or a higher iFix version to your SCO environment.
Be aware of the additional step "(4) adapt SCOrchestrator configuration to enable Iaasgateway cluster management", which is not described in the README of 2.3.0.1-CSI-ISCO-IF0005.
Refer to "IBM SmartCloud Orchestrator Version 2.3: Capacity Planning, Performance, and Management Guide" mentioned in "VI. ADDITIONAL INFORMATION" for considerations regarding capacity planning, performance optimization and management best practices to achieve service stability.
(1) copy iaasgateway-2013.1-1.1.4.ibm.201409112157.noarch.rpm to a directory of your choice <your_dir> on Central Server 2.
(2) Back up the following file before applying the fix:
/etc/iaasgateway/iaasgateway.conf
(3) Apply the fix.
(3.1) prepare http server as loadbalancer
- Check whether an HTTP server is already present on Central Server 2:
service httpd status
If an HTTP server is already running, stop it with the following command:
service httpd stop
If no HTTP server is installed, install one with the following command:
yum install httpd
- Update httpd.conf with the loadbalancer configuration
Modify the file /etc/httpd/conf/httpd.conf with the following two changes:
(1) update listen port to gateway port
# Listen 80
Listen 9973
(2) append the loadbalancer configuration to the end of the file
<VirtualHost *:9973>
ProxyRequests off
<Proxy balancer://mycluster>
# three node gateway cluster
BalancerMember http://127.0.0.1:12001
BalancerMember http://127.0.0.1:12002
BalancerMember http://127.0.0.1:12003
Order Deny,Allow
Deny from none
Allow from all
ProxySet lbmethod=byrequests
</Proxy>
# path of requests to balance "/" -> everything
ProxyPass / balancer://mycluster/
</VirtualHost>
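The two httpd.conf changes can be rehearsed on a scratch file before editing /etc/httpd/conf/httpd.conf. A minimal sketch (the scratch path is illustrative and the VirtualHost block is abbreviated; on the real server, the edited configuration can be validated with 'httpd -t'):

```shell
# Rehearse the two edits from step (3.1) on a scratch httpd.conf
tmp=$(mktemp -d)
echo 'Listen 80' > "$tmp/httpd.conf"
# Edit 1: switch the listen port to the gateway port
sed -i 's/^Listen 80$/Listen 9973/' "$tmp/httpd.conf"
# Edit 2: append the loadbalancer VirtualHost (abbreviated copy)
cat >> "$tmp/httpd.conf" <<'EOF'
<VirtualHost *:9973>
    ProxyRequests off
    <Proxy balancer://mycluster>
        BalancerMember http://127.0.0.1:12001
        BalancerMember http://127.0.0.1:12002
        BalancerMember http://127.0.0.1:12003
        ProxySet lbmethod=byrequests
    </Proxy>
    ProxyPass / balancer://mycluster/
</VirtualHost>
EOF
grep -c BalancerMember "$tmp/httpd.conf"   # prints: 3
```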
(3.2) patch the iaasgateway for clustering
cd <your_dir>
rpm -Uvh iaasgateway-2013.1-1.1.4.ibm.201409112157.noarch.rpm
Ignore the warning message "/etc/iaasgateway/iaasgateway.conf created as /etc/iaasgateway/iaasgateway.conf.rpmnew"
(3.3) prepare configuration files for the cluster members
Perform the following commands:
cd /etc/iaasgateway/
cp iaasgateway.conf iaasgateway00.conf
vi iaasgateway00.conf
#It should look like below before applying this fix:
[service]
iaasgateway_listen = <central-server-2-ip>
iaasgateway_listen_port = 9973
#Update it to:
iaasgateway_listen = 127.0.0.1
iaasgateway_listen_port = 1200X
iaasgateway_user_entry = <central-server-2-ip>
iaasgateway_user_entry_port = 9973
# copy configuration files and update the port
cp iaasgateway00.conf iaasgateway01.conf
sed -i 's/1200X/12001/' iaasgateway01.conf
cp iaasgateway00.conf iaasgateway02.conf
sed -i 's/1200X/12002/' iaasgateway02.conf
cp iaasgateway00.conf iaasgateway03.conf
sed -i 's/1200X/12003/' iaasgateway03.conf
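The copy-and-sed sequence in step (3.3) can be verified in isolation before running it against /etc/iaasgateway/. A minimal sketch using a scratch directory (the listen address and the three-line template file are illustrative):

```shell
# Rehearse the member-config generation from step (3.3) in a scratch directory
tmp=$(mktemp -d)
cat > "$tmp/iaasgateway00.conf" <<'EOF'
[service]
iaasgateway_listen = 127.0.0.1
iaasgateway_listen_port = 1200X
EOF
# Generate the three member configs, one listen port each
for i in 1 2 3; do
    cp "$tmp/iaasgateway00.conf" "$tmp/iaasgateway0$i.conf"
    sed -i "s/1200X/1200$i/" "$tmp/iaasgateway0$i.conf"
done
grep iaasgateway_listen_port "$tmp/iaasgateway02.conf"   # prints: iaasgateway_listen_port = 12002
```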
(3.4) prepare init scripts and update configuration files
cd /etc/init.d/
cp openstack-iaasgateway openstack-iaasgateway01
cp openstack-iaasgateway openstack-iaasgateway02
cp openstack-iaasgateway openstack-iaasgateway03
sed -i 's/prog=openstack-iaasgateway/prog=openstack-iaasgateway01/' openstack-iaasgateway01
sed -i 's/iaasgateway.conf/iaasgateway01.conf/' openstack-iaasgateway01
sed -i 's/prog=openstack-iaasgateway/prog=openstack-iaasgateway02/' openstack-iaasgateway02
sed -i 's/iaasgateway.conf/iaasgateway02.conf/' openstack-iaasgateway02
sed -i 's/prog=openstack-iaasgateway/prog=openstack-iaasgateway03/' openstack-iaasgateway03
sed -i 's/iaasgateway.conf/iaasgateway03.conf/' openstack-iaasgateway03
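The six sed commands in step (3.4) can equivalently be written as a loop. A sketch against scratch copies (the two-line template stands in for the real init script, so the file content is an assumption for illustration):

```shell
# Rehearse the init-script cloning from step (3.4) in a scratch directory
tmp=$(mktemp -d)
printf 'prog=openstack-iaasgateway\nCONF=/etc/iaasgateway/iaasgateway.conf\n' > "$tmp/openstack-iaasgateway"
for i in 01 02 03; do
    cp "$tmp/openstack-iaasgateway" "$tmp/openstack-iaasgateway$i"
    sed -i "s/prog=openstack-iaasgateway/prog=openstack-iaasgateway$i/" "$tmp/openstack-iaasgateway$i"
    sed -i "s/iaasgateway.conf/iaasgateway$i.conf/" "$tmp/openstack-iaasgateway$i"
done
grep prog "$tmp/openstack-iaasgateway02"   # prints: prog=openstack-iaasgateway02
```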
(3.5) startup of cluster
Perform the following commands to start the iaasgateway cluster services:
service openstack-iaasgateway stop
Stopping openstack-iaasgateway: [ OK ]
service openstack-iaasgateway01 start
Starting openstack-iaasgateway01: [ OK ]
service openstack-iaasgateway02 start
Starting openstack-iaasgateway02: [ OK ]
service openstack-iaasgateway03 start
Starting openstack-iaasgateway03: [ OK ]
service httpd start
Starting httpd: [ OK ]
(3.6) check the iaasgateway service status
- Open the following link in a browser; it should behave the same as before applying this fix.
http://<central-server-2-ip>:9973/providers
- check the listening ports with the following command:
# netstat -nap | grep 1200 | grep LISTEN
tcp 0 0 127.0.0.1:12001 0.0.0.0:* LISTEN 7269/python
tcp 0 0 127.0.0.1:12002 0.0.0.0:* LISTEN 7286/python
tcp 0 0 127.0.0.1:12003 0.0.0.0:* LISTEN 7303/python
- check loadbalancer listening:
# netstat -nap | grep 9973 | grep LISTEN
tcp 0 0 :::9973 :::* LISTEN 7321/httpd
- try to log in to the SCO UI
(4) adapt SCOrchestrator configuration to enable Iaasgateway cluster management
(4.1) adapt SCOEnvironment.xml in directory /iaas/scorchestrator/ on Central Server 1
- In SCOEnvironment.xml, replace line 7 (the last line below) with lines 7 to 10 (the first four lines below) and save your changes
7,10c7
< <component name="openstack-iaasgateway01"/>
< <component name="openstack-iaasgateway02"/>
< <component name="openstack-iaasgateway03"/>
< <component name="httpd"/>
---
> <component name="openstack-iaasgateway"/>
(4.2) adapt SCOComponents.xml in directory /iaas/scorchestrator/ on Central Server 1
- In SCOComponents.xml, replace line 27 (the last line below) with lines 27 to 30 (the first four lines below) and save your changes
27,30c27
< <component name="openstack-iaasgateway01" openstackService="true" scriptName="openstack-servicectrl.sh" workdir="/tmp" startPrio="240" stopPrio="60"/>
< <component name="openstack-iaasgateway02" openstackService="true" scriptName="openstack-servicectrl.sh" workdir="/tmp" startPrio="240" stopPrio="60"/>
< <component name="openstack-iaasgateway03" openstackService="true" scriptName="openstack-servicectrl.sh" workdir="/tmp" startPrio="240" stopPrio="60"/>
< <component name="httpd" openstackService="true" scriptName="openstack-servicectrl.sh" workdir="/tmp" startPrio="240" stopPrio="60"/>
---
> <component name="openstack-iaasgateway" openstackService="true" scriptName="openstack-servicectrl.sh" workdir="/tmp" startPrio="240" stopPrio="60"/>
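The replacements in (4.1) and (4.2) can be scripted with sed instead of edited by hand. A sketch against a scratch fragment (the surrounding XML is abbreviated; the real files in /iaas/scorchestrator/ contain many more components):

```shell
# Rehearse the one-line-to-four-lines replacement from step (4.1)
tmp=$(mktemp -d)
cat > "$tmp/SCOEnvironment.xml" <<'EOF'
<components>
  <component name="openstack-iaasgateway"/>
</components>
EOF
# Replace the single iaasgateway component with the three members plus httpd
sed -i 's|<component name="openstack-iaasgateway"/>|<component name="openstack-iaasgateway01"/>\n  <component name="openstack-iaasgateway02"/>\n  <component name="openstack-iaasgateway03"/>\n  <component name="httpd"/>|' "$tmp/SCOEnvironment.xml"
grep -c 'component name' "$tmp/SCOEnvironment.xml"   # prints: 4
```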
(4.3) run SCOrchestrator from directory /iaas/scorchestrator/ and check if the iaasgateway cluster services are listed/managed by the script
[root@sco-cs-1 scorchestrator]# ./SCOrchestrator.py -p openstack-iaasgateway,httpd
===>>> Collecting Status for Smart Cloud Orchestrator
===>>> Please wait ======>>>>>>
Component Hostname Status
------------------------------------------------------------------
httpd 172.17.58.212 online
openstack-iaasgateway01 172.17.58.212 online
openstack-iaasgateway02 172.17.58.212 online
openstack-iaasgateway03 172.17.58.212 online
===>>> Status Smart Cloud Orchestrator complete
(4.4) Enable httpd and iaasgateway cluster to be started during boot of Central Server 2
Execute the following commands:
[root@sco-cs-2 scorchestrator]# chkconfig openstack-iaasgateway off
[root@sco-cs-2 scorchestrator]# chkconfig openstack-iaasgateway01 on
[root@sco-cs-2 scorchestrator]# chkconfig openstack-iaasgateway02 on
[root@sco-cs-2 scorchestrator]# chkconfig openstack-iaasgateway03 on
[root@sco-cs-2 scorchestrator]# chkconfig httpd on
23) Apply fix for the DB2 security vulnerabilities CVE-2013-6747 and CVE-2014-0963.
Note: This fix is already included in iFix3 (2.3.0.1-CSI-ISCO-IF0003). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0003 or a higher iFix version to your SCO environment.
(1) Downloading the DB2 fix pack
Follow the steps outlined in <a href="http://www.ibm.com/support/docview.wss?uid=swg21671732" TARGET="_blank">Security Bulletin: IBM DB2 is impacted by multiple TLS/SSL security vulnerabilities (CVE-2013-6747, CVE-2014-0963)</a> to address security vulnerabilities.
On that page, select the DB2 version that matches your environment. Download and install per DB2 instructions.
(2) Installing the DB2 fix pack on Central Server 1 and each Region Server (if a shared DB is not used)
Ensure the SCO processes have been stopped using SCOrchestrator.py stop as outlined in section III step 1 above.
Note: The output of the command /iaas/scorchestrator/SCOrchestrator.py will list the servers that are running DB2.
Update each of these servers, starting with Central Server 1 and then moving on to each region server.
Perform following steps to install the DB2 fix.
(2.1) Copy the fix pack to Central Server 1 into a folder of your choice <db2fixpack>
(2.2) As user db2inst1 stop DB2 and DAS:
db2stop
/opt/ibm/db2/v10.1/das/bin/db2admin stop
(2.3) As user root, extract and install the DB2 fix pack (Note: DB2 10.1 FP3a is used in the example)
cd <db2fixpack>
tar zxvf v10.1fp3a_linuxx64_server.tar.gz
cd server
./installFixPack -n -b /opt/ibm/db2/v10.1 -f db2lib
Wait for the install to complete.
(2.4) Re-run the install command to verify the fix pack has been applied
./installFixPack -n -b /opt/ibm/db2/v10.1/ -f db2lib
Perform the steps described under (2) on each Region Server that uses standalone DB2.
24) Apply fix for APAR ZZ00256
Note: This fix is already included in iFix3 (2.3.0.1-CSI-ISCO-IF0003). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0003 or a higher iFix version to your SCO environment.
If your SCO server OS locale is non-English, perform the following steps so that the correct status of the openstack services is returned to SAAM.
On each of the SCO servers (Central Servers, Region Servers, and KVM compute nodes), find the file 'openstack-servicectrl.sh' in '/home/saam/' or '/home/<yourmechid>/root/'.
(1) make a backup of openstack-servicectrl.sh
(2) edit 'openstack-servicectrl.sh'
Modify the line
CTRL_CMD="/sbin/service $SERVICE_ID"
to
CTRL_CMD="/etc/init.d/${SERVICE_ID}"
LANG=C
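The edit in step (2) can be rehearsed with sed on a scratch copy before touching the real script (the one-line file below stands in for openstack-servicectrl.sh, so its content is an assumption for illustration):

```shell
# Rehearse the CTRL_CMD replacement from step (2) on a scratch copy
tmp=$(mktemp -d)
echo 'CTRL_CMD="/sbin/service $SERVICE_ID"' > "$tmp/openstack-servicectrl.sh"
# Replace the service invocation and force the C locale on the next line
sed -i 's|CTRL_CMD="/sbin/service \$SERVICE_ID"|CTRL_CMD="/etc/init.d/${SERVICE_ID}"\nLANG=C|' "$tmp/openstack-servicectrl.sh"
cat "$tmp/openstack-servicectrl.sh"
```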
(3) check the output of openstack-servicectrl.sh and ensure it can get the correct status of openstack services.
For example, on Central Server 2 run:
$ LANG=C /etc/init.d/openstack-keystone status
keystone (pid 2817) is running...
$ ./openstack-servicectrl.sh openstack-keystone status
root: openstack-servicectrl.sh (28621): openstack-keystone status monitor detected openstack-keystone online.
25) Apply fix for APAR SE58688 - KVM nodes going offline
Note: This fix is already included in iFix3 (2.3.0.1-CSI-ISCO-IF0003). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0003 or a higher iFix version to your SCO environment.
KVM compute nodes go offline and cannot deploy any new patterns and VMs.
This is an OpenStack bug: https://bugs.launchpad.net/oslo-incubator/+bug/1211338.
On each KVM region server and compute node, back up the file /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py
Then change it as follows.
Find the text:
{"type": "Direct"}
and modify it to:
{"type": "direct"}
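The one-word change can be applied with sed rather than a manual edit. A sketch against a scratch file (the variable name in the sample line is an assumption for illustration; on the real node, substitute the impl_qpid.py path given above):

```shell
# Lowercase the qpid node type, per the launchpad bug referenced above
tmp=$(mktemp -d)
echo 'addr_opts = {"type": "Direct"}' > "$tmp/impl_qpid.py"
sed -i 's/{"type": "Direct"}/{"type": "direct"}/' "$tmp/impl_qpid.py"
cat "$tmp/impl_qpid.py"   # prints: addr_opts = {"type": "direct"}
```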
26) Apply fix for ZZ00242 on Central Server 1
Note: This fix is already included in iFix3 (2.3.0.1-CSI-ISCO-IF0003). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0003 or a higher iFix version to your SCO environment.
(1) Back up the file config_network.sh in $your_sco_install_media/installer/scripts/
(2) Update config_network.sh with the one in folder 2.3.0.1-CSI-ISCO-IF0003/ZZ00242
27) (Optional) Apply fix for APAR SE59801
Note: This fix is already included in iFix5 (2.3.0.1-CSI-ISCO-IF0005). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0005 or a higher iFix version to your SCO environment.
Note: This fix applies only to a VMware region server that connects to a vCenter with resource pools whose names differ only in upper/lower case.
Ignore this fix if your region server does not have the above issue.
On the vmware region server:
(1) check the smartcloud version: rpm -qa | grep smartcloud
If your smartcloud package is 'smartcloud-2013.1-1.1.3.ibm.xxxxxxx.noarch.rpm', use fix
rpm -Uvh smartcloud-2013.1-1.1.3.ibm.201407310025.noarch.rpm
If your smartcloud package is 'smartcloud-2013.1-1.1.4.ibm.xxxxxxx.noarch.rpm', use fix
rpm -Uvh smartcloud-2013.1-1.1.4.ibm.201407310043.noarch.rpm
(2) Modify the configuration file
step 1. Ensure there are no VMs under resource pools before enabling the setting in step 2. Otherwise, OpenStack will fail to manage those VMs after applying this fix.
step 2. After installing the iFix, add the following line to /etc/nova/smartcloud.conf:
filter_out_cloud_resource_pool=True
(3) restart the openstack-smartcloud service and ensure it is up and running
service openstack-smartcloud restart
service openstack-smartcloud status
28) Apply fix for APAR ZZ00277, ZZ00293 (When using LDAP user filter, authentication fails ...)
Note: This fix is already included in iFix6 (2.3.0.1-CSI-ISCO-IF0006). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a higher iFix version to your SCO environment.
Note: This fix is applied on Central Server 2.
(1) Make a backup copy of file /usr/lib/python2.6/site-packages/keystone/middleware/ldapauth.py
eg scp -p root@cs-2:/usr/lib/python2.6/site-packages/keystone/middleware/ldapauth.py <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/backups/
(2) Update ldapauth.py on Central Server 2
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/ldapauth.py root@cs-2:/usr/lib/python2.6/site-packages/keystone/middleware/ldapauth.py
(3) Change ldapauth.py permissions to 644 on Central Server 2
eg chmod 644 /usr/lib/python2.6/site-packages/keystone/middleware/ldapauth.py
29) Apply fix for APAR IT05168 (After deploying SCO 2.3 IFIX5 seeing the hypervisor cannot be reached in all cloud groups)
Note: This fix is already included in iFix6 (2.3.0.1-CSI-ISCO-IF0006). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a higher iFix version to your SCO environment.
Note: This fix is applied on Central Server 2.
(1) Unzip keystone_memcache.zip
eg unzip /data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/keystone_memcache.zip
(2) Make backup copies of the following files on Central Server 2
keystone/auth/controllers.py keystone/auth/token_factory.py keystone/common/logging.py keystone/token/controllers.py keystone/token/core.py keystone/token/backends/memcache.py
eg scp -p root@cs-2:/usr/lib/python2.6/site-packages/keystone/auth/controllers.py <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/backups/
scp -p root@cs-2:/usr/lib/python2.6/site-packages/keystone/auth/token_factory.py <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/backups/
scp -p root@cs-2:/usr/lib/python2.6/site-packages/keystone/common/logging.py <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/backups/
scp -p root@cs-2:/usr/lib/python2.6/site-packages/keystone/token/controllers.py <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/backups/
scp -p root@cs-2:/usr/lib/python2.6/site-packages/keystone/token/core.py <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/backups/
scp -p root@cs-2:/usr/lib/python2.6/site-packages/keystone/token/backends/memcache.py <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/backups/
(3) Update the following files on Central Server 2
keystone/auth/controllers.py keystone/auth/token_factory.py keystone/common/logging.py keystone/token/controllers.py keystone/token/core.py keystone/token/backends/memcache.py
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/keystone/auth/controllers.py root@cs-2:/usr/lib/python2.6/site-packages/keystone/auth/
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/keystone/auth/token_factory.py root@cs-2:/usr/lib/python2.6/site-packages/keystone/auth/
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/keystone/common/logging.py root@cs-2:/usr/lib/python2.6/site-packages/keystone/common/
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/keystone/token/controllers.py root@cs-2:/usr/lib/python2.6/site-packages/keystone/token/
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/keystone/token/core.py root@cs-2:/usr/lib/python2.6/site-packages/keystone/token/
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/keystone/token/backends/memcache.py root@cs-2:/usr/lib/python2.6/site-packages/keystone/token/backends/
(4) Change file permission on Central Server 2
eg cd /usr/lib/python2.6/site-packages
chmod a+r keystone/auth/controllers.py keystone/auth/token_factory.py keystone/common/logging.py keystone/token/controllers.py keystone/token/core.py keystone/token/backends/memcache.py
(5) Restart openstack-keystone service
service openstack-keystone restart
(6) Check that openstack-keystone is up and running
service openstack-keystone status
30) Apply fix for APAR ZZ00282 (Amazon EC2 flavor list does not match the PCG Flavor list for Amazon EC2)
Note: This fix is already included in iFix6 (2.3.0.1-CSI-ISCO-IF0006). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a higher iFix version to your SCO environment.
Note: This fix is applied on Central Server 2.
(1) Make a backup copy of HybridCSB-API.war on Central Server 2
eg scp -p root@cs-2:/opt/ibm/pcg/lib/HybridCSB-API.war <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/backups/
(2) Update HybridCSB-API.war on Central Server 2
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/HybridCSB-API.war root@cs-2:/opt/ibm/pcg/lib/HybridCSB-API.war
(3) Change HybridCSB-API.war permissions to 750 on Central Server 2
eg chmod 750 /opt/ibm/pcg/lib/HybridCSB-API.war
(4) Restart the Public Cloud Gateway and IAAS server on Central Server 2:
service pcg restart
service openstack-iaasgateway restart
31) Apply fix for APAR ZZ00298 (Security: Scan UI found Parameter Value Overflow - instance.list.expanded)
Note: This fix is already included in iFix6 (2.3.0.1-CSI-ISCO-IF0006). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a higher iFix version to your SCO environment.
Note: This fix is applied on Central Server 3.
(1) Make a backup copy of config.ini on Central Server 3
eg scp -p root@cs-3:/opt/ibm/ccs/scui/etc/config.ini <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/backups/
(2) Copy n3.app_1.0.0.20141021-1433.jar and n3.orchestrator.app_1.0.0.20141021-1433.jar to Central Server 3
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/n3.app_1.0.0.20141021-1433.jar root@cs-3:/opt/ibm/ccs/scui/lib/
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/n3.orchestrator.app_1.0.0.20141021-1433.jar root@cs-3:/opt/ibm/ccs/scui/lib/
(3) Update config.ini on Central Server 3
Edit /opt/ibm/ccs/scui/etc/config.ini and replace:
n3.app_1.0.0.<version>.jar with n3.app_1.0.0.20141021-1433.jar
n3.orchestrator.app_1.0.0.<version>.jar with n3.orchestrator.app_1.0.0.20141021-1433.jar
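The config.ini replacement in step (3) can be done with sed. A sketch against a scratch file (the key name and the old version string in the sample line are assumptions for illustration, not taken from the product):

```shell
# Rehearse the jar-name replacement from step (3) on a scratch config.ini
tmp=$(mktemp -d)
echo 'bundles=n3.app_1.0.0.20140801-0900.jar,n3.orchestrator.app_1.0.0.20140801-0900.jar' > "$tmp/config.ini"
# Swap both versioned jar names for the iFix versions
sed -i -e 's/n3\.app_1\.0\.0\.[0-9][0-9-]*\.jar/n3.app_1.0.0.20141021-1433.jar/' \
       -e 's/n3\.orchestrator\.app_1\.0\.0\.[0-9][0-9-]*\.jar/n3.orchestrator.app_1.0.0.20141021-1433.jar/' \
       "$tmp/config.ini"
cat "$tmp/config.ini"
```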
(4) Restart SCUI on Central Server 3
service scui restart
32) Apply fix for APAR ZZ00295 (Requests aren't shown again on IWD after SCO restart)
Note: This fix is already included in iFix 6 (2.3.0.1-CSI-ISCO-IF0006). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a higher iFix version to your SCO environment.
Note: This fix is applied on Central Server 3.
(1) Make a backup copy of plugin.com.ibm.orchestrator.rest-1.0.1.1.jar on Central Server 3
eg scp -p root@cs-3:/drouter/ramdisk2/mnt/raid-volume/raid0/usr/servers/kernelservices/plugins/bundles/com.ibm.orchestrator.task/1.0.1.1/plugin.com.ibm.orchestrator.rest-1.0.1.1.jar \
<cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/backups/
(2) Update plugin.com.ibm.orchestrator.rest-1.0.1.1.jar on Central Server 3
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/plugin.com.ibm.orchestrator.rest-1.0.1.1.jar \
root@cs-3:/drouter/ramdisk2/mnt/raid-volume/raid0/usr/servers/kernelservices/plugins/bundles/com.ibm.orchestrator.task/1.0.1.1/
(3) Restart IWD services on Central Server 3
service iwd restart
33) Apply fix for APAR ZZ00292 (Exposed Workflows in BPM REST causing 2min response)
Note: This fix is already included in iFix6 (2.3.0.1-CSI-ISCO-IF0006). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a higher iFix version to your SCO environment.
Note: This fix is applied on Central Server 3 and Central Server 4.
(1) Install BPM fixes using installation manager on Central Server 4
(1.1) Stop BPM server on Central Server 4
Stop BPM using the SCOrchestrator script on Central Server 1:
/iaas/scorchestrator/SCOrchestrator.py --stop -p bpm
(1.2) Download additional BPM fix for APAR JR51814
Note: This is an optional step for applying fix for APAR ZZ00292.
Log in to "IBM Support: Fix Central" (FC) and download interim fix "8.5.0.0-WS-BPM-IFJR51814" for APAR JR51814 for BPM 8.5.0.0, if officially available from FC.
Select "IBM Business Process Manager Standard" as the product, "8.5.0.0" as the installed version, and "Linux 64-bit,x86_64" as the platform.
Select "Browse for fixes" and from the list of fixes select and download interim fix 8.5.0.0-WS-BPM-IFJR51814.
When downloaded, copy 8.5.0.0-WS-BPM-IFJR51814.zip to Central Server 1, into the iFix directory <your_dir>/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles.
(1.3) Transfer BPM fixes to Central Server 4
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/8.5.0.0-WS-BPM-IFJR47778.zip root@cs-4:/tmp
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/8.5.0.0-WS-BPM-IFJR47937.zip root@cs-4:/tmp
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/8.5.0.0-WS-BPM-IFJR48541.zip root@cs-4:/tmp
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/8.5.0.0-WS-BPM-IFJR48570.zip root@cs-4:/tmp
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/8.5.0.0-WS-BPM-IFJR48704.zip root@cs-4:/tmp
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/8.5.0.0-WS-BPM-IFJR49864.zip root@cs-4:/tmp
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/8.5.0.0-WS-BPM-IFJR51596.zip root@cs-4:/tmp
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/8.5.0.0-WS-BPM-IFJR51814.zip root@cs-4:/tmp
(1.4) Unzip the BPM fixes and prepare the installation manager repository on Central Server 4
eg cd /tmp
mkdir /tmp/JR47778
cd /tmp/JR47778
unzip ../8.5.0.0-WS-BPM-IFJR47778.zip
Repeat this sequence for each BPM fix
(1.5) Install BPM fixes using installation manager on Central Server 4
eg cd /opt/IBM/InstallationManager/eclipse/tools
./imcl install 8.5.0.0-WS-BPM-IFJR47778 -repositories /tmp/JR47778 -installationDirectory /opt/ibm/BPM/v8.5
./imcl install 8.5.0.0-WS-BPM-IFJR48541 -repositories /tmp/JR48541 -installationDirectory /opt/ibm/BPM/v8.5
./imcl install 8.5.0.0-WS-BPM-IFJR48704 -repositories /tmp/JR48704 -installationDirectory /opt/ibm/BPM/v8.5
./imcl install 8.5.0.0-WS-BPM-IFJR49864 -repositories /tmp/JR49864 -installationDirectory /opt/ibm/BPM/v8.5
./imcl install 8.5.0.0-WS-BPM-IFJR47937 -repositories /tmp/JR47937 -installationDirectory /opt/ibm/BPM/v8.5
./imcl install 8.5.0.0-WS-BPM-IFJR48570 -repositories /tmp/JR48570 -installationDirectory /opt/ibm/BPM/v8.5
./imcl install 8.5.0.0-WS-BPM-IFJR51596 -repositories /tmp/JR51596 -installationDirectory /opt/ibm/BPM/v8.5
./imcl install 8.5.0.0-WS-BPM-IFJR51814 -repositories /tmp/JR51814 -installationDirectory /opt/ibm/BPM/v8.5
(1.6) Start BPM server on Central Server 4
Start BPM using the SCOrchestrator script on Central Server 1:
/iaas/scorchestrator/SCOrchestrator.py --start -p bpm
(2) Update plugin.com.ibm.orchestrator.BPMInvoker-1.0.1.1.jar on Central Server 3
(2.1) Make a backup copy of plugin.com.ibm.orchestrator.BPMInvoker-1.0.1.1.jar on Central Server 3
eg scp -p root@cs-3:/drouter/ramdisk2/mnt/raid-volume/raid0/usr/servers/kernelservices/plugins/bundles/com.ibm.orchestrator.BPMInvoker/1.0.1.1/plugin.com.ibm.orchestrator.BPMInvoker-1.0.1.1.jar \
<cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/backups/
(2.2) Update plugin.com.ibm.orchestrator.BPMInvoker-1.0.1.1.jar on Central Server 3
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/plugin.com.ibm.orchestrator.BPMInvoker-1.0.1.1.jar \
root@cs-3:/drouter/ramdisk2/mnt/raid-volume/raid0/usr/servers/kernelservices/plugins/bundles/com.ibm.orchestrator.BPMInvoker/1.0.1.1/
(2.3) Restart IWD services on Central Server 3
service iwd restart
34) Apply fix for APAR SE60379 (SCO is provisioning VM's to local disk datastores)
Note: This fix is already included in iFix6 (2.3.0.1-CSI-ISCO-IF0006). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a higher iFix version to your SCO environment.
Note: This fix is applied on all VMware Region Servers.
(1) Copy package 3.1.0.4-IBM-SCE-IF003-201410100432.zip to the folder $your_folder on VMware region server
(2) Extract the fix package on VMware Region Server
cd $your_folder
unzip 3.1.0.4-IBM-SCE-IF003-201410100432.zip
(3) Ensure the SCE service is up and running with the command 'service sce status'; if it is not running, start it with 'service sce start'
(4) Back up the configuration file on the VMware Region Server
cp --preserve /opt/ibm/SCE31/program/skc.ini $your_folder/
(5) Install SCE iFix on VMware Region Server
Log in to the SmartCloud Entry console by running
telnet localhost 7777
You will see a prompt like:
osgi>
In this console type the following commands:
osgi> showrepos
Metadata repositories:
Artifacts repositories:
file:/C:/Users/IBM_ADMIN/.eclipse/207580638/p2/org.eclipse.equinox.p2.core/cache/
If the repository that is storing the extracted files is not available, use the addrepo command to add that repository.
osgi> addrepo file:<the absolute path when you unpacked the zip file>
osgi> SmartCloud Entry update repository added
Install the updates by using the installupdates command.
osgi> installupdates
osgi> SmartCloud Entry updates to install:
com.ibm.cfs.product 3.1.0.4-201407162230 ==> com.ibm.cfs.product 3.1.0.4-201410100430
SmartCloud Entry update done
When the update is complete, activate the changes by using the close command to end the OSGi session, then restart SmartCloud Entry.
osgi> close
(6) Restore the original copy of /opt/ibm/SCE31/program/skc.ini on VMware Region Server
cp --preserve $your_folder/skc.ini /opt/ibm/SCE31/program/
(7) Modify the file vmware.properties under /root/.SCE31 on the VMware Region Server
If vmware.properties does not exist, create it manually.
Add the following four parameters to vmware.properties and fill in a regular expression that matches your VMware datastores:
# Property name for enabling the following clone template properties (true or false, default is false)
com.ibm.cfs.cloud.vmware.enable.clone.template.properties=true
# Optional list of datastore and datastore cluster names to exclude when looking for an available datastore or
# datastore cluster. This is a comma separated list of datastore and datastore cluster names. This is optional.
com.ibm.cfs.cloud.vmware.datastore.exclude.list=<This should be a correct regular expression or blank>
# Optional list of datastore and datastore cluster names to include when looking for an available datastore or
# datastore cluster. This is a comma separated list of datastore and datastore cluster names. This is optional.
com.ibm.cfs.cloud.vmware.datastore.include.list=<This should be a correct regular expression or blank>
# Optional sets the search method (text or regex) for the following properties.
# com.ibm.cfs.cloud.vmware.datastore.include.list and com.ibm.cfs.cloud.vmware.datastore.exclude.list
# 'text' mode is exact text matching and 'regex' is regular expression matching.
# This is an all or nothing setting for the values in each list
com.ibm.cfs.cloud.vmware.datastore.list.type=regex
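With com.ibm.cfs.cloud.vmware.datastore.list.type=regex, the include/exclude values are regular expressions matched against datastore names. A quick way to test a candidate pattern before putting it in vmware.properties (the pattern and the datastore names below are illustrative, not from the product):

```shell
# Test a candidate exclude pattern against sample datastore names
pattern='^local'
printf 'local-ds-1\nshared-ds-1\nlocal-ds-2\n' | grep -E "$pattern"
```

Only the names matched by the pattern are printed, which is exactly the set the exclude (or include) list would act on.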
(8) Restart SmartCloud Entry service on VMware region server
service sce restart
35) Apply fix for defect 140719 (Running nova-cloud-modify with changed VC password results in duplication of hypervisors)
Note: This fix is already included in iFix6 (2.3.0.1-CSI-ISCO-IF0006). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a higher iFix version to your SCO environment.
Note: This fix is applied on all VMware Region Servers.
(1) Make a backup copy of nova-cloud-modify on VMware Region Server
eg scp -p root@vmwr:/opt/ibm/openstack/iaas/smartcloud/bin/nova-cloud-modify <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/backups/
(2) Update nova-cloud-modify on VMware Region Server
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/nova-cloud-modify root@vmwr:/opt/ibm/openstack/iaas/smartcloud/bin/
36) Apply fix for APAR SE60719 (Openstack nova synchronization powers off vms automatically)
Note: This fix is already included in iFix6 (2.3.0.1-CSI-ISCO-IF0006). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a higher iFix version to your SCO environment.
Note: This fix is applied on all Region Servers and VMware Region Servers respectively.
(1) Install openstack-nova fix on all Region Servers
(1.1) Transfer openstack-nova fix rtc-190961-nova-283ad24da-ifix-el6.tgz to Region Server
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/installfiles/rtc-190961-nova-283ad24da-ifix-el6.tgz root@region:/tmp
(1.2) Untar openstack-nova fix rtc-190961-nova-283ad24da-ifix-el6.tgz on Region Server
eg cd /tmp
tar zxvf rtc-190961-nova-283ad24da-ifix-el6.tgz
The following RPMs will be extracted from the tgz:
openstack-nova-2013.1.5.1-201411241107.ibm.22.noarch.rpm
openstack-nova-doc-2013.1.5.1-201411241107.ibm.22.noarch.rpm
openstack-nova-api-2013.1.5.1-201411241107.ibm.22.noarch.rpm
openstack-nova-network-2013.1.5.1-201411241107.ibm.22.noarch.rpm
openstack-nova-cells-2013.1.5.1-201411241107.ibm.22.noarch.rpm
openstack-nova-objectstore-2013.1.5.1-201411241107.ibm.22.noarch.rpm
openstack-nova-cert-2013.1.5.1-201411241107.ibm.22.noarch.rpm
openstack-nova-scheduler-2013.1.5.1-201411241107.ibm.22.noarch.rpm
openstack-nova-common-2013.1.5.1-201411241107.ibm.22.noarch.rpm
python-nova-2013.1.5.1-201411241107.ibm.22.noarch.rpm
openstack-nova-compute-2013.1.5.1-201411241107.ibm.22.noarch.rpm
openstack-nova-conductor-2013.1.5.1-201411241107.ibm.22.noarch.rpm
openstack-nova-console-2013.1.5.1-201411241107.ibm.22.noarch.rpm
(1.3) Stop applicable nova services (to stop each service do: service <service name> stop) on Region Server
openstack-nova-api openstack-nova-objectstore openstack-nova-network openstack-nova-volume
openstack-nova-scheduler openstack-nova-cert openstack-nova-console openstack-nova-consoleauth
Ensure each service has been stopped using: service <service name> status
(1.4) Install all RPMs by means of "yum install *.rpm" (type "y" when required) on Region Server
(1.5) Check that all RPMs were successfully installed on Region Server using
rpm -qa |grep 201411241107
openstack-nova-cells-2013.1.5.1-201411241107.ibm.22.noarch
openstack-nova-2013.1.5.1-201411241107.ibm.22.noarch
python-nova-2013.1.5.1-201411241107.ibm.22.noarch
openstack-nova-network-2013.1.5.1-201411241107.ibm.22.noarch
openstack-nova-common-2013.1.5.1-201411241107.ibm.22.noarch
openstack-nova-compute-2013.1.5.1-201411241107.ibm.22.noarch
openstack-nova-objectstore-2013.1.5.1-201411241107.ibm.22.noarch
openstack-nova-api-2013.1.5.1-201411241107.ibm.22.noarch
openstack-nova-conductor-2013.1.5.1-201411241107.ibm.22.noarch
openstack-nova-scheduler-2013.1.5.1-201411241107.ibm.22.noarch
openstack-nova-console-2013.1.5.1-201411241107.ibm.22.noarch
openstack-nova-cert-2013.1.5.1-201411241107.ibm.22.noarch
openstack-nova-doc-2013.1.5.1-201411241107.ibm.22.noarch
(1.6) Start the nova services stopped under (1.3) (to start each service do: service <service name> start) on Region Server
Ensure each service has been started using: service <service name> status
(2) Install smartcloud fix smartcloud-2013.1-1.1.4.ibm.201412010336.noarch.rpm on all VMware Region Servers
(2.1) Transfer smartcloud fix smartcloud-2013.1-1.1.4.ibm.201412010336.noarch.rpm to VMware Region Server
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/installfiles/smartcloud-2013.1-1.1.4.ibm.201412010336.noarch.rpm root@vmwr:/tmp
(2.2) Stop the nova and the smartcloud services (to stop each service do: service <service name> stop) on VMware Region Server
openstack-nova-api openstack-nova-objectstore openstack-nova-network openstack-nova-volume
openstack-nova-scheduler openstack-nova-cert openstack-nova-console openstack-nova-consoleauth
openstack-smartcloud
Ensure each service has been stopped using: service <service name> status
(2.3) Install smartcloud RPM package by means of "yum install smartcloud-2013.1-1.1.4.ibm.201412010336.noarch.rpm" (type "y" when required) on VMware Region Server
Ensure smartcloud RPM package was installed successfully
(2.4) Make backup copies of /etc/nova/nova.conf and /etc/nova/smartcloud.conf on VMware Region Server
eg scp -p root@vmwr:/etc/nova/nova.conf <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/backups/
scp -p root@vmwr:/etc/nova/smartcloud.conf <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/2.3.0.1-CSI-ISCO-IF0006/backups/
(2.5) Add the following property to /etc/nova/nova.conf
sync_power_state_interval = -1
eg openstack-config --set /etc/nova/nova.conf DEFAULT sync_power_state_interval -1
(2.6) Add the following property to /etc/nova/smartcloud.conf
auto_sync_data_on_start = False
eg openstack-config --set /etc/nova/smartcloud.conf DEFAULT auto_sync_data_on_start False
(2.7) Start the nova services stopped under (2.2) (to start each service do: service <service name> start) on VMware Region Server
Ensure each service has been started using: service <service name> status
37) Upgrading the Image Construction and Composition Tool (ICCT)
Note: This fix is already included in iFix6 (2.3.0.1-CSI-ISCO-IF0006). Ignore this part if you have already applied the fix as part of the installation of 2.3.0.1-CSI-ISCO-IF0006 or a higher iFix version to your SCO environment.
- Upgrade the Image Construction and Composition Tool using the standard upgrade procedure or
using the silent upgrade procedure by following the steps under:
http://pic.dhe.ibm.com/infocenter/tivihelp/v48r1/topic/com.ibm.sco.doc_2.3/ICON/topics/cicn_upgradeoverview.html
- In step "Download and extract the compressed file to the computer where you want to upgrade or install
the Image Construction and Composition Tool fix pack" use ICCT_Install_2.3.0.1-20.zip
38) Apply OPENSTACK PSIRT rpms shipped with iFix7
- Update yum repos with new OPENSTACK packages
(1) On Central Server 1, extract openstack.tar to your iFix path
tar -xf openstack.tar
Two folders are extracted, openstack_noarch and openstack_x86.
Copy the RPM packages from these two folders to the SCO yum repo:
cp -rf openstack_noarch/* /data/repos/scp/ibm-rpms/openstack_noarch/
cp -rf openstack_x86/* /data/repos/scp/ibm-rpms/openstack_x86/
chmod 644 /data/repos/scp/ibm-rpms/openstack_noarch/*
chmod 644 /data/repos/scp/ibm-rpms/openstack_x86/*
Run the following commands to update the yum repo:
yum clean all
createrepo /data/repos/scp/
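The copy, chmod, and repo-rebuild commands above can be wrapped in one helper so the same action can be repeated on each Region server in step (2). This is a hedged sketch; the function name and parameters are illustrative, and the copy uses `cp -rf`:

```shell
# Copy freshly extracted OpenStack RPMs into the SCO yum repo tree,
# fix permissions, and rebuild the repo metadata.
refresh_sco_repo() {
    rpm_src="$1"      # directory holding openstack_noarch/ and openstack_x86/
    repo_base="$2"    # normally /data/repos/scp
    cp -rf "$rpm_src"/openstack_noarch/* "$repo_base/ibm-rpms/openstack_noarch/"
    cp -rf "$rpm_src"/openstack_x86/*    "$repo_base/ibm-rpms/openstack_x86/"
    chmod 644 "$repo_base"/ibm-rpms/openstack_noarch/* \
              "$repo_base"/ibm-rpms/openstack_x86/*
    yum clean all
    createrepo "$repo_base"
}
# Usage, after "tar -xf openstack.tar" in the iFix directory:
#   refresh_sco_repo . /data/repos/scp
```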
(2) On each SCO Region server, perform the same actions as in (1) to update its yum repo.
- Back up configuration files
(1) On Central Server 2, back up the following files/folders:
/root/keystonerc
/etc/keystone
/etc/iaasgateway
/etc/my.cnf
/etc/qpid
For example
- Copy CS2_backup_list to a directory of your choice on the server, <your_path>
- mkdir /tmp/SCO_IFIX7_BAK
- rsync -arv --files-from=CS2_backup_list / /tmp/SCO_IFIX7_BAK
(2) On each Region Server, back up the following files/folders:
/root/openrc
/root/keystonerc
/etc/cinder
/etc/nova
/etc/glance
/etc/qpid
/etc/my.cnf
On VMware region servers, additionally back up the following folder:
/root/.SCE31
For example
- Copy RSVM_backup_list for a VMware region server and RSKV_backup_list for a KVM region server
to a directory of your choice on the server, <your_path>
- mkdir /tmp/SCO_IFIX7_BAK
- rsync -arv --files-from=RSVM_backup_list / /tmp/SCO_IFIX7_BAK
or
- rsync -arv --files-from=RSKV_backup_list / /tmp/SCO_IFIX7_BAK
(3) On each Compute node, back up the following files/folders:
/etc/nova
/etc/my.cnf
/root/openrc
For example
- Copy CN_backup_list to a directory of your choice on the server, <your_path>
- mkdir /tmp/SCO_IFIX7_BAK
- rsync -arv --files-from=CN_backup_list / /tmp/SCO_IFIX7_BAK
- Upgrade OPENSTACK rpm packages
On Central Server 2, each Region Server, and each Compute node, perform the following actions:
(1) Copy IFIX7_rpm_list to a directory of your choice on the server, <your_path>
(2) Run the following commands to update the OpenStack components:
cd <your_path>
yum clean all
while read package; do yum update -y $package; done < IFIX7_rpm_list
- Restore configuration files
(1) On Central Server 2:
/root/keystonerc
/etc/keystone
/etc/iaasgateway
/etc/my.cnf
/etc/qpid
For example
- rsync -arv --files-from=CS2_backup_list /tmp/SCO_IFIX7_BAK /
(2) On each Region Server:
/root/openrc
/root/keystonerc
/etc/cinder
/etc/nova
/etc/glance
/etc/qpid
/etc/my.cnf
On VMware region servers, additionally restore the following folder:
/root/.SCE31
For example
rsync -arv --files-from=RSVM_backup_list /tmp/SCO_IFIX7_BAK /
or
rsync -arv --files-from=RSKV_backup_list /tmp/SCO_IFIX7_BAK /
(3) On each Compute node:
/etc/nova
/etc/my.cnf
/root/openrc
For example
rsync -arv --files-from=CN_backup_list /tmp/SCO_IFIX7_BAK /
39) Apply nova patch shipped with iFix7
- Transfer the OpenStack Nova patch files (nova_*.patch) to the Region Server
eg scp -p nova_*.patch root@<region-server>:/tmp
- Stop the nova services (to stop each service do: service <service name> stop) on VMware Region Server
openstack-nova-api openstack-nova-objectstore openstack-nova-network openstack-nova-volume
openstack-nova-scheduler openstack-nova-cert openstack-nova-console openstack-nova-consoleauth
Ensure each service has been stopped using: service <service name> status
Note: You can also use ./SCOrchestrator.py to stop these services before applying the changes
- Back up the python files that will be changed by the OpenStack patch on Region Server
Log in to the Region Server and back up the following files:
/usr/lib/python2.6/site-packages/nova/compute/api.py
/usr/lib/python2.6/site-packages/nova/virt/libvirt/blockinfo.py
/usr/lib/python2.6/site-packages/nova/virt/configdrive.py
/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py
/usr/lib/python2.6/site-packages/nova/api/openstack/compute/contrib/fixed_ips.py
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py
/usr/lib/python2.6/site-packages/nova/conductor/manager.py
/usr/bin/nova-manage
/usr/lib/python2.6/site-packages/nova/conductor/rpcapi.py
/usr/lib/python2.6/site-packages/nova/openstack/common/db/sqlalchemy/session.py
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py
For example, execute the following command:
- rsync -arv --files-from=nova_patch_backup_list / /tmp/SCO_IFIX7_nova_patch_BAK
- Determine python directory (your_python_dir) on Region Server
Log in to the Region Server and run the following command:
python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())"
eg /usr/lib/python2.6/site-packages/
- Apply OpenStack Nova patch file on Region Server
Log in to the Region Server and run the following commands:
-- apply nova patch
cd <your_python_dir>
patch -p1 -N -f < /tmp/nova_IFIX7_pyt.patch
cd /usr
patch -p1 -N -f < /tmp/nova_IFIX7_usr.patch
Note: Ignore the error message reporting "The next patch would create the file ..., which already exists! Skipping patch." for the following files:
nova/api/openstack/compute/contrib/fixed_by_network.py
nova/api/openstack/compute/contrib/image_activation_data.py
nova/compute/image_activation_manager.py
nova/tests/api/openstack/compute/contrib/test_image_activation_data.py
nova/tests/compute/test_image_activation_manager.py
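The two patch commands above can optionally be preceded by a `--dry-run` preview so you can see what will change before any file is modified. This helper is a sketch, not part of the official procedure; the "already exists" skip messages described in the Note will also appear in the preview and can be ignored:

```shell
# Preview, then apply, a patch from inside the given directory.
apply_nova_patch() {
    target_dir="$1"    # e.g. the python site-packages dir, or /usr
    patch_file="$2"    # e.g. /tmp/nova_IFIX7_pyt.patch
    ( cd "$target_dir" || exit 1
      patch -p1 -N -f --dry-run < "$patch_file"   # preview only
      patch -p1 -N -f < "$patch_file" )           # real run
}
```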
40) Upgrade SCE shipped with iFix7 on each VMware Region server
- Copy 3.1.0.4-IBM-SCE-IF008-201505150421.zip to a directory of your choice on the server, <your_path>
- Extract 3.1.0.4-IBM-SCE-IF008-201505150421.zip in the directory where you copied it.
- Back up /opt/ibm/SCE31/program/skc.ini
- Upgrade SCE:
Log in to the SmartCloud Entry console by running
telnet localhost 7777
You will see a prompt like
osgi>
In this console, type the following commands (example output is shown after each command):
osgi> showrepos
Metadata repositories:
Artifacts repositories:
file:/C:/Users/IBM_ADMIN/.eclipse/207580638/p2/org.eclipse.equinox.p2.core/cache/
If the repository that is storing the extracted files is not available, use the addrepo command to add that repository.
osgi> addrepo file:<the absolute path when you unpacked the zip file>
SKC update repository added
Install the updates by using the installupdates command.
osgi> installupdates
SKC updates to install:
com.ibm.cfs.product 3.1.0.4-201410100430 ==> com.ibm.cfs.product 3.1.0.4-201505150400
SKC update done
When the update is complete, activate the changes by using the close command to end the OSGi session, then restarting SmartCloud Entry.
osgi> close
- Restore the original copy of /opt/ibm/SCE31/program/skc.ini
- Restart SmartCloud Entry by running
service sce restart
41) Upgrade smartcloud fix shipped with iFix7 on each VMware Region server
- Transfer smartcloud fix smartcloud-2013.1-1.1.4.ibm.201506160318.noarch.rpm to VMware Region Server
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/installfiles/smartcloud-2013.1-1.1.4.ibm.201506160318.noarch.rpm root@vmwr:/tmp
- Stop the nova and the smartcloud services (to stop each service do: service <service name> stop) on VMware Region Server
openstack-nova-api openstack-nova-objectstore openstack-nova-network openstack-nova-volume
openstack-nova-scheduler openstack-nova-cert openstack-nova-console openstack-nova-consoleauth
openstack-smartcloud
Ensure each service has been stopped using: service <service name> status
- Install smartcloud RPM package by means of "yum install smartcloud-2013.1-1.1.4.ibm.201506160318.noarch.rpm" (type "y" when required) on VMware Region Server
Ensure smartcloud RPM package was installed successfully
- Start the nova and smartcloud services stopped in the previous step (to start each service do: service <service name> start) on VMware Region Server
Ensure each service has been started using: service <service name> status
42) Upgrade SCO core toolkits shipped with iFix7 on Central Server 4
- Start IBM Business Process Manager
eg service bpm start
service bpm status
- Transfer all files from folder 2.3.0.1-CSI-ISCO-IF0007/installfiles/scotoolkits to a Central Server 4
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/installfiles/scotoolkits/* root@cs-4:/tmp
- Run script importSCOToolkitsForZZ00253.py (provide business process manager WAS administrator ID and business process manager WAS administrator password)
eg cd /tmp ; ./importSCOToolkitsForZZ00253.py False <wasabpm> <waspbpm>
- Ensure the script ran successfully (returned 0)
- Stop IBM Business Process Manager
eg service bpm stop
service bpm status
- Start DB2 on Central Server 1
eg cd /iaas/scorchestrator && ./SCOrchestrator.py --start -p db2
- Update toolkit environment variables on Central Server 1
su - db2inst1
db2 list database directory
db2 connect to BPMDB
db2 set schema bpmuser
db2 "update LSW_ENV_VAR set default_value='https://<Central-Server-3 IP>' where name = 'restEndpoint'"
db2 "update LSW_ENV_VAR set default_value='IWD_Auth_Alias' where name = 'iwdAuthAlias'"
db2 commit
db2 connect reset
- Copy the com.ibm.orchestrator.vmm.adapter.keystone.jar to the BPM WAS runtime environment and replace the existing extension library
Stay logged in on Central Server 1 and execute:
scp /data/2.3.0.1-CSI-ISCO-LA0022/com.ibm.orchestrator.vmm.adapter.keystone.jar root@<Central-Server-4 IP>:/opt/ibm/BPM/v8.5/lib/ext/com.ibm.orchestrator.vmm.adapter.keystone.jar
where /opt/ibm/BPM/v8.5/lib/ext/ is the WAS java extension library path.
Since the vmm adapter keystone jar is also used by the virtual image library, it has to be replaced there as well:
scp /data/2.3.0.1-CSI-ISCO-LA0022/com.ibm.orchestrator.vmm.adapter.keystone.jar root@<Central-Server-2 IP>:/opt/IBM/WebSphere/AppServer/lib/ext/com.ibm.orchestrator.vmm.adapter.keystone.jar
- Verify that Central Server 2 and Central Server 4 have the same vmm adapter keystone jar in place by executing:
find / -name "*vmm.adapter.keystone*.jar" -exec ls -l {} \;
Only the jars replaced in the previous steps should be listed; if backup copies appear, they can be cleaned up after some deployment verification.
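Comparing `ls -l` output only checks size and timestamp; a stricter check (an optional sketch, not in the original steps) is to compare checksums of the two jars once copies from both servers are available on one machine:

```shell
# Return success only if the two files have identical MD5 checksums.
same_checksum() {
    a=$(md5sum "$1" | cut -d' ' -f1)
    b=$(md5sum "$2" | cut -d' ' -f1)
    [ "$a" = "$b" ]
}
# Usage idea (local copies fetched via scp from CS2 and CS4 first):
#   same_checksum cs2_copy/com.ibm.orchestrator.vmm.adapter.keystone.jar \
#                 cs4_copy/com.ibm.orchestrator.vmm.adapter.keystone.jar
```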
43) Upgrade VIL classes shipped with iFix7 on Central Server 2
(1.1) Make a backup copy of file ImageLibraryIaaSTAI.jar
eg cp -p /opt/IBM/WebSphere/AppServer/lib/ext/ImageLibraryIaaSTAI.jar /opt/IBM/WebSphere/AppServer/lib/ext/ImageLibraryIaaSTAI.jar_ifix7bkp
(1.2) Update ImageLibraryIaaSTAI.jar
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/installfiles/vil/ImageLibraryIaaSTAI.jar root@<cs-2>:/opt/IBM/WebSphere/AppServer/lib/ext/
(2.1) Make a backup copy of files URLConnectionUtility.class,URLConnectionUtility$1.class,URLConnectionUtility$2.class
eg cp -p /opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryOpenStackConnector.ear/ImageLibraryOpenStackConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/openstack/common/URLConnectionUtility.class \
/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryOpenStackConnector.ear/ImageLibraryOpenStackConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/openstack/common/URLConnectionUtility.class_ifix7bkp
cp -p /opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryOpenStackConnector.ear/ImageLibraryOpenStackConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/openstack/common/URLConnectionUtility\$1.class \
/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryOpenStackConnector.ear/ImageLibraryOpenStackConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/openstack/common/URLConnectionUtility\$1.class_ifix7bkp
cp -p /opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryOpenStackConnector.ear/ImageLibraryOpenStackConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/openstack/common/URLConnectionUtility\$2.class \
/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryOpenStackConnector.ear/ImageLibraryOpenStackConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/openstack/common/URLConnectionUtility\$2.class_ifix7bkp
(2.2) Update URLConnectionUtility.class,URLConnectionUtility$1.class,URLConnectionUtility$2.class
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/installfiles/vil/URLConnectionUtility.class \
root@<cs-2>:/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryOpenStackConnector.ear/ImageLibraryOpenStackConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/openstack/common/
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/installfiles/vil/URLConnectionUtility\$1.class \
root@<cs-2>:/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryOpenStackConnector.ear/ImageLibraryOpenStackConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/openstack/common/
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/installfiles/vil/URLConnectionUtility\$2.class \
root@<cs-2>:/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryOpenStackConnector.ear/ImageLibraryOpenStackConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/openstack/common/
(3.1) Make a backup copy of files VILClient.class,VILClient$1.class,VILClient$2.class
eg cp -p /opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageManager.ear/ImageLibraryVMWarePluginWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmware/plugin/support/vil/VILClient.class \
/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageManager.ear/ImageLibraryVMWarePluginWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmware/plugin/support/vil/VILClient.class_ifix7bkp
cp -p /opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageManager.ear/ImageLibraryVMWarePluginWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmware/plugin/support/vil/VILClient\$1.class \
/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageManager.ear/ImageLibraryVMWarePluginWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmware/plugin/support/vil/VILClient\$1.class_ifix7bkp
cp -p /opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageManager.ear/ImageLibraryVMWarePluginWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmware/plugin/support/vil/VILClient\$2.class \
/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageManager.ear/ImageLibraryVMWarePluginWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmware/plugin/support/vil/VILClient\$2.class_ifix7bkp
(3.2) Update VILClient.class,VILClient$1.class,VILClient$2.class
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/installfiles/vil/VILClient.class \
root@<cs-2>:/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageManager.ear/ImageLibraryVMWarePluginWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmware/plugin/support/vil/
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/installfiles/vil/VILClient\$1.class \
root@<cs-2>:/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageManager.ear/ImageLibraryVMWarePluginWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmware/plugin/support/vil/
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/installfiles/vil/VILClient\$2.class \
root@<cs-2>:/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageManager.ear/ImageLibraryVMWarePluginWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmware/plugin/support/vil/
(4.1) Make a backup copy of files VMControlRestClient.class,VMControlRestClient$1.class,VMControlRestClient$2.class
eg cp -p /opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryVMControlConnector.ear/ImageLibraryVMControlConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmcontrol/common/VMControlRestClient.class \
/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryVMControlConnector.ear/ImageLibraryVMControlConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmcontrol/common/VMControlRestClient.class_ifix7bkp
cp -p /opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryVMControlConnector.ear/ImageLibraryVMControlConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmcontrol/common/VMControlRestClient\$1.class \
/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryVMControlConnector.ear/ImageLibraryVMControlConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmcontrol/common/VMControlRestClient\$1.class_ifix7bkp
cp -p /opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryVMControlConnector.ear/ImageLibraryVMControlConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmcontrol/common/VMControlRestClient\$2.class \
/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryVMControlConnector.ear/ImageLibraryVMControlConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmcontrol/common/VMControlRestClient\$2.class_ifix7bkp
(4.2) Update VMControlRestClient.class,VMControlRestClient$1.class,VMControlRestClient$2.class
eg scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/installfiles/vil/VMControlRestClient.class \
root@<cs-2>:/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryVMControlConnector.ear/ImageLibraryVMControlConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmcontrol/common/
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/installfiles/vil/VMControlRestClient\$1.class \
root@<cs-2>:/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryVMControlConnector.ear/ImageLibraryVMControlConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmcontrol/common/
scp -p <cs-1>:/data/2.3.0.1-CSI-ISCO-IF0007/installfiles/vil/VMControlRestClient\$2.class \
root@<cs-2>:/opt/IBM/WebSphere/AppServer/profiles/imageLibraryProfile/installedApps/sco-cs-2Node01Cell/ImageLibraryVMControlConnector.ear/ImageLibraryVMControlConnectorWeb.war/WEB-INF/classes/com/ibm/imagelibrary/vmcontrol/common/
44) Change protocol to TLSv1.2 for SCUI on Central Server 3
- Make a backup copy of file jetty.xml
eg cp -p /opt/ibm/ccs/scui/etc/jetty.xml /opt/ibm/ccs/scui/etc/jetty.xml_ifix7bkp
- Edit jetty.xml and modify the sslContextFactory section by adding the
ExcludeProtocols and IncludeProtocols sections as shown below:
<!-- Create a SSL factory for the HTTPS connector -->
<New class='org.eclipse.jetty.http.ssl.SslContextFactory' id='sslContextFactory'>
<!-- Start Add sections -->
<Set name='ExcludeProtocols'>
<Array type='java.lang.String'>
<Item>SSLv3</Item>
<Item>TLSv1</Item>
<Item>TLSv1.1</Item>
</Array>
</Set>
<Set name='IncludeProtocols'>
<Array type='java.lang.String'>
<Item>TLSv1.2</Item>
</Array>
</Set>
<!-- End Add sections -->
</New>
- Restart SCUI and check its status
service scui restart
service scui status
- Check the SCUI log file /var/log/scoui.log for the following messages:
n3.app.N3App Starting the SmartCloud UI web application...
org.eclipse.jetty.server.Server jetty-8.1.3.v20120522
org.eclipse.jetty.util.ssl.SslContextFactory Enabled Protocols [TLSv1.2] of [TLSv1, TLSv1.1, TLSv1.2]
org.eclipse.jetty.server.AbstractConnector Started SslSelectChannelConnector@0.0.0.0:7443
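Besides checking the log, the negotiated protocol can be confirmed from the client side by parsing `openssl s_client` output. This helper is an optional sketch, not part of the official steps:

```shell
# Extract the negotiated protocol (the "Protocol : TLSv1.2" summary line)
# from "openssl s_client" output read on stdin.
negotiated_protocol() {
    sed -n 's/^ *Protocol *: *//p' | head -n 1
}
# Usage idea:
#   openssl s_client -connect localhost:7443 -tls1_2 </dev/null 2>/dev/null \
#       | negotiated_protocol
```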
45) Change file mode bits of smartcloud bin files on VMware region servers
Log on to every VMware region server and execute the following command as root:
- chmod +x /opt/ibm/openstack/iaas/smartcloud/bin/*
Download Package
The following sections provide detailed information related to this release.
Click the FC link below to obtain the release from Fix Central.
Impact: Critical
This is a maintenance release. It contains fixes for client-reported and internally found defects. This release also contains fixes to the following security vulnerabilities:
There are no known regressions to report.
Problems Solved
Click the Fix List link in the table of contents above to review a list of the problems solved in this release.
Known Side Effects
Review the following technotes for troubleshooting assistance:
Open defects: Review the following list of open defects for IBM Cloud Orchestrator on the IBM Support Portal.
Change History
Product component versions after upgrading to iFix7:
OpenStack component versions:
DB2: Version 10.1 Fix Pack 3a
BPM: v8.5.0.0 (BPM_Std_V85)
ICCT: 2.3.0.1-20
IWD Version: 2.3.0.0
IWD Build: 20150514-0617-922
SCE: 3.1.0.4-201505150400 (iFix7)
VIL: 23027
JAVA versions:
Click the link in the Download Options column:
Technical Support
Review the IBM Cloud Support BLOG article Enhance your IBM Cloud Support Experience for a complete list of the different support offerings, along with a brief description of the best way to use each resource to improve your experience using IBM Cloud products and services.
Problems (APARS) fixed
Document Information
Modified date:
05 April 2019
UID
swg24040336