Friday, June 29, 2018

Steps To Create A File System

This post describes the various steps involved in creating a file system on AIX.
Once a PV is available on the system, the admin can list the PVs by running the following command:
# lspv
To create a VG from a PV with a default name, run:
# mkvg hdisk0
This command will create a VG with the default name vg00.
Once the VG is created, an LV with the default name lv00 is created and assigned a number of LPs (here, 10) by running the following command:
# mklv vg00 10
To check VG information in terms of PPs assigned, use:
# lsvg -p vg00
To check VG information in terms of LPs assigned, use:
# lsvg -l vg00
To check complete LV information, use:
# lslv lv00
OR
# lsvg -l vg00
After the LV is created, a mount-point directory is created:
# mkdir /test01
After the directory is created, an FS is created by running the following command:
# crfs -v jfs -d lv00 -m /test01 -A yes
crfs stands for "create file system".
The -v flag specifies the type of file system the admin is planning to create.
The -d flag specifies the LV in which the file system will be created, and -m the directory on which the file system will be mounted.
The -A flag controls whether the FS is mounted automatically at the next reboot (-A yes enables this).
P.S.: there are two commands to create an FS:
1. crfs
2. mkfs
The only difference between the two is that mkfs only creates a file system, whereas crfs not only creates the FS in the LV, it also associates the FS with a mount directory and updates /etc/filesystems and the ODM with the FS created.
A file system created using the crfs command will have the same name as the mount directory, here /test01.
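For comparison, here is a minimal sketch of the mkfs route, using the same LV and mount point as above (the log device name in the stanza is illustrative, not taken from this example):
# mkfs -V jfs /dev/lv00
# vi /etc/filesystems    --> add a stanza by hand, for example:
/test01:
        dev   = /dev/lv00
        vfs   = jfs
        log   = /dev/loglv00
        mount = true
Only after that bookkeeping will mount /test01 work, which is exactly what crfs automates.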
Next, the admin mounts the FS by running:
# mount /test01
To verify whether the FS is mounted:
# mount
To display complete FS information, use:
# df -m
OR
# lsfs
Finally, we can check the complete information about PVs, VGs, LVs, and FSs by running the following command:
# lsvg -o | lsvg -il
This post gave an insight into how an FS is created by first creating a VG and an LV on a PV.

Thursday, June 28, 2018

Update AIX Service Packs

Downloading Fixes

You can download AIX fix packs or search for a specific AIX fix from "IBM Support: Fix Central". AIX fix image files are in backup file format (*.bff). After downloading, you can install them by using the procedure below.
-----------------------------------------------------------------------------------------------------------------------

Before installation
 
Before you begin the installation of this package, please note the following:
1. You must be logged in as the root user to install this package.
2. You can see your current Maintenance Level and Service Pack Level with the following command:
# oslevel -s

3. IBM recommends creating a system backup (mksysb) before starting the installation procedure.
4. The latest AIX installation hints and tips for your version of AIX are available from the IBM Subscription service for Unix and Linux web site:
http://www14.software.ibm.com/webapp/set2/subscriptions/pqvcmjd
These tips contain important information that should be reviewed before installing this update.

5. IBM recommends creating a separate file system for /usr/sys/inst.images for the following reasons:
  • Downloaded fix packages require a significant amount of disk space.
  • By creating a separate file system, you prevent the expansion of the /usr file system.
6. You can set up a NIM server and save all the downloaded fix packs on it. Then you can use these packs to update all of your AIX servers. With a NIM server you don't have to copy the downloaded fix packs to every AIX server; they only need to be copied to the NIM server.
 -----------------------------------------------------------------------------------------------------------------------

Package installation
 
Follow these steps to install the update package:
1. Always run the inutoc command to ensure the installation subsystem will recognize the new fix packages you download. This command creates a new .toc file for the fix package. Run the inutoc command in the same directory where you downloaded the package filesets.
For example, if you downloaded the filesets to the directory /usr/sys/inst.images, run the following command:
# inutoc /usr/sys/inst.images 
or 
# inutoc . --> if you are in the directory already

2. Optional: Sometimes the downloaded fix image files are named in the form N.bff, where N is an integer. Renaming the N.bff files is not necessary, but it does help you see which filesets are affected by the downloaded files. To rename the downloaded files to their fileset names, run the bffcreate command, or smit bffcreate. After renaming the files, run the inutoc command again. For example, if you downloaded the filesets to /usr/sys/inst.images, run the following command to rename them:
# bffcreate -c -d /usr/sys/inst.images

The default target directory in which images will be created is /usr/sys/inst.images; for a different location, you must use the -t flag with bffcreate:
# bffcreate -d /usr/sys/inst.images -t /DIFFERENT-LOCATION [-f all]

3. To install all updates from this package that apply to the installed filesets on your system, change to the directory that contains the BFF files (/usr/sys/inst.images) and then run:
# smit update_all
It is highly recommended that you apply all updates from this package.
4. Reboot the system. A reboot is required for this update to take effect.
5. Now you can check your new Maintenance Level and Service Pack Level with the same command:
# oslevel -s
After installation

1. After applying all the software updates, you can commit them, reject them, or even leave them in the applied state forever (command-line equivalents are sketched after this list):
  • Commit applied software updates (remove saved files): # smit commit
  • Reject applied software updates (use previous version): # smit reject
  • Retain applied software updates (retain saved files forever): do nothing!
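For reference, a minimal command-line sketch of the same commit/reject operations (the "all" keyword targets every fileset in the applied state; the -g and -X flags, for dependents and file system expansion, are a common idiom rather than the only valid form):
# installp -cgX all    --> commit all applied updates
# installp -rg all     --> reject all applied updates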

2. You may wish to retain this package for future use. When you install additional operating system software you will want to use this package to bring the additional software up to this fix package level.
You can create a list of the APARs closed in this package, which you can then view (and search) in an ASCII editor, such as vi or emacs.
First, create the .toc file for the package by using the inutoc command described above. Then, run the following command to extract the APAR listings from the .toc file:
# grep ^I .toc > apar.list

Run the following command to eliminate duplicate entries from the apar.list file:
# sort -u apar.list -o sorted.apars

References:
http://www14.software.ibm.com/webapp/set2/fp/fixinstall?release=52&fp=5200-10-04-0750
https://www.ibm.com/support/knowledgecenter/en/8202-E4D/p7hbm/iphbm_updateall_howto.htm

Wednesday, June 27, 2018

How to mirror the rootvg in AIX?



This procedure assumes that rootvg contains hdisk0 and that we need to mirror it to hdisk1.




1. Ground work:
lspv                          --> Find a disk shown as None (not assigned to any VG) to use for the mirror.
bootinfo -s hdisk1            --> Check the size of hdisk1 (it should be equal to or bigger than hdisk0).

2. Mirror the rootvg
extendvg rootvg hdisk1            --> Extend rootvg onto hdisk1 (force it if it throws an error: "extendvg -f rootvg hdisk1")
mirrorvg -S rootvg hdisk1         --> Initiate the mirror in the background, which keeps the rootvg volume group accessible
lsvg rootvg | grep -i stale       --> Check the status of the sync process (run it repeatedly until the stale PPs become zero; see the loop sketch after this step list)
lsvg -l rootvg                    --> Check and confirm that the number of PPs is double the number of LPs
bosboot -ad /dev/hdisk1           --> Create the boot image on hdisk1
bootlist -m normal hdisk0 hdisk1  --> Add hdisk1 to the bootlist
bootlist -m normal -o             --> Confirm that both hdisk0 and hdisk1 are part of the boot sequence
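As referenced above, here is a minimal watch loop for the sync, assuming the default lsvg output format where the stale PP count is the last field of the "STALE PPs" line:
until [ "$(lsvg rootvg | awk '/STALE PPs/ {print $NF}')" = "0" ]; do sleep 60; done
echo "rootvg sync complete"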

3. Validation
lsvg -l rootvg                --> Check and confirm that the number of PPs is double the number of LPs
lsvg rootvg | grep -i stale   --> Check that the number of stale PPs is zero

4. Roll back
unmirrorvg rootvg hdisk1      --> Remove the hdisk1 copy from the mirror
reducevg rootvg hdisk1        --> Take hdisk1 out of rootvg
chpv -c hdisk1                --> Clear the boot image on hdisk1
bootlist -m normal hdisk0     --> Remove hdisk1 from the boot sequence, i.e. set hdisk0 alone as the boot sequence
bootlist -m normal -o         --> Confirm that only hdisk0 is part of the boot sequence

5. Downtime
No downtime is required for this mirroring process.

Tuesday, June 26, 2018

What is inutoc and .toc file ? AIX

Today I will talk a little about a little-known subject: inutoc and the .toc file. I did a quick search, and it was difficult to find anything useful explaining what on earth inutoc and .toc are. The point is not how obscure or difficult the subject is, but how simple and easy it is to understand.
I remember when my tech leader said: "The installation is not working because you have to recreate the .toc". I didn't understand what inutoc was, so I googled it and found explanations such as:
” TOC : “Table Of Contents” is a fileset inventory file used by the AIX installers. A .toc file will contain package names, version information, fileset prerequisites, and other information required to perform an installation “
“Without .toc file (Table of Contents) AIX will not recognize installable filesets. This is not the case with AIX 6.1 and 7.1.”
So a .toc file is a table of contents. In practice, when we are going to install a fileset, or a bunch of filesets, AIX needs a .toc file in the same path as the filesets to use as a reference. The .toc file contains all the information related to the filesets, such as the space required in the file system, package names, version information, fileset prerequisites, and other information required to perform an installation (you can cat .toc to see what is inside the file).
System administrators often have to install or update packages, and sometimes we don't notice the need for this file because installp creates the .toc file before installing. Note: this automatic creation only happens on AIX 6.1 and 7.1.
If you have a server with a directory containing all the filesets and you export it to other servers, you need to be careful: if you add a new fileset, you have to recreate the .toc file yourself, otherwise the new fileset will not be installed.
Creating a .toc file is very simple: just use the inutoc command, which generates the TOC for a particular directory.

Examples

To create the .toc file for the /home/douglas/filesets/ directory, enter:
# cd /home/douglas/filesets/
# inutoc .
To create a .toc file for the /tmp/images directory, enter:
# inutoc /tmp/images
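Putting the gotcha above into practice, a small sketch (the fileset name and export path are made up for illustration): after dropping a new fileset into an exported install directory, rebuild the TOC before installing from it:
# cp bos.net.tcp.client.7.1.4.30.bff /export/filesets/
# inutoc /export/filesets
# installp -agXd /export/filesets bos.net.tcp.client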

Installing technology levels and service packs for AIX

In 2006, all the rules changed with respect to operating system upgrades in AIX. Even the nomenclature has changed from maintenance level (ML) to technology level (TL). Is this just a rebranding or are there substantive changes here? What about best practices? When should you deploy technology- (maintenance) level upgrades? Furthermore, what is the best way of retrieving upgrades, service packs, and fixes? Finally, how do you actually perform a TL upgrade?
This article explains all of these concepts and discusses recent changes to the upgrade methodology. In doing so, you'll review important new concepts such as the Concluding Service Pack (CSP), which is the final service pack of a TL. You'll even walk through an actual upgrade of a system where you'll be working with systems such as the IBM® Service Update Management Assistant (SUMA) and Fix Central. SUMA helps the systems administrator automate the retrieval of AIX updates. Fix Central is the Web-based central repository for all TLs, services packs, and fixes, including hardware firmware. Finally, you'll learn the rationale in determining when you should deploy a TL upgrade.

Technology level overview

Although some systems administrators still use the term "maintenance level" when discussing their AIX version, the term is now reserved for legacy AIX systems. The new IBM methodology dictates two TL releases per year. The first TL includes hardware features, enablement, and software services. The second includes software features in the release, which means the second release is larger and more comprehensive. Finally, there is also now support for new hardware on older TLs.
Historically, new hardware was only supported in new technology releases, which obviously required an upgrade to the new level. The way AIX now supports new hardware is broken down into two categories. The first category is support. First, AIX has actually undergone changes allowing it to recognize new hardware at boot time. Changes to support the new hardware include, at a minimum, updates of tables that are referenced at boot. These determine the processor type and how to create new boot media. The second category is referred to as exploitation, which requires AIX to undergo more pervasive changes to the operating system, such as changes to the Virtual Memory Manager (VMM) to exploit new pages sizes. This new release strategy was first implemented in 2007, starting with AIX V5.3 TL6, which was the first level to have two-year support.

Service pack overview

What about service packs? A service pack (SP) contains groups of Program Temporary Fixes (PTFs) for highly pervasive issues. Service packs are cumulative and are generally released every four to six weeks after the release of a new TL. Service packs can include any of the following:
  • Customer-reported problems that can't wait until the next TL.
  • Critical problems reported by development teams.
  • Limited changes to support new hardware, such as device driver and kernel changes to reflect a new processor. These are changes that do not add new functionality; new functionality will only be added for TLs or new releases.
So, what happens between service packs? Interim fixes, sometimes referred to as fix packs or temporary fixes, or stand-alone PTFs, are made available for relief until the time that the fix becomes available in a service pack. It's what IBM calls "temporary relief." They are tracked on the system with either the lslpp -L command or emgr -l command.
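For example, a quick way to see what interim fixes are on a box (emgr is the AIX interim fix manager; -l lists installed fixes with their labels, install times, and abstracts):
# emgr -l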
Security updates, published through advisories, are made available by IBM through subscription services.

Concluding service packs

Another important concept to understand is the Concluding Service Pack (CSP). The CSP is the final service pack for a TL. It usually contains fixes for critical problems or security issues. The new strategy also includes a longer period of support for each OS TL: each TL will now be supported for up to two years. This means that you can continue to call IBM support for up to two years from the introduction of the TL without having to move up to the latest TL. The new update strategy also promises improved serviceability throughout the life of the TL. This is done by allowing you to maintain your operating system by installing service packs and PTFs throughout the entire lifecycle of the TL.

Automating TLs and service pack deployments with SUMA

The IBM Service Update Management Assistant (SUMA) is an extremely important tool because it allows you to automate the retrieval of TLs and service packs. In this section, you're going to use SUMA to retrieve TLs. SUMA was first released with AIX V5.3. More than anything, SUMA helps the system administrator automate the retrieval of AIX updates, allowing the administrator to get away from the mundane task of manually retrieving updates from the Web. Furthermore, it allows you to configure policies to automatically download entire TL upgrades, service packs, or even interim fixes. The primary objective of the utility is to allow systems administrators to spend more time on proactive systems administration and less time on redundant or tedious work such as downloading updates.
So, how does SUMA work? Essentially, it uses a scheduling module that allows policies to be run at predefined intervals that conform to your maintenance window. The policies can be configured easily without any extensive setup. You even have the option to run SUMA manually (either from smit or the command line) to bring in whatever updates you require. To configure SUMA, you need to know the fix type. There are eight different kinds of fix types: APAR, PTF, Critical, Security, I/O Server, Latest (all fixes), Filesets (specific types), and Maintenance Levels. In addition, there are three types of actions you can perform with a policy: preview, download, and download and clean. Preview mode does not really do anything; it just generates a preview of what would be downloaded. The download action downloads the actual data, while the download and clean action removes filesets no longer necessary after a new fix level has been brought down. This limits the size of the data that you'll need to keep.
You can run the suma command from either smit or the command line. In the first example, use smit to download an entire TL: # smit suma.
When the smit screen comes up, choose Download Updates Now (easy) and click Enter.
Figure 1. Downloading updates from the smit screen
From this screen, scroll down to Download Maintenance Level or Technology Level and click Enter (see Figure 2).
Figure 2. Selecting Download Maintenance Level or Technology Level
At the window in Figure 3, click F4 and choose the appropriate TL level.
Figure 3. Choosing the appropriate TL level
In this case, it is 6100-01, as shown in Figure 4.
Figure 4. TL level of 6100-01
Click Enter and let it run. When it completes, you'll see a summary, as shown in Figure 5.
Figure 5. Summary page
This provides you with the following summary:
  • 59 downloaded
  • 0 failed
  • 36 skipped
Now try it from the command line. In this case, you're going to download TL Two for AIX V6.1. Do this by running the suma -x command (see Figure 6).
Figure 6. Running the suma -x command
After about 30 minutes, it completes successfully (see Figure 7).
Figure 7. Command completed
The files get installed in /usr/sys/inst.images, which is where you would also manually put them if you were to retrieve them using different processes.
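For reference, the non-interactive form used here looks roughly like this (a sketch; the attributes follow suma's Action/RqType/RqName convention, and the TL name matches the level discussed above):
# suma -x -a Action=Download -a RqType=TL -a RqName=6100-02
The same attribute style is used when defining scheduled policies, which is what makes unattended downloads possible.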
Why is SUMA important? Perhaps most importantly, it helps to ensure that your systems have the latest patches that they need. Current fixes are important. Secondly, it downloads the patches without intervention, which allows the systems administrator to focus on more important tasks.

Fix Central

This section reviews Fix Central and discusses how to use it to download TLs and service packs. Fix Central is the central repository for all TLs and service packs for AIX. Among other things, you'll see how to log into Fix Central and retrieve service packs. Completely revamped in October of 2007, it provides fixes and updates for all your software, hardware, and operating systems. This includes the Hardware Management Console (HMC). Using Fix Central, you can search for downloads by APAR, by fix ID, or by text. In addition, there are three download options: IBM Download Director, HTTP, and FTP.
Download a service pack from the exact TL that you downloaded previously during the work with suma from the command line -- 6100-02. First go to Fix Central for System p®, as shown in Figure 8 (see Related topics for a link to Fix Central).
Figure 8. Fix Central for System P
From here, in the drop-down menu, choose your version. Another drop-down menu pops up, where you can select one of the following: fix packs, fix recommendations, fix search, managing updates, and security advisories. Choose fix pack. Click Continue (see Figure 9).
Figure 9. Choosing Fix packs type
From here, select the TL: 6100-02. At this time, you can either download the latest service pack or the entire TL. Choose the entire TL (see Figure 10).
Figure 10. Choosing the entire TL
The options for download include Download Director, bulk FTP, or CD. In this case, use Download Director. I recommend this method because it has a friendly interface and the flexibility to pause downloads.
Figure 11. Download Director
The length of time is dependent on your Internet connection. For me, on broadband, this took roughly an hour.
As a systems administrator, the Fix Central URL should definitely be one of your browser favorites. Fix Central helps you keep your systems up to date and is the best method of manually retrieving data for your upgrades. You really can't be an effective AIX systems administrator without knowing how to use this tool.

Upgrading a TL

Now review the procedures to upgrade your system to the next TL.
First log in as root: # su - root. Make sure you back up your system. If you prefer, you can also use alt_disk_install or multibos; the bottom line is that you need a plan B if you must go back to your prior level. You should also commit any previously applied updates, because a TL upgrade cannot be rejected, and committing makes it easier to track and reject subsequent PTFs.
Do a backup using the mksysb command, for example: # mksysb -i /dev/rmt0. When the backup is completed, you're set!

Installation

Create a .toc file. This is done by running the inutoc command (see Figure 12). You run this in the directory where your filesets reside. If you don't have the .toc file, your update will not work.
Figure 12. Running the inutoc command
When this is completed, you are ready to start the upgrade. Move to the directory where your .toc file resides. If you do this, you will not have to specify a path name during your upgrade: # smit update_all.
On the screen in Figure 13, you will be making several changes.
Figure 13. Update software screen
For Input device/directory for software, put in the dot (.). If you remembered to cd to the directory that has the .toc file, you don't need to specify the full path name. In this case, the software was not committed, though as a practical matter, because you can't back out of a TL upgrade, you really should say yes to limit the number of filesets stored on your system -- a real disk hog. In this case, you first preview the data to ensure that there will not be any problems; this is the third option on the smit menu (shown in Figure 13). The preview option does nothing except validate whether something might be missing as a prerequisite of the upgrade. This is good to do to avoid surprises: you don't want to find out something is missing during your two-hour maintenance window of the month. You can run a preview at any time without any impact to the system.
In our case, as you can see in Figure 14, the preview is successful and there are no failures. So you are ready to move on.
Figure 14. Output of the preview
When you are ready to run the upgrade, change the preview to no. You will also have to change the default field that relates to << Accept new license agreement >>. For some reason, AIX defaults to no. Change it to yes.
After you click Enter, it will prompt you to make sure you want to do this. Click Enter to continue to start the process (see Figure 15).
Figure 15. Start the upgrade process
This can run for up to an hour, depending on the speed of your system and the type of upgrade you are performing. When the process is complete, you can scroll down to the summary section to see whether you've been successful, which in this case is yes (see Figure 16).
Figure 16. Success
Your final step is to reboot the box. Make sure you perform this step before bringing your applications back and going live. After I rebooted the box, I ran the oslevel command to confirm the new system level (see Listing 1).
Listing 1. Confirming the new system level
lpar46ml16fd_pub > # oslevel -s
6100-02-01-0847
What does this information signify? It tells you that you are running AIX V6 TL 2, Service Pack 1, released in the 47th week of 2008. The fourth field, 0847, indicates the year and week. Finally, it is highly recommended that you apply the latest service pack when moving to a new TL. In our case we did not have to, because the latest pack was already a part of the TL upgrade.

TL upgrade deployment schedule

When should you actually perform a TL upgrade? There are generally three scenarios where you will make this choice:
  • When your TL is going out of the available support period.
  • You want to test a new distribution level that will be going into production and need to get the longest period of fix support.
  • You want to use the new features and functionality of the TL.
The short answer is that there really is no right or wrong time to perform an upgrade. Some clients need an environment that requires maximum uptime and stability. These clients typically wait until a new TL has been out for at least six months to a year before applying it. Other folks will wait for several TLs to be put into place prior to upgrading to ensure maximum reliability. Clients that like to take advantage of new features and want the latest security patches typically install new service packs shortly after they come out. From a vendor-support perspective, IBM would prefer that you upgrade your TLs (and service packs, for that matter) as soon as they come out. The reason is that it's just easier to support systems that are always at current levels. I like to upgrade at least once a year to a new TL. In doing so, I will also usually wait until at least service pack number two has been released. If you really want to err on the side of caution, wait until service pack number three. That way, you will know that your release is really rock solid.

Summary

This article discussed how to work with technology level upgrades and service packs on AIX. It reviewed recent enhancements and changes in both nomenclature and substance. At the same time, it discussed best practices on how and when to upgrade environments. It looked at various methods of bringing the data to our systems: automatically using SUMA and manually with IBM Fix Central. The article also performed an actual upgrade (with smit) using a download retrieved with SUMA.

Tuesday, June 19, 2018

Understanding AIX levels


What do all those numbers mean on oslevel -s?

If you're looking for some guidance on understanding exactly what level of AIX you're running, you may like to read my article in the IBM Systems Magazine on Understanding AIX versions. It answers questions such as:
  • What's a TL?
  • What's a Service Pack?
  • When you run oslevel -s, what do all the numbers mean?
  • Should you run a migration or a smitty update_all?
  • What's the difference between applying and committing updates?
  • Why does my oslevel go backwards after installing new software?
Here's an extract:

If your system is running AIX 5.3 TL 6 or anything later, "oslevel -s" will look something like this: 6100-02-06-0943. Breaking that down, the first four numbers show the AIX base level. In this example, it's 6100, which means we're running AIX 6.1. Next is the Technology Level (TL), followed by the number of the Service Pack (SP). The last four digits show the release date of the Service Pack using the format YYWW (YY for the year, then WW for the week of the year). So, if your "oslevel -s" command reports 6100-02-06-0943, then you know you're on AIX 6.1, running TL 2, with SP 6. The "0943" tells you that Service Pack came out in week 43 of 2009. It's time to update your system.
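If you want that breakdown scripted, here is a minimal one-liner sketch; it just slices the string on the dashes and assumes a four-digit base level and a 2000s build week:
# echo 6100-02-06-0943 | awk -F- '{printf "AIX %s.%s TL %d SP %d (week %s of 20%s)\n", substr($1,1,1), substr($1,2,1), $2, $3, substr($4,3,2), substr($4,1,2)}'
AIX 6.1 TL 2 SP 6 (week 43 of 2009)
Pipe the real oslevel -s output into the same awk to check a live system.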

Friday, June 15, 2018

Mirroring rootvg for PowerVM Virtual I/O Server

This document describes how to mirror the rootvg for PowerVM Virtual I/O Server (VIOS) using mirrorios (padmin) command.

Symptom

Rootvg is not mirrored, and disk/adapter redundancy needs to be added.

Environment

VIOS 2.2.x

Diagnosing the problem

To check if VIOS is mirrored, login as padmin, and run:
$ lsvg -lv rootvg
If the # of LPs equals the # of PPs for each LV, then rootvg is not mirrored.

Resolving the problem

To mirror rootvg use mirrorios command.
In the following example (taken from VIOS 2.2.4.10), hdisk0 is the only physical volume in rootvg, and hdisk1 will be added to the volume group to be used as the target disk.

To list current physical volumes in rootvg:
$ lsvg -pv rootvg


To list free physical volumes, use lspv command:
$ lspv -free


To add hdisk1 to rootvg:
$ extendvg rootvg hdisk1




To mirror rootvg:
$ mirrorios <target_hdisk>

To confirm mirroring:
$ lsvg -lv rootvg
Notice that the # of PPs is now double the # of LPs.


Note1: By default, the dump device, lg_dumplv, is the only logical volume that is not mirrored.

Note2: At VIOS 2.2 and above, a reboot is no longer required for the quorum change to take effect after mirrorios. APAR IV64049 was created so that rebooting the VIOS after mirrorios would no longer be the default behavior; it is included in 2.2.3.50 and above. The equivalent APAR for VIOS 2.2.2 is IV70130, included in 2.2.2.70 and higher.
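Related to this, recent VIOS levels also accept a -defer flag on mirrorios (an assumption to verify against the documentation for your level), which explicitly skips the automatic reboot that older levels performed:
$ mirrorios -defer hdisk1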

Note3: By default, the mirrorios command automatically updates the bootlist to include the new mirrored disk (hdisk1 in this case). To verify, use the bootlist command:
    $ bootlist -mode normal -ls


To modify the bootlist, removing the network adapter, ent0, and leaving just the mirrored disks, run:
    $ bootlist -mode normal hdisk0 hdisk1

Thursday, June 14, 2018

IBM PowerHA SystemMirror cluster migration


The purpose of this article is to provide a step-by-step guide for migrating an existing PowerHA cluster (at PowerHA 6.1.0) to PowerHA SystemMirror 7.1.2. This article helps in understanding how to plan for and accomplish a successful migration. It provides an overview of the cluster variants in PowerHA 7.1.2, the PowerHA migration process, the various migration methodologies, and the requirements for the migration process. I will discuss some migration limitations and prerequisites along with the planning process, and also introduce the clmigcheck utility, which checks the current cluster configuration for any unsupported elements as well as collecting additional information required for the migration. The actual migration steps are presented in detail for use by customers to seamlessly migrate their two-node PowerHA 6.1 (single-site) clusters to PowerHA 7.1.2.

Cluster Aware AIX

Cluster Aware AIX (CAA) is a built-in clustering capability of the IBM AIX® operating system. Using CAA, administrators can create a cluster of AIX nodes and take advantage of the capabilities of the cluster. CAA has many capabilities, some of which are listed below:
  • Cluster-wide event management
    • Communication and storage events such as node up and down, network adapter up and down, network address changes, and disk up and down
    • Predefined and user-defined events
  • Cluster-wide storage naming service
  • Cluster-wide command distribution
  • Commands and application programming interfaces (APIs) to create clusters across a set of AIX systems: Kernel-based heartbeats and messages provide a robust cluster infrastructure and by default, use multichannel communication between nodes using the network and storage area network (SAN) physical links

Cluster repository disk

A cluster repository disk is a storage device shared across all the cluster nodes. This disk is used as a central repository. You can have only one cluster repository disk. In PowerHA 7.1.2, you can define a backup repository disk, which can be used in case the primary repository disk fails. For a linked cluster (a true XD cluster), each PowerHA site will have its own repository. The repository disk cannot be mirrored using AIX Logical Volume Manager (LVM), so plan to have Redundant Array of Independent Disks (RAID) mirroring for the disk. The minimum space required for a cluster repository disk is 1 GB. Refer to the PowerHA SystemMirror Admin Guide for information on how to define a backup repository disk.

Multicast IP addresses

CAA uses multicast addresses for cluster communication between the nodes in the cluster. It is mandatory to have multicast enabled in your cluster network infrastructure.

Differences between clcomdES and clcomd

Starting with AIX 6.1 TL6 and AIX 7.1, the cluster communication daemon has been integrated into AIX as part of the CAA infrastructure. Some of the differences between the clcomdES subsystem (used by previous versions of PowerHA) and the new clcomd daemon of CAA and PowerHA 7.1 and later are provided in this section.
  • Install: The clcomdES subsystem is part of the PowerHA SystemMirror installation media, whereas clcomd is part of base AIX (delivered with the bos.cluster.rte file set).
  • Name: The subsystem name of traditional cluster communication daemon is clcomdES; the new subsystem name is clcomd.
  • Run ability: The clcomdES daemon is always running on the nodes installed with PowerHA SystemMirror (run from /etc/inittab). The clcomd daemon is always running on nodes even if PowerHA SystemMirror is not installed (this is run from /etc/inittab as well).
  • Port: The clcomdES subsystem uses port 6191 (/etc/services). The clcomd daemon uses port 16191 (/etc/services); it also uses the clcomdES port 6191 if a PowerHA SystemMirror migration is detected.
  • Cluster definition: The clcomdES subsystem uses the /usr/es/sbin/cluster/etc/rhosts file for the initial cluster definition; it can be populated with IP addresses for all available adapters on the node. clcomd, in contrast, uses /etc/cluster/rhosts for the initial cluster definition; this file should be populated with the IP addresses of the cluster members, one per line. Then refresh clcomd using the refresh -s clcomd command (a sketch of this step follows after this list).
  • Definition query: The clcomdES subsystem gets the cluster definition from PowerHA SystemMirror configuration data, whereas clcomd queries the definition of the cluster using kernel API (making use of the CAA infrastructure).
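A minimal sketch of that rhosts step (the addresses are placeholders; use the IP addresses that map to each node's hostname):
# cat > /etc/cluster/rhosts <<EOF
10.10.10.1
10.10.10.2
EOF
# refresh -s clcomd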

Differences between PowerHA 6.1 and PowerHA 7.1 and later

With the introduction of the CAA feature in AIX 7.1 and AIX 6.1 TL6, PowerHA SystemMirror has undergone a lot of architectural changes. Due to these changes, PowerHA 7.1 and later expects the communication path for a cluster node to be set to the IP address mapped to the host name. Some of the differences between PowerHA 6.1 and PowerHA 7.1 and later are:
  • PowerHA 7.1 and later releases are based upon CAA, where monitoring and event management are built into the AIX kernel, providing a robust foundation that is not prone to job-scheduling delays. In the previous releases, PowerHA monitored soft and hard errors within the cluster from various event sources using Reliable Scalable Cluster Technology (RSCT).
  • In PowerHA 6.1 and lower releases, the main communication path goes from PowerHA to group services (grpsvcs subsystem of RSCT) and then to topology services (topsvcs subsystem of RSCT) and back. In PowerHA 7.1 and later releases, the main communication path goes from PowerHA to group services (cthags) and then to CAA.
  • With PowerHA 7.1, event management is handled by using a new pseudo file system architecture called Autonomic Health Advisor File System (AHAFS). This is used by CAA as its monitoring framework.
  • PowerHA 7.1 uses the cluster repository disk, Fibre Channel (FC)/SAN adapters and multicasting for heartbeating. Heartbeat is performed by sending and receiving special gossip packets across the network using the multicast protocol. The gossip packets are always replied to by other nodes. In older releases of PowerHA, IP and non-IP networks participated in heartbeats and detection or diagnosis of network, node, or network adapter failures. These heartbeat packets were never acknowledged.
  • PowerHA 7.1 and later releases use a special gossip protocol over the multicast address to determine node information and implement scalable reliable multicast. Older releases use traditional cluster communication daemon (clcomdES subsystem) which gets information from PowerHA Object Data Manager (ODM) and uses the heartbeat mechanism provided by RSCT for node information processing.
  • PowerHA 7.1 and later releases introduced system events, which are handled by the clevmgrdES subsystem. The root volume group (rootvg) system event allows monitoring of loss of access to the rootvg. Loss of access to rootvg results in a log entry in the system error log and a system reboot. Older releases of PowerHA do not handle rootvg failures.

PowerHA 7.1.2 cluster variants

PowerHA SystemMirror 7.1.2 allows customers to configure three different styles of clusters, namely local, stretched, and linked clusters.
Local cluster – This is a simple, multinode, single-site or local cluster configured using nodes or logical partitions (LPARs) within a single data center. This is the most typical cluster configuration, providing for local PowerHA cluster fallover. Local fallover provides a faster transition onto another machine than a fallover going to a geographically dispersed site. Local clusters can benefit from advanced functions such as IBM PowerVM Live Partition Mobility (LPM) between machines within the same site. This combination of IBM PowerVM functions and IBM PowerHA SystemMirror clustering is useful for helping to avoid any service interruption for a planned maintenance event while protecting the environment in the event of an unforeseen outage.
Stretched cluster – The term denotes a cluster that has sites defined within the same geographic location. This provides for a campus style disaster recovery and high availability cluster with cluster nodes separated by a shorter distance. The sites can be near enough to have shared logical unit numbers (LUNs) in the same SAN. The key aspect about stretched cluster is that it uses a shared repository disk. Stretched clusters can support cross-site LVM mirroring, IBM HyperSwap®, and Geographic Logical Volume Manager (GLVM). Extended distance sites with IP-only connectivity are not possible with this configuration.
Figure 1. Example of a stretched cluster
Example of a stretched clusterA stretched cluster configuration can also be used with PowerHA 7.1.2 Standard Edition with the use of LVM cross-site Mirroring. The stretched cluster is capable of using all three levels of cluster communication (TCP/IP, SAN heartbeat and repository disk). The distance can be up to 15 km, with direct SAN links and up to 120 km with dense wavelength division multiplexing (DWDM) or coarse wavelength division multiplexing (CWDM) or other SAN extenders. This provides for synchronous replication or mirroring.
Linked cluster – The term denotes a cluster that has sites defined across geographic locations, allowing configuration of a traditional extended distance cluster between two sites, for example Brisbane and Singapore. The key aspect of a linked cluster that makes it different from extended distance clusters in previous versions is the use of SIRCOL in CAA. This means that each site has its own CAA repository disk, which is replicated automatically between sites by CAA. Linked cluster sites communicate with each other using unicast and not multicast, as is the case with a stretched or normal cluster. However, each site internally still uses multicast, and therefore, multicast must still be enabled in the network at each site.
Figure 2. Illustration of a linked cluster
Illustration of a linked clusterAll the interfaces are defined in this type of configuration as CAA gateway addresses. CAA maintains the repository information automatically across sites through unicast address communication.

Migration overview

Unlike previous PowerHA SystemMirror migration methods, there will be some cases where migration will have to be done manually by the customer, resulting in a complete cluster outage. These conditions can be detected at the time we run /usr/sbin/clmigcheck. There are three supported migration paths for migrating PowerHA SystemMirror 6.1 to PowerHA SystemMirror 7.1.2. Each one requires an AIX upgrade or migration to AIX 6.1 TL8 SP1 or later, or AIX 7.1 TL2 SP1 or later. Migration to PowerHA SystemMirror 7.1.2 is a two-phase process.
  • Phase I: AIX migration or upgrade based on the current AIX level.
  • Phase II: PowerHA SystemMirror migration
AIX migration
Refer to the AIX information center for steps on how to migrate or upgrade AIX.
PowerHA migration
PowerHA SystemMirror provides the following three different migration options.
  • Offline migration: As the name suggests, this type of migration involves bringing down the entire PowerHA cluster, installing PowerHA SystemMirror 7.1.2, and restarting cluster services for one node at a time.
  • Rolling migration: During rolling migration, the workload is moved from the node where it is currently running to another node in the system. This is followed by the installation of PowerHA 7.1.2 and the starting of cluster services. These steps are followed on all the remaining nodes.
  • Snapshot migration: This really is not a migration at all. Customers remove the previous version of PowerHA SystemMirror and install the newer PowerHA SystemMirror 7.1.2. They then use the PowerHA SystemMirror 7.1.2 configuration interface, either the Director GUI, the System Management Interface Tool (SMIT), or the command line, to restore the same configuration as they previously had, that is, restoring from a cluster snapshot.

Migration requirements

Before you start migrating the cluster nodes, ensure that the following tasks are completed:
  1. Back up all the application and system data.
  2. Create a back out or reversion plan. A back out plan allows for easy restoration of cluster and AIX configuration in case migration runs into some problem. System backup should be created using the mksysb and savevg utilities.
  3. Ensure that the Communication Path to Node option in the PowerHA cluster nodes is set to the IP address mapping to the hostname.
  4. Save the existing cluster configuration. Also, save any user provided scripts, most commonly custom events, pre and post event scripts, notification scripts, and application controller scripts.
Some migration requirements are as follows:
  1. All cluster nodes must have one shared disk of at least 1 GB that will be used for the cluster repository. The following FC and SAS adapters are supported for connection to the repository disk:
    • 4 GB Single-Port Fibre Channel PCI-X 2.0 DDR Adapter (FC 1905; CCIN 1910)
    • 4 GB Single-Port Fibre Channel PCI-X 2.0 DDR Adapter (FC 5758; CCIN 280D)
    • 4 GB Single-Port Fibre Channel PCI-X Adapter (FC 5773; CCIN 5773)
    • 4 GB Dual-Port Fibre Channel PCI-X Adapter (FC 5774; CCIN 5774)
    • 4 Gb Dual-Port Fibre Channel PCI-X 2.0 DDR Adapter (FC 1910; CCIN 1910)
    • 4 Gb Dual-Port Fibre Channel PCI-X 2.0 DDR Adapter (FC 5759; CCIN 5759)
    • 8 Gb PCI Express Dual Port Fibre Channel Adapter (FC 5735; CCIN 577D)
    • 8 Gb PCI Express Dual Port Fibre Channel Adapter 1Xe Blade (FC 2B3A; CCIN 2607)
    • 3 Gb Dual-Port SAS Adapter PCI-X DDR External (FC 5900 and 5912; CCIN 572A)
    For the most current list of supported storage adapters, refer to the IBM PowerHA SystemMirror for AIX web page.
  2. Ensure that the current network infrastructure supports multicast. Enable multicast traffic on all network switches connected to all cluster nodes.
  3. Ensure that the /etc/cluster/rhosts file is properly filled with hostnames or IP addresses of all cluster nodes (IP addresses mapping to the host name), else cluster communication will fail and migration will not take place.
  4. Ensure that all cluster nodes have the requisite version of AIX installed (AIX 6.1 TL8 SP1 or later, or AIX 7.1 TL2 SP1 or later, as noted in the migration overview).
  5. Ensure that Virtual I/O Server (VIOS) 2.2.0.1-FP24-SP01 or later is installed.
  6. The following additional file sets are required (a quick verification sketch follows after this list):
    • bos.cluster
    • bos.ahafs
    • bos.clvm.enh
    • devices.common.IBM.storfwork (required for SAN heartbeat)
  7. RSCT version:
    • rsct.core.rmc 3.1.4.0
    • rsct.basic 3.1.4.0
    • rsct.compat.basic.hacmp 3.1.4.0
    • rsct.compat.clients.hacmp 3.1.4.0
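As mentioned above, a quick way to verify these prerequisites (the .rte suffixes below are assumptions; adjust the names to match your lslpp output):
# lslpp -L bos.cluster.rte bos.ahafs bos.clvm.enh devices.common.IBM.storfwork.rte
# lslpp -L rsct.core.rmc rsct.basic rsct.compat.basic.hacmp rsct.compat.clients.hacmp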

Migration limitations

There are certain limitations in migrating to PowerHA 7.1.2 because of the structural changes. The limitations are listed below:
  • Not all configurations can be migrated.
    • Configurations with FDDI, ATM, X.25 or Token Ring cannot be migrated and must be removed before migration.
    • Configurations with IP Address Takeover (IPAT) using replacement or Hardware Address Takeover (HWAT) cannot be migrated, and must be removed from configuration.
    • Configurations with heartbeat over IP aliasing must be removed before migration.
  • Non-IP networking is accomplished differently. PowerHA 7.1.2 (and the underlying CAA) uses multicast, FC/SAN, and the cluster repository disk for heartbeating. Traditional non-IP networks such as rs232, diskhb, mndhb, tmscsi, and tmssa are not supported. These will be removed during migration.

clmigcheck utility

The clmigcheck utility is part of base AIX, included with AIX 6.1 TL6 or later. It is an interactive tool that verifies the current cluster configuration, checks for unsupported elements, and collects additional information required for migration. You must run this command on all cluster nodes, one node at a time, before installing PowerHA 7.1.2. The initial screen is as follows:

----------[PowerHA System Mirror Migration Check] -------------
Please select one of the following options:
1 = Check ODM configuration.
2 = Check snapshot configuration.
3 = Enter repository disk and multicast IP addresses.
Select one of the above, "x" to exit or "h" for help:
Note that at any prompt, you can type h for help about that data entry prompt.
  • Option 1 checks SystemMirror configuration data (/etc/es/objrepos) and provides errors and warnings if there are any elements in the configuration that must be removed manually. In that case, the flagged elements must be removed, cluster configuration verified and synchronized, and this command must be re-run until the SystemMirror configuration data check completes without errors.
  • Option 2 checks a snapshot (present in /usr/es/sbin/cluster/snapshots) and provides error information if there are any elements in the configuration that will not migrate. Because PowerHA SystemMirror provides no tools to edit a snapshot, any errors checking this snapshot means that it cannot be used for migration. In this case, the customer might have to apply the snapshot on the back-level PowerHA SystemMirror and update the configuration manually. Save the new snapshot and start the procedure all over again.
  • Option 3 queries the customer for additional configuration needed and saves it in a file in /var on every node in the cluster. When option 3 is selected from the main screen, you will be prompted for repository disk and multicast dotted decimal IP addresses. This data will be stored in a file (/var/clmigcheck/clmigcheck.txt) on every node in the cluster. When PowerHA SystemMirror 7.1.2 is installed, this file is read and the SystemMirror configuration data is populated. The customer must use either option 1 or option 2 successfully before running option 3, which collects and stores configuration data.
When the /usr/sbin/clmigcheck command is run on the last node of the cluster before installing PowerHA SystemMirror 7.1.2, the CAA infrastructure will be started. This can be verified by running the /usr/sbin/lscluster -m command.

FC/SAN based heartbeat mechanism

The cluster communication in PowerHA 7.1 and later (and CAA) is achieved by communicating over multiple redundant paths. This includes the important process of sending and processing the cluster heartbeats by each participating node. The following redundant paths provide robust clustering foundation:
  • TCP/IP (basically using multicast address)
  • Optional SAN or FC adapters
  • Repository disk
SAN-based path is a redundant, high-speed path of communication established between the hosts by using the SAN fabric that exists in any data center between hosts. Discovery-based configuration reduces the burden of configuring the links. PowerHA 7.1.2 supports SAN-based heartbeat within a site. It is not mandatory to set up the FC or SAN-based heartbeat path; if configured, SANComm (sfwcomm, as seen in lscluster -i output) provides an additional heartbeat path for redundancy.
The SAN heartbeat infrastructure can be accomplished in several ways:
  • Using real adapters on the cluster nodes and enabling the storage framework capability (sfwcomm device) of the host bus adapters (HBAs). Currently, FC and SAS technologies are supported. The Setting up cluster storage communication link provides more details about supported HBAs and the required steps to set up the storage framework communication.
  • In a virtual environment using N-Port ID Virtualization (NPIV) or virtual Small Computer System Interface (vSCSI) with a VIOS instance, enabling the sfwcomm interface requires activating the target mode (the tme attribute) on the real adapter in the VIOS instance and defining a private virtual LAN (VLAN) (ID 3358) for communication between the partition containing the sfwcomm interface and VIOS. The real adapter on the VIOS must be a supported HBA.
The target mode enabled (tme) attribute for a supported adapter is only available when the minimum AIX level for CAA is installed. The configuration steps are as follows:
  1. Configure the FC adapters for SAN heartbeat on the VIOS instances. Use the chdev command to enable the tme attribute:
    # chdev -l fcsX -a tme=yes -perm
  2. Run the chdev command to enable dynamic tracking and fast failure recovery on all FSCSI adapters:
    # chdev -l fscsiX -a dyntrk=yes -a fc_err_recov=fast_fail
  3. Restart the VIOS instances.
  4. On the Hardware Management Console (HMC), create a new virtual Ethernet adapter for each cluster LPAR and VIOS. Set the VLAN ID to 3358 (no other VLAN ID is allowed).
  5. On the VIOS, run the cfgmgr command and check for the virtual Ethernet adapter and sfwcomm device using the lsdev command.
    # lsdev -C | grep sfwcomm
    sfwcomm0 Available 01-00-02-FF Fibre Channel Storage Framework Comm
    sfwcomm1 Available 01-01-02-FF Fibre Channel Storage Framework Comm
  6. On the cluster nodes, run the cfgmgr command and check for the virtual Ethernet and sfwcomm device using the lsdev command.
  7. No other configuration is required in PowerHA. When the cluster is up and running, you can check the status of the SAN heartbeat using the lscluster -i command.
You can run clras from /usr/lib/cluster, as shown below, to check whether sfwcomm and dpcomm are working.

(0) root @ <nodename>: /usr/lib/cluster # ./clras sancomm_status
+------------------------+--------------------------------------+--------+
| NAME                   | UUID                                 | STATUS |
+------------------------+--------------------------------------+--------+
| servr2.abcdefg.xxx.com | 6c3af126-d8d4-11e2-9c7a-00145ee770e9 | UP     |
+------------------------+--------------------------------------+--------+
(0) root @ <nodename>: /usr/lib/cluster # ./clras dpcomm_status
+------------------------+--------------------------------------+--------+
| NAME                   | UUID                                 | STATUS |
+------------------------+--------------------------------------+--------+
| servr1.abcdefg.xxx.com | 54119a46-d8d4-11e2-ac6b-00145ee770e9 | UP     |
| servr2.abcdefg.xxx.com | 6c3af126-d8d4-11e2-9c7a-00145ee770e9 | UP     |
+------------------------+--------------------------------------+--------+
(0) root @ <nodename>: /usr/lib/cluster #

PowerHA migration

Before migrating PowerHA to PowerHA 7.1 and later, test whether the nodes in your environment support multicast-based communication. To test end-to-end multicast communication for all nodes used to create the cluster on your network, run the mping command, which is part of the CAA framework of AIX. You can run mping with a specific multicast address; otherwise the command uses a default address. The following is an example of the mping command for the multicast address 228.168.101.43, where nodeA is the receiver and nodeB is the sender. You must run the following commands from both the nodes at the same time:
  1. From nodeA, run mping -r -v -c 5 -a 228.168.101.43
  2. From nodeB, run mping -s -v -c 5 -a 228.168.101.43
Repeat the steps, this time reversing the sender and receiver.

Offline migration

You can choose to stop cluster services on all nodes and then install PowerHA 7.1. After all the checks are successful, the clconvert utility runs from installp to convert the configuration represented in the back-level PowerHA configuration data classes to the PowerHA 7.1 and later version; this includes running mkcluster to create the CAA version of the cluster, in addition to removing any discovered interface not in the previous version of PowerHA (such as the SAN/FC heartbeat interface, sfwcomm). After AIX has been migrated, follow these steps to migrate the PowerHA level to PowerHA 7.1 and later.
  1. Stop cluster services on all cluster nodes. Use the smitty clstop command and select the Bring a Resource Group Offline option.
  2. Ensure that the cluster services have been stopped. Use the lssrc -ls clstrmgrES command to check the cluster state; it should be ST_INIT (see the check sketch after these steps).
  3. Run /usr/sbin/clmigcheck on the first node and select option 1.
  4. If the cluster cannot be migrated, the clmigcheck utility will indicate that in error messages. Remove the unsupported elements. If no errors are reported, skip step 5.
  5. Perform a verification and then synchronize.
  6. Run clmigcheck once again and select option 1. The clmigcheck command says The ODM has no unsupported elements, as shown in the following figure.
  7. Now select option 3 to enter the repository disk information and optionally provide the multicast IP address. The data is saved in the /var/clmigcheck/clmigcheck.txt file on each node. You need to enter this information only on the first node.
  8. Populate the /etc/cluster/rhosts file on this node with the IP addresses of all the cluster nodes (addresses corresponding to hostname command).
  9. Refresh the clcomd daemon by running the refresh -s clcomd command.
  10. Install PowerHA 7.1 and later on the first node.
  11. Run the following steps on all remaining nodes, one at a time.
    1. Run /usr/sbin/clmigcheck. It prompts you to install the new version of PowerHA, as shown in the message in Figure 4.
    2. Add the IP addresses of all the cluster nodes in the /etc/cluster/rhosts file and refresh the clcomd daemon.
    3. Install PowerHA 7.1.2.
  12. /usr/sbin/clmigcheck detects the last node when it runs, and it creates the cluster-aware infrastructure, that is, a CAA cluster on all the nodes. This can be verified by running the /usr/sbin/lscluster -m command.
  13. Update the /etc/cluster/rhosts file, refresh clcomd and install PowerHA SystemMirror 7.1.
  14. Start the cluster services, one node at a time, and ensure that each node successfully joins the cluster. After the last node has joined the cluster, your migration is successful.
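As referenced in step 2, a quick state check between node migrations (the grep filter is an assumption about the lssrc -ls output format, which includes a "Current state" line):
# lssrc -ls clstrmgrES | grep -i state
Current state: ST_INIT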

Rolling migration

In rolling migration, the newer version of PowerHA is installed (one node at a time), while the remaining nodes continue to run cluster services and host the workload. In this mixed-version state, PowerHA continues to respond to cluster events. In a rolling migration, you stop cluster services on the target node with the Move Resource Groups option. There is a brief interruption while the application moves to the backup or fallover node, and a second interruption while the application moves back to the primary or home node after it has been migrated. The steps to migrate are as follows:
  1. Run /usr/sbin/clmigcheck on the first node, and select option 1.
  2. If the cluster cannot be migrated, error messages will be displayed. In that case, remove all unsupported elements.
  3. Verify and sync the corrected cluster definition from the first node.
  4. Populate the /etc/cluster/rhosts file on this node, and refresh the clcomd daemon.
  5. Run clmigcheck again to verify that there are no further unsupported elements. The message The ODM has no unsupported elements is displayed, as shown in the following figure.
  6. Stop the cluster services with the Move Resource Groups option.
  7. Run clmigcheck again and select option 3. Enter the shared repository disk and, optionally, provide the multicast IP address. The information is saved in /var/clmigcheck/clmigcheck.txt.
  8. Install the newer version of PowerHA on this node.
  9. After the installation is complete, start the cluster services.
  10. On the remaining nodes, follow these steps, one node at a time, after stopping the cluster services with the Move Resource Groups option.
    1. Populate the /etc/cluster/rhosts file with the IP addresses of all cluster nodes.
    2. Refresh the clcomd daemon.
    3. Run /usr/sbin/clmigcheck. A message is displayed, as shown in the following figure.
    4. Install PowerHA 7.1.2 on the node.
    5. Start cluster services.
  11. /usr/sbin/clmigcheck will detect the last node when it runs and will create a CAA cluster on all the nodes. Run the /usr/sbin/lscluster -m command to verify this.
  12. Start cluster services on the last node. After the last node joins the cluster, your migration is complete.

Snapshot migration

The snapshot migration path requires cluster services to be down on all the nodes, thus calling for a cluster outage or application downtime. To migrate a cluster using this path, you need to perform the following steps.
  1. Create a cluster snapshot. By default, the snapshot is saved in the /usr/es/sbin/cluster/snapshots directory. Save a copy of it in /tmp or some other location.
  2. Stop the cluster services on all nodes using the Bring Resource Groups Offline option.
  3. Run /usr/sbin/clmigcheck on the first node, and then select option 2. Enter the snapshot name.
  4. If the utility reports errors for unsupported elements, the snapshot cannot be migrated. In this case, remove all unsupported elements reported by clmigcheck. If no errors are reported, go to step 7.
  5. Take a new cluster snapshot and save a copy of it in /tmp.
  6. Run /usr/sbin/clmigcheck again with option 2 to ensure that there are no unsupported elements.
  7. Choose option 3 in /usr/sbin/clmigcheck to enter the shared disk (repository disk) and optionally, the multicast address.
  8. Remove the existing version of PowerHA software on all cluster nodes.
  9. In the /etc/cluster/rhosts file, fill in the IP addresses of all cluster nodes (IP addresses corresponding to the host name command).
  10. Refresh clcomd using the refresh -s clcomd command.
  11. Install a newer version of PowerHA.
  12. Convert the snapshot using the clconvert_snapshot command:
    # /usr/es/sbin/cluster/conversion/clconvert_snapshot -v 6.1.0 -s <snapshot file name>
  13. Restore the converted snapshot. Use the path: smitty sysmirror -> Cluster Nodes and Networks -> Manage the Cluster -> Snapshot Configuration -> Restore the Cluster Configuration from a Snapshot.
  14. After the restoration is done, run verification and synchronization. This creates and enables the CAA infrastructure. You can verify this using the lscluster -m command.
  15. Start the cluster services, one node at a time. After the last node joins the cluster, the migration is complete.