Saturday, December 15, 2012

IBM AIX - Creating a New LPAR using HMC GUI


IBM AIX - Creating a New LPAR using the HMC GUI!!

Step 1: Log in to the HMC GUI.
To create a new logical partition, you must log in to the HMC either locally or remotely, using WebSM or a web browser. The example below uses WebSM to log in to the HMC.

[Screenshot: WebSM logon dialog]

Type the hostname of the HMC that you want to connect to and click the Log On button. In this example we log in to adcaphmc01 to create a new logical partition.

[Screenshot: HMC logon prompt]

The WebSM client then communicates with the WebSM server services on the HMC and prompts for a user name and password. Enter the username and password for the HMC and click the Log On button.
A successful authentication gives you access to the HMC GUI as shown below.

[Screenshot: HMC GUI main window]

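Note: besides WebSM and a web browser, the HMC also accepts ssh logins for command-line work, which the HMC CLI section later in this post relies on. A minimal sketch, assuming the example HMC hostname and the standard hscroot user:

ssh hscroot@adcaphmc01
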
Step 2: Plan the resource allocation for the new LPAR. Before creating any new LPAR, you should plan the resource allocation thoroughly. Agree in advance on the minimum, desired and maximum values for processors and memory, on shared versus dedicated processors, and on the type and number of I/O cards.
In the example to follow, we are creating a new LPAR adcapsap03-HRNEW with the following resources:
1. CPU – Shared mode; processing units 0.2 min, 0.5 desired, 1.0 max; virtual CPUs 1 min, 1 desired, 2 max
2. Memory – 4 GB min, 8 GB desired, 10 GB max
3. I/O cards –
1 * SCSI controller with 4 SCSI disks (U5791.001.99B064P-P1-T5)
2 * 2 Gbps Fibre Channel cards (U5791.001.99B064P-P1-C05, U5791.001.99B064P-P2-C08)
1 * Ethernet card (U5791.001.99B0760-P2-C06)
Resource allocation for the SAP p590 is mentioned in a separate document.
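
For reference, the same allocation can be expressed as attributes for the mksyscfg HMC command covered later in this post. A sketch only, assuming the managed system name thames-p590-01 used in this example (the physical I/O slots still have to be assigned separately):

mksyscfg -r lpar -m thames-p590-01 -i "name=adcapsap03-HRNEW,profile_name=Normal,lpar_env=aixlinux,min_mem=4096,desired_mem=8192,max_mem=10240,proc_mode=shared,min_proc_units=0.2,desired_proc_units=0.5,max_proc_units=1.0,min_procs=1,desired_procs=1,max_procs=2,sharing_mode=uncap,uncap_weight=128,boot_mode=norm"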

Step 3: Creating the new LPAR
On the left panel, click adcaphmc01.uk.rweutil.net -> Server and Partitions -> Server Management. The right panel now displays the p590 server and the partitions under it. Right-click on thames-p590-01

[Screenshot: managed server context menu]

and select Create -> Logical Partition.

This starts the wizard for creating the new LPAR.

[Screenshot: Create Logical Partition wizard]

On this screen, type the partition name as adcapsap03-HRNEW and select the partition environment AIX or Linux.

Click Next.

[Screenshot: Workload group screen]

This screen prompts for adding the logical partition to a workload group. Since we are not using any workload group, select No and click Next. The following screen prompts for the logical partition profile name. Type the profile name as Normal and click Next. The following wizard screen prompts for the minimum, desired and maximum values of memory.

[Screenshot: Memory settings screen]

As mentioned in step 2, we select minimum, desired and maximum memory values of 4 GB, 8 GB and 10 GB respectively.
Click Next. The following wizard screen prompts for the processor allocation.

[Screenshot: Processor allocation screen]

Because we are using micro-partitioning for the development, test and training servers, select the Shared radio button.
For production servers we would assign dedicated processors to the LPAR. In this example we are creating an LPAR for a SAP development server, so we select Shared mode and click Next.
The following screen prompts for the processing settings.

[Screenshot: Processing settings screen]

As mentioned in step 2, we enter minimum, desired and maximum processing unit values of 0.2, 0.5 and 1.0 respectively.
Click the Advanced button.

[Screenshot: Advanced processing settings]

Select the processor sharing mode Uncapped with a weight of 128. Under the Virtual processors panel, select minimum, desired and maximum values of 1, 1 and 2 respectively. Click OK, then click Next to proceed further.
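
The uncapped weight can also be changed later without re-running the wizard, using chsyscfg on the HMC command line. A sketch with the profile and LPAR names used in this example:

chsyscfg -r prof -m thames-p590-01 -i "name=Normal,lpar_name=adcapsap03-HRNEW,uncap_weight=128"
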
The following wizard screen will prompt for I/O resources.

[Screenshot: Physical I/O resources screen]

Select the I/O resources as mentioned in step 2.

[Screenshot: I/O resource selection]

Select the U5791.001.99B064P-P1-T5 SCSI bus controller and click Add as required. (Add as required marks the adapter as a required resource for the profile; Add as desired would make it optional.)
This adds the I/O device to the LPAR profile in the pane below.

[Screenshot: SCSI controller added to the profile]

In a similar way, add the U5791.001.99B064P-P1-C05 and U5791.001.99B064P-P2-C08 Fibre Channel cards to the LPAR profile.

[Screenshot: Fibre Channel cards added to the profile]

Similarly, add the U5791.001.99B0760-P2-C06 Ethernet adapter to the profile I/O devices.

[Screenshot: Ethernet adapter added to the profile]

Once all the required resources are added to the profile I/O devices, click Next.
The following wizard screen prompts for I/O pools. Since we are not using any shared I/O pool, ignore this screen.

[Screenshot: I/O pools screen]

Click Next.
The following wizard screen prompts for the Virtual I/O Adapters. We are not making any use of virtual I/O devices in this example.

[Screenshot: Virtual I/O Adapters screen]

Select No and click Next.
The following screen prompts for the power controlling partition. For all the LPARs that we have defined, the default partition 83-8C6DB(1) acts as the power controlling partition.

[Screenshot: Power controlling partition screen]

Ensure that the default partition 83-8C6DB(1) is displayed in the drop-down list and click Next.
The following screen prompts for the boot mode.

[Screenshot: Boot mode screen]

Select the boot mode Normal and click Next.
The following wizard screen is a summary screen. If you want to make any changes, use the Back button to return to the previous screens and adjust your selections. If you are sure of your selections, click Finish to create the new LPAR.
You will now see a status screen.

[Screenshot: Status screen]

Once the LPAR is created successfully, the new LPAR is displayed under the managed system thames-p590-01.

[Screenshot: new LPAR under the managed system]

You can see the new LPAR adcapsap03-HRNEW under the managed system thames-p590-01.
You can recheck the resource allocation by right-clicking the profile and selecting Properties.

[Screenshot: LPAR profile properties]

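The same check can also be done from the HMC command line with lssyscfg. A sketch using the names from this example:

lssyscfg -r prof -m thames-p590-01 --filter "lpar_names=adcapsap03-HRNEW,profile_names=Normal"
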
This ends the procedure to create a new LPAR using the HMC GUI.

IBM AIX - Creating a New LPAR using HMC CLI

IBM AIX - Creating a New LPAR using HMC CLI

Here is an example; for more information, see man mksyscfg:
mksyscfg -r lpar -m MACHINE -i "name=LPARNAME,profile_name=normal,lpar_env=aixlinux,min_mem=512,desired_mem=2048,max_mem=4096,proc_mode=shared,min_proc_units=0.2,desired_proc_units=0.5,max_proc_units=2.0,min_procs=1,desired_procs=2,max_procs=2,sharing_mode=uncap,uncap_weight=128,boot_mode=norm,conn_monitoring=1,shared_proc_pool_util_auth=1"
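
To confirm that the partition was created, you can list it afterwards; a sketch with the same placeholder names:

lssyscfg -r lpar -m MACHINE --filter "lpar_names=LPARNAME"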

If you want to create more LPARs at once, you can use a configuration file and provide it as input to mksyscfg.
Here is an example for 3 LPARs, with each definition starting on a new line:

name=LPAR1,profile_name=normal,lpar_env=aixlinux,all_resources=0,min_mem=1024,desired_mem=9216,max_mem=9216,proc_mode=shared,min_proc_units=0.3,desired_proc_units=1.0,max_proc_units=3.0,min_procs=1,desired_procs=3,max_procs=3,sharing_mode=uncap,uncap_weight=128,lpar_io_pool_ids=none,max_virtual_slots=10,"virtual_scsi_adapters=6/client/4/vio1a/11/1,7/client/9/vio2a/11/1","virtual_eth_adapters=4/0/3//0/1,5/0/4//0/1",boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,shared_proc_pool_util_auth=1
name=LPAR2,profile_name=normal,lpar_env=aixlinux,all_resources=0,min_mem=1024,desired_mem=9216,max_mem=9216,proc_mode=shared,min_proc_units=0.3,desired_proc_units=1.0,max_proc_units=3.0,min_procs=1,desired_procs=3,max_procs=3,sharing_mode=uncap,uncap_weight=128,lpar_io_pool_ids=none,max_virtual_slots=10,"virtual_scsi_adapters=6/client/4/vio1a/12/1,7/client/9/vio2a/12/1","virtual_eth_adapters=4/0/3//0/1,5/0/4//0/1",boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,shared_proc_pool_util_auth=1
name=LPAR3,profile_name=normal,lpar_env=aixlinux,all_resources=0,min_mem=1024,desired_mem=15360,max_mem=15360,proc_mode=shared,min_proc_units=0.4,desired_proc_units=1.0,max_proc_units=4.0,min_procs=1,desired_procs=4,max_procs=4,sharing_mode=uncap,uncap_weight=128,lpar_io_pool_ids=none,max_virtual_slots=10,"virtual_scsi_adapters=6/client/4/vio1a/13/1,7/client/9/vio2a/13/1","virtual_eth_adapters=4/0/3//0/1,5/0/4//0/1",boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,shared_proc_pool_util_auth=1 
Copy this file to the HMC and run: mksyscfg -r lpar -m SERVERNAME -f /tmp/profiles.txt

If you already have LPARs created, you can use this command to get their configuration, which can be reused as a template: lssyscfg -r prof -m SERVERNAME --filter "lpar_ids=X,profile_names=normal"
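
For example, you can capture an existing profile straight into a file, edit the names and values, and feed it back to mksyscfg (the file name is just an example):

lssyscfg -r prof -m SERVERNAME --filter "lpar_ids=X,profile_names=normal" > /tmp/profiles.txt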



IBM AIX - OS Upgrade nimadm 12 phases

IBM AIX - OS Upgrade nimadm 12 phases!!

The nimadm utility offers several advantages over other migration methods:

Reduced downtime for the client: The migration executes while the system is up and running as normal. There is no disruption to any of the applications or services running on the client, so the upgrade can be done at any time. Once the upgrade is complete, we need to take downtime on the client and schedule a reboot to restart the system at the later level of AIX.


Flexibility: The nimadm process is very flexible and it can be customized using some of the optional NIM customization resources, such as image_data, bosinst_data, pre/post_migration scripts, exclude_files, and so on.


Quick recovery from migration failures: All changes are performed on the copied rootvg (altinst_rootvg). If there are any problems with the migration, the original rootvg is still available and the system has not been impacted. If a migration fails or terminates at any stage, nimadm is able to quickly recover from the event and clean up afterwards. There is little for the administrator to do except determine why the migration failed, rectify the situation, and attempt the nimadm process again. If the migration completed but issues are discovered after the reboot, then the administrator can back out easily by booting from the original rootvg disk.
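
For example, backing out after the reboot is just a matter of pointing the boot list back at the original rootvg disk and rebooting. A sketch, where hdisk1 stands for the original rootvg disk as in the lspv output at the end of this post:

# bootlist -m normal hdisk1
# shutdown -Fr
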
The nimadm command performs a migration in 12 phases. All migration activity is logged on the NIM master in the /var/adm/ras/alt_mig directory. It is useful to understand each phase before performing a migration. After starting the migration from the NIM master we see output like the following; these are the pre-alt_disk steps.
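
For reference, the migration shown here was started with a command of this shape (a sketch; the SPOT and lpp_source names spot_6100-06 and lpp_6100-06 are assumptions, while nimvg, webmanual01 and hdisk0 all appear in the log below):

# nimadm -j nimvg -c webmanual01 -s spot_6100-06 -l lpp_6100-06 -d hdisk0 -Y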


0513-029 The biod Subsystem is already active.
Multiple instances are not supported.
0513-059 The nfsd Subsystem has been started. Subsystem PID is 3780796.
0513-059 The rpc.mountd Subsystem has been started. Subsystem PID is 1237104.
0513-059 The nfsrgyd Subsystem has been started. Subsystem PID is 3477732.
0513-059 The gssd Subsystem has been started. Subsystem PID is 3743752.
0513-029 The rpc.lockd Subsystem is already active.
Multiple instances are not supported.
0513-029 The rpc.statd Subsystem is already active.
Multiple instances are not supported.
starting upgrade now
Initializing the NIM master.
Initializing NIM client webmanual01.
Verifying alt_disk_migration eligibility.
Initializing log: /var/adm/ras/alt_mig/webmanual01_alt_mig.log
Starting Alternate Disk Migration.

Explanation of Phase 1: After the nfsd, rpc.mountd, nfsrgyd, gssd, rpc.lockd and rpc.statd subsystems are started in the pre-alt_disk steps, the master issues the alt_disk_install command to the client, which makes a copy of the client's rootvg to the target disks. In this phase, the alternate root volume group (altinst_rootvg) is created.

+-----------------------------------------------------------------------------+
Executing nimadm phase 1.
+-----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 1.
Client alt_disk_install command: alt_disk_copy -j -M 6.1 -P1 -d "hdisk0"
Calling mkszfile to create new /image.data file.
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5
Creating logical volume alt_hd6
Creating logical volume alt_hd8
Creating logical volume alt_hd4
Creating logical volume alt_hd2
Creating logical volume alt_hd9var
Creating logical volume alt_hd3
Creating logical volume alt_hd1
Creating logical volume alt_hd10opt
Creating logical volume alt_lg_dumplv
Creating logical volume alt_lv_admin
Creating logical volume alt_lv_sw
Creating logical volume alt_lg_crmhome
Creating logical volume alt_lv_crmhome
Creating logical volume alt_paging00
Creating logical volume alt_hd11admin
Creating /alt_inst/ file system.
Creating /alt_inst/admin file system.
Creating /alt_inst/adminOLD file system.
Creating /alt_inst/crmhome file system.
Creating /alt_inst/home file system.
Creating /alt_inst/opt file system.
Creating /alt_inst/software file system.
Creating /alt_inst/tmp file system.
Creating /alt_inst/usr file system.
Creating /alt_inst/var file system.
Generating a list of files
for backup and restore into the alternate file system...
Phase 1 complete.

Explanation of Phase 2: The NIM master creates the cache file systems in the cache volume group (nimvg in this example, named with the -j flag of nimadm). Some initial checks for the required migration disk space are performed.
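
Because phase 2 needs enough free space in the cache volume group on the NIM master, it is worth checking beforehand. A sketch, run on the NIM master:

# lsvg nimvg | grep -i "free pps"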

+-----------------------------------------------------------------------------+
Executing nimadm phase 2.
+-----------------------------------------------------------------------------+
Creating nimadm cache file systems on volume group nimvg.
Checking for initial required migration space.
Creating cache file system /webmanual01_alt/alt_inst
Creating cache file system /webmanual01_alt/alt_inst/admin
Creating cache file system /webmanual01_alt/alt_inst/adminOLD
Creating cache file system /webmanual01_alt/alt_inst/crmhome
Creating cache file system /webmanual01_alt/alt_inst/home
Creating cache file system /webmanual01_alt/alt_inst/opt
Creating cache file system /webmanual01_alt/alt_inst/sw
Creating cache file system /webmanual01_alt/alt_inst/tmp
Creating cache file system /webmanual01_alt/alt_inst/usr
Creating cache file system /webmanual01_alt/alt_inst/var

Explanation of Phase 3: The NIM master copies the NIM client's data to the cache file systems in nimvg. This data copy is done over either rsh or nimsh.

+-----------------------------------------------------------------------------+
Executing nimadm phase 3.
+-----------------------------------------------------------------------------+
Syncing client data to cache ...
cannot access ./tmp/alt_lock: A file or directory in the path name does not exist.

Explanation of Phase 4: If a pre-migration script resource has been specified, it is executed at this time.

+-----------------------------------------------------------------------------+
Executing nimadm phase 4.
+-----------------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.

Explanation of Phase 5: System configuration files are saved. Initial migration space is calculated and appropriate file system expansions are made. The bos image is restored and the device database is merged. All of the migration merge methods are executed, and some miscellaneous processing takes place.

+-----------------------------------------------------------------------------+
Executing nimadm phase 5.
+-----------------------------------------------------------------------------+
Saving system configuration files.
Checking for initial required migration space.
Setting up for base operating system restore.
/webmanual01_alt/alt_inst
Restoring base operating system.
Merging system configuration files.
Running migration merge method: ODM_merge Config_Rules.
Running migration merge method: ODM_merge SRCextmeth.
Running migration merge method: ODM_merge SRCsubsys.
Running migration merge method: ODM_merge SWservAt.
Running migration merge method: ODM_merge pse.conf.
Running migration merge method: ODM_merge vfs.
Running migration merge method: ODM_merge xtiso.conf.
Running migration merge method: ODM_merge PdAtXtd.
Running migration merge method: ODM_merge PdDv.
Running migration merge method: convert_errnotify.
Running migration merge method: passwd_mig.
Running migration merge method: login_mig.
Running migration merge method: user_mrg.
Running migration merge method: secur_mig.
Running migration merge method: RoleMerge.
Running migration merge method: methods_mig.
Running migration merge method: mkusr_mig.
Running migration merge method: group_mig.
Running migration merge method: ldapcfg_mig.
Running migration merge method: ldapmap_mig.
Running migration merge method: convert_errlog.
Running migration merge method: ODM_merge GAI.
Running migration merge method: ODM_merge PdAt.
Running migration merge method: merge_smit_db.
Running migration merge method: ODM_merge fix.
Running migration merge method: merge_swvpds.
Running migration merge method: SysckMerge.

Explanation of Phase 6: All system filesets are migrated using installp. Any required RPM images are also installed during this phase.

+-----------------------------------------------------------------------------+
Executing nimadm phase 6.
+-----------------------------------------------------------------------------+
Installing and migrating software.
Updating install utilities.
+-----------------------------------------------------------------------------+
Pre-installation Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...

SUCCESSES
---------
Filesets listed in this section passed pre-installation verification
and will be installed.

Mandatory Fileset Updates
-------------------------
(being installed automatically due to their importance)
bos.rte.install 6.1.6.15 # LPP Install Commands

<< End of Success Section >>

+-----------------------------------------------------------------------------+
BUILDDATE Verification ...
+-----------------------------------------------------------------------------+
Verifying build dates...done
FILESET STATISTICS
------------------
1 Selected to be installed, of which:
1 Passed pre-installation verification
----
1 Total to be installed

+-----------------------------------------------------------------------------+
Installing Software...
+-----------------------------------------------------------------------------+

installp: APPLYING software for:
bos.rte.install 6.1.6.15

. . . . . << Copyright notice for bos >> . . . . . . .
Licensed Materials - Property of IBM

[LOTS OF OUTPUT]

Installation Summary
--------------------
Name Level Part Event Result
-------------------------------------------------------------------------------
lwi.runtime 6.1.6.15 USR APPLY SUCCESS
lwi.runtime 6.1.6.15 ROOT APPLY SUCCESS
X11.compat.lib.X11R6_motif 6.1.6.15 USR APPLY SUCCESS
Java5.sdk 5.0.0.395 USR APPLY SUCCESS
Java5.sdk 5.0.0.395 ROOT APPLY SUCCESS
Java5.sdk 5.0.0.395 USR COMMIT SUCCESS
Java5.sdk 5.0.0.395 ROOT COMMIT SUCCESS
lwi.runtime 6.1.6.15 USR COMMIT SUCCESS
lwi.runtime 6.1.6.15 ROOT COMMIT SUCCESS
X11.compat.lib.X11R6_motif 6.1.6.15 USR COMMIT SUCCESS 

install_all_updates: Generating list of updatable rpm packages.
install_all_updates: No updatable rpm packages found.

install_all_updates: Checking for recommended maintenance level 6100-06.
install_all_updates: Executing /usr/bin/oslevel -rf, Result = 6100-06
install_all_updates: Verification completed.
install_all_updates: Log file is /var/adm/ras/install_all_updates.log
install_all_updates: Result = SUCCESS
Known Recommended Maintenance Levels
------------------------------------
Restoring device ODM database.

Explanation of Phase 7: If a post-migration script resource has been specified, it is executed at this time.

+-----------------------------------------------------------------------------+
Executing nimadm phase 7.
+-----------------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.

Explanation of Phase 8: The bosboot command is run to create a client boot image, which is written to the client's alternate boot logical volume (alt_hd5).

+-----------------------------------------------------------------------------+
Executing nimadm phase 8.
+-----------------------------------------------------------------------------+
Creating client boot image.
bosboot: Boot image is 47136 512 byte blocks.
Writing boot image to client's alternate boot disk hdisk0.

Explanation of Phase 9: All the migrated data is now copied from the NIM master's local cache file systems and synced to the client's alternate rootvg.

+-----------------------------------------------------------------------------+
Executing nimadm phase 9.
+-----------------------------------------------------------------------------+
Adjusting client file system sizes ...
Adjusting size for /
Adjusting size for /admin
Adjusting size for /adminOLD
Adjusting size for /crmhome
Adjusting size for /home
Adjusting size for /opt
Adjusting size for /sw
Adjusting size for /tmp
Adjusting size for /usr
Expanding /alt_inst/usr client filesystem.
Filesystem size changed to 12058624
Adjusting size for /var
Syncing cache data to client ...

Explanation of Phase 10: The NIM master cleans up and removes the local cache file systems.

+-----------------------------------------------------------------------------+
Executing nimadm phase 10.
+-----------------------------------------------------------------------------+
Unmounting client mounts on the NIM master.
forced unmount of /webmanual01_alt/alt_inst/var
forced unmount of /webmanual01_alt/alt_inst/usr
forced unmount of /webmanual01_alt/alt_inst/tmp
forced unmount of /webmanual01_alt/alt_inst/sw
forced unmount of /webmanual01_alt/alt_inst/opt
forced unmount of /webmanual01_alt/alt_inst/home
forced unmount of /webmanual01_alt/alt_inst/crmhome
forced unmount of /webmanual01_alt/alt_inst/adminOLD
forced unmount of /webmanual01_alt/alt_inst/admin
forced unmount of /webmanual01_alt/alt_inst
Removing nimadm cache file systems.
Removing cache file system /webmanual01_alt/alt_inst
Removing cache file system /webmanual01_alt/alt_inst/admin
Removing cache file system /webmanual01_alt/alt_inst/adminOLD
Removing cache file system /webmanual01_alt/alt_inst/crmhome
Removing cache file system /webmanual01_alt/alt_inst/home
Removing cache file system /webmanual01_alt/alt_inst/opt
Removing cache file system /webmanual01_alt/alt_inst/sw
Removing cache file system /webmanual01_alt/alt_inst/tmp
Removing cache file system /webmanual01_alt/alt_inst/usr
Removing cache file system /webmanual01_alt/alt_inst/var

Explanation of Phase 11: The alt_disk_install command is called again to make the final adjustments and put altinst_rootvg to sleep. The bootlist is set to the target disk.

+-----------------------------------------------------------------------------+
Executing nimadm phase 11.
+-----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 3.
Client alt_disk_install command: alt_disk_copy -j -M 6.1 -P3 -d "hdisk0"
## Phase 3 ###################
Verifying altinst_rootvg...
Modifying ODM on cloned disk.
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/sw
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/home
forced unmount of /alt_inst/crmhome
forced unmount of /alt_inst/adminOLD
forced unmount of /alt_inst/admin
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.
Fixing LV control blocks...
Fixing file system superblocks...
Bootlist is set to the boot disk: hdisk0 blv=hd5

Explanation of Phase 12: Cleanup is executed to end the migration.

+-----------------------------------------------------------------------------+
Executing nimadm phase 12.
+-----------------------------------------------------------------------------+
Cleaning up alt_disk_migration on the NIM master.
Cleaning up alt_disk_migration on client webmanual01.
Please review log to verify success

After the migration is complete, log in to the client and confirm that the boot list is set to the altinst_rootvg disk.
# lspv | grep rootvg
hdisk1 0000273ac30fdcfc rootvg active
hdisk0 000273ac30fdd6e altinst_rootvg active

# bootlist -m normal -o
hdisk0 blv=hd5
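
After rebooting from the migrated disk, you can also confirm that the client is running the new level; the 6100-06 level comes from the install_all_updates output in phase 6:

# oslevel -r
6100-06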
