Sometimes our users need extra space on our HP-UX servers for their new data. There are several ways to provide it, especially if the servers run multipathing software such as PowerPath from EMC or HDLM from HDS.
Here I will share the native HP-UX commands for presenting and mounting a new disk to the system. Here are the steps:
1. Run the ioscan command
serverdb1:/# ioscan -fnC disk
disk 175 0/0/12/1/0.10.194.0.0.3.0 sdisk CLAIMED DEVICE HITACHI OPEN-V
disk 176 1/0/14/1/0.10.16.0.0.3.0 sdisk CLAIMED DEVICE HITACHI OPEN-V
We have two paths to our new disk.
2. Run the insf -e command to create the device special files (re-running ioscan -fnC disk now shows them)
disk 176 1/0/14/1/0.10.16.0.0.3.0 sdisk CLAIMED DEVICE HITACHI OPEN-V
/dev/dsk/c20t3d0 /dev/rdsk/c20t3d0
disk 175 0/0/12/1/0.10.194.0.0.3.0 sdisk CLAIMED DEVICE HITACHI OPEN-V
/dev/dsk/c22t3d0 /dev/rdsk/c22t3d0
3. Check the disk to make sure it really is the new one, so we don't touch the wrong disk
serverdb1:/# diskinfo /dev/rdsk/c22t3d0
SCSI describe of /dev/rdsk/c22t3d0:
vendor: HITACHI
product id: OPEN-V
type: direct access
size: 167772480 Kbytes
bytes per sector: 512
serverdb1:/# pvdisplay /dev/dsk/c22t3d0
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/dsk/c22t3d0" belongs.
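As a quick sanity check, the size reported by diskinfo can be converted to gigabytes to confirm it matches the LUN we expect (the 167772480 KB figure is taken from the output above):

```python
size_kb = 167772480        # "size" field from the diskinfo output above
size_gb = size_kb / 1024 / 1024
print(round(size_gb, 2))   # ~160.0, i.e. a 160 GB OPEN-V LUN
```

The pvdisplay error above is actually good news here: it confirms the disk does not yet belong to any volume group.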
4. Create the new physical volume and the volume group directory
serverdb1:/# pvcreate /dev/rdsk/c22t3d0
serverdb1:/# cd /dev
serverdb1:/dev# mkdir vgnew
serverdb1:/dev# cd vgnew
serverdb1:/dev/vgnew# pwd
/dev/vgnew
5. Create the new volume group
serverdb1:/dev/vgnew# mknod group c 64 0x050000
serverdb1:/dev/vgnew# vgcreate /dev/vgnew /dev/dsk/c22t3d0
Increased the number of physical extents per physical volume to 40960.
vgcreate: Volume group "/dev/vgnew" could not be created:
VGRA for the disk is too big for the specified parameters. Increase the
extent size or decrease max_PVs/max_LVs and try again.
(for disks above roughly 100 GB, the -s extent size usually needs to be larger than the default of 4 MB)
serverdb1:/dev/vgnew# vgcreate -s 16 /dev/vgnew /dev/dsk/c22t3d0
Increased the number of physical extents per physical volume to 10240.
Volume group "/dev/vgnew" has been successfully created.
Volume Group configuration for /dev/vgnew has been saved in /etc/lvmconf/vgnew.conf
serverdb1:/dev/vgnew# vgdisplay vgnew
--- Volume groups ---
VG Name /dev/vgnew
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 0
Open LV 0
Max PV 16
Cur PV 1
Act PV 1
Max PE per PV 10240
VGDA 2
PE Size (Mbytes) 16
Total PE 10238
Alloc PE 0
Free PE 10238
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
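The arithmetic behind the VGRA error can be sketched: the physical extent count is simply the disk size divided by the extent size, and with the default 4 MB extents this 160 GB disk needs more extents than the volume group metadata can track. A minimal check (disk size taken from the diskinfo output; the exact VGRA capacity formula is not reproduced here):

```python
def pe_count(disk_kb, pe_size_mb):
    """Physical extents needed to cover the disk at a given extent size."""
    return disk_kb // (pe_size_mb * 1024)

disk_kb = 167772480                # size from diskinfo, in KB

print(pe_count(disk_kb, 4))        # 40960 PEs with the default -s 4 (rejected)
print(pe_count(disk_kb, 16))       # 10240 PEs with -s 16 (accepted)
```

This is why the -s 16 run succeeds: 10240 extents fit within the "Max PE per PV" value that vgdisplay reports above.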
6. Create the logical volume and file system
serverdb1:/dev/vgnew# lvcreate -l 10238 /dev/vgnew
Logical volume "/dev/vgnew/lvol1" has been successfully created with
character device "/dev/vgnew/rlvol1".
Logical volume "/dev/vgnew/lvol1" has been successfully extended.
Volume Group configuration for /dev/vgnew has been saved in /etc/lvmconf/vgnew.conf
serverdb1:/dev/vgnew# newfs -F vxfs /dev/vgnew/rlvol1
version 5 layout
167739392 sectors, 167739392 blocks of size 1024, log size 16384 blocks
unlimited inodes, largefiles not supported
167739392 data blocks, 167680712 free data blocks
5119 allocation units of 32768 blocks, 32768 data blocks
serverdb1:/dev/vgnew# cd /
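The -l 10238 value comes straight from the "Free PE" line of vgdisplay, and the resulting LV size is just that extent count times the 16 MB extent size:

```python
free_pe = 10238               # "Free PE" from vgdisplay
pe_size_mb = 16               # extent size chosen with vgcreate -s 16
print(free_pe * pe_size_mb)   # 163808 MB, the "LV Size (Mbytes)" reported later
```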
7. Create the mount point and mount the file system
serverdb1:/# mkdir newdirforu
serverdb1:/# mount /dev/vgnew/lvol1 /newdirforu
serverdb1:/# bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 5242880 690736 4516584 13% /
/dev/vg00/lvol1 10485760 220432 10185224 2% /stand
/dev/vg00/lvol8 10485760 7879504 2586224 75% /var
/dev/vg00/lvol7 6291456 2714464 3549096 43% /usr
/dev/vg00/lvol6 5242880 3066944 2162112 59% /tmp
/dev/vg00/lvol5 10485760 5874848 4574904 56% /opt
/dev/vg00/lvol4 1048576 18792 1021800 2% /home
/dev/vg00/lvol9 17203200 11853334 5018074 70% /app
/dev/vgnew/lvol1 167739392 57581 157201705 0% /newdirforu
8. Add an entry to the /etc/fstab file
serverdb1:/# more /etc/fstab
# System /etc/fstab file. Static information about the file systems
# See fstab(4) and sam(1M) for further details on configuring devices.
/dev/vg00/lvol3 / vxfs delaylog 0 1
/dev/vg00/lvol1 /stand vxfs tranflush 0 1
/dev/vg00/lvol4 /home vxfs delaylog 0 2
/dev/vg00/lvol5 /opt vxfs delaylog 0 2
/dev/vg00/lvol6 /tmp vxfs delaylog 0 2
/dev/vg00/lvol7 /usr vxfs delaylog 0 2
/dev/vg00/lvol8 /var vxfs delaylog 0 2
/dev/vg00/lvol9 /app vxfs delaylog 0 2
/dev/vgnew/lvol1 /newdirforu vxfs delaylog 0 2
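Instead of editing /etc/fstab by hand, the entry can be appended from the shell. A hedged sketch, practised here against a scratch copy (/tmp/fstab.test is only for illustration; point FSTAB at /etc/fstab on the real server, ideally after backing it up):

```shell
#!/bin/sh
# Append the mount entry only if the device is not already listed.
FSTAB=/tmp/fstab.test       # use /etc/fstab on the real system
ENTRY='/dev/vgnew/lvol1 /newdirforu vxfs delaylog 0 2'

touch "$FSTAB"
if ! grep -q '^/dev/vgnew/lvol1 ' "$FSTAB"; then
    echo "$ENTRY" >> "$FSTAB"
fi
grep '/newdirforu' "$FSTAB"
```

The grep guard makes the script idempotent: running it twice will not duplicate the line.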
9. Add the alternate link:
serverdb1:/# vgextend vgnew /dev/dsk/c20t3d0
Volume group "vgnew" has been successfully extended.
Volume Group configuration for /dev/vgnew has been saved in /etc/lvmconf/vgnew.conf
serverdb1:/# strings /etc/lvmconf/vgnew.conf
CONFIG01
/dev/vgnew
/dev/rdsk/c22t3d0
/dev/rdsk/c20t3d0
LVMREC01
4aMQ
4aMQ
LVMREC01
4aMQ
4aMQ
4aMQ
4aMQ
VGDA0001
VGSA0001MQ
serverdb1:/# vgdisplay -v vgnew
--- Volume groups ---
VG Name /dev/vgnew
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 1
Open LV 1
Max PV 16
Cur PV 1
Act PV 1
Max PE per PV 10240
VGDA 2
PE Size (Mbytes) 16
Total PE 10238
Alloc PE 10238
Free PE 0
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
--- Logical volumes ---
LV Name /dev/vgnew/lvol1
LV Status available/syncd
LV Size (Mbytes) 163808
Current LE 10238
Allocated PE 10238
Used PV 1
--- Physical volumes ---
PV Name /dev/dsk/c22t3d0
PV Name /dev/dsk/c20t3d0 Alternate Link
PV Status available
Total PE 10238
Free PE 0
Autoswitch On
Proactive Polling On
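Putting it all together, the whole sequence can be captured in one script. This is a non-authoritative sketch using the device names from this example; with DRYRUN=1 (the default here) it only echoes the commands, which is the safest way to review it before running anything on a real server:

```shell
#!/bin/sh
# Sketch of the full procedure; DRYRUN=1 echoes commands instead of running them.
DRYRUN=${DRYRUN:-1}
DISK=/dev/dsk/c22t3d0        # primary path
RDISK=/dev/rdsk/c22t3d0      # raw device for pvcreate/newfs
ALT=/dev/dsk/c20t3d0         # alternate path
VG=vgnew
MNT=/newdirforu

run() {
    if [ "$DRYRUN" = 1 ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

run pvcreate "$RDISK"
run mkdir "/dev/$VG"
run mknod "/dev/$VG/group" c 64 0x050000
run vgcreate -s 16 "/dev/$VG" "$DISK"
run lvcreate -l 10238 "/dev/$VG"
run newfs -F vxfs "/dev/$VG/rlvol1"
run mkdir "$MNT"
run mount "/dev/$VG/lvol1" "$MNT"
run vgextend "$VG" "$ALT"
```

Only after checking the echoed commands (and adjusting the extent count for the actual disk size) would you rerun it with DRYRUN=0.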
These steps are mainly a note to myself, in case I forget them.