Preparation
The first order of business is, as usual, to BACK UP ALL DATA on the array.
After this, insert the new disk into the array, either with a hot-swap carrier (in my case an IcyDock MB455SPF) or, if you are running the array with permanently attached drives, by shutting down the computer and adding another drive to the controller.
Begin Hardware Volume configuration
The RAID controller will automatically detect the new drive immediately after insertion (or after boot, if not hot-swapped). The new drive will be marked as unused free space / unallocated.
At this point you should be in a running instance of Ubuntu. Fire up the Areca Linux control HTTP service:
bash #> archttp64
Connect to the web management interface at http://localhost:81 if the management daemon is running with its defaults.
The default credentials for console authentication are admin:0000.
Once you have successfully authenticated, check the status of the array from the front page. You should see the new drive(s) listed as free and unused.
Now begin the array expansion by selecting
RaidSet Functions -> Expand Raid Set
Select the RaidSet you want to expand and click Submit, then select the free drives you want to add to the RaidSet, tick the confirmation box, and click Submit.
Wait for the RaidSet to rebuild. This will take a long time depending on the size of your array; with a 1 TB expansion the rebuild takes around 24 hours.
After the RaidSet expansion is complete (you can follow the progress in the Volume State field of the WebAdmin console), add the new space to the current VolumeSet by selecting
VolumeSet Functions -> Modify Volume Set
and entering the new volume size into the size field. If you are expanding the array beyond 2 TB, you may need to turn the 2 TB limit circumvention off, depending on your OS and filesystem.
Wait for the VolumeSet expansion to complete. This takes about 3-4 hours for a 1 TB expansion.
LVM2 and ext3 resizing
You may resize your LVM2 volumes online with at least LVM v2.02.26, which ships with Ubuntu 7.10.
Issue the command
bash #> pvdisplay
to view the current physical volume information. Then issue the command
bash #> pvresize /dev/sda
The resize itself completes almost instantly.
At this point, reboot the server to update the kernel's view of the logical volumes.
Now you may check the new size of the physical volume by issuing the command
bash #> pvdisplay
again. Write down the Total PE count (physical extents) of the resized volume; you will need it in the next step.
Next, expand your logical volume to the maximum available size by issuing the command
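If you want to grab the Total PE figure programmatically rather than eyeballing the output, a small awk one-liner does the job. The pvdisplay output below is a made-up illustration (field names match LVM2's pvdisplay; the numbers are not from a real system):

```shell
# Illustrative pvdisplay output -- the figures are invented for this example;
# the real Total PE value comes from running pvdisplay on the resized PV.
sample="  --- Physical volume ---
  PV Name               /dev/sda
  VG Name               areca
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               238465"

# Pull out the Total PE count -- this is the number to pass to lvresize -l.
total_pe=$(printf '%s\n' "$sample" | awk '/Total PE/ {print $3}')
echo "$total_pe"
```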
bash #> lvresize -l (insert Total PE-number) /dev/areca/volume-name
Wait for the volume resizing to finish. This takes about 3 hours per TB.
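As a sanity check on the extent count you pass to -l: with LVM2's default 4 MiB extent size (an assumption here; verify the PE Size line in pvdisplay), the Total PE figure should equal the volume size in MiB divided by 4. For example, a 3 TiB volume:

```shell
PE_SIZE_MIB=4                       # default LVM2 extent size; verify with pvdisplay
VOLUME_MIB=$((3 * 1024 * 1024))     # a 3 TiB volume expressed in MiB
echo $((VOLUME_MIB / PE_SIZE_MIB))  # expected Total PE for this volume
```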
After stopping all networking services that use the volume and unmounting it
bash #> umount /dev/areca/volume-name
(not strictly necessary, since resize2fs nowadays supports online resizing), you may begin priming the system for the expansion of the ext3 filesystem by issuing the command
bash #> e2fsck -vtf /dev/areca/volume-name
This runs a filesystem check on the current filesystem and resets the check bits so that the resizing utility will run. It takes about an hour per terabyte of data in the array. e2fsck is not very communicative about its progress, so don't panic if it seems to hang.
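Before moving on to the resize, it is worth looking at e2fsck's exit status, which is a bitmask documented in its man page. A small sketch of a helper (the `fsck_ok` name is my own, not a standard utility) that decides whether it is safe to proceed:

```shell
# e2fsck's exit status is a bitmask (see the e2fsck man page):
#   0 - no errors   1 - errors corrected   2 - reboot recommended
#   4 - errors left uncorrected   8 - operational error
# Helper: only proceed to resize2fs when nothing worse than
# "errors corrected" happened.
fsck_ok() {
  [ "$1" -le 1 ]
}

fsck_ok 0 && echo "clean: safe to resize"
fsck_ok 4 || echo "uncorrected errors: do NOT resize"
```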
Resize the ext3 filesystem by
bash #> resize2fs -p /dev/areca/volume-name
This will take another 30 minutes to an hour.
After the resize completes successfully, reboot the server.
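resize2fs reports the new size in filesystem blocks. To verify that it matches the size you expect, convert blocks to GiB; the 4 KiB block size is an assumption (confirm yours with tune2fs -l), and the block count below is just an illustrative figure:

```shell
BLOCK_SIZE=4096          # bytes; typical ext3 block size, verify with tune2fs -l
BLOCK_COUNT=786432000    # illustrative value as reported by resize2fs
# blocks * bytes-per-block / 2^30 = size in GiB
echo $(( BLOCK_COUNT * BLOCK_SIZE / 1024 / 1024 / 1024 ))
```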
[Flow chart: the same procedure as above in flow-chart format]