It is no secret that I am a pretty big fan of the excellent Linux software RAID. Creating, assembling, and rebuilding a small array is fine. But things start to get nasty when you try to rebuild or re-sync a large array. You may get frustrated when you see it is going to take 22 hours to rebuild the array. You can always increase the speed of Linux software RAID 0/1/5/6 reconstruction using the following five tips.
Why speed up Linux software RAID rebuilding and re-syncing?
Recently, I built a small NAS server running Linux for one of my clients, with 5 x 2TB disks in a RAID 6 configuration, acting as an all-in-one backup server for Linux, Mac OS X, and Windows XP/Vista/7/10 client computers. Next, I typed the command cat /proc/mdstat and it reported that md0 was active and recovery was in progress. The recovery speed was around 4000K/sec and would complete in approximately 22 hours. I wanted to finish this sooner.
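For reference, an array like that can be created with a single mdadm command. This is only a hedged sketch: the device names /dev/sdb through /dev/sdf and the array name /dev/md0 are assumptions for illustration, not the exact devices from my setup:
## Create a 5-disk RAID 6 array (example device names) ##
mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
## Watch the initial sync/recovery progress ##
cat /proc/mdstat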
A note about lazy initialization and ext4 file system
When creating an ext4 file system, the Linux kernel uses lazy initialization. This feature allows faster creation of a file system: a process called “ext4lazyinit” runs in the background to initialize the rest of the inode tables. As a result, your RAID rebuild is going to operate at minimal speed. This only matters if you have just created an ext4 filesystem. There is an option to enable or disable this feature while running the mkfs.ext4 command:
lazy_itable_init[= <0 to disable, 1 to enable>] – If enabled and the uninit_bg feature is enabled, the inode table will not be fully initialized by mke2fs. This speeds up filesystem initialization noticeably, but it requires the kernel to finish initializing the filesystem in the background when the filesystem is first mounted. If the option value is omitted, it defaults to 1 to enable lazy inode table zeroing.
lazy_journal_init[= <0 to disable, 1 to enable>] – If enabled, the journal inode will not be fully zeroed out by mke2fs. This speeds up filesystem initialization noticeably, but carries some small risk if the system crashes before the journal has been overwritten entirely one time. If the option value is omitted, it defaults to 1 to enable lazy journal inode zeroing.
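If the background “ext4lazyinit” writes are competing with a rebuild you want to finish quickly, one option is to disable lazy initialization when you create the file system. The command below is only an illustration (/dev/md0 is an example device), and note that a non-lazy mkfs.ext4 itself takes noticeably longer because it zeroes the inode tables up front:
## Create ext4 with lazy initialization disabled (example device: /dev/md0) ##
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0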
Tip #1: /proc/sys/dev/raid/{speed_limit_max,speed_limit_min} kernel variables
The /proc/sys/dev/raid/speed_limit_min is a config file that reflects the current “goal” rebuild speed for times when non-rebuild activity is happening on an array. The speed is in kibibytes per second (1 kibibyte = 2^10 bytes = 1024 bytes), and is a per-device rate, not a per-array rate. The default is 1000.
The /proc/sys/dev/raid/speed_limit_max is a config file that reflects the current “goal” rebuild speed for times when no non-rebuild activity is happening on an array. The default is 100,000.
To see the current limits, enter:
sysctl dev.raid.speed_limit_min
sysctl dev.raid.speed_limit_max
NOTE: The following hacks are used for recovering Linux software RAID and for increasing the speed of RAID rebuilds. These options are good for tweaking the rebuild process, but they may increase overall system load, CPU, and memory usage.
To increase speed, enter:
echo value > /proc/sys/dev/raid/speed_limit_min
OR
sysctl -w dev.raid.speed_limit_min=value
In this example, to set it to 50000 K/sec, enter:
echo 50000 > /proc/sys/dev/raid/speed_limit_min
OR
sysctl -w dev.raid.speed_limit_min=50000
If you want to override the defaults, you could add these lines to /etc/sysctl.conf:
#################### NOTE #####################
## You are limited by CPU and memory too     ##
## Pick only one speed_limit_max value below ##
###############################################
dev.raid.speed_limit_min = 50000
## good for a 4-5 disk array ##
dev.raid.speed_limit_max = 2000000
## good for a large 6-12 disk array ##
dev.raid.speed_limit_max = 5000000
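After editing /etc/sysctl.conf you can load the new values without rebooting and verify them, for example:
## Reload settings from /etc/sysctl.conf ##
sysctl -p
## Verify the values now in effect ##
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max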
Set readahead (in 512-byte sectors) per RAID device. The syntax is:
blockdev --setra value /dev/mdX
## Set read-ahead to 32 MiB (65536 x 512 bytes) ##
blockdev --setra 65536 /dev/md0
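To confirm the read-ahead change took effect, you can read the value back with blockdev (again, /dev/md0 is just an example device). Keep in mind that blockdev settings are not persistent across reboots, so you may want to re-apply them from a startup script:
## Show the current read-ahead value in 512-byte sectors ##
blockdev --getra /dev/md0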
The stripe_cache_size setting is only available on RAID5 and RAID6 and can boost sync performance by 3-6 times. It records the size (in pages per device) of the stripe cache, which is used for synchronising all write operations to the array and all read operations if the array is degraded. The default is 256. Valid values are 17 to 32768. Increasing this number can increase performance in some situations, at some cost in system memory. Note, setting this value too high can result in an “out of memory” condition for the system. Use the following formula to estimate the memory consumed:
memory_consumed = system_page_size * nr_disks * stripe_cache_size
To set stripe_cache_size to 16384 pages for /dev/md0, type:
echo 16384 > /sys/block/md0/md/stripe_cache_size
To set stripe_cache_size to 32768 pages for /dev/md3, type:
echo 32768 > /sys/block/md3/md/stripe_cache_size
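To see what is currently configured and estimate the memory cost with the formula above, something like the following works; the 5-disk figure simply mirrors the RAID 6 box from the intro, so adjust it for your own array:
## Show the current stripe cache size (in pages per device) ##
cat /sys/block/md0/md/stripe_cache_size
## Show the system page size in bytes (typically 4096) ##
getconf PAGESIZE
## Example estimate: 4096 bytes x 5 disks x 16384 pages = 320 MiB of RAM ##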
Bitmaps optimize rebuild time after a crash, or after removing and re-adding a device. Turn it on by typing the following command:
mdadm --grow --bitmap=internal /dev/md0
Once the array is rebuilt or fully synced, disable bitmaps:
mdadm --grow --bitmap=none /dev/md0
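To check whether a write-intent bitmap is currently active on the array (using /dev/md0 as the example device again), you can grep the mdadm detail output or look at /proc/mdstat, which prints a bitmap line when one is enabled:
## Check bitmap status ##
mdadm --detail /dev/md0 | grep -i bitmap
cat /proc/mdstat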
Results
My rebuild speed went from around 4000K/sec to 51000K/sec:
cat /proc/mdstat
Sample outputs:
[Fig.01: Performance optimization for Linux raid6 for /dev/md2]
The following command provides details about the /dev/md2 RAID array, including status and health report:
mdadm --detail /dev/md2
[Fig.03: Find out CPU statistics and input/output statistics for devices and partitions]
Feel free to use the df command or du command to get info about disk space usage on Linux. For example:
df -hT /raid1
du -csh /raid1
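If you would rather watch the rebuild progress update in place than re-run cat /proc/mdstat by hand, the standard watch utility does the job; the 5-second interval below is just a suggestion:
## Refresh RAID status every 5 seconds ##
watch -n 5 cat /proc/mdstat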
Conclusion
We learned how to speed up rebuilding and re-syncing of Linux software RAID devices.
See the following man pages using the man command:
man 4 md
man 8 mdadm
man 5 proc
Also look into /etc/cron.d/mdadm and /usr/share/mdadm/checkarray on Debian/Ubuntu Linux.
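On any distro you can also kick off a manual consistency check through sysfs, which is roughly what the Debian checkarray script does behind the scenes; /dev/md0 is an example device, and the check runs within the speed limits discussed in Tip #1:
## Start a consistency check on /dev/md0 ##
echo check > /sys/block/md0/md/sync_action
## Monitor progress, or write 'idle' to the same file to stop the check ##
cat /proc/mdstat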