Flashcache implementation steps on OpenVZ
# cd /usr/src/
# git clone https://github.com/facebook/flashcache.git
# cd flashcache
If you face issues while installing git, you can use the command below:
# yum --disableexcludes=main install git -y
Make sure the kernel-devel package for the current OpenVZ kernel is installed; otherwise "make" will fail.
# rpm -qa |grep vzkernel-devel-`uname -r`
# make
# make install
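The build steps above can be wrapped in a small guard so that make only runs when the matching kernel-devel package is present. A minimal sketch; the kernel_devel_pkg helper is just for illustration, and the package naming follows the rpm check above:

```shell
# Build only if the devel package for the running kernel is installed.
kernel_devel_pkg() {
  # map a kernel version string to its vzkernel-devel package name
  echo "vzkernel-devel-$1"
}

pkg="$(kernel_devel_pkg "$(uname -r)")"
if rpm -q "$pkg" >/dev/null 2>&1; then
  make && make install
else
  echo "$pkg is not installed; install it before running make" >&2
fi
```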
Initialize the module:
# modprobe flashcache
Check your kernel messages to confirm that the module has been initialized:
# dmesg | tail
.............................
[ 5806.891504] flashcache: flashcache-1.0 initialized
# lsmod |grep flashcache
flashcache 77223 0
dm_mod 80860 1 flashcache
Stop the VE and unmount the LVM volume:
# umount -l /vz/hostname
This unmounts the LVM device /dev/mapper/VG1-vzhostname.
Make sure there are no processes still using the mount; stopping the CDP services is also recommended. You can check for open processes with:
# fuser -vm /vz/hostname
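The teardown above can be sketched defensively; the VEID (302), mount point, and the is_mounted helper are illustrative examples:

```shell
# Stop the container, then unmount only if the path is really mounted.
is_mounted() {
  # succeeds if $1 appears as a mount point in /proc/mounts
  awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' /proc/mounts
}

command -v vzctl >/dev/null && vzctl stop 302

if is_mounted /vz/hostname; then
  fuser -vm /vz/hostname     # list any processes still using the mount
  umount /vz/hostname        # fall back to umount -l only if this fails
fi
```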
Create the flashcache device from the SSD partition and the vz LVM volume:
# flashcache_create -p around vzhostname_cache /dev/sdb1 /dev/mapper/VG1-vzhostname
Here /dev/sdb is the SSD drive and /dev/sdb1 is the partition used for the cache; /dev/mapper/VG1-vzhostname is the LVM volume mounted for the VE. We can either use different partitions of the same SSD drive or a dedicated SSD drive for each LVM volume, if needed.
Mode: there are three caching modes. write_back is the most dangerous and can create issues; write_through and write_around are the safest, so use "around" here.
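A small wrapper can make the mode choice explicit. create_cache is a hypothetical helper name; the flag values (back, thru, around) are the ones flashcache_create accepts for -p:

```shell
# Hypothetical wrapper around flashcache_create; rejects unknown modes
# before touching any device.
create_cache() {
  mode=$1; name=$2; ssd=$3; backend=$4
  case "$mode" in
    back|thru|around) ;;
    *) echo "unknown mode: $mode (use back, thru or around)" >&2; return 2 ;;
  esac
  flashcache_create -p "$mode" "$name" "$ssd" "$backend"
}

# As in this guide, prefer the safe read-around mode:
# create_cache around vzhostname_cache /dev/sdb1 /dev/mapper/VG1-vzhostname
```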
Check "fdisk -l" to confirm that the device "/dev/mapper/vzhostname_cache" has been created, then mount it:
ll /dev/mapper/vzhostname_cache
lrwxrwxrwx 1 root root 7 Aug 25 23:28 /dev/mapper/vzhostname_cache -> ../dm-1
This step is not needed if you are adding the init script; the init script mounts the cache devices automatically. In that case, make sure you have commented out the normal volume-group mount for the VE.
Then add the device to fstab and mount it using "mount -a".
Eg: (set the last two columns to 0 0 to disable the filesystem check at boot)
UUID=4c670b55-12a0-4d4a-9209-eb21a74e5ee3 /vz/hostname ext4 defaults 0 0
The blkid (filesystem UUID) should be the same for "/dev/mapper/VG1-vzhostname" and "/dev/mapper/vzhostname_cache".
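The UUID comparison can be scripted; same_uuid is a hypothetical helper, and the device names are the examples used above:

```shell
# Compare filesystem UUIDs on the backing LVM volume and the cache device.
same_uuid() {
  u1=$(blkid -s UUID -o value "$1" 2>/dev/null)
  u2=$(blkid -s UUID -o value "$2" 2>/dev/null)
  [ -n "$u1" ] && [ "$u1" = "$u2" ]
}

if same_uuid /dev/mapper/VG1-vzhostname /dev/mapper/vzhostname_cache; then
  echo "UUIDs match - safe to mount the cache device"
else
  echo "UUID mismatch or unreadable device - do not mount" >&2
fi
```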
You can verify that the flashcache device is working properly by mounting it manually:
[root@server ~]# mount /dev/mapper/vzhostname_cache /vz/hostname
After mounting, the VPS is listed; here VEID 302 is in stopped status:
[root@e2host1003 ~]# vzlist -a
CTID NPROC STATUS IP_ADDR HOSTNAME
302 - stopped x.x.x.x hostname.com
..
The device-mapper status can be checked with:
# dmsetup status /dev/mapper/vzhostname_cache
Then we can start the VPS
[root@server ~]# vzctl start 302
Check the mount using "df -h".
An init script is needed to load the flashcache device after a reboot.
Create the file /etc/init.d/flashcache and add the following contents. This script is for a single cache device set up on an OpenVZ node.
#!/bin/bash
#
# flashcache    Init script to manage cachedev loads
#
# chkconfig: 345 9 98
# description: Flashcache Management

# Flashcache options -- modify these before using this init script
SSD_DISK=/dev/sdb1
BACKEND_DISK=/dev/mapper/VolGroup-101
CACHEDEV_NAME=103cache
MOUNTPOINT=/vz/vps1
FLASHCACHE_NAME=103cache

# Just a check, to validate the above params are set
[ -z "$SSD_DISK" ] && exit 10
[ -z "$BACKEND_DISK" ] && exit 11
[ -z "$CACHEDEV_NAME" ] && exit 12
[ -z "$MOUNTPOINT" ] && exit 13
[ -z "$FLASHCACHE_NAME" ] && exit 14

# Source function library.
. /etc/rc.d/init.d/functions

# Globals
DMSETUP=`/usr/bin/which dmsetup`
SERVICE=flashcache

# Create the cache device
flashcache_create -p around $FLASHCACHE_NAME $SSD_DISK $BACKEND_DISK

SUBSYS_LOCK=/var/lock/subsys/$SERVICE
RETVAL=0

start() {
    echo "Starting Flashcache..."
    # Load the module
    /sbin/modprobe flashcache
    RETVAL=$?
    if [ $RETVAL -ne 0 ]; then
        echo "Module Load Error: flashcache. Exited with status - $RETVAL"
        exit $RETVAL
    fi
    # Mount the cache device
    if [ -L /dev/mapper/$CACHEDEV_NAME ]; then
        /bin/mount /dev/mapper/$CACHEDEV_NAME $MOUNTPOINT
        RETVAL=$?
        if [ $RETVAL -ne 0 ]; then
            echo "Mount Failed: /dev/mapper/$CACHEDEV_NAME to $MOUNTPOINT"
            exit $RETVAL
        fi
    else
        echo "Not Found: /dev/mapper/$CACHEDEV_NAME"
        exit 1
    fi
    # Lock subsys
    touch $SUBSYS_LOCK
}

stop() {
    # Unmount, then remove the cache device (this flushes it to disk)
    /bin/umount $MOUNTPOINT
    echo "Flushing flashcache: Flushes to $BACKEND_DISK"
    $DMSETUP remove $CACHEDEV_NAME
    # Unlock subsys
    rm -f $SUBSYS_LOCK
}

status() {
    [ -f $SUBSYS_LOCK ] && echo "Flashcache status: loaded" || echo "Flashcache status: NOT loaded"
    $DMSETUP status $CACHEDEV_NAME
    exit $?
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status
        ;;
    forcestop)
        stop
        ;;
    *)
        echo "Usage: $0 {start|stop|status|forcestop}"
        exit 1
esac
exit 0
The following variant of the script loads flashcache for two cache devices after a reboot.
#!/bin/bash
#
# flashcache    Init script to manage cachedev loads
#
# chkconfig: 345 9 98
# description: Flashcache Management

# Flashcache options -- modify these before using this init script
SSD_DISK1=/dev/sdb1
BACKEND_DISK1=/dev/mapper/VolGroup-101
CACHEDEV_NAME1=103cache
MOUNTPOINT1=/vz/vps1
FLASHCACHE_NAME1=103cache
SSD_DISK2=/dev/sdb2
BACKEND_DISK2=/dev/mapper/VolGroup-102
CACHEDEV_NAME2=104cache
MOUNTPOINT2=/vz/vps2
FLASHCACHE_NAME2=104cache

# Just a check, to validate the above params are set
[ -z "$SSD_DISK1" ] && exit 10
[ -z "$BACKEND_DISK1" ] && exit 11
[ -z "$CACHEDEV_NAME1" ] && exit 12
[ -z "$MOUNTPOINT1" ] && exit 13
[ -z "$FLASHCACHE_NAME1" ] && exit 14
[ -z "$SSD_DISK2" ] && exit 15
[ -z "$BACKEND_DISK2" ] && exit 16
[ -z "$CACHEDEV_NAME2" ] && exit 17
[ -z "$MOUNTPOINT2" ] && exit 18
[ -z "$FLASHCACHE_NAME2" ] && exit 19

# Source function library.
. /etc/rc.d/init.d/functions

# Globals
DMSETUP=`/usr/bin/which dmsetup`
SERVICE=flashcache

# Create the cache devices
flashcache_create -p around $FLASHCACHE_NAME1 $SSD_DISK1 $BACKEND_DISK1
flashcache_create -p around $FLASHCACHE_NAME2 $SSD_DISK2 $BACKEND_DISK2

SUBSYS_LOCK=/var/lock/subsys/$SERVICE
RETVAL=0

start() {
    echo "Starting Flashcache..."
    # Load the module
    /sbin/modprobe flashcache
    RETVAL=$?
    if [ $RETVAL -ne 0 ]; then
        echo "Module Load Error: flashcache. Exited with status - $RETVAL"
        exit $RETVAL
    fi
    # Mount cache device 1
    if [ -L /dev/mapper/$CACHEDEV_NAME1 ]; then
        /bin/mount /dev/mapper/$CACHEDEV_NAME1 $MOUNTPOINT1
        RETVAL=$?
        if [ $RETVAL -ne 0 ]; then
            echo "Mount Failed: /dev/mapper/$CACHEDEV_NAME1 to $MOUNTPOINT1"
            exit $RETVAL
        fi
    else
        echo "Not Found: /dev/mapper/$CACHEDEV_NAME1"
        exit 1
    fi
    # Mount cache device 2
    if [ -L /dev/mapper/$CACHEDEV_NAME2 ]; then
        /bin/mount /dev/mapper/$CACHEDEV_NAME2 $MOUNTPOINT2
        RETVAL=$?
        if [ $RETVAL -ne 0 ]; then
            echo "Mount Failed: /dev/mapper/$CACHEDEV_NAME2 to $MOUNTPOINT2"
            exit $RETVAL
        fi
    else
        echo "Not Found: /dev/mapper/$CACHEDEV_NAME2"
        exit 1
    fi
    # Lock subsys
    touch $SUBSYS_LOCK
}

stop() {
    # Unmount, then remove each cache device (this flushes it to disk)
    /bin/umount $MOUNTPOINT1
    echo "Flushing flashcache: Flushes to $BACKEND_DISK1"
    $DMSETUP remove $CACHEDEV_NAME1
    /bin/umount $MOUNTPOINT2
    echo "Flushing flashcache: Flushes to $BACKEND_DISK2"
    $DMSETUP remove $CACHEDEV_NAME2
    # Unlock subsys
    rm -f $SUBSYS_LOCK
}

status() {
    [ -f $SUBSYS_LOCK ] && echo "Flashcache status: loaded" || echo "Flashcache status: NOT loaded"
    $DMSETUP status $CACHEDEV_NAME1
    $DMSETUP status $CACHEDEV_NAME2
    exit $?
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status
        ;;
    forcestop)
        stop
        ;;
    *)
        echo "Usage: $0 {start|stop|status|forcestop}"
        exit 1
esac
exit 0
=============================
Configuring the script using chkconfig:
=============================
1. Copy '/usr/src/flashcache/utils/flashcache' from the repo to '/etc/init.d/flashcache'.
2. Make sure this file has execute permissions.
3. Edit this file and specify values for the following variables:
SSD_DISK, BACKEND_DISK, CACHEDEV_NAME, MOUNTPOINT, FLASHCACHE_NAME
SSD_DISK - the SSD disk or partition (here /dev/sdb)
BACKEND_DISK - the actual backing device (here /dev/mapper/VG1-vz)
CACHEDEV_NAME - the flashcache name (here vz_cache)
MOUNTPOINT - the mount point (here /vz)
FLASHCACHE_NAME - the flashcache name (here vz_cache)
4. Modify the chkconfig headers in the file if necessary. The stock script starts in runlevel 3 with start-stop priority 90-10; the scripts above use "chkconfig: 345 9 98".
5. Register this file using chkconfig
'chkconfig flashcache on'
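After registration, the enabled runlevels can be checked against the chkconfig header (345 in the scripts above). A sketch; levels_ok is a hypothetical parser for the "chkconfig --list" output line:

```shell
# Check that a "chkconfig --list flashcache" line shows 3:on 4:on 5:on.
levels_ok() {
  case "$1" in
    *"3:on"*"4:on"*"5:on"*) return 0 ;;
    *) return 1 ;;
  esac
}

if command -v chkconfig >/dev/null; then
  chkconfig flashcache on
  levels_ok "$(chkconfig --list flashcache)" \
    && echo "flashcache enabled in runlevels 3, 4 and 5"
fi
```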