
Flash cache implementation

Flashcache implementation steps on OpenVZ

 # cd /usr/src/
 # git clone https://github.com/facebook/flashcache.git
 # cd flashcache


If you face issues while installing git, you can use the command below:

# yum --disableexcludes=main install git -y


Make sure the kernel-devel package for the current OpenVZ kernel is installed; otherwise "make" will fail.
# rpm -qa |grep vzkernel-devel-`uname -r`
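If the package is missing, it can usually be installed with yum first (the package name below is an assumption based on the check above and the running kernel version):
# yum install vzkernel-devel-`uname -r`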
# make
# make install
Initializing the module
# modprobe flashcache


Check the kernel messages to confirm that the module has been initialized.
# dmesg | tail
.............................
[ 5806.891504] flashcache: flashcache-1.0 initialized


# lsmod |grep flashcache
flashcache             77223  0
dm_mod                 80860  1 flashcache


Stop the VE and unmount the LVM volume:

# umount -l /vz/hostname

This unmounts the LVM volume /dev/mapper/VG1-vzhostname.

Make sure there are no processes still using the mount; stopping the CDP services is also recommended.

You can check for such processes with "fuser -vm", as shown below.
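For example, against the mount point used above:

# fuser -vm /vz/hostname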




Create the flashcache device using the SSD partition and the vz LVM volume:

# flashcache_create -p around vzhostname_cache /dev/sdb1 /dev/mapper/VG1-vzhostname
(here sdb is the SSD drive and sdb1 is the partition on it)


Use the "around" mode; write-back mode can create issues.

/dev/sdb1 is the SSD partition

/dev/mapper/VG1-vzhostname is the LVM volume mounted for the VE




We can either use different partitions of the same SSD drive or a dedicated SSD drive for each LVM volume if needed.
Mode: there are three caching modes; write_back is the most dangerous, while write_through and write_around are the safest (see the examples below).
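For reference, the caching mode is selected with the -p option of flashcache_create. A minimal sketch, reusing the device and cache names from this setup as placeholders:

# flashcache_create -p around vzhostname_cache /dev/sdb1 /dev/mapper/VG1-vzhostname   (write-around, used here)
# flashcache_create -p thru vzhostname_cache /dev/sdb1 /dev/mapper/VG1-vzhostname     (write-through)
# flashcache_create -p back vzhostname_cache /dev/sdb1 /dev/mapper/VG1-vzhostname     (write-back, riskier)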


Check "fdisk -l" and confirm that the device /dev/mapper/vzhostname_cache has been created, then mount it.



 ll /dev/mapper/vzhostname_cache
lrwxrwxrwx 1 root root 7 Aug 25 23:28 /dev/mapper/vzhostname_cache -> ../dm-1


This step is not needed if you are adding the init script; the init script will mount the cache devices automatically. Make sure you have commented out the normal volume group mount for the VE in /etc/fstab, for example as shown below.
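A hypothetical example of the commented-out original entry, using the device and mount point from this setup:

#/dev/mapper/VG1-vzhostname /vz/hostname                ext4    defaults        0 0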

Then add the device to /etc/fstab and mount it using "mount -a".

E.g. (make sure you set 0 0 for the last two columns to disable filesystem checks on boot):
 UUID=4c670b55-12a0-4d4a-9209-eb21a74e5ee3 /vz/hostname                ext4    defaults        0 0

The blkid (filesystem UUID) should be the same for "/dev/mapper/VG1-vzhostname" and "/dev/mapper/vzhostname_cache"; you can confirm this as shown below.
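A quick check with blkid, using the device names from above:

# blkid /dev/mapper/VG1-vzhostname /dev/mapper/vzhostname_cache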

You can verify that the flashcache device is working properly by mounting it manually.

[root@server ~]# mount /dev/mapper/vzhostname_cache /vz/hostname


After mounting, you can see the VPS listed; here VEID 302 is in stopped status.
[root@e2host1003 ~]# vzlist -a
      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
       302          - stopped   x.x.x.x  hostname.com


# dmsetup status /dev/mapper/vzhostname_cache

This shows the device-mapper status of the cache device.
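Depending on the flashcache version, detailed cache statistics are typically also exposed under /proc; the exact path may vary, but it is usually of the form:

# cat /proc/flashcache/vzhostname_cache/flashcache_stats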




Then we can start the VPS:
[root@server ~]# vzctl start 302
Check the mount using "df -h".
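For example, against the mount point used above:

# df -h /vz/hostname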







An init script is needed to load the flashcache device after a reboot.
Create a file named flashcache at /etc/init.d/flashcache and add the following contents. This script is for a single cache device set up on an OpenVZ node.


#!/bin/bash
#
# flashcache Init Script to manage cachedev loads
#
# chkconfig: 345 9 98
# description: Flashcache Management

# Flashcache options
# modify this before using this init script

SSD_DISK=/dev/sdb1
BACKEND_DISK=/dev/mapper/VolGroup-101
CACHEDEV_NAME=103cache
MOUNTPOINT=/vz/vps1
FLASHCACHE_NAME=103cache



# Just a check, to validate the above params are set
[ -z "$SSD_DISK" ] && exit 10
[ -z "$BACKEND_DISK" ] && exit 11
[ -z "$CACHEDEV_NAME" ] && exit 12
[ -z "$MOUNTPOINT" ] && exit 13
[ -z "$FLASHCACHE_NAME" ] && exit 14



# Source function library.
. /etc/rc.d/init.d/functions

#globals
DMSETUP=`/usr/bin/which dmsetup`
SERVICE=flashcache

#create
flashcache_create  -p around $FLASHCACHE_NAME $SSD_DISK $BACKEND_DISK


SUBSYS_LOCK=/var/lock/subsys/$SERVICE

RETVAL=0

start() {
    echo "Starting Flashcache..."
    #Load the module
    /sbin/modprobe flashcache
    RETVAL=$?
    if [ $RETVAL -ne 0 ]; then
        echo "Module Load Error: flashcache. Exited with status - $RETVAL"
        exit $RETVAL
    fi

    #mount
    if [ -L /dev/mapper/$CACHEDEV_NAME ]; then
        /bin/mount /dev/mapper/$CACHEDEV_NAME $MOUNTPOINT
        RETVAL=$?
        if [ $RETVAL -ne 0 ]; then
            echo "Mount Failed: /dev/mapper/$CACHEDEV_NAME to $MOUNTPOINT"
            exit $RETVAL
        fi
    else
        echo "Not Found: /dev/mapper/$CACHEDEV_NAME"
        exit 1
    fi

    #lock subsys
    touch $SUBSYS_LOCK
}

stop() {
    #unmount
    /bin/umount $MOUNTPOINT
    echo "Flushing flashcache: Flushes to $BACKEND_DISK"
    $DMSETUP remove $CACHEDEV_NAME

    #unlock subsys
    rm -f $SUBSYS_LOCK
}

status() {
    [ -f $SUBSYS_LOCK ] && echo "Flashcache status: loaded" || echo "Flashcache status: NOT loaded";
    $DMSETUP status $CACHEDEV_NAME
    exit $?
}

case $1 in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status
        ;;
    forcestop)
        stop --force
        ;;
    *)
        echo "Usage: $0 {start|stop|status|forcestop}"
        exit 1
esac
exit 0


You can again verify each cache device with "dmsetup status".
Now, the following script loads flashcache for two cache devices after a reboot.
#!/bin/bash
#
# flashcache Init Script to manage cachedev loads
#
# chkconfig: 345 9 98
# description: Flashcache Management

# Flashcache options
# modify this before using this init script

SSD_DISK1=/dev/sdb1
BACKEND_DISK1=/dev/mapper/VolGroup-101
CACHEDEV_NAME1=103cache
MOUNTPOINT1=/vz/vps1
FLASHCACHE_NAME1=103cache
SSD_DISK2=/dev/sdb2
BACKEND_DISK2=/dev/mapper/VolGroup-102
CACHEDEV_NAME2=104cache
MOUNTPOINT2=/vz/vps2
FLASHCACHE_NAME2=104cache



# Just a check, to validate the above params are set
[ -z "$SSD_DISK1" ] && exit 10
[ -z "$BACKEND_DISK1" ] && exit 11
[ -z "$CACHEDEV_NAME1" ] && exit 12
[ -z "$MOUNTPOINT1" ] && exit 13
[ -z "$FLASHCACHE_NAME1" ] && exit 14
[ -z "$SSD_DISK2" ] && exit 10
[ -z "$BACKEND_DISK2" ] && exit 11
[ -z "$CACHEDEV_NAME2" ] && exit 12
[ -z "$MOUNTPOINT2" ] && exit 13
[ -z "$FLASHCACHE_NAME2" ] && exit 14


# Source function library.
. /etc/rc.d/init.d/functions

#globals
DMSETUP=`/usr/bin/which dmsetup`
SERVICE=flashcache

#create
flashcache_create  -p around $FLASHCACHE_NAME1 $SSD_DISK1 $BACKEND_DISK1
flashcache_create  -p around $FLASHCACHE_NAME2 $SSD_DISK2 $BACKEND_DISK2

SUBSYS_LOCK=/var/lock/subsys/$SERVICE

RETVAL=0

start() {
    echo "Starting Flashcache..."
    #Load the module
    /sbin/modprobe flashcache
    RETVAL=$?
    if [ $RETVAL -ne 0 ]; then
        echo "Module Load Error: flashcache. Exited with status - $RETVAL"
        exit $RETVAL
    fi

    #mount 1
    if [ -L /dev/mapper/$CACHEDEV_NAME1 ]; then
        /bin/mount /dev/mapper/$CACHEDEV_NAME1 $MOUNTPOINT1
        RETVAL=$?
        if [ $RETVAL -ne 0 ]; then
            echo "Mount Failed: /dev/mapper/$CACHEDEV_NAME1 to $MOUNTPOINT1"
            exit $RETVAL
        fi
    else
        echo "Not Found: /dev/mapper/$CACHEDEV_NAME1"
        exit 1
    fi

    #mount 2
    if [ -L /dev/mapper/$CACHEDEV_NAME2 ]; then
        /bin/mount /dev/mapper/$CACHEDEV_NAME2 $MOUNTPOINT2
        RETVAL=$?
        if [ $RETVAL -ne 0 ]; then
            echo "Mount Failed: /dev/mapper/$CACHEDEV_NAME2 to $MOUNTPOINT2"
            exit $RETVAL
        fi
    else
        echo "Not Found: /dev/mapper/$CACHEDEV_NAME2"
        exit 1
    fi

    #lock subsys
    touch $SUBSYS_LOCK
}

stop() {
    #unmount
    /bin/umount $MOUNTPOINT1
    echo "Flushing flashcache: Flushes to $BACKEND_DISK1"
    $DMSETUP remove $CACHEDEV_NAME1

    /bin/umount $MOUNTPOINT2
    echo "Flushing flashcache: Flushes to $BACKEND_DISK2"
    $DMSETUP remove $CACHEDEV_NAME2

    #unlock subsys
    rm -f $SUBSYS_LOCK
}

status() {
    [ -f $SUBSYS_LOCK ] && echo "Flashcache status: loaded" || echo "Flashcache status: NOT loaded";
    $DMSETUP status $CACHEDEV_NAME1
    $DMSETUP status $CACHEDEV_NAME2
    exit $?
}

case $1 in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status
        ;;
    forcestop)
        stop --force
        ;;
    *)
        echo "Usage: $0 {start|stop|status|forcestop}"
        exit 1
esac
exit 0


=============================
Configuring the script using chkconfig
=============================


1. Copy '/usr/src/flashcache/utils/flashcache' from the repo to '/etc/init.d/flashcache'
2. Make sure this file has execute permissions.
3. Edit this file and specify the values for the following variables:
   SSD_DISK, BACKEND_DISK, CACHEDEV_NAME, MOUNTPOINT, FLASHCACHE_NAME

   SSD_DISK - name of the SSD disk, here /dev/sdb
   BACKEND_DISK - the actual backing device, here /dev/mapper/VG1-vz
   CACHEDEV_NAME - flashcache device name, here vz_cache
   MOUNTPOINT - here /vz
   FLASHCACHE_NAME - flashcache name, here vz_cache

4. Modify the headers in the file if necessary.
  By default, it starts in runlevel 3, with start-stop priority 90-10
5. Register this file using chkconfig
  'chkconfig flashcache on'
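A minimal sketch of these steps, assuming the script was saved as /etc/init.d/flashcache as above:

# chmod +x /etc/init.d/flashcache
# chkconfig --add flashcache
# chkconfig flashcache on
# chkconfig --list flashcache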





