
Multiple variables using arrays in a for loop; cPanel user IP address change

I have always wondered how to assign multiple variables in a for loop. Suppose I want to change the IP addresses of the users listed below.


[root@server4 ~]# cat test1
user1
user2
user3
user4
user5


[root@server4 ~]# cat test2
190.10.20.1
20.1.1.2
30.12.13.13
13.23.23.12
32.12.12.1


test1 contains my users and test2 contains the IP addresses I want to assign. xargs flattens each file onto a single line for a quick look:

[root@server4 ~]# xargs < test1
user1 user2 user3 user4 user5
[root@server4 ~]# xargs < test2
190.10.20.1 20.1.1.2 30.12.13.13 13.23.23.12 32.12.12.1
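
Since the pairing relies purely on line order, a quick sanity check is to confirm both files have the same number of lines:

[root@server4 ~]# wc -l test1 test2
 5 test1
 5 test2
10 total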


Now I am going to define the arrays.


[root@server4 ~]# a=(user1 user2 user3 user4 user5)
[root@server4 ~]# b=(190.10.20.1 20.1.1.2 30.12.13.13 13.23.23.12 32.12.12.1)

[root@server4 ~]# c=${#a[@]}   # length of array a
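
If the lists are long, typing the arrays out by hand gets tedious. A minimal sketch, assuming bash 4 or later so mapfile is available, builds the same arrays straight from the two files:

[root@server4 ~]# mapfile -t a < test1   # one array element per line of test1
[root@server4 ~]# mapfile -t b < test2   # one array element per line of test2

The length calculation above works the same either way.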

Let's see if it works:

[root@server4 ~]# for ((i=0;i<c;i++));do echo "${a[i]} ${b[i]}"; done
user1 190.10.20.1
user2 20.1.1.2
user3 30.12.13.13
user4 13.23.23.12
user5 32.12.12.1
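
The same pairing also works without arrays at all; a quick sketch using paste and a while read loop, assuming the two files stay matched line by line, prints the same pairs as above:

[root@server4 ~]# paste test1 test2 | while read user ip; do echo "$user $ip"; done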


What if we use the same loop with cPanel's setsiteip script to actually change the IPs?

[root@server4 ~]# for ((i=0;i<c;i++));do /usr/local/cpanel/bin/setsiteip "${a[i]}" "${b[i]}";done
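
Before running it for real, a cautious variant (the setsiteip path is simply taken from the one-liner above) would first check that the two arrays have the same length and echo each command instead of executing it:

[root@server4 ~]# [ ${#a[@]} -eq ${#b[@]} ] || echo "user/IP counts do not match"
[root@server4 ~]# for ((i=0;i<c;i++));do echo /usr/local/cpanel/bin/setsiteip "${a[i]}" "${b[i]}"; done

Dropping the echo runs the real thing.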

Let me test it and I will let you know. Please let me know if you have any other simple shortcuts.

