Isilon: Remove or Add a Node from/to Multiple Network Pools

It’s a pain to change networks in bulk in an Isilon cluster, particularly in a complex environment. Adding a new node that will serve multiple network pools in the same subnet is particularly time consuming. Similarly, tracking down all the interfaces and pools a node belongs to so you can remove it for maintenance or other purposes can be messy.

This script takes the node’s number as its first argument and either add or remove as its second. It checks that each interface is active before performing any operations on it.
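
A quick usage sketch (the script name here is made up; call yours whatever you like):

# add node 12's 10GbE interfaces to all three pools
bash nodepools.sh 12 add
# pull node 12 back out before maintenance
bash nodepools.sh 12 remove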

 
#!/bin/bash
node=$1
operation=$2

#check if node number is valid
if [ "$(isi_nodes -n$node %{name})" != "clustername-$node" ]; then
 echo "Not a valid node"
 exit
fi

#check if operation is either add or remove
if [ "$operation" != add -a "$operation" != remove ]; then
 echo "Not a valid operation: $operation"
 exit
fi


#function to check if the interface's connection is active. 
check_ifaces_active(){
 isi_for_array -n$node "ifconfig $iface" | awk '/active/ {print 1}'
}

#function to perform the operation on the interface for a set of pools
operate_interfaces() {
 echo $isi_iface
 isi networks modify pool --$operation-ifaces=$node:$isi_iface subnet2:pool4-synciq
 sleep 5
 isi networks modify pool --$operation-ifaces=$node:$isi_iface subnet2:pool0
 sleep 5
 isi networks modify pool --$operation-ifaces=$node:$isi_iface subnet2:pool2
 sleep 5
}

#check both 10GbE interfaces
for iface in bxe0 bxe1; do
 if [ "$(check_ifaces_active)" = "1" ]; then
  if [ $iface = bxe0 ]; then
   isi_iface=10gige-1 #Isilon uses different interface naming schemes for different things...
  elif [ $iface = bxe1 ]; then
   isi_iface=10gige-2
  else
   echo "Something went wrong"
   exit
  fi
  operate_interfaces
  isi_for_array -n$node "ifconfig $iface"
 fi
done

Breaking Down the Monster III

So, finishing this off.

It-sa bunch-a case lines!

Write first:

 

echo $1 $2 "filesize: "$3 "totalsize: "$4"G" "filesperdir: "$5
case $1 in
	write)
        if [ $2 = scality ]; then
            filecount=$totfilecount
            time scalitywrite
            exit 0
        fi
        

So if it’s a Scality (or other pure object storage), it’s simple. Just run the write and time it, which will output the info you need. OTHERWISE…

#Chunk file groups into folders if count is too high
	if [ $totfilecount -ge 10000 ]; then
	    for dir in `seq 1 $foldercount`; do
	        createdir $fspath/$dir
	    done
	    time for dir in `seq 1 $foldercount`; do
	        path=$fspath/$dir
		filecount=$(( $totfilecount / $foldercount ))
	        writefiles
	    done
	else
	    path=$fspath
            createdir $path
            filecount=$totfilecount
            time writefiles
	fi
	;;

 

Do what the comment says: chunk the files into folders, since the number of files per directory makes a big difference when you’re writing to a filesystem. Make sure you create the directories before you try to write to them… and then time how long it takes to write all of them. If the total is below the critical file count, just write everything into one directory and time that.
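
As a rough sketch of the arithmetic, using the script’s default 50 GB test set and a hypothetical 100 KB file size:

#hypothetical numbers to show how the chunking works out
totalsize=52428800                             #50 GB expressed in KB
filesize=100                                   #100 KB files
totfilecount=$(( totalsize / filesize ))       #524288 files in total
filesperdir=5120
foldercount=$(( totfilecount / filesperdir ))  #102 directories
filecount=$(( totfilecount / foldercount ))    #~5140 files actually written per directory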

Neeeext….

 

read) #in order read
	sync; echo 1 > /proc/sys/vm/drop_caches
        if [ $2 = scality ]; then
            filecount=$totfilecount
            time scalityread
            exit 0
        fi
	if [ $totfilecount -ge 10000 ]; then
		time for dir in `seq 1 $foldercount`; do
			path=$fspath/$dir
			filecount=$(( $totfilecount / $foldercount ))
			readfiles
		done
	else
		path=$fspath
		filecount=$totfilecount
		time readfiles
	fi
	;;

That sync; echo 1 > /proc/sys/vm/drop_caches line is how you clear the filesystem cache (as root) on a Linux system. This is important for benchmarking, because, let me tell you, 6.4 GB/sec is not a speed most network storage systems can reach; without dropping the cache you’re mostly measuring RAM. Again, we split the work across folders and time all of the reads, or just time the reads directly if the file count is low enough. This routine reads files in the order they were written.
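
For reference, drop_caches accepts a few different values; 1 is what the script uses, but the others exist too (all of this has to run as root):

sync                                  #flush dirty pages first so you are not dropping unwritten data
echo 1 > /proc/sys/vm/drop_caches     #free the page cache
echo 2 > /proc/sys/vm/drop_caches     #free dentries and inodes
echo 3 > /proc/sys/vm/drop_caches     #free page cache, dentries, and inodes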

 

	rm) #serial remove files
        if [ $2 = scality ]; then
            time for i in `seq 1 $totfilecount`; do
                curl -s -X DELETE http://localhost:81/proxy/bparc/$fspath/$i-$suffix > /dev/null
            done
            exit 0
        fi
		if [ $totfilecount -ge 10000 ]; then
			time for i in `seq 1 $foldercount`; do
				rm -f $fspath/$i/*-$suffix
				rmdir $fspath/$i
			done
		elif [ -d $fspath/$3 ]; then 
			time rm -f $fspath/*-$suffix
		fi
	;;

Similar to the other two routines: if it’s object based, do something completely different; otherwise remove based on the file path and the count of files.

 

	parrm) #parallel remove files
		time ls $fspath | parallel -N 64 rm -rf $fspath/{}
	;;

This one is remarkably simple. Just pipe an ls of the top level directory into parallel, which runs rm -rf against each entry. The {} is the placeholder that parallel replaces with each input line, and -N 64 sets how many arguments get handed to each job (the number of simultaneous jobs is controlled separately with -j, which defaults to one per CPU core).
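
If you want to see what parallel will actually run before letting it loose with rm -rf, something like this works (paths are made up):

#print the generated commands instead of running them
ls /some/dir | parallel --dry-run rm -rf /some/dir/{}
#explicitly cap the number of simultaneous jobs with -j
ls /some/dir | parallel -j 16 rm -rf /some/dir/{}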

 

This one’s kind of neat:

	shufread) #shuffled read
		sync; echo 1 > /proc/sys/vm/drop_caches
		if [ $totfilecount -ge 10000 ]; then
			folderarray=(`shuf -i 1-$foldercount`)
			time for dir in ${folderarray[*]}; do
				path=$fspath/$dir
				filecount=$(( $totfilecount / $foldercount ))
				shufreadfiles
			done
		else
			path=$fspath
			filecount=$totfilecount
			time shufreadfiles
		fi
	;;
	

I needed a way to do random reads over the files I’d written, to simulate that workload on filesystems with little caching (i.e., make the drives do a lot of random seeks).

At first, I tried writing the file paths to a file, then reading that back, but that has waaaay too much latency when you’re doing performance testing. So, after some digging, I found the shuf command, which shuffles a list. With the -i flag you can hand it an arbitrary numeric range instead of a file. I tossed the output into a bash array, and from there it proceeds like the read section.
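
A minimal illustration of the shuf trick, outside the script:

shuf -i 1-10                    #prints the numbers 1 through 10 in random order, one per line
folderarray=(`shuf -i 1-10`)    #capture the shuffled sequence in a bash array
echo ${folderarray[*]}          #the array holds the shuffled order to iterate over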

 

	*) usage && exit 1;;
esac
echo '------------------------'

Fairly self-explanatory. I tossed in an echo of a separator line to keep the output readable if you’re running the command inside a for loop.
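
That for loop looks something like this (the script name and sizes are just examples):

for size in 4K 64K 1M 16M 1G; do
	bash ddcompare.sh write tier1 $size 50
done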

And that’s it!

Breaking down that monster

Or should I use Beast? No, this isn’t an XtremIO. (sorry, I just got back from EMCWorld 2015. The marketing gobbledygook is still strong in me.)

So, the first part of the script, like many others, is a function (cleverly called usage), followed by the snippet that calls it:


usage () {
	echo "Command syntax is $(basename $0) [write|read|shufread|rm|parrm] [test|tier1|tier2|gpfs|localscratch|localssd|object]"
        echo "[filesizeG|M|K] [totalsize in GB] (optional) [file count per directory] (optional)"
}

if [ "$#" -lt 3 ]; then
	usage
	exit 1
fi

Not much to see here if you already know what functions are and how they’re formatted in bash. Basically, if a name is followed by () { and closed with }, it’s a function, and you can call it like a script inside the main script. The code is not executed until the function is called by name. You can even pass it input variables; more on that later.
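
A bare-bones example of the pattern, with a positional argument thrown in:

greet () {
	echo "Hello, $1"    #$1 here is the first argument passed to the function, not to the script
}

greet world             #prints: Hello, world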

Next, we come to a case block:


case $2 in
	test) fspath=/mnt/dmtest/scicomp/scicompsys/ddcompare/$3 ;;
	tier1) fspath=/mnt/node-64-dm11/ddcompare/$3 ;;
	tier2) fspath=/mnt/node-64-tier2/ddcompare/$3 ;;
	gpfs) fspath=/gpfs1/nlsata/ddcompare/$3 ;;
        localscratch) fspath=/scratch/carlilek/ddcompare/$3 ;;
        localssd) fspath=/ssd/ddcompare/$3 ;;
        object) fspath=/srttest/ddcompare/$3 ;;
	*) usage && exit 1;;
esac

This checks the second argument and sets the base path to be used in the testing. Note that object will be used differently than the rest, because all of the rest are file storage paths. Object ain’t.

Then, we set the size of the files (or objects) to be written, read, or deleted:


case $3 in
	*G) filesize=$(( 1024 * 1024 * `echo $3 | tr -d G`));;
	*M) filesize=$(( 1024 * `echo $3 | tr -d M` ));;
	*K) filesize=`echo $3 | tr -d K`;;
	*) usage && exit 1;;
esac

Note that I should probably be using the newer $( ) command substitution style here, rather than backticks. I’ll get around to it at some point.

The bizarre $(( blah op blah )) setup is how you do math in bash. Really.
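
A couple of quick examples, since it trips people up; everything inside $(( )) is integer math:

echo $(( 1024 * 1024 * 2 ))    #2G expressed in KB: 2097152
echo $(( 7 / 2 ))              #integer division: prints 3, not 3.5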

The next few bits are all prepping how many files to write to a given subdirectory, how big the files are, etc.


#set the suffix for file names
suffix=$3

#set the total size of the test set
if [ ! -z $4 ]; then
	totalsize=$(( 1024 * 1024 * $4 ))
else
	totalsize=52428800 #The size of the test set in kb
fi
	
#set the number of files in subdirectories
if [ ! -z $5 ]; then
	filesperdir=$5
else
	filesperdir=5120 #Number of files per subdirectory for large file counts
fi

#set up variables for dd commands
if [ $filesize -ge 1024 ]; then
	blocksize=1048576
else
	blocksize=$(( $filesize * 1024 ))
fi

#set up variables for subdirectories
totfilecount=$(( $totalsize / $filesize ))
blockcount=$(( $filesize * 1024 / $blocksize ))
if [ $filesperdir -le $totfilecount ]; then
	foldercount=$(( $totfilecount / $filesperdir ))
fi

OK, I’ll get into the meat of the code in my next post. But I’m done now.

The first of several benchmarking scripts

I’m currently a file storage administrator, specializing in EMC Isilon. We have a rather large install (~60 heterogeneous nodes, ~4PB) as well as some smaller systems, an HPC-dedicated GPFS filer from DDN, and an object-based storage system from Scality. Obviously, all of these things have different performance characteristics, including the differing tiers of Isilon.

I’ve been benchmarking the various systems using the script below. I’ll walk through the various parts of the script. To date, this is probably one of my more ambitious attempts with Bash, and it would probably work better in Python, but I haven’t learned that yet. 😉


#!/bin/bash
usage () {
	echo "Command syntax is $(basename $0) [write|read|shufread|rm|parrm] [test|tier1|tier2|gpfs|localscratch|localssd|object]"
        echo "[filesizeG|M|K] [totalsize in GB] (optional) [file count per directory] (optional)"
}

if [ "$#" -lt 3 ]; then
	usage
	exit 1
fi

#CHANGE THESE PATHS TO FIT YOUR ENVIRONMENT
#set paths
case $2 in
	test) fspath=/mnt/dmtest/scicomp/scicompsys/ddcompare/$3 ;;
	tier1) fspath=/mnt/node-64-dm11/ddcompare/$3 ;;
	tier2) fspath=/mnt/node-64-tier2/ddcompare/$3 ;;
	gpfs) fspath=/gpfs1/nlsata/ddcompare/$3 ;;
        localscratch) fspath=/scratch/carlilek/ddcompare/$3 ;;
        localssd) fspath=/ssd/ddcompare/$3 ;;
        object) fspath=/srttest/ddcompare/$3 ;;
	*) usage && exit 1;;
esac

#some math to get the filesize in kilobytes
case $3 in
	*G) filesize=$(( 1024 * 1024 * `echo $3 | tr -d G`));;
	*M) filesize=$(( 1024 * `echo $3 | tr -d M` ));;
	*K) filesize=`echo $3 | tr -d K`;;
	*) usage && exit 1;;
esac	

#set the suffix for file names
suffix=$3

#set the total size of the test set
if [ ! -z $4 ]; then
	totalsize=$(( 1024 * 1024 * $4 ))
else
	totalsize=52428800 #The size of the test set in kb
fi
	
#set the number of files in subdirectories
if [ ! -z $5 ]; then
	filesperdir=$5
else
	filesperdir=5120 #Number of files per subdirectory for large file counts
fi

#set up variables for dd commands
if [ $filesize -ge 1024 ]; then
	blocksize=1048576
else
	blocksize=$(( $filesize * 1024 ))
fi

#set up variables for subdirectories
totfilecount=$(( $totalsize / $filesize ))
blockcount=$(( $filesize * 1024 / $blocksize ))
if [ $filesperdir -le $totfilecount ]; then
	foldercount=$(( $totfilecount / $filesperdir ))
fi

#debug output
#echo $fspath
#echo filecount $totfilecount
#echo totalsize $totalsize KB
#echo filesize $filesize KB
#echo blockcount $blockcount
#echo blocksize $blocksize bytes

#defines output of time in realtime seconds to one decimal place
TIMEFORMAT=%1R

#creates directory to write to
createdir () {
	if [ ! -d $1 ]; then
		mkdir -p $1
	fi
}

#write test
writefiles () {
	#echo WRITE
	for i in `seq 1 $filecount`; do 
		#echo -n .
		dd if=/dev/zero of=$path/$i-$suffix bs=$blocksize count=$blockcount 2> /dev/null
	done
}

#read test
readfiles () {
	#echo READ
	for i in `seq 1 $filecount`; do 
		#echo -n .
		dd if=$path/$i-$suffix of=/dev/null bs=$blocksize 2> /dev/null
		#dd if=$path/$i-$suffix of=/dev/null bs=$blocksize
	done
}

#shuffled read test
shufreadfiles () {
	#echo SHUFFLE READ
	filearray=(`shuf -i 1-$filecount`)
	for i in ${filearray[*]}; do 
		#echo -n .
		#echo $path/$i-$suffix
		dd if=$path/$i-$suffix of=/dev/null bs=$blocksize 2> /dev/null
		#dd if=$path/$i-$suffix of=/dev/null bs=$blocksize
	done
}

#ObjectWrite
scalitywrite () {
    for i in `seq 1 $filecount`; do
        dd if=/dev/zero bs=$blocksize count=$blockcount 2> /dev/null | curl -s -X PUT http://localhost:81/proxy/bparc$fspath/$i-$suffix -T- > /dev/null
    done
}

#ObjectRead
scalityread () {
    for i in `seq 1 $filecount`; do
        curl -s -X GET http://localhost:81/proxy/bparc/$fspath/$i-$suffix > /dev/null
    done
}

#Do the work based on the work type

echo $1 $2 "filesize: "$3 "totalsize: "$4"G" "filesperdir: "$5
case $1 in
	write) 
        if [ $2 = scality ]; then
            filecount=$totfilecount
            time scalitywrite
            exit 0
        fi
        #Chunk file groups into folders if count is too high
	    if [ $totfilecount -ge 10000 ]; then
			for dir in `seq 1 $foldercount`; do
				createdir $fspath/$dir
			done
			time for dir in `seq 1 $foldercount`; do
				path=$fspath/$dir
				filecount=$(( $totfilecount / $foldercount ))
				writefiles
			done
		else
			path=$fspath
            createdir $path
			filecount=$totfilecount
			time writefiles
		fi
	;;
	read) #in order read
		sync; echo 1 > /proc/sys/vm/drop_caches
        if [ $2 = scality ]; then
            filecount=$totfilecount
            time scalityread
            exit 0
        fi
		if [ $totfilecount -ge 10000 ]; then
			time for dir in `seq 1 $foldercount`; do
				path=$fspath/$dir
				filecount=$(( $totfilecount / $foldercount ))
				readfiles
			done
		else
			path=$fspath
			filecount=$totfilecount
			time readfiles
		fi
	;;
	rm) #serial remove files
        if [ $2 = scality ]; then
            time for i in `seq 1 $totfilecount`; do
                curl -s -X DELETE http://localhost:81/proxy/bparc/$fspath/$i-$suffix > /dev/null
            done
            exit 0
        fi
		if [ $totfilecount -ge 10000 ]; then
			time for i in `seq 1 $foldercount`; do
				rm -f $fspath/$i/*-$suffix
				rmdir $fspath/$i
			done
		elif [ -d $fspath/$3 ]; then 
			time rm -f $fspath/*-$suffix
		fi
	;;
	parrm) #parallel remove files
		time ls $fspath | parallel -N 64 rm -rf $fspath/{}
	;;
	shufread) #shuffled read
		sync; echo 1 > /proc/sys/vm/drop_caches
		if [ $totfilecount -ge 10000 ]; then
			folderarray=(`shuf -i 1-$foldercount`)
			time for dir in ${folderarray[*]}; do
				path=$fspath/$dir
				filecount=$(( $totfilecount / $foldercount ))
				shufreadfiles
			done
		else
			path=$fspath
			filecount=$totfilecount
			time shufreadfiles
		fi
	;;
		
	*) usage && exit 1;;
esac
echo '------------------------'

I’ll break this all down in my next post.

Simple script for restarting the CELOG on Isilon

If your Isilon cluster has its CELOG fill up to the point where it no longer sends you email alerts (and/or SNMP traps) and you can’t clear it yourself, even with the CLI, you’ll probably need this script. It’s a compilation of what support told me several times, which I got tired of looking up in my old emails.


#!/bin/bash
isi services -a celog_coalescer disable
isi services -a celog_monitor disable
isi services -a celog_notification disable
sleep 120
isi_for_array killall isi_mcp
isi_for_array pkill isi_celog_
sleep 60
isi_for_array rm -rf /var/db/celog/*
isi_for_array rm -rf /var/db/celog_master/*
rm -rf /ifs/.ifsvar/db/celog/*
isi_for_array isi_mcp
sleep 30
isi services -a celog_coalescer enable
sleep 30
isi services -a celog_monitor enable
sleep 30
isi services -a celog_notification enable
sleep 30
isi services -a celog_coalescer enable
isi services -a celog_monitor enable
isi services -a celog_notification enable

Nothing special here, but perhaps it will come in handy for someone. I have heard that they are aware of the bug and it will be fixed in a future release of OneFS.

Repairing quotas after you delete them all

As I mentioned in my earlier post, I managed to delete the quotas on one of my Isilon clusters by accident. Still haven’t figured out exactly how it happened, but it happened.

By a happy coincidence, we do dumps of our quota lists on a daily basis (I recommend you do too). The command you could use for this is:

isilon-1# isi quota quotas list --format csv

From there, I cut it down to an output that looks like:

type,path,hard-threshold

and tossed that into a file called quotadata.txt.
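
How you slice the CSV down to those three columns depends on the header of your own dump; something like this cut (with column numbers you would need to verify against your output) does the trick:

#column positions here are hypothetical; check the CSV header first
cut -d, -f1,2,7 quotalist.csv > quotadata.txt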

Then I used this script:

#!/bin/bash
OIFS=$IFS
IFS=','
INPUT=./quotadata.txt
[ ! -f $INPUT ] && { echo "$INPUT not found"; exit 1; }
while read TYPE QPATH SIZE
do
	type=$TYPE
	qpath=$QPATH
	size=$SIZE
	if [ -z $size ]; then
		isi quota quotas create $qpath $type
	else
		isi quota quotas create $qpath $type --hard-threshold $size --container=yes
	fi
done <$INPUT
IFS=$OIFS
isi quota quotas list

It threw a bunch of errors about stdin not being a tty, but those are safely ignored (and you could probably fix them through some kind of flag on the isi quota quotas commands).

In any case, that put all my quotas back. At that point, it was a (relatively) simple matter of running the quotascan job. 

As usual with my Isilon posts, this applies to OneFS 7.1.0.

Moving data between quota protected directories on Isilon

Updated version of the script here: https://unscrupulousmodifier.wordpress.com/2015/10/08/moving-data-between-quota-protected-directories-on-isilon-take-ii/

In the current versions of Isilon OneFS, it is impossible to move files and directories between two directories with quotas on them (regardless of the directory quota type; even if it’s advisory, it won’t allow it). This is really annoying, and although I’ve put in a feature request for it, who knows if it will ever be fixed. So I wrote this script, which notes the quota locations and thresholds (hard thresholds only), removes the quotas, moves the data, and reapplies the quotas.

#!/bin/bash

#Tests whether there is a valid path
testexist () {
        if [ ! -r $1 ]; then
                echo "$1 is an invalid path. Please try again."
                exit
        fi
}

#Iterates through path backwards to find most closely related quota
findquota () {
        RIGHTPATH=0
        i=`echo $1 | awk -F'/' '{print NF}'` #define quantity of fields
        while [ $RIGHTPATH -eq 0 ]; do
                QUOTA=`echo $1 | cut -d"/" -f "1-$i"`
                if [ -n "`isi quota list | grep $QUOTA`" ]; then
                        RIGHTPATH=1
                fi
                i=$(($i-1))
        done
        echo $QUOTA
}

testquota () {
        if [ "$1" = "-" ]; then
                echo "No hard directory quota on this directory."
                exit
        fi
}

if [[ $# -ne 2 ]]; then
        #Gets paths from user
        echo "Enter source:"
        read SOURCE
        echo "Enter target:"
        read TARGET
else
        SOURCE=$1
        TARGET=$2
fi

testexist $SOURCE
testexist $TARGET

#Verifies paths with user
echo "Moving $SOURCE to $TARGET. Is this correct? (y/n)"
read ANSWER
if [ $ANSWER != 'y' ] ; then
        exit
fi

#Defines quotas
SOURCEQUOTA=$(findquota $SOURCE)
TARGETQUOTA=$(findquota $TARGET)

#Gets size of hard threshold from quota
SOURCETHRESH=$(isi quota view $SOURCEQUOTA directory | awk -F" : " '$1~/Hard Threshold/ {print $2}')
TARGETTHRESH=$(isi quota view $TARGETQUOTA directory | awk -F" : " '$1~/Hard Threshold/ {print $2}')
testquota $SOURCETHRESH
testquota $TARGETTHRESH

echo $SOURCEQUOTA $SOURCETHRESH
echo $TARGETQUOTA $TARGETTHRESH

isi quota quotas delete --type=directory --path=$SOURCEQUOTA -f
isi quota quotas delete --type=directory --path=$TARGETQUOTA -f

isi quota quotas view $SOURCEQUOTA directory
isi quota quotas view $TARGETQUOTA directory

mv $SOURCE $TARGET

isi quota quotas create $SOURCEQUOTA directory --hard-threshold=$SOURCETHRESH --container=yes
isi quota quotas create $TARGETQUOTA directory --hard-threshold=$TARGETTHRESH --container=yes

isi quota quotas view $SOURCEQUOTA directory
isi quota quotas view $TARGETQUOTA directory

Here’s how I use it:

bash /ifs/data/scripts/qmv /ifs/source/path /ifs/target/path

First I’ve got some functions in there:
testexist (): test if it’s a sane path
findquota (): find the quota info for the given path
testquota (): check if it’s a hard quota. If it’s not, the script fails, because that’s all we use around here. Feel free to fix it up and post something better in the comments.

Then we get to the bit where, if it’s not given two arguments for source and target, it asks for them. It then tests that the source and target both exist. Please note that this script expects a fully qualified path including the bit you want to move for the source, and the place you want to move it to for the target (i.e., not source=/ifs/data/somethingdir/something target=/ifs/data/otherdir/something).

Of course, there’s a bit of error checking you’ll pretty much start ignoring and answering y to all the time…

Then we find the quotas for the directories. What the findquota () function does is iterate backwards through the path until it finds an actual quota on it. I think this will break if you have nested quotas, but again, feel free to fix it up and let me know. It then echoes out which quota applies. Once it’s found both quota paths, it saves each hard threshold in a variable. Now we’ve got variables for the source quota directory, the target quota directory, and both of their hard thresholds.
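
To make the backwards walk concrete, here’s what the cut trick produces for a made-up path:

p=/ifs/data/projects/foo
echo $p | awk -F'/' '{print NF}'    #5 fields (the leading / gives an empty first field)
echo $p | cut -d'/' -f 1-5          #/ifs/data/projects/foo
echo $p | cut -d'/' -f 1-4          #/ifs/data/projects
echo $p | cut -d'/' -f 1-3          #/ifs/data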

From there, it’s an easy move to delete the quotas, move the actual data, and then put the quotas back.

Don’t forget to use the --container=yes flag on those isi quota quotas create commands if you don’t want to show your end users the entire size of the filesystem.

**** Please note, and I found this after I made this post… if you comment out the echo $QUOTA line in the findquota () function, it kinda breaks the whole script. And then deletes all of your quotas without asking you. So, uh, don’t comment that out. That echo is what populates the $SOURCEQUOTA and $TARGETQUOTA variables. ****

This script works as of OneFS 7.1. I make no guarantees they won’t switch around the isi commands again in their quest to make commands as long and convoluted as possible.