Using iSCSI disks under Fedora 15

My main development server had 80Gb unused in one of its PVs, so I started thinking about how I could use it. It wasn't needed on that server, but as it was part of a large internal disk I couldn't just physically move it elsewhere.

So of course I immediately remembered that I hadn't played with iSCSI since I last played with virtual tape drives (covered in an earlier post), and realised that having a 'physical' hard drive on one server be dependent upon another remote server staying up (I don't have a SAN, I'm using spare space on my dev server here remember) was a really silly idea; so I immediately set out to do just that. Why? Just because I hadn't done it before.

Most of the info I needed came from http://fedoraproject.org/wiki/Scsi-target-utils_Quickstart_Guide

What I started with was
(initiator) Desktop server: phoenix 192.168.1.187 1Gb network card
(target) Disk (Dev) Server: falcon 192.168.1.183 10Mb network card , server had lots of free disk space
Connected via: a 10Mb/100Mb network switch
Unrelated: yes, I keep meaning to put a 100Mb card in my dev server; no comments on that please. Having only a 10Mb card on that server didn't cause any problems during my testing.

Anyway the first step was to create a new LV on falcon of 15Gb in the copious free space available there.
I also formatted it as an ext3 filesystem when I created it (so I could test it could be mounted locally, add test files etc).
None of that is covered in any detail in this post, which is about iSCSI rather than logical volume management. For the purpose of this post there was a test logical volume named /dev/VolGroup00/iSCSI01 created on server falcon, preformatted as ext3.
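
For completeness, creating and formatting such an LV would have looked roughly like the below (the volume group name and size are from my setup, adjust to yours):

# ---- on falcon: carve a 15Gb LV out of VolGroup00 and put an ext3 filesystem on it ----
lvcreate -L 15G -n iSCSI01 VolGroup00
mkfs.ext3 /dev/VolGroup00/iSCSI01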

Of course there were additional packages to install, which were simply

  • on falcon, the iSCSI target (the one serving the disk), ‘yum install scsi-target-utils’
  • on phoenix, the iSCSI initiator (the one that will use the disk), ‘yum install iscsi-initiator-utils’

One other consideration is the firewall: port 3260 needs to be opened so the initiator can connect to the target. For my test I just flushed the iptables rules on both servers to make them both wide open.
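
If you would rather not open things wide up, allowing just that port through iptables on the target should be enough; something along the lines of the below (a sketch only, slot it in ahead of any REJECT rule in your existing chain):

# ---- on the target (falcon): allow incoming iSCSI connections on tcp/3260 ----
iptables -I INPUT -p tcp --dport 3260 -j ACCEPT
# if you use the iptables service, save the rules so they survive a reboot
service iptables save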

On the target (serving the disk)

The target utils package installs a startup script for tgtd. Just enable it (if you want it to start automatically) and start it.

[root@falcon ~]# chkconfig tgtd on
[root@falcon ~]# service tgtd start
Starting tgtd (via systemctl):                             [  OK  ]

Then add a new target, and add the test LUN (my precreated LV) to that target.
Note: you can add multiple LUNs (LVs, physical disk partitions or, I believe, whole disks) to a single target name; for this test I just used my single test LV. Refer to the considerations section at the end of this post.
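
As an aside, adding a second LV as another LUN under the same target would presumably just be a repeat of the logicalunit command with the next LUN number, something like the below. The iSCSI02 LV is hypothetical (I never created one), and it assumes the target with tid 1 has already been created as shown further down.

# ---- hypothetical second LUN on the same target ----
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 2 -b /dev/VolGroup00/iSCSI02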

Anyway, on the target falcon I created a new target as below. The 2011-07 is just the date I created it, falcon is of course the hostname serving the disk (put in for self-documentation purposes), and I'm guessing the part after the ':' is the important bit, so I just called it scsi.pool1 in case I want to play with adding additional LUNs to that target one day to see what happens.

[root@falcon ~]# tgtadm --lld iscsi --mode target --op show
[root@falcon ~]# tgtadm --lld iscsi --mode target --op new --tid=1 --targetname iqn.2011-07.com.falcon:scsi.pool1
[root@falcon ~]# tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 -b /dev/VolGroup00/iSCSI01
[root@falcon ~]# tgtadm --lld iscsi --mode target --op show
Target 1: iqn.2011-07.com.falcon:scsi.pool1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: None
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 16106 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: /dev/VolGroup00/iSCSI01
    Account information:
    ACL information:

So now we get to the 'what caused problems' stage. Ignore the commands below that were entered at the initiator (phoenix) prompt; they are only there to show the errors I had (initiator setup is covered below). The commands issued on the target with tgtadm show what is possible, if you have the time to get them working.

What I did and backed out again

It is possible to limit an iSCSI target to a specific IP address or an IP range, as well as to require authentication in order to access the target disk.

# ---- allow connections only from local subnet ----
[root@falcon ~]# tgtadm --lld iscsi --mode target --op bind --tid 1 -I 192.168.0.0/24
# ---- create an account to use the target (target id (tid) 1) ----
[root@falcon ~]# tgtadm --lld iscsi --mode account --op show
[root@falcon ~]# tgtadm --lld iscsi --mode account --op new --user phoenix --password phoenix
[root@falcon ~]# tgtadm --lld iscsi --mode account --op bind --tid 1 --user phoenix
# ---- check ----
[root@falcon ~]# tgtadm --lld iscsi --mode account --op show
Account list:
    phoenix
[root@falcon ~]# tgtadm --lld iscsi --mode target --op show
Target 1: iqn.2011-07.com.falcon:scsi.pool1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: None
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 16106 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: /dev/VolGroup00/iSCSI01
    Account information:
        phoenix
    ACL information:
        192.168.0.0/24
[root@falcon ~]# 

On the iSCSI initiator (phoenix in this case) you should then update /etc/iscsi/iscsid.conf with the userid/password you applied to the target disk. That just didn't work for me (I tried many combinations, but I was in a hurry, so a manual somewhere may well explain what I was doing wrong); a bunch of errors is all I got when trying to test the connection from the initiator:
[root@phoenix iscsi]# iscsiadm -m discovery -t sendtargets -p falcon
iscsiadm: Connection to Discovery Address 192.168.1.183 failed
iscsiadm: Login I/O error, failed to receive a PDU
iscsiadm: retrying discovery login to 192.168.1.183
iscsiadm: Connection to Discovery Address 192.168.1.183 failed
iscsiadm: Login I/O error, failed to receive a PDU
iscsiadm: retrying discovery login to 192.168.1.183
It’s possible that user/password authentication isn’t fully implemented yet, but more likely I was missing something in the config file.
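
For reference, the sort of settings I was juggling in /etc/iscsi/iscsid.conf are the standard open-iscsi CHAP parameters shown below. I never found a combination that worked against my target, so treat this as a starting point rather than a recipe.

# ---- /etc/iscsi/iscsid.conf CHAP settings (I never got these working) ----
node.session.auth.authmethod = CHAP
node.session.auth.username = phoenix
node.session.auth.password = phoenix
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = phoenix
discovery.sendtargets.auth.password = phoenix

As for backing the restrictions out on the target, tgtadm does have unbind and delete operations; I believe the commands would be along the lines of the below, although in my case I just ended up redefining the target from scratch anyway (next section).

# ---- on falcon: back out the account and ACL bindings (untested, I recreated the target instead) ----
tgtadm --lld iscsi --mode account --op unbind --tid 1 --user phoenix
tgtadm --lld iscsi --mode account --op delete --user phoenix
tgtadm --lld iscsi --mode target --op unbind --tid 1 -I 192.168.0.0/24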

What I did instead that worked

Just don't complicate things by requiring login credentials; after all, it's my personal home network.
Instead, use the extra bind op shown below when you add the target, to let everybody use it.

[root@falcon iscsi]# tgtadm --lld iscsi --mode target --op show
[root@falcon iscsi]# tgtadm --lld iscsi --mode target --op new --tid=1 --targetname iqn.2011-07.com.falcon:scsi.pool1
[root@falcon iscsi]# tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 -b /dev/VolGroup00/iSCSI01
[root@falcon iscsi]# tgtadm --lld iscsi --mode target --op bind --tid 1 -I ALL

The initiator can then see the target:
[root@phoenix iscsi]# iscsiadm --mode discovery --type sendtargets --portal falcon
192.168.1.183:3260,1 iqn.2011-07.com.falcon:scsi.pool1

On the initiator

If you put authentication on the target (which I backed out), you will have to edit /etc/iscsi/iscsid.conf to try to get the darn thing working.
If, as I did, you decide not to use authentication, don't edit a thing; it will all just work like magic.

On the initiator you must enable the iSCSI services installed by yum, simply:

# ---- enable and start ----
[root@phoenix iscsi]# chkconfig iscsid on
[root@phoenix iscsi]# chkconfig iscsi on
[root@phoenix iscsi]# service iscsid start
Starting iscsid (via systemctl):                           [  OK  ]
[root@phoenix iscsi]#

After which discovery and login will just work.
Note: when you do the ‘login’ the iSCSI disk will become a physical disk on the initiator, so /var/log/messages should show it being discovered and you will have a new /dev/sdX disk on the server.

[root@phoenix iscsi]# iscsiadm --mode discovery --type sendtargets --portal falcon
192.168.1.183:3260,1 iqn.2011-07.com.falcon:scsi.pool1

[root@phoenix iscsi]# iscsiadm -m node -T iqn.2011-07.com.falcon:scsi.pool1 -p 192.168.1.183 --login
Logging in to [iface: default, target: iqn.2011-07.com.falcon:scsi.pool1, portal: 192.168.1.183,3260]
Login to [iface: default, target: iqn.2011-07.com.falcon:scsi.pool1, portal: 192.168.1.183,3260] successful.
[root@phoenix iscsi]# 

You can check that the new disk which suddenly appeared is the iSCSI one by looking in the /dev/disk/by-path directory:

[root@phoenix iscsi]# ls -la /dev/disk/by-path
total 0
drwxr-xr-x. 2 root root 140 Jun 21 19:14 .
drwxr-xr-x. 5 root root 100 Jun 21 17:17 ..
lrwxrwxrwx. 1 root root   9 Jun 21 19:14 ip-192.168.1.183:3260-iscsi-iqn.2011-07.com.falcon:scsi.pool1-lun-1 -> ../../sdb
lrwxrwxrwx. 1 root root   9 Jun 21 17:17 pci-0000:00:1f.1-scsi-0:0:1:0 -> ../../sr0
lrwxrwxrwx. 1 root root   9 Jun 21 17:17 pci-0000:00:1f.2-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx. 1 root root  10 Jun 21 17:17 pci-0000:00:1f.2-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root  10 Jun 21 17:17 pci-0000:00:1f.2-scsi-0:0:0:0-part2 -> ../../sda2
[root@phoenix iscsi]# 

To test it, create a test mount point, add it to /etc/fstab and mount it (change /dev/sdb to whatever your new disk appeared as). Using vi is of course better than echoing the new entry onto the end of fstab, but I want to highlight the noauto option: if you forget that, your next reboot will hang for a long time trying to mount a disk that is not available.
And of course after you have tested that it mounts, umount it and log out of the iSCSI target again.

echo "/dev/sdb	/test	ext3	_netdev,noauto	0	0" >> /etc/fstab
[root@phoenix iscsi]# mkdir /test
[root@phoenix iscsi]# mount /test
[root@phoenix iscsi]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
rootfs                36961864  10502896  24581364  30% /
udev                   1665200         0   1665200   0% /dev
tmpfs                  1672492       448   1672044   1% /dev/shm
tmpfs                  1672492       728   1671764   1% /run
/dev/mapper/vg_phoenix-lv_root
                      36961864  10502896  24581364  30% /
tmpfs                  1672492         0   1672492   0% /sys/fs/cgroup
tmpfs                  1672492       728   1671764   1% /var/lock
tmpfs                  1672492       728   1671764   1% /var/run
tmpfs                  1672492         0   1672492   0% /media
/dev/sda1               495844     51718    418526  11% /boot
/dev/mapper/vg_phoenix-lv_home
                      34091804  23615948   8744068  73% /home
/dev/sdb              15481840    169588  14525820   2% /test
[root@phoenix iscsi]# 
[root@phoenix iscsi]# umount /test
[root@phoenix iscsi]# iscsiadm -m node -T iqn.2011-07.com.falcon:scsi.pool1 -p 192.168.1.183 --logout
Logging out of session [sid: 1, target: iqn.2011-07.com.falcon:scsi.pool1, portal: 192.168.1.183,3260]
Logout of [sid: 1, target: iqn.2011-07.com.falcon:scsi.pool1, portal: 192.168.1.183,3260] successful.
[root@phoenix iscsi]# ls /dev/sdb*
ls: cannot access /dev/sdb*: No such file or directory
[root@phoenix iscsi]# 

As you can see from the last 'ls /dev/sdb*' command above, logging out of the iSCSI target also removed the 'physical disk' /dev entry on the initiator server.

Now repeat the steps above to mount it, and get the UUID of the iSCSI disk with 'blkid /dev/sdb' (or whatever your new disk appeared as). The reason is simply that you cannot guarantee what order disks will be detected in at reboot, so the device names will change. This is not an iSCSI hiccup but simply the way Fedora boots up and detects disks; the order does change (believe me it changes, even for internal SATA disks, as I found on a three-disk server), and it is supposed to. The developers did not spend all that effort implementing UUID support in /etc/fstab for fun; it is expected that physical devices will change names between reboots.
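
On my system the blkid output looked roughly like this (your UUID will of course be different):

blkid /dev/sdb
/dev/sdb: UUID="d46c135c-3a57-415b-8207-0e55f90e94d9" TYPE="ext3"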

Depending on what blkid gave you, your /etc/fstab entry should change to something like the below.

#/dev/sdb	/test	ext3	_netdev,noauto	0	0
UUID=d46c135c-3a57-415b-8207-0e55f90e94d9	/test	ext3	_netdev,noauto	0	0

So on subsequent remounts the iSCSI target will be mounted on the correct mount point based on the UUID.

To automatically login to the iSCSI target from the initiator

There is an interesting example of using UDEV rules to detect iSCSI disks and assign them to specific aliased device names at http://www.cyberciti.biz/tips/howto-centos-rhel4-iscsi-initiators.html, if you want to automatically log in to the iSCSI disks at server boot time and keep the /dev names consistent; but that is not going to work in my environment. To set up the initiator to automatically log in, you just update the target entry as below.

[root@phoenix iscsi]# iscsiadm -m node -T iqn.2011-07.com.falcon:scsi.pool1 \
                         -p 192.168.1.183 --op update -n node.startup -v automatic

You would need UDEV rules and a udev script to mount it correctly; that's explained in the URL above so I won't cover it here.

As I mentioned, that won't work for me, simply because my test iSCSI target is on my dev server, which may not always be powered on; and as I discovered, trying to mount a disk that isn't available at boot time hangs the boot startup for about 5 minutes until the mount gives up. I've also seen the same thing in the real world when Solaris and AIX servers were rebooted and remote NFS mounts were not available, so this issue applies to any type of network mount and is not Fedora specific.

My solution, while I am playing with iSCSI mounts, is just to run a test-and-mount script kicked off from my rc.local that makes sure (a) the remote server is available and (b) the iSCSI target is detectable, and only then tries to mount it via a UUID entry in my fstab. A simplified version of the script I use is below.

#!/bin/bash
#
# On boot see if the iSCSI disks we can mount are available.
# Mount if they are, otherwise don't.
#
ping -c1 192.168.1.183
if [ $? = 0 ];
then
   iscsiadm -m discovery -t sendtargets -p falcon
   if [ $? = 0 ];
   then
      iscsiadm -m node -T iqn.2011-07.com.falcon:scsi.pool1 -p 192.168.1.183 --login
      if [ $? = 0 ];
      then
         mount /test
      else
         echo "iSCSI LUN iqn.2011-07.com.falcon:scsi.pool1 is not available"
      fi
   else
      echo "No iSCSI target running on 192.168.1.183 at the moment"
   fi
else
   echo "falcon is offline, no iSCSI disks will be mounted"
fi
exit 0
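
To have that run at boot I just call it from rc.local, redirecting the output somewhere I can check later; something like the below (the script path and log file name are mine, put yours wherever you normally keep such things, and under systemd the rc.local file needs to be executable):

# ---- in /etc/rc.d/rc.local ----
/usr/local/bin/iscsi_test_mount.sh >> /var/log/iscsi_test_mount.log 2>&1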

Anyway, this post covered how easy it is to use iSCSI on Fedora (tested under Fedora 15) to make spare disk space on a remote 'target' server available as if it were a local disk on the 'initiator' server.

References:
http://fedoraproject.org/wiki/Scsi-target-utils_Quickstart_Guide
http://www.cyberciti.biz/tips/howto-centos-rhel4-iscsi-initiators.html

Considerations

  • You must open port 3260 on your firewalls
  • Don’t automount unless your remote server (or SAN) is always available or startup will hang
  • The important one: fdisk will of course report that the new iSCSI disk, /dev/sdb in my examples, does not have a valid partition table. That's fine; try 'fdisk -l' on any server and LVM volumes all report that, and as this test was using a precreated LVM volume that is exactly what I expected. However I don't see any reason why you couldn't fdisk /dev/sdb, format it up, and add it as a PV to be used alongside the existing physical internal disks (an untested sketch is below). The only reason I haven't tried that is that I don't want to break my dev server, which knows the backing store is an LV; I have a vague idea that using a remote server to fiddle about with its structure would break the LV on the dev server. One day, when I feel like writing a post on how to recover a corrupted LVM system, I will try it.
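
If you did want to try it (I have not), turning the iSCSI disk into a PV and adding it to an existing volume group on the initiator would presumably be along the lines of the below. Treat it as a completely untested sketch, and heed the warning above about the backing store already being an LV on the target.

# ---- untested: use the whole iSCSI disk as a PV on the initiator ----
pvcreate /dev/sdb                # the ext3 filesystem currently on it would be lost
vgextend vg_phoenix /dev/sdb     # vg_phoenix is the existing VG on my initiator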
