
REX-Ray + Isilon Troubleshooting

Posted at 2017-06-16

When I installed REX-Ray + Isilon for the first time, I ran into a lot of errors and struggled to resolve them, so I'm posting the troubleshooting log here for the record.

Initial configuration parameters

These parameters looked fine to me, but they caused a lot of errors, so don't use this configuration as an example.

[root@sunny ~]# cat  /etc/rexray/config.yml_for_Isilon
libstorage:
  service: isilon
isilon:
  endpoint: https://192.168.20.40:8080
  group: wheel
  username: root
  password: ****
  insecure: true
  volumePath: /ifs/volumes
  nfsHost: 192.168.20.40
  dataSubnet: 192.168.20.0

Issue 1: REX-Ray failed to create a new volume

After setting up the /etc/rexray/config.yml file and starting the REX-Ray service, the following error showed up when creating a REX-Ray volume.

[root@sunny ~]# rexray volume create rexray_isilon_1 --size=20
WARN[0000] executor not supported                        host=unix:///var/run/libstorage/480463663.sock server=rainbow-leg-mu service=isilon storageDriver=libstorage time=1497501440712
WARN[0000] cannot get local deviecs                      error=executor not supported host=unix:///var/run/libstorage/480463663.sock integrationDriver=linux osDriver=linux server=rainbow-leg-mu service=isilon storageDriver=libstorage time=1497501440753 txCR=1497501440 txID=d5c2a9b5-284c-49b0-5cb8-c2fcce550e78
FATA[0000] error creating volume                         error.status=404 error.inner.resourceID="rexray_isilon_1" volume=rexray_isilon_1
Check whether the API endpoint is working

Ran "rexray volume ls", then the following result came back. Even though it throwbacked a few warnings, seemingly API got information from the directory set as volumePath in the config file. So I assumed Isilon API was working normally.

[root@sunny ~]# rexray volume ls
WARN[0000] executor not supported                        host=unix:///var/run/libstorage/025799782.sock server=puzzle-hawk-tm service=isilon storageDriver=libstorage time=1497503127890
WARN[0000] cannot get local deviecs                      error=executor not supported host=unix:///var/run/libstorage/025799782.sock integrationDriver=linux osDriver=linux server=puzzle-hawk-tm service=isilon storageDriver=libstorage time=1497503127927 txCR=1497503127 txID=443f84df-3b2d-4101-41a0-d9638f34503a
ID          Name        Status     Size
.snapshot   .snapshot   available  0
README.txt  README.txt  available  0
data        data        available  0
home        home        available  0
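
As a cross-check, the Isilon API can also be queried directly with curl, independent of REX-Ray. This is a minimal sketch assuming the endpoint and credentials from config.yml; the /platform/1/cluster/config path is a commonly used OneFS Platform API endpoint and may vary between OneFS versions.

# Query the OneFS Platform API directly; -k skips certificate verification
# (matching "insecure: true" in config.yml), -u prompts for the password.
curl -k -u root https://192.168.20.40:8080/platform/1/cluster/config
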
Set Isilon NFS access privileges

Assuming the API was working normally, there had to be some other reason why REX-Ray failed to create a new volume under the volumePath directory.
I googled Isilon NFS privileges and found the link below.

https://github.com/codedellemc/libstorage/issues/236
To use Isilon, create a new directory named volumes under menu File System -> File System Explorer. Use all defaults for RW access.
Next go to menu item Protocols -> UNIX Sharing (NFS). Create a New Export. Select the /ifs/volumes folder. Enable Mounting of sub-directories. Change the Root Permissions from nobody to root. Root can be found under FILE:system.

So I went to the Isilon OneFS management UI and changed the NFS export to enable:

  • Mount access to subdirectory
  • Root user mapping to "root" user

(Screenshot: Isilon OneFS NFS export settings)

But I still got the same error. Looking at the error log, it said "error=executor not supported", so next I suspected that REX-Ray was not properly set up to use Isilon NFS.

Mount Isilon NFS

I tried to mount the Isilon NFS export on the REX-Ray server, and this error came back.

[root@sunny ~]# mount -t nfs 192.168.20.40:/ifs/volumes /mnt/isilon
mount: wrong fs type, bad option, bad superblock on 192.168.20.40:/ifs/volumes,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

This means the server was not ready to mount NFS, so I installed nfs-utils.

yum install nfs-utils

After that, the NFS export could be mounted successfully and was writable.

[root@sunny ~]# mount -t nfs 192.168.20.40:/ifs/volumes  /mnt/isilon
[root@sunny ~]# ls /mnt/isilon/
[root@sunny ~]# vi /mnt/isilon/test_at_isilon_ifs_volumes
[root@sunny ~]# ls /mnt/isilon/
test_at_isilon_ifs_volumes
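
Before mounting, it can also help to confirm that the export is visible from the client at all. A minimal check with showmount, which is installed as part of nfs-utils (no captured output here, just the command I would run):

# List the NFS exports published by the Isilon node
showmount -e 192.168.20.40
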

Issue 2: REX-Ray service failed to start

After the configuration change, however, the REX-Ray service failed to start.

[root@sunny ~]# rexray status
● rexray.service - rexray
   Loaded: loaded (/etc/systemd/system/rexray.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2017-06-15 16:30:52 JST; 6s ago
  Process: 7819 ExecStart=/usr/bin/rexray start -f (code=exited, status=2)
 Main PID: 7819 (code=exited, status=2)
(omitted)
Jun 15 16:30:52 sunny.weather.local systemd[1]: Unit rexray.service entered failed state.
Jun 15 16:30:52 sunny.weather.local systemd[1]: rexray.service failed.

To investigate this, I used debug mode as shown below. The "-l" option sets the logging level; by setting it to "debug", REX-Ray shows debug information while running a command.

rexray start -f -l debug
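
If you want debug output every time the service runs, not only for a single manual start, the logging level can also be set persistently in config.yml. A sketch, assuming REX-Ray's logLevel property behaves the same as "-l" on the command line:

# /etc/rexray/config.yml (excerpt) - persistent debug logging
rexray:
  logLevel: debug
libstorage:
  service: isilon
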
Set the CIDR explicitly

In the debug output from the failed service start, there was the following line saying the CIDR was invalid.

ERRO[0001] error: api call failed                        error.dataSubnet=192.168.20.0 host=unix:///var/run/libstorage/602344592.sock route=serviceInspect server=plume-bolt-ki time=1497513046332 tls=false txCR=1497513046 txID=44eeacc4-6b3d-4e76-7a69-1acdeef71f00
DEBU[0001] api call error json                           apiErr={"message":"invalid data subnet","status":500,"error":{"dataSubnet":"192.168.20.0","inner":"invalid CIDR address: 192.168.20.0"}} host=unix:///var/run/libstorage/602344592.sock route=serviceInspect server=plume-bolt-ki time=1497513046333 tls=false txCR=1497513046 txID=44eeacc4-6b3d-4e76-7a69-1acdeef71f00
PANI[0001] error initializing instance ID cache          inner.inner.dataSubnet=192.168.20.0 inner.status=500

So I set the CIDR explicitly in "dataSubnet" in the config.yml file, as "192.168.20.0/24". As a result, the REX-Ray service started successfully. In the listing below, "test_at_isilon_ifs_volumes" is the file created during the Isilon NFS mount test earlier.

[root@sunny ~]# rexray volume ls
ID                          Name                        Status     Size
test_at_isilon_ifs_volumes  test_at_isilon_ifs_volumes  available  0
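
For reference, the only change needed at this step was the dataSubnet line in /etc/rexray/config.yml (the full working file is in the memo at the end):

isilon:
  # dataSubnet must be a CIDR address, not a bare network address
  dataSubnet: 192.168.20.0/24   # was: 192.168.20.0
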

Issue 3: REX-Ray still failed to create a new volume

Everything seemed to be fine, but unfortunately I still got an error when creating a new volume.

[root@sunny ~]# rexray volume create rexray_isilon_1 --size=20
FATA[0000] error creating volume                         error.inner.resourceID="rexray_isilon_1" error.status=404 volume=rexray_isilon_1

Debug mode returned an enormous amount of log output, but nothing in it helped me get past this issue.

[root@sunny ~]# rexray volume create rexray_isilon_1 --size=20 -l debug
INFO[0000] updated log level                             logLevel=debug
DEBU[0000] os.args                                       time=1497516974875 val=[rexray volume create rexray_isilon_1 --size=20 -l debug]
DEBU[0000] activating libStorage                         cmd=create time=1497516974875
DEBU[0000] parseSafeHost - no change                     postParse=unix:///var/run/libstorage
(omitted)
{{end}} time=1497516975398
FATA[0000] error creating volume                         error.inner.resourceID="rexray_isilon_1" error.status=404 volume=rexray_isilon_1

Was this a privilege problem? The libStorage documentation says the account used to access the Isilon cluster needs the following privileges. But since I was using "root" for REX-Ray, that shouldn't have been the problem.

The account used to access the Isilon cluster must be in a role with the following privileges:
Namespace Access (ISI_PRIV_NS_IFS_ACCESS)
Platform API (ISI_PRIV_LOGIN_PAPI)
NFS (ISI_PRIV_NFS)
Restore (ISI_PRIV_IFS_RESTORE)
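
If you use an account other than root, the privileges of its role can be checked on the cluster itself. A hedged sketch using the OneFS CLI; "SystemAdmin" is just an example role name and the exact isi syntax may differ between OneFS versions:

# Show which privileges a role carries (run on the Isilon cluster)
isi auth roles view SystemAdmin
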

Solution: Upgrade REX-Ray

I had no clue at all at that point, and there was nothing more I could do on my own. The only thing left was to ask the REX-Ray developers for help, so I posted about this issue at https://github.com/codedellemc/rexray/issues/.

Then I got a reply from a developer saying, roughly, "this behavior was a regression in REX-Ray 0.9.0, which was released last Friday, so it should be fixed in the latest release."

So I upgraded REX-Ray to the latest version.
https://github.com/codedellemc/rexray/blob/master/.docs/about/release-notes.md

[root@sunny ~]# curl -sSL https://dl.bintray.com/emccode/rexray/install | sh

rexray has been installed to /usr/bin/rexray

REX-Ray
-------
Binary: /usr/bin/rexray
Flavor: client+agent+controller
SemVer: 0.9.1
OsArch: Linux-x86_64
Branch: v0.9.1
Commit: 2373541479478b817a8b143629e552f404f75226
Formed: Sat, 10 Jun 2017 04:23:38 JST

libStorage
----------
SemVer: 0.6.1
OsArch: Linux-x86_64
Branch: v0.9.1
Commit: fd26f0ec72b077ffa7c82160fbd12a276e12c2ad
Formed: Sat, 10 Jun 2017 04:23:05 JST

After the upgrade, a new volume was finally created successfully.

[root@sunny ~]# rexray volume create rexray_isilon_1 --size=2
ID               Name             Status     Size
rexray_isilon_1  rexray_isilon_1  available  0
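
Since the Isilon driver represents each volume as a directory under volumePath, the new volume can also be checked from the NFS mount created earlier. The directory name below assumes the driver uses the volume name as-is:

# The new volume should appear as a directory under the mounted volumePath
ls /mnt/isilon/
# expected to include: rexray_isilon_1
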

Memo: Proper configuration parameters

This is the final version of the /etc/rexray/config.yml file.

[root@sunny ~]# cat  /etc/rexray/config.yml_for_Isilon
libstorage:
  service: isilon
isilon:
  endpoint: https://192.168.20.40:8080
  group: wheel
  username: root
  password: ****
  insecure: true
  volumePath: /ifs/volumes
  nfsHost: 192.168.20.40
  dataSubnet: 192.168.20.0/24