
Manila/docs/Setting up DevStack with Manila on Fedora 20


Setting up Manila + DevStack on Fedora 20

Goal

Document the steps needed to set up DevStack with OpenStack Manila on F20.

Prerequisites

F20 installed on a virtual machine or a physical system.

In this document I am using a VM as the F20 system. My DevStack is therefore hosted inside a VM, and the instances (a.k.a. guests) created by DevStack will be VMs inside a VM (a.k.a. nested KVM).

It is best to create an F20 VM with at least 4G of RAM, 4 vcpus, and enough disk space (50G in my case).

Disable selinux or set it to permissive mode.
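For example, a minimal way to do that (this sketch assumes the stock Fedora /etc/selinux/config path):

    [root@devstack-large-vm ~]# setenforce 0     # switch to permissive for the running system
    [root@devstack-large-vm ~]# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config     # persist across reboots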

Install and run DevStack (for Kilo and later devstack)

Starting with Kilo, Manila can be configured in devstack using the devstack plugin mechanism. Follow the steps mentioned in KiloDevstack to get Manila up and running in devstack.
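As a rough illustration of what the plugin mechanism looks like (a sketch only; the repository URL, branch, and the extra neutron-related services are assumptions, so follow KiloDevstack for the authoritative local.conf), the relevant fragment of local.conf would be along these lines:

    [[local|localrc]]
    # enable the Manila devstack plugin (URL/branch are placeholders)
    enable_plugin manila https://github.com/openstack/manila
    # this guide assumes neutron rather than nova-network
    disable_service n-net
    enable_service q-svc q-agt q-dhcp q-l3 q-meta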

Sanity checks and troubleshooting

  1. If ./stack.sh did not succeed for you, try to figure out what the script is complaining about from the last error you see. Sometimes it fails because some system service was not started. Ideally DevStack should start all the system services it needs, but there can be corner cases. In that case, start the required service using systemctl. For example:

    [stack@devstack-large-vm ~]$ sudo systemctl start rabbitmq-server.service

    and so on. I have also seen that sometimes just re-running ./stack.sh works!

  2. Otherwise, search on Google :) or ask on openstack-dev@lists.openstack.org with the [DevStack] tag in the mail subject so the right people notice it.

  3. Another option is to ask on the #openstack-dev channel hosted on irc.freenode.net.

  4. Assuming ./stack.sh ran successfully, the next step is to do some basic sanity checks and set up development shell environments. DevStack (by default) arranges the consoles of all the services in a multi-window screen session, which can be accessed with:

    [stack@devstack-large-vm ~]$ screen -x stack

    This brings up the screen session, with each service and its corresponding window listed at the bottom of the screen.

    Ctrl-A " brings up the window selection list; select the service whose console you want to enter and press Enter.

  5. Make sure the n-net service is not listed in the window selection list, since we disabled nova-network.

  6. Make sure all the other services are up and running by going into each service's console.

  7. DevStack sets up admin and demo tenants, so in addition to the terminal hosting the screen windows, I typically open two more terminals to my DevStack VM and run the scripts below to get admin and demo shells, which make it quick to run openstack commands with admin or demo tenant privileges later on.

    For the terminal with admin privileges:

    [root@devstack-large-vm ~]# su - stack
    
    [stack@devstack-large-vm ~]$ cat ~/mytools/setenv_admin
    # source this file to set env and then run os cmds like `cinder list` etc
    export OS_USERNAME=admin
    export OS_TENANT_NAME=admin
    export OS_PASSWORD=abc123
    export OS_AUTH_URL=http://192.168.122.219:5000/v2.0/
    export PS1=$PS1\[admin\]\ 
    
    [stack@devstack-large-vm ~]$ source ~/mytools/setenv_admin
    [stack@devstack-large-vm ~]$ [admin]

    For the terminal with demo privileges:

    [root@devstack-large-vm ~]# su - stack
    
    [stack@devstack-large-vm ~]$ cat ~/mytools/setenv_demo
    # source this file to set env and then run os cmds like `cinder list` etc
    export OS_USERNAME=demo
    export OS_TENANT_NAME=demo
    export OS_PASSWORD=abc123
    export OS_AUTH_URL=http://192.168.122.219:5000/v2.0/
    export PS1=$PS1\[demo\]\ 
    
    [stack@devstack-large-vm ~]$ source ~/mytools/setenv_demo
    [stack@devstack-large-vm ~]$ [demo]
  8. Now do a sanity check using some basic openstack commands in your admin and demo shells.

    [stack@devstack-large-vm ~]$ [demo] cinder list
    +----+--------+--------------+------+-------------+----------+-------------+
    | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
    +----+--------+--------------+------+-------------+----------+-------------+
    +----+--------+--------------+------+-------------+----------+-------------+
    
    [stack@devstack-large-vm ~]$ [demo] nova list
    +----+------+--------+------------+-------------+----------+
    | ID | Name | Status | Task State | Power State | Networks |
    +----+------+--------+------------+-------------+----------+
    +----+------+--------+------------+-------------+----------+
    
    [stack@devstack-large-vm ~]$ [demo] manila list
    +----+------+------+-------------+--------+-----------------+
    | ID | Name | Size | Share Proto | Status | Export location |
    +----+------+------+-------------+--------+-----------------+
    +----+------+------+-------------+--------+-----------------+
    
    [stack@devstack-large-vm ~]$ [demo] glance image-list
    +--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
    | ID                                   | Name                            | Disk Format | Container Format | Size      | Status |
    +--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
    | c3c32496-0b90-4520-a9d4-b9341afa5993 | cirros-0.3.2-x86_64-uec         | ami         | ami              | 25165824  | active |
    | 62a9b748-352f-41a6-9081-25dd48319da8 | cirros-0.3.2-x86_64-uec-kernel  | aki         | aki              | 4969360   | active |
    | e7591991-8bc8-470a-a15c-723031e7b809 | cirros-0.3.2-x86_64-uec-ramdisk | ari         | ari              | 3723817   | active |
    | 5d470fc2-39e3-461d-a4ea-b1b5de795604 | ubuntu_1204_nfs_cifs            | qcow2       | bare             | 318701568 | active |
    +--------------------------------------+---------------------------------+-------------+------------------+-----------+--------+
    • Note: the ubuntu_1204_nfs_cifs image is added to DevStack by the Manila scripts.
  9. Sometimes I see the manila share service (m-shr) erroring out. From its console window you can try to debug the exception/error that caused it to fail. Most of the time I have seen it fail due to a networking-related exception, probably caused by a race between when q-svc starts and when m-shr starts. Just restarting m-shr has almost always worked for me.

  10. To restart a failed service, go to its service console window, press the Up-arrow key once (just once!) to bring back the last command that was run, and press Enter. In general, it is a good idea to restart a failed service and check whether it works before concluding that it has really failed and needs further debugging.

  11. Another issue you may hit is the Cinder volume service c-vol warning that it could not initialize the default LVM iSCSI driver. This is typically because the loop device for the stack-volumes VG was not created as a PV. Follow the steps below to create the loop device PV for c-vol.

    [stack@devstack-large-vm ~]$ [admin] sudo pvs
    PV         VG           Fmt  Attr PSize PFree
    /dev/loop0 stack-shares lvm2 a--  8.20g 8.20g
    
    [stack@devstack-large-vm ~]$ [admin] sudo losetup -f --show /opt/stack/data/stack-volumes-backing-file
    /dev/loop1
    
    [stack@devstack-large-vm ~]$ [admin] losetup -a
    /dev/loop0: []: (/opt/stack/data/stack-shares-backing-file)
    /dev/loop1: []: (/opt/stack/data/stack-volumes-backing-file)
    
    [stack@devstack-large-vm ~]$ [admin] sudo vgs
    VG            #PV #LV #SN Attr   VSize  VFree
    stack-shares    1   0   0 wz--n-  8.20g  8.20g
    stack-volumes   1   0   0 wz--n- 10.01g 10.01g

    Now go to the c-vol service window in the screen session, kill the c-vol service with Ctrl-C, and restart it by re-running the last command (available with the Up-arrow key). c-vol should no longer complain about an uninitialized driver.

Rerun / rejoin DevStack

(after rebooting/restarting your DevStack VM/system)

  1. If you reboot and/or restart the DevStack VM/host, you can rejoin the same DevStack setup as follows (assuming you have logged back in as root).

    [root@devstack-large-vm ~]# su - stack
    [stack@devstack-large-vm ~]$ cd devstack
    • Note: in general, it is a good idea to check whether the stack-volumes VG exists and, if it doesn't, create it before running rejoin-stack.sh. That ensures you don't hit issue #11 from the Sanity checks and troubleshooting section above. (A combined one-line check is sketched just below.)
    [stack@devstack-large-vm ~]$ sudo losetup -f --show /opt/stack/data/stack-volumes-backing-file
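    A combined one-line version of that check (just a sketch that re-uses the vgs and losetup commands shown elsewhere in this guide; the backing-file path is DevStack's default):

    [stack@devstack-large-vm ~]$ sudo vgs stack-volumes || sudo losetup -f --show /opt/stack/data/stack-volumes-backing-file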

    Now run rejoin-stack.sh to re-create and join your existing devstack setup.

    [stack@devstack-large-vm ~]$ ./rejoin-stack.sh
  2. Before doing the above, it is a good idea to check that a few important system services are running and, if not, start them with systemctl. The openstack-status command gives a quick overview of the system services.

    [stack@devstack-large-vm devstack]$ openstack-status
    == Support services ==
    mysqld:                                 inactive  (disabled on boot)
    libvirtd:                               active
    openvswitch:                            active
    dbus:                                   active
    rabbitmq-server:                        active
    
    [stack@devstack-large-vm devstack]$ sudo systemctl start mysqld.service
    
    [stack@devstack-large-vm devstack]$ openstack-status
    == Support services ==
    mysqld:                                 active    (disabled on boot)
    libvirtd:                               active
    openvswitch:                            active
    dbus:                                   active
    rabbitmq-server:                        active
    • Note: you can use the chkconfig command to make sure these services start automatically at system boot, though for some reason that does not work for the mysqld service.
  3. Similarly, you may also see some openstack services not starting properly, possibly due to races between the different service invocations and dependencies such as the VG not being present. See the Sanity checks and troubleshooting section above for how to fix that.

    Once ./rejoin-stack.sh succeeds, all the openstack services in the screen session should be working without errors. As always, follow the steps in the Sanity checks section above to confirm that the DevStack setup is indeed working.

Create a Nova instance

  1. We will create a Nova instance (a.k.a. VM / guest) using the ubuntu image present in glance image-list. Before doing that, a few changes are needed in /etc/nova/nova.conf, as described below.

    For some reason (possibly nested kvm being broken on F20), the Nova instance may hang while booting; if so, remove virt_type = kvm. To do that:

    In the [libvirt] section, make sure virt_type = qemu

    Sometimes I have seen nova-scheduler not picking (a.k.a. filtering in) the DevStack host, probably due to insufficient memory/cpu resources, which makes instance creation fail. Since ours is an all-in-one (AIO) development setup, we want our DevStack VM/host to always be picked by nova-scheduler, even when memory/cpu resources are low. To ensure that, it is best to do the following:

    In the [DEFAULT] section, append scheduler_default_filters = AllHostsFilter

    Don't forget to restart the n-cpu and n-sch services for the above nova.conf changes to take effect. (The relevant config fragment is sketched below.)
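    Putting the two changes together, the relevant /etc/nova/nova.conf fragment looks roughly like the sketch below (any other options DevStack already wrote into these sections stay as they are):

     [DEFAULT]
     # let nova-scheduler always pick this all-in-one host, even when resources are low
     scheduler_default_filters = AllHostsFilter

     [libvirt]
     # plain qemu instead of (nested) kvm, which can hang on F20
     virt_type = qemu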

  2. Switch to the demo tenant shell and create a Nova instance using the following commands.

    [stack@devstack-large-vm ~]$ [demo] nova keypair-add --pub_key ~/.ssh/id_rsa.pub mykey
    
    [stack@devstack-large-vm ~]$ [demo] nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
    +-------------+-----------+---------+-----------+--------------+
    | IP Protocol | From Port | To Port | IP Range  | Source Group |
    +-------------+-----------+---------+-----------+--------------+
    | tcp         | 22        | 22      | 0.0.0.0/0 |              |
    +-------------+-----------+---------+-----------+--------------+
    
    [stack@devstack-large-vm ~]$ [demo] nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
    +-------------+-----------+---------+-----------+--------------+
    | IP Protocol | From Port | To Port | IP Range  | Source Group |
    +-------------+-----------+---------+-----------+--------------+
    | icmp        | -1        | -1      | 0.0.0.0/0 |              |
    +-------------+-----------+---------+-----------+--------------+
    
    
    [stack@devstack-large-vm ~]$ [demo] nova boot --flavor m1.micro --image ubuntu_1204_nfs_cifs --key-name mykey --security-groups default myvm_ubuntu
  3. Wait for the instance to reach the ACTIVE/Running state, then ssh into it for a sanity check.

    [stack@devstack-large-vm ~]$ [demo] nova list
     +--------------------------------------+-------------+--------+------------+-------------+------------------+
     | ID                                   | Name        | Status | Task State | Power State | Networks         |
     +--------------------------------------+-------------+--------+------------+-------------+------------------+
     | f92c51fd-de36-402f-b072-a0e515116892 | myvm_ubuntu | ACTIVE | -          | Running     | private=10.0.0.4 |
     +--------------------------------------+-------------+--------+------------+-------------+------------------+

    OpenStack uses the neutron service to put Nova instances on a private subnet. As you can see, the instance IP is 10.x.x.x, a different subnet from your DevStack VM/host subnet.

    The private subnet is created by neutron using a combination of network namespaces, linux bridges, and openvswitch bridges. So the instance cannot be reached with plain ssh; instead, you need to use the network namespace and ssh from within it, as shown below.

    [stack@devstack-large-vm ~]$ [demo] ip netns
    qrouter-7587cea0-4015-4a18-a191-20ce7be410e4
    qdhcp-26f7e398-39e7-465f-8997-43062a825c27  
    
    [stack@devstack-large-vm ~]$ [demo] sudo ip netns exec qdhcp-26f7e398-39e7-465f-8997-43062a825c27 ssh ubuntu@10.0.0.4

    If everything is set up correctly, you should be able to ssh to the instance successfully using the above command.

    • Note: the password for the ubuntu user of the ubuntu_1204_nfs_cifs image is ubuntu. You can use sudo to run commands as root inside the instance.

Create a Manila share and access it from the Nova instance

  1. Create a new share network for the tenant's use.

    • Note: a share network is a private L2 subnet for Manila shares, created using the neutron service and associated with the tenant's private subnet, so that multi-tenancy is achieved with L2-level isolation.

      [stack@devstack-large-vm ~]$ [demo] neutron net-list
       +--------------------------------------+---------+--------------------------------------------------+
       | id                                   | name    | subnets                                          |
       +--------------------------------------+---------+--------------------------------------------------+
       | 8031f472-2b64-430c-8131-7aad456ebfbb | private | 77343a5f-f553-4e20-af42-698890d8a269 10.0.0.0/24 |
       | b5f39b46-6d75-4df2-a2d0-eaa410b184fd | public  | 6bb017ac-bfbf-425d-803b-31b297c4604c             |
       +--------------------------------------+---------+--------------------------------------------------+
      
      
      
      [stack@devstack-large-vm ~]$ [demo] neutron subnet-list 
       +--------------------------------------+----------------+-------------+--------------------------------------------+
       | id                                   | name           | cidr        | allocation_pools                           |
       +--------------------------------------+----------------+-------------+--------------------------------------------+
       | 77343a5f-f553-4e20-af42-698890d8a269 | private-subnet | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} |
       +--------------------------------------+----------------+-------------+--------------------------------------------+
      
      
       [stack@devstack-large-vm ~]$ [demo] manila share-network-create --neutron-net-id 8031f472-2b64-430c-8131-7aad456ebfbb  --neutron-subnet-id 77343a5f-f553-4e20-af42-698890d8a269 --name share_network_for_10xxx --description "Share network for 10.0.0.0/24 subnet"
      
      
       [stack@devstack-large-vm ~]$ [demo] manila share-network-list
       +--------------------------------------+-------------------------+--------+
       |                  id                  |           name          | status |
       +--------------------------------------+-------------------------+--------+
       | 085c596f-feac-4539-97cd-393279e99098 | share_network_for_10xxx |  None  |
       +--------------------------------------+-------------------------+--------+
  2. Create a new Manila share (a.k.a. export).

    By default Manila uses the GenericShareDriver, which uses the Cinder service to create a new cinder volume, exports it as a block device, runs mkfs on it, and exports its filesystem as an NFS share. All of this happens transparently inside a service VM created and managed by Manila!

    [stack@devstack-large-vm ~]$ [demo] grep share_driver /etc/manila/manila.conf
     share_driver = manila.share.drivers.generic.GenericShareDriver
    
    [stack@devstack-large-vm ~]$ [demo] manila create --name cinder_vol_share_using_nfs --share-network-id  085c596f-feac-4539-97cd-393279e99098  NFS 1
    
    [stack@devstack-large-vm ~]$ [demo] manila list
     +--------------------------------------+----------------------------+------+-------------+-----------+---------------------------------------------------------------+
     |                  ID                  |            Name            | Size | Share Proto |   Status  |                        Export location                        |
     +--------------------------------------+----------------------------+------+-------------+-----------+---------------------------------------------------------------+
     | 1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 | cinder_vol_share_using_nfs |  1   |     NFS     | available | 10.254.0.3:/shares/share-1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 |
     +--------------------------------------+----------------------------+------+-------------+-----------+---------------------------------------------------------------+
    • Note: 10.254.0.3 is the IP of the service VM and /shares/share-1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 is the export path.
  3. Allow the Nova instance to access the share.

    [stack@devstack-large-vm ~]$ [demo] manila access-allow 1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 ip 10.0.0.4 
    • Note: as part of access-allow, Manila makes sure the service VM exports the export path only to the specified tenant IP. (A quick way to verify the rule is sketched below.)
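    To double-check that the rule was applied, you can list the share's access rules (a sketch; manila access-list shows a share's access rules, and the new rule should eventually show up as active):

    [stack@devstack-large-vm ~]$ [demo] manila access-list 1edf541e-5fc5-49c4-8931-6eb8ecaed7c3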
  4. Log in to the Nova instance and mount the share.

    [stack@devstack-large-vm ~]$ [demo] sudo ip netns exec qdhcp-26f7e398-39e7-465f-8997-43062a825c27 ssh ubuntu@10.0.0.4
    • Note: the password is ubuntu

      ubuntu@ubuntu:~$ sudo mount -t nfs -o vers=4  10.254.0.3:/shares/share-1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 /mnt
      ubuntu@ubuntu:~$ df -h
       Filesystem                                                     Size  Used Avail Use% Mounted on
       /dev/vda1                                                      1.4G  524M  793M  40% /
       udev                                                            56M  4.0K   56M   1% /dev
       tmpfs                                                           24M  360K   23M   2% /run
       none                                                           5.0M     0  5.0M   0% /run/lock
       none                                                            59M     0   59M   0% /run/shm
       10.254.0.3:/shares/share-1edf541e-5fc5-49c4-8931-6eb8ecaed7c3 1008M   34M  924M   4% /mnt
    • Note: if everything went as expected, you should be able to mount the Manila share inside the Nova instance successfully, as shown above.
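    As an optional extra check (a sketch; the file name is arbitrary), write a file into the mounted share and make sure it shows up:

      ubuntu@ubuntu:~$ sudo touch /mnt/manila_write_test && ls -l /mnt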

    Good luck!

Troubleshooting

  1. manila create ... errors out due to an exception in _attach_volume.

    The m-shr log shows the following exception.

     Traceback (most recent call last):
       File "/opt/stack/manila/manila/openstack/common/rpc/amqp.py", line 433, in _process_data
         **args)
       File "/opt/stack/manila/manila/openstack/common/rpc/dispatcher.py", line 148, in dispatch
         return getattr(proxyobj, method)(ctxt, **kwargs)
       File "/opt/stack/manila/manila/share/manager.py", line 165, in create_share
         self.db.share_update(context, share_id, {'status': 'error'})
       File "/usr/lib64/python2.7/contextlib.py", line 24, in exit
         self.gen.next()
       File "/opt/stack/manila/manila/share/manager.py", line 159, in create_share
         context, share_ref, share_server=share_server)
       File "/opt/stack/manila/manila/share/drivers/generic.py", line 132, in create_share
         volume = self._attach_volume(self.admin_context, share, server, volume)
       File "/opt/stack/manila/manila/share/drivers/service_instance.py", line 112, in wrapped_func
         return f(self, *args, **kwargs)
       File "/opt/stack/manila/manila/share/drivers/generic.py", line 198, in _attach_volume
         % volume['id'])
     ManilaException: Failed to attach volume 2a5bf78f-313d-463e-9b07-bb7a98080ce1

    Meanwhile, the c-vol log shows the following exception.

    2014-06-16 16:39:38.440 DEBUG cinder.openstack.common.processutils [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c
    795095] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdd
    a from (pid=4623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:142
    2014-06-16 16:39:38.981 DEBUG cinder.openstack.common.processutils [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c
    795095] Result was 107 from (pid=4623) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:167
    2014-06-16 16:39:38.981 WARNING cinder.brick.iscsi.iscsi [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 b65a066f32df4aca80fa9a6d5c795095] Fa
    iled to create iscsi target for volume id:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda: Unexpected error while running command.
    Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf tgt-admin --update iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda
    Exit code: 107
    Stdout: 'Command:\n\ttgtadm -C 0 --lld iscsi --op new --mode target --tid 1 -T iqn.2010-10.org.openstack:volume-8bfd424d-9877-4c20-a9d1-058c06b9bdda\nexited with code: 107.\n'
    Stderr: 'tgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\ntgtadm: failed to send request hdr to tgt daemon, Transport endpoint is not connected\n'
    2014-06-16 16:39:38.982 ERROR oslo.messaging.rpc.dispatcher [req-083cd582-1b4d-4e7c-a70c-2c6282d8d799 d9bb59a6a2394483902b382a991ffea2 
    b65a066f32df4aca80fa9a6d5c795095] Exception during message handling: Failed to create iscsi target for volume volume-2a5bf78f-313d-463e-9b07-bb7a98080ce1.
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/manager.py", line 783, in initialize_connection
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     volume)
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 524, in create_export
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     return self._create_export(context, volume)
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 533, in _create_export
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     data = self.target_helper.create_export(context, volume, volume_path)
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/iscsi.py", line 53, in create_export
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     chap_auth)
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/brick/iscsi/iscsi.py", line 219, in create_iscsi_target
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher     raise exception.ISCSITargetCreateFailed(volume_id=vol_id)
    2014-06-16 16:39:38.982 TRACE oslo.messaging.rpc.dispatcher ISCSITargetCreateFailed: Failed to create iscsi target for volume volume-2a5bf78f-313d-463e-9b07-bb7a98080ce1.
    • As we can see, tgt-admin could not create the iSCSI target for the cinder volume, and hence the cinder volume could not be attached to the manila service VM.

    • The solution is to check whether tgtd.service is running and, if not, start it.

       [root@devstack-large-vm ~]# systemctl status tgtd.service
      tgtd.service - tgtd iSCSI target daemon
          Loaded: loaded (/usr/lib/systemd/system/tgtd.service; disabled)
          Active: inactive (dead)
      
      
      [root@devstack-large-vm ~]# systemctl start tgtd.service
      [root@devstack-large-vm ~]# chkconfig tgtd on
      Note: Forwarding request to 'systemctl enable tgtd.service'.
      ln -s '/usr/lib/systemd/system/tgtd.service' '/etc/systemd/system/multi-user.target.wants/tgtd.service'
      [root@devstack-large-vm ~]# 
      
      
      [root@devstack-large-vm ~]# systemctl status tgtd.service
      tgtd.service - tgtd iSCSI target daemon
         Loaded: loaded (/usr/lib/systemd/system/tgtd.service; enabled)
         Active: active (running) since Tue 2014-06-17 05:50:42 UTC; 29s ago
       Main PID: 10623 (tgtd)
         CGroup: /system.slice/tgtd.service
                 └─10623 /usr/sbin/tgtd -f
      Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: librdmacm: Warning: couldn't read ABI version.
      Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: librdmacm: Warning: assuming: 4
      Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: librdmacm: Fatal: unable to get RDMA device list
      Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: tgtd: iser_ib_init(3355) Failed to initialize RDMA; load kernel modules?
      Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: tgtd: work_timer_start(146) use timer_fd based scheduler
      Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: tgtd: bs_init_signalfd(271) could not open backing-store module directory /usr/lib64/tgt/backing-store
      Jun 17 05:50:37 devstack-large-vm.localdomain tgtd[10623]: tgtd: bs_init(390) use signalfd notification
      Jun 17 05:50:42 devstack-large-vm.localdomain systemd[1]: Started tgtd iSCSI target daemon.
    • Now manila create ... should go through fine!

  2. Deleting the last Manila share does not shut down the service VM, even with delete_share_server_with_last_share=True set in /etc/manila/manila.conf.

    You can delete the Manila share as follows:

    [stack@devstack-large-vm ~]$ [demo] manila delete de45c4db-aa89-4887-ab3c-153d7b909708
    
    [stack@devstack-large-vm ~]$ [demo] manila list
    +----+------+------+-------------+--------+-----------------+
    | ID | Name | Size | Share Proto | Status | Export location |
    +----+------+------+-------------+--------+-----------------+
    +----+------+------+-------------+--------+-----------------+

    List the VMs of all tenants using admin privileges:

    [stack@devstack-large-vm ~]$ [admin] nova list --all-tenants
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
    | ID                                   | Name                                                                  | Status  | Task State | Power State | Networks                          |
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
    | 1317b8e6-0d02-4e6b-934a-225752dd809c | manila_service_instance_backend1_8c2fc21d-3dd8-42a2-8363-cacc726df9fa | ACTIVE  | -          | Running    | manila_service_network=10.254.0.3 |
    | 28c1aeff-ed98-4d53-b9ba-36028558ebf3 | myvm_ubuntu                                                           | SHUTOFF | -          | Shutdown     | private=10.0.0.3                  |
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+

    As you can see, the service VM (manila_service_instance_xxxx) is still running. The service VM is created using the service tenant and the nova user, so switch to those credentials and shut down the service VM as follows:

    • Note: it is best to create a new source file for this.

      [stack@devstack-large-vm ~]$ cat ~/mytools/setenv_service
      # source this file to set the service tenant's privileges
      export OS_USERNAME=nova
      export OS_TENANT_NAME=service
      export OS_PASSWORD=abc123
      export OS_AUTH_URL=http://192.168.122.219:5000/v2.0/
      
      export PS1=$PS1\[service\]\ 
      
      [stack@devstack-large-vm ~]$ source ~/mytools/setenv_service
      
      [stack@devstack-large-vm ~]$ [service] nova stop 1317b8e6-0d02-4e6b-934a-225752dd809c

    Now switch to admin and check the status of the service VM:

    [stack@devstack-large-vm ~]$ [admin] nova list --all-tenants
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
    | ID                                   | Name                                                                  | Status  | Task State | Power State | Networks                          |
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
    | 1317b8e6-0d02-4e6b-934a-225752dd809c | manila_service_instance_backend1_8c2fc21d-3dd8-42a2-8363-cacc726df9fa | SHUTOFF | -          | Shutdown    | manila_service_network=10.254.0.3 |
    | 28c1aeff-ed98-4d53-b9ba-36028558ebf3 | myvm_ubuntu                                                           | SHUTOFF | -          | Shutdown    | private=10.0.0.3                  |
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
  3. The service VM does not restart after rejoin-stack.sh, so creating a new share errors out.

    As part of rejoining DevStack, the service VM should be restarted automatically if there is at least one active share in the Manila DB. Sometimes that does not happen, and we need to restart the service VM manually for Manila create and other APIs to work.

    Check whether the service VM is up:

    [stack@devstack-large-vm ~]$ [admin] nova list --all-tenants
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
    | ID                                   | Name                                                                  | Status  | Task State | Power State | Networks                          |
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+
    | 1317b8e6-0d02-4e6b-934a-225752dd809c | manila_service_instance_backend1_8c2fc21d-3dd8-42a2-8363-cacc726df9fa | SHUTOFF | -          | Shutdown    | manila_service_network=10.254.0.3 |
    | 28c1aeff-ed98-4d53-b9ba-36028558ebf3 | myvm_ubuntu                                                           | SHUTOFF | -          | Shutdown    | private=10.0.0.3                  |
    +--------------------------------------+-----------------------------------------------------------------------+---------+------------+-------------+-----------------------------------+

    Start the service VM using the right credentials:

    [stack@devstack-large-vm ~]$ source ~/mytools/setenv_service
    
    [stack@devstack-large-vm ~]$ [service] nova start 1317b8e6-0d02-4e6b-934a-225752dd809c
    
    [stack@devstack-large-vm ~]$ [service] nova list
    +--------------------------------------+-----------------------------------------------------------------------+--------+------------+-------------+-----------------------------------+
    | ID                                   | Name                                                                  | Status | Task State | Power State | Networks                          |
    +--------------------------------------+-----------------------------------------------------------------------+--------+------------+-------------+-----------------------------------+
    | 1317b8e6-0d02-4e6b-934a-225752dd809c | manila_service_instance_backend1_8c2fc21d-3dd8-42a2-8363-cacc726df9fa | ACTIVE | -          | Running     | manila_service_network=10.254.0.3 |
    +--------------------------------------+-----------------------------------------------------------------------+--------+------------+-------------+-----------------------------------+

    Now manila create ... and other operations should work fine.