NetApp NFS Storage Migration between backends is Failing

Bug #1969531 reported by Felipe Rodrigues
Affects: Cinder
Status: In Progress
Importance: Medium
Assigned to: Matheus Andrade

Bug Description

Description
===========
Migrating a Cinder volume from one NetApp NFS backend to another (different SVM, same cluster) fails when both backends have a pool (FlexVol volume) with the same name.

In the NetApp NFS driver, a pool corresponds to a FlexVol volume in ONTAP, while a Cinder backend corresponds to an SVM. It is therefore possible to configure two backends, each pointing to a different SVM, whose pools are FlexVol volumes with the same name. In that case, migrating a Cinder volume from one backend to the other with storage-assisted migration fails.

Steps to reproduce
==================

- Configure two backends on different SVMs: BACKEND1 and BACKEND2 (a sample cinder.conf is shown after the pool listing below)
- Configure the pools of those backends with FlexVol volumes that share the same name (REPEATED_VOL)

Running cinder get-pools then lists two pools:
host@BACKEND1#10.10.10.1:/REPEATED_VOL
host@BACKEND2#10.10.10.2:/REPEATED_VOL
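
For reference, a minimal cinder.conf sketch for this layout could look like the one below. The SVM names, management address, credentials and shares-file paths are illustrative placeholders, not values from the original report:

[DEFAULT]
enabled_backends = BACKEND1,BACKEND2

[BACKEND1]
volume_backend_name = BACKEND1
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = svm1
netapp_server_hostname = cluster.example.com
netapp_login = admin
netapp_password = secret
nfs_shares_config = /etc/cinder/nfs_shares_backend1

[BACKEND2]
volume_backend_name = BACKEND2
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = svm2
netapp_server_hostname = cluster.example.com
netapp_login = admin
netapp_password = secret
nfs_shares_config = /etc/cinder/nfs_shares_backend2

Here /etc/cinder/nfs_shares_backend1 would contain 10.10.10.1:/REPEATED_VOL and /etc/cinder/nfs_shares_backend2 would contain 10.10.10.2:/REPEATED_VOL, so both pools end with the same FlexVol name.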

1) Create a volume:
cinder create 1 --volume-type netapp --name v1

2) Migrate the volume to the other backend:
cinder migrate v1 host@BACKEND2#10.10.10.2:/REPEATED_VOL
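
To check whether the migration finished, the volume's host and migration status can be inspected with the standard client (a generic verification step, not taken from the original report):

cinder show v1 | grep -E "os-vol-host-attr:host|migration_status"

On a successful storage-assisted migration, os-vol-host-attr:host should change to host@BACKEND2#10.10.10.2:/REPEATED_VOL and migration_status should end up as success.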

Expected result
===============

It should migrate the volume to the other backend.

Actual result
=============

It fails during the migration process. The error message from the log:

https://paste.opendev.org/show/btKB7T3lGj91Uz7hPrOz/

Changed in cinder:
importance: Undecided → Medium
tags: added: drivers migration netapp
Changed in cinder:
assignee: nobody → Matheus Andrade (matheusandrade777)
OpenStack Infra (hudson-openstack) wrote: Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/cinder/+/843018

Changed in cinder:
status: New → In Progress