Storage provisioner timeouts spawning extra volumes on AWS
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| juju-core | Fix Released | High | Andrew Wilkins | 1.25.0 |
Bug Description
I'm testing the integration of the new storage support in 1.24, and I've run into an issue deploying to Amazon where the storage provisioner appears to time out and then re-request the creation of a new volume.
I attempted to create a three-node cluster; by the time I killed the environment, it had created 12 SSD volumes (per Amazon's console), but none of the units had progressed past the allocation state.
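The arithmetic above (12 volumes created for 3 units) is consistent with a provisioner that retries volume creation after a timeout without tracking the request already in flight, orphaning one volume per retry. A minimal sketch of that failure mode follows; the class and function names are hypothetical and this is not Juju's actual provisioner code, just an illustration under the assumption of four attempts per unit:

```python
import itertools


class FakeEC2:
    """Stand-in for the EC2 API: every create call succeeds eventually,
    but slowly enough that the caller's timeout fires first."""

    def __init__(self):
        self.volumes = []          # every volume ever created
        self._ids = itertools.count(1)

    def create_volume(self):
        vol_id = f"vol-{next(self._ids):04d}"
        self.volumes.append(vol_id)
        return vol_id


def provision(ec2, wanted, attempts_per_unit):
    """Buggy pattern: on timeout, re-request a brand-new volume instead of
    polling the one already being created, leaving an orphan per retry."""
    attached = []
    for _ in range(wanted):
        for attempt in range(attempts_per_unit):
            vol_id = ec2.create_volume()
            # Simulate slow EBS: every attempt but the last "times out".
            timed_out = attempt < attempts_per_unit - 1
            if not timed_out:
                attached.append(vol_id)
                break
    return attached


ec2 = FakeEC2()
attached = provision(ec2, wanted=3, attempts_per_unit=4)
print(len(attached), len(ec2.volumes))  # → 3 12
```

With four attempts per unit, 3 units end up attached but 12 volumes exist in the account, matching the count seen in the console. The usual fix for this class of bug is to make creation idempotent (tag the request and poll for an existing volume before creating another).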
Environment:
Vagrant VM w/trusty
Juju 1.24.3-trusty-amd64
Steps to recreate:
juju switch amazon
juju bootstrap
juju deploy -n3 --repository=
debug-log: http://
The charm used (though I suspect it's more the instance type+storage request at fault): https:/
A screenshot of the Volumes console, several minutes after the juju environment was destroyed:
http://
Changed in juju-core:
status: New → Triaged
importance: Undecided → High
assignee: nobody → Andrew Wilkins (axwalk)
milestone: none → 1.25.1

Changed in juju-core:
milestone: 1.25.1 → 1.25.0

Changed in juju-core:
status: In Progress → Fix Committed

Changed in juju-core:
status: Fix Committed → Fix Released
Storage is a bit "experimental" in 1.24; 1.25 is much more solid. Can you try with master? I'll see if I can reproduce it with 1.24 later today.