Somehow the swift backend can time out and stop an image from being uploaded. Debug-level errors from the registry look like this:
{"err.code":"unknown","err.detail":"swift: Timeout expired while waiting for segments of /docker/registry/v2/blobs/sha256/03/0330ca45a200e1fcef05ae97f434366d262a1c50b3dc053e7928b58dd37211dd/data to show up","err.message":"unknown error","go.version":"go1.10.4","http.request.host":"registry.jujucharms.com","http.request.id":"cdc35629-693b-452d-b6a6-60e214c4d9ca","http.request.method":"PUT","http.request.remoteaddr":"....","http.request.uri":"/v2/jamon/kubeflow-tf-hub/jupyterhub-image/blobs/uploads/92fe2adb-00f8-4515-8743-88bb23e76450?_state=....digest=sha256%3A0330ca45a200e1fcef05ae97f434366d262a1c50b3dc053e7928b58dd37211dd","http.request.useragent":"docker/18.06.1-ce go/go1.10.4 git-commit/e68fc7a kernel/4.15.0-42-generic os/linux arch/amd64 UpstreamClient(Go-http-client/1.1)","http.response.contenttype":"application/json; charset=utf-8","http.response.duration":"13.171153577s","http.response.status":500,"http.response.written":104,"level":"error","msg":"response completed with error","time":"2018-12-18T20:06:53.271720479Z","vars.name":"jamon/kubeflow-tf-hub/jupyterhub-image","vars.uuid":"92fe2adb-00f8-4515-8743-88bb23e76450"}
The "Timeout expired while waiting for segments" message and the zero-length data file make me think the issue is this upstream bug: https://github.com/docker/distribution/issues/1013
For example, in that failed upload, the files in question look like this in swift:
$ swift stat --lh docker-registry-blobs files/docker/registry/v2/repositories/juju/kubeflow-tf-hub/jupyterhub-image/_layers/sha256/0330ca45a200e1fcef05ae97f434366d262a1c50b3dc053e7928b58dd37211dd/link
Account: AUTH_18fdda09da1747f4885b940cadff4cc0
Container: docker-registry-blobs
Object: files/docker/registry/v2/repositories/juju/kubeflow-tf-hub/jupyterhub-image/_layers/sha256/0330ca45a200e1fcef05ae97f434366d262a1c50b3dc053e7928b58dd37211dd/link
Content Type: application/octet-stream
Content Length: 71
Last Modified: Tue, 11 Dec 2018 05:39:19 GMT
ETag: 239c76eecd4fd38d448173f554cb4a36
Accept-Ranges: bytes
X-Timestamp: 1544506758.53727
X-Trans-Id: txae919e1f3a5a4b0ea0622-005c1955de
$ swift stat --lh docker-registry-blobs files/docker/registry/v2/blobs/sha256/03/0330ca45a200e1fcef05ae97f434366d262a1c50b3dc053e7928b58dd37211dd/data
Account: AUTH_18fdda09da1747f4885b940cadff4cc0
Container: docker-registry-blobs
Object: files/docker/registry/v2/blobs/sha256/03/0330ca45a200e1fcef05ae97f434366d262a1c50b3dc053e7928b58dd37211dd/data
Content Type: application/octet-stream
Content Length: 0
Last Modified: Tue, 11 Dec 2018 05:39:15 GMT
ETag: "d41d8cd98f00b204e9800998ecf8427e"
Manifest: docker-registry-blobs/segments/2f6/46f636b65722f72656769737472792f76322f7265706f7369746f726965732f6a756a752f6b756265666c6f772d74662d6875622f6a7570797465726875622d696d6167652f5f75706c6f6164732f33353635356461622d346234652d343263382d613532312d6536316433336366613434632f64617461425d87ada26ee6929920326a4d60f263d0cd2c939385f948139e4791eb2d69a6da39a3ee5e6b4b0d3255bfef95601890afd80709
Accept-Ranges: bytes
X-Timestamp: 1544506754.17360
X-Trans-Id: tx2f2d17d03c0248a8a7444-005c1955fb
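To double-check that the segments never showed up, the Manifest header above names the segments location (container/prefix for a dynamic large object), so listing that prefix should come back empty or short. This is a sketch assuming the same python-swiftclient CLI used for the stat calls, with the prefix copied from the Manifest line above:
$ swift list docker-registry-blobs --prefix segments/2f6/46f636b65722f72656769737472792f76322f7265706f7369746f726965732f6a756a752f6b756265666c6f772d74662d6875622f6a7570797465726875622d696d6167652f5f75706c6f6164732f33353635356461622d346234652d343263382d613532312d6536316433336366613434632f64617461425d87ada26ee6929920326a4d60f263d0cd2c939385f948139e4791eb2d69a6da39a3ee5e6b4b0d3255bfef95601890afd80709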
Anyone pushing an image that contains the layer in question will run into the error, since the files in swift are named based on the sha256 hash. The upstream issue doesn't really have much in the way of fixes.
The workaround for now is to delete the files and then re-push the image in question, as sketched below.
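Roughly, that means deleting the zero-length blob data object and the corresponding _layers link, then pushing again. A sketch with the python-swiftclient CLI, using the container and object paths from the stat output above (swift delete normally removes the segments referenced by a manifest as well, unless --leave-segments is passed):
$ swift delete docker-registry-blobs files/docker/registry/v2/blobs/sha256/03/0330ca45a200e1fcef05ae97f434366d262a1c50b3dc053e7928b58dd37211dd/data
$ swift delete docker-registry-blobs files/docker/registry/v2/repositories/juju/kubeflow-tf-hub/jupyterhub-image/_layers/sha256/0330ca45a200e1fcef05ae97f434366d262a1c50b3dc053e7928b58dd37211dd/link
$ docker push registry.jujucharms.com/jamon/kubeflow-tf-hub/jupyterhub-image
The docker push line assumes the image name and registry host from the log entry above; substitute the actual tag being pushed.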