Docker Pull Timeout

Is everyone else still having problems pulling images on TP5? No matter what image I try to pull, it always results in an unknown blob, and the 'pulling fs layer' step keeps retrying and timing out. I get the following when I try:

    PS C:\DATA> docker pull microsoft/windowsservercore
    Using default tag: latest
    latest: Pulling from microsoft/windowsservercore
    error pulling image configuration: Get 14d5d5ce11e9cccfe583ba3bff92/data?Expires=&Signature=f5lEwXStRiA4YDOOd9MvKBhwxI6GyZvf-2UubF-ERKhflVhCpIGlcjCDGnjrecw2crG1YLgzhBWtF8VZNaPfl0VkNMS45svWo-jAcG9-FKZ58AtSERO58zLlSapQqnRhrn2l9QjIls3uAGGFlDhlsWgJQsafMtn7Lt1bVcNnKc&Key-Pair-Id=APKAJECH5M7VWIS5YZ6Q: net/http: TLS handshake timeout
    PS C:\DATA>

I get the same error with other images, which is very strange.

OK, the server finally came back. Thank you :) That's exactly what was wrong: the 1.12 binary was in System32.
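That System32 detail is worth dwelling on: if a stale docker.exe sits earlier on the PATH (for example in C:\Windows\System32), the client you just upgraded may not be the one that actually runs. As a quick sanity check from PowerShell (standard commands, nothing specific to this setup):

    # Show which docker.exe PowerShell will actually resolve
    Get-Command docker | Format-List Source, Version

    # List every docker.exe on the PATH, in resolution order
    where.exe docker

If more than one copy shows up, remove or rename the stale one so the intended binary wins.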

Docker Pull Timeout

I'm getting this entry over and over again in the nginx error log:

    2015/08/14 17:34:57 [error] 1054#0: *24820 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 127.0.0.1, server: registry.thinknode.dev, request: "GET /v2/0b/math/manifests/sha256:10af39c70b7b7f3bc6d9539c38180d67a626c033487e706e5396d3c5af6c58a8 HTTP/1.1", upstream: "...", host: "registry.thinknode.dev"

The 'random id' is intentional. Trying some new things this morning.
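To narrow down whether nginx or the registry backend is the slow side, the same manifest request can be replayed by hand with curl. This is only a sketch: the sha256 digest below is the one from the log, and the direct-to-registry URL assumes the backend listens on the registry's default port 5000, which may not match this setup:

    # Through the nginx proxy
    curl -v https://registry.thinknode.dev/v2/0b/math/manifests/sha256:10af39c70b7b7f3bc6d9539c38180d67a626c033487e706e5396d3c5af6c58a8

    # Straight to the registry backend, bypassing nginx (port 5000 assumed)
    curl -v http://127.0.0.1:5000/v2/0b/math/manifests/sha256:10af39c70b7b7f3bc6d9539c38180d67a626c033487e706e5396d3c5af6c58a8

If the direct request is also slow, the timeout is in the registry itself rather than in the proxy.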

We upgraded Docker to the latest version. Here is our new docker version output:

    Client:
     Version:      1.8.1
     API version:  1.20
     Go version:   go1.4.2
     Git commit:   d12ea79
     Built:        Thu Aug 13 02:35:49 UTC 2015
     OS/Arch:      linux/amd64

    Server:
     Version:      1.8.1
     API version:  1.20
     Go version:   go1.4.2
     Git commit:   d12ea79
     Built:        Thu Aug 13 02:35:49 UTC 2015
     OS/Arch:      linux/amd64

We also changed the registry to use silly authentication. The errors are a bit different, but they still seem to be timeout-related, and they still happen intermittently.
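For reference, 'silly' auth is the registry v2 test-only authentication mode; enabling it in the registry's config.yml looks roughly like this (the realm and service values here are arbitrary placeholders):

    auth:
      silly:
        realm: silly-realm
        service: silly-service

It only checks that an Authorization header is present at all, so it is handy for exercising the auth code path in tests but should never be used in production.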

OK, here is where I'm at. I have upgraded Docker from 1.7.0 to 1.8.1 (the latest), and I added these lines to the nginx configuration:

    proxy_connect_timeout 60;
    proxy_send_timeout    60;
    proxy_read_timeout    60;
    send_timeout          60;

It still only works sometimes. And so, after quite a bit of trial and error (and a little bit of luck), it seems the issue is fixed if I change the storage: cache: layerinfo setting in config.yml from inmemory to redis (specifically, I also changed the layerinfo key to blobdescriptor, though I believe those two are synonymous).
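For anyone wanting to apply the same change, here is a minimal sketch of the relevant config.yml sections, assuming a Redis instance on localhost at the default port; the filesystem driver shown is just an example storage backend:

    storage:
      cache:
        blobdescriptor: redis   # was: layerinfo: inmemory
      filesystem:
        rootdirectory: /var/lib/registry
    redis:
      addr: localhost:6379

The top-level redis block tells the registry where to reach Redis; without it, the blobdescriptor cache has nothing to talk to.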

I'm not sure why this fixes the issue, but after making this change, our tests are reliably passing without incident. Perhaps the inmemory caching was having an issue under heavy load during our automated test suite?

Docker Connection Timeout

Either way, I'm closing this issue, since there is really no reason not to use redis for caching (that is obviously what we were planning to use in production, so it isn't a big deal to use it in development as well).
