Hi Ikram,
glad that it works for you.
Now, regarding your problem: I don't have enough visibility into your deployment setup to give you a clear answer. To give a more informed one, I would need to know:
- Where is the MLflow server deployed? (Is it with our charm?)
- What's the object store?
- What's the relational DB?
- How is mlflow-server deployed (what server parameters do you use)?
- Can you store and retrieve smaller models?
- What's the size of the model you are trying to retrieve?
To increase the timeout, set the `MLFLOW_HTTP_REQUEST_TIMEOUT` environment variable in the container. You can also try setting `GUNICORN_CMD_ARGS` to "--timeout 600", e.g. by running something like:
docker run -p 5000:5000 -e GUNICORN_CMD_ARGS="--timeout 600" docker.io/ubuntu/mlflow:2.1.1_1.0-22.04 mlflow server --host 0.0.0.0
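On the client side, the timeout can be raised in the same way before any MLflow call is made. A minimal sketch (the model URI in the comment is a placeholder, not your actual model):

```python
import os

# MLflow reads MLFLOW_HTTP_REQUEST_TIMEOUT (in seconds) for its HTTP
# requests; set it before the client issues any request to the server.
os.environ["MLFLOW_HTTP_REQUEST_TIMEOUT"] = "600"

# With the variable in place, a subsequent retrieval such as
#   mlflow.pyfunc.load_model("models:/<name>/<version>")
# in this process will use the 600 s timeout instead of the default.
print(os.environ["MLFLOW_HTTP_REQUEST_TIMEOUT"])
```

Note that this only lengthens how long the client waits; if the server-side gunicorn worker still times out, you also need the `GUNICORN_CMD_ARGS` setting above.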
Let me know if it helped.
Michal