When I deploy a number of charms to MicroK8s running on an AWS instance, I'm also seeing about 1.5G of memory used. I suspect this is responsible for some CI errors where a Kubeflow deploy times out: when I later SSHed into the AWS instance, all of the pods were running properly, so it looks like the deploy was just very slow rather than actually failing.