No auto-scaling after errors despite available resources
At the beginning everything worked fine: every new job spawned a new instance. After some time, errors occurred in OpenStack and instances could not be started. Slurm does not seem to recognize these OpenStack errors and keeps allocating jobs to the failed machines. Those jobs fail after some time, new jobs get submitted to the same failed instances, and so on. The only way I found to stop this loop is to delete the error instances by hand.
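The manual cleanup I end up doing looks roughly like this (node and server names are placeholders, and the exact `scontrol` states may differ by Slurm version):

```shell
# Delete the instance that OpenStack left in ERROR state (name is a placeholder)
openstack server delete compute-node-3

# Mark the matching Slurm node down so no further jobs are scheduled onto it
scontrol update NodeName=compute-node-3 State=DOWN Reason="openstack spawn error"

# Later, clear the DOWN state so Slurm may power the node up again
scontrol update NodeName=compute-node-3 State=RESUME
```

These commands require a running Slurm controller and OpenStack credentials, so they are only a sketch of the workaround, not something reproducible outside the cluster.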
After that, no new instances get spawned, even though I deleted old instances of the same size in my project. Starting these instances by hand works, so there is definitely enough capacity for this instance flavor in OpenStack; Slurm just never asks again.
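For anyone trying to reproduce or debug this: the power-up/retry behaviour should be governed by Slurm's power-saving settings in slurm.conf. A sketch of the relevant entries (example values and hypothetical script paths, not necessarily my exact configuration):

```
# slurm.conf power-saving / cloud-scheduling settings (example values)
ResumeProgram=/usr/local/bin/start_instance.sh      # script that asks OpenStack for a new instance
SuspendProgram=/usr/local/bin/stop_instance.sh      # script that deletes the instance again
ResumeTimeout=300        # seconds before a powered-up node that never responds is marked DOWN
ResumeFailProgram=/usr/local/bin/cleanup_failed.sh  # hypothetical hook to clean up failed spawns
SuspendTime=600          # idle seconds before a node is powered down
ReturnToService=2        # allow DOWN nodes to return to service once they respond
```

My suspicion is that nodes stuck from the failed spawns never get past this state machine, which would explain why Slurm stops requesting new instances even though OpenStack has capacity.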