Monday, February 29, 2016

Common Problems Using Ansible 2.0 OpenStack Modules

I've seen some recurring issues with the new Ansible 2.0 OpenStack modules popping up either on IRC or on the Ansible mailing lists. It's probably time to address these in a blog post.

Issue #1: Putting clouds.yaml in the current directory

As I discussed in a previous blog post, all of the new OpenStack modules support using a clouds.yaml file to hold the cloud authentication details. The library that reads this file, os-client-config, supports reading it from the current directory, among other places. I think this is what trips people up: they assume they can create the file in the same directory where they run their playbook and that it will be picked up.

This is not so. Why?

The answer lies in how Ansible works. It's not spelled out in a very obvious place, but the key behavior is noted in the documentation for this Ansible configuration variable: http://docs.ansible.com/ansible/intro_configuration.html#remote-tmp
"Ansible works by transferring modules to your remote machines, running them, and then cleaning up after itself."
The modules are copied to the remote_tmp directory on the target host (even if it is localhost) and run from there. Your clouds.yaml is NOT copied along with the module, so it will not be found.
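To make the failure concrete, here is a minimal sketch in plain Python of the kind of search order involved. This is not the actual os-client-config implementation, just an illustration; the list of search locations matches the ones discussed in this post, and the remote_tmp path shown is only an example:

```python
import os

# Illustrative sketch of where clouds.yaml is looked for. The real lookup
# is done by os-client-config; these three locations mirror the ones
# covered in this post.
def candidate_paths(cwd):
    return [
        os.path.join(cwd, "clouds.yaml"),  # current working directory of the process
        os.path.expanduser("~/.config/openstack/clouds.yaml"),
        "/etc/openstack/clouds.yaml",
    ]

# When Ansible executes a module, the module's working directory is under
# remote_tmp on the target host, NOT the directory holding your playbook.
# So the "current directory" candidate points somewhere like this:
print(candidate_paths("/home/user/.ansible/tmp/ansible-tmp-123")[0])
```

The first candidate resolves under remote_tmp, which is why a clouds.yaml sitting next to your playbook is never seen.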

The solution? Place clouds.yaml in one of the other locations on the target host where it can be found:
  • /etc/openstack/
  • ~/.config/openstack/
The second option uses the home directory of the user executing the playbook task. The simplest option is to use the /etc/openstack/ directory.
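For reference, a minimal clouds.yaml placed in /etc/openstack/ might look like the sketch below. The cloud name, auth URL, and credentials are all placeholders; substitute your own values:

```yaml
# /etc/openstack/clouds.yaml -- every value here is a placeholder
clouds:
  mycloud:
    auth:
      auth_url: https://example.com:5000/v2.0
      username: demo
      password: secret
      project_name: demo
```

A playbook task can then select these credentials with the module's cloud parameter, e.g. cloud: mycloud.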

Issue #2: Running Ansible from a virtual Python environment

This second issue also comes from a fundamental misunderstanding of how Ansible works.

New users of Ansible, understandably excited by such an awesome piece of software, will often begin experimenting with it by installing Ansible in a virtual Python environment rather than from a distro package or into the system Python. There's nothing wrong with doing this. But confusion sets in when they install all of the required Python libraries in that virtual environment, along with the version of Ansible they are testing, run their playbook (using localhost as the target host), and get a "missing library" or similar error message. But you KNOW that library is installed in the virtual environment! What gives?

As discussed in the issue above, Ansible copies modules to the target host and runs them there. The Python interpreter it uses to run the modules, by default, is the system Python (i.e., /usr/bin/python, not your virtual environment's interpreter). It doesn't care about your virtual environment because it treats even localhost as a remote machine.

How do you tell Ansible to use your virtual environment's Python when connecting to localhost? You have to set an option in your Ansible inventory file. The option you are looking for is ansible_python_interpreter. You will have to search the inventory documentation page for a reference to it; it is also discussed in the Ansible FAQ. Basically, you want an entry that looks something like:

localhost   ansible_python_interpreter=/path/to/venv/bin/python

That should force Ansible to use your virtual environment.
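If you prefer not to touch the inventory file, the same variable can also be set at the play level. This is a sketch only: the interpreter path and cloud name are placeholders, and os_auth is used simply because it is a small module that exercises the cloud credentials:

```yaml
# playbook.yml -- interpreter path and cloud name are placeholders
- hosts: localhost
  connection: local
  vars:
    ansible_python_interpreter: /path/to/venv/bin/python
  tasks:
    - name: Verify we can authenticate against the cloud
      os_auth:
        cloud: mycloud
```

Either way, the point is the same: the interpreter that runs the module must be the one with the OpenStack client libraries installed.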

1 comment:

  1. It's been pointed out to me that setting ansible_connection=local for the localhost in the inventory file should use the virtual environment Python for Issue #2. I have not confirmed that, but that is something you can try.
