Lab 2: Configuring and Running Ansible

In lab two, we focus on creating static inventory files, configuring the ansible.cfg file, and running Ansible ad-hoc commands against our OpenStack instance.

Static Inventory File

While Ansible supports both static and dynamic inventories, this lab only discusses static inventory files, and they are the only type of inventory used throughout this lab series.

An inventory file is a text file that specifies the nodes to be managed by the control machine. Managed nodes may be listed by hostname or by IP address. The inventory file also allows nodes to be organized into groups by declaring a host group name within square brackets ([]).

Below is an example of an inventory file that creates two host groups: production and development. Each host group lists the hostnames and/or IP addresses of the nodes that belong to it.

[production]
prod1.example.com
prod2.example.com
prod3.example.com
10.19.1.200

[development]
dev1.example.com
dev2.example.com
dev3.example.com
10.19.0.100

It is considered best practice to group managed nodes into host groups for better organization. If there is a long list of nodes with a predictable naming pattern, such as prodX.example.com where X is the node number, Ansible inventory files provide the ability to specify ranges in hostnames or IP addresses. For example, the above inventory file can be simplified as follows:

[production]
prod[1:3].example.com
10.19.1.200

[development]
dev[1:3].example.com
10.19.0.100

The value [1:3] tells Ansible that the entry expands to multiple nodes, numbered 1 through 3 (prod1.example.com, prod2.example.com, and prod3.example.com).
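Ranges are not limited to single digits; they can also be zero-padded or alphabetic. The following snippet is purely illustrative (the group names and hostnames are not part of this lab):

[webservers]
www[01:50].example.com

[databases]
db-[a:f].example.com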

Aside from creating host groups that contain nodes and specifying ranges, a host group can also be defined as a collection of other host groups by using the :children suffix. For example, review the following inventory file:

[texas]
travis.example.com
williamson.example.com

[florida]
miami-dade.example.com
broward.example.com

[usa:children]
texas
florida

In the example above, the host group usa inherits the managed nodes within the texas host group and the florida host group.
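To see how a parent group expands, the inventory above could be queried with the --list-hosts option against the usa group. The file name usa-inventory below is only a placeholder for wherever that example inventory is saved, and the output should list all four counties:

$ ansible usa -i usa-inventory --list-hosts
  hosts (4):
    travis.example.com
    williamson.example.com
    miami-dade.example.com
    broward.example.com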

Guided Exercise: Lab Inventory File

While the previous section gave an introduction to inventory files, for the purposes of this exercise we will create a very simple inventory file that contains only the IP address of the OVH public cloud instance provided by your instructor.

  1. On your laptop/workstation, create a directory labeled openstack-ansible

    $ mkdir ~/openstack-ansible
  2. Change into the openstack-ansible directory and create a file labeled inventory

    $ cd ~/openstack-ansible
    $ vi inventory
    Example Inventory File
    <ip-provided-by-instructor>

In order to verify access to your host, test the above inventory with the following Ansible command on your laptop/workstation:

$ ansible all -i ~/openstack-ansible/inventory --list-hosts
  hosts (1):
    <ip-provided-by-instructor>
Note
The all keyword in the command is a special Ansible group that matches every managed node in the inventory. If the inventory file contained host groups that we wanted to target, we would replace all with the name of that host group in the example above.

YAML Configuration File

In order to use the OpenStack cloud modules from Ansible, a special YAML file is required at /etc/openstack/clouds.yml. This file includes the information provided within your keystonerc_demo and keystonerc_admin files that were created once the packstack installation completed. An example of the file is shown below. Notice that there are two clouds, representing two different types of access: mycloud represents our demo environment and admin represents our administrative cloud access.

clouds:
  mycloud:
    auth:
      auth_url: http://10.19.114.177:5000
      username: demo
      password: demo
      project_name: demo-tenant
    region_name: regionOne
  admin:
    auth:
      auth_url: http://10.19.114.177:5000
      username: admin
      password: YZagwYUFPMpm9pxWe9sWYvNJn
      project_name: admin
    region_name: regionOne
ansible:
  use_hostnames: True
  expand_hostvars: False
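To illustrate how the cloud names above are consumed, an Ansible OpenStack module such as os_server_facts can reference a cloud entry by name through its cloud parameter. The ad hoc command below is only a sketch and not part of the lab steps; it assumes the shade/openstacksdk Python library required by the OpenStack modules is installed on the machine running Ansible:

$ ansible localhost -m os_server_facts -a "cloud=mycloud"

Ansible looks up mycloud in the clouds file shown above and uses its auth_url, credentials, and project to talk to the OpenStack APIs.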

Ansible Configuration File

The Ansible configuration file consists of multiple sections, each containing settings defined as key-value pairs. When Ansible is installed, a default ansible.cfg file is provided at /etc/ansible/ansible.cfg. It is recommended to open /etc/ansible/ansible.cfg to view all the different options and settings that can be modified. For the purposes of this lab, our focus will be on two sections: [defaults] and [privilege_escalation].

Most changes to an ansible.cfg file are made within the [defaults] section, while the [privilege_escalation] section controls how operations that require escalated privileges are run.

When dealing with the ansible.cfg file, it can be stored in multiple locations. The locations include:

  • /etc/ansible/ansible.cfg

  • ~/.ansible.cfg

  • an ansible.cfg file in the local directory from which you run Ansible commands

The location of the configuration file is important because it dictates which ansible.cfg is used: Ansible uses the first file it finds, checking the local directory first, then ~/.ansible.cfg, and finally /etc/ansible/ansible.cfg.

It is best practice to store your ansible.cfg file in the same location as where the playbooks for this lab will be created.
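Once the inventory file from the previous exercise and the ansible.cfg file from the next exercise are in place, the lab directory should look similar to the following:

~/openstack-ansible/
├── ansible.cfg
└── inventory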

Guided Exercise: Create Ansible.cfg

In this exercise, create a local ansible.cfg file within the openstack-ansible directory.

  1. Change into the openstack-ansible directory.

    $ cd ~/openstack-ansible
  2. Create an ansible.cfg file with the following settings.

    $ vi ~/openstack-ansible/ansible.cfg
    Contents of ansible.cfg
    [defaults]
    remote_user = centos
    inventory = ./inventory
    
    [privilege_escalation]
    become = true

The OpenStack instance that has been created for you has a user named centos with sudo privileges. The file above tells Ansible to connect over SSH as the centos user, to use the file inventory for the IP address of our managed node, and to use sudo when privilege escalation is required.
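As a rough point of comparison only, the same behavior could be requested without an ansible.cfg file by passing the equivalent command-line options explicitly:

$ ansible all -i ./inventory -u centos --become -m ping

Storing these settings in ansible.cfg keeps the ad hoc commands in the rest of the lab short.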

Guided Exercise: Verify Connectivity to our OpenStack Instance

In order to ensure that our inventory file and ansible.cfg file have been properly set up, we will use Ansible ad hoc commands to run a simple task that verifies we can reach our OpenStack instance.

First, ensure that the appropriate ansible.cfg file is being used by running the following command:

$ ansible --version

ansible 2.5.0
  config file = /path/to/openstack-ansible/ansible.cfg
  configured module search path = [u'/home/rlopez/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.14 (default, Feb 27 2018, 20:43:24) [GCC 7.3.1 20180130 (Red Hat 7.3.1-2)]
Note
Ensure that the config file location points to the ansible.cfg located within our openstack-ansible directory.

Once the correct ansible.cfg has been identified, run the following Ansible ad hoc commands:

$ ansible all -m ping

<ip-provided-by-instructor> | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

$ ansible all -m command -a "hostname"
<ip-provided-by-instructor> | SUCCESS | rc=0 >>
<hostname>

The ansible all -m ping command uses Ansible's ping module, which verifies that Ansible can connect to the OpenStack instance and execute a module there; it reports success or failure for each host.

The ansible all -m command -a "hostname" command runs the command module (-m) with the argument (-a) hostname on the remote node. This should report the hostname provided by your instructor at the beginning of the lab.
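Because become = true was set in ansible.cfg, commands executed through the command module run with escalated privileges. An optional way to confirm this, not required by the lab, is to run whoami on the managed node, which should report root:

$ ansible all -m command -a "whoami"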