Customizing your deployment

Manage NTP configuration

You can configure time settings in the all.yml file. By default, the deployment does not change your timezone, but it enables the NTP service and sets four NTP servers:

all.yml
---
# NTP
ntp_enabled: true
ntp_manage_config: true
ntp_manage_timezone: false
ntp_timezone: "Etc/UTC"
ntp_area: ""
ntp_servers:
  - "0{{ ntp_area }}.pool.ntp.org iburst"
  - "1{{ ntp_area }}.pool.ntp.org iburst"
  - "2{{ ntp_area }}.pool.ntp.org iburst"
  - "3{{ ntp_area }}.pool.ntp.org iburst"
ntp_restrict:
  - "127.0.0.1"
  - "::1"
...

If needed, you can override these defaults with your own settings:

all.yml
---
# NTP
ntp_enabled: true
ntp_manage_config: true
ntp_manage_timezone: true
ntp_timezone: "Europe/Paris"
ntp_area: ".fr"
ntp_servers:
  - "0{{ ntp_area }}.pool.ntp.org iburst"
  - "1{{ ntp_area }}.pool.ntp.org iburst"
  - "2{{ ntp_area }}.pool.ntp.org iburst"
  - "3{{ ntp_area }}.pool.ntp.org iburst"
ntp_restrict:
  - "127.0.0.1"
  - "::1"
...
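
Conversely, if your hosts are already synchronized by other means, you can leave time management untouched by turning the same switches off. This is a minimal sketch using the variables shown above, assuming the role honors ntp_enabled as an on/off switch:

all.yml
---
# NTP: time synchronization is handled outside of this deployment
ntp_enabled: false
ntp_manage_config: false
ntp_manage_timezone: false
...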

Manage storage volumes

You can customize the storage devices of each node in the host_vars folder. In this example, the node has two mounted volumes to store data and one to store metadata:

node1.yml
---
openio_data_mounts:
  - { mountpoint: "/mnt/sda1" }
  - { mountpoint: "/mnt/sda2" }
openio_metadata_mounts:
  - { mountpoint: "/mnt/ssd1" }
...
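
Before running the deployment, it can be worth checking that each declared mountpoint is actually a mounted filesystem. A minimal sketch in Python; the paths are the example values from node1.yml above, and this check is not part of the playbook:

```python
import os

# Mountpoints declared in node1.yml (example values from above)
data_mounts = ["/mnt/sda1", "/mnt/sda2"]
metadata_mounts = ["/mnt/ssd1"]

def check_mounts(mountpoints):
    """Return the subset of paths that are NOT mounted filesystems."""
    return [p for p in mountpoints if not os.path.ismount(p)]

missing = check_mounts(data_mounts + metadata_mounts)
if missing:
    print("Not mounted:", ", ".join(missing))
else:
    print("All declared volumes are mounted")
```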

Manage the SSH connection

If one of your nodes does not use the same SSH user, you can define a specific SSH user (or key) for the deployment of this node.

node1.yml
---
ansible_user: my_user
ansible_ssh_private_key_file: /home/john/.ssh/id_rsa
#ansible_port: 2222
#ansible_python_interpreter: /usr/local/bin/python
...
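
Password authentication works the same way when key-based login is not available on a node; ansible_ssh_pass is a standard Ansible connection variable (it requires sshpass on the control machine). The values below are placeholders:

node2.yml
---
ansible_user: my_user
ansible_ssh_pass: my_password
ansible_port: 2222
...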

Manage the data network interface used

Globally, the interface used for data is defined by openio_bind_interface in openio.yml. You can define a specific interface for a node in its host_vars file.

node1.yml
---
openio_bind_interface: eth2
...
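
The global setting mentioned above lives in openio.yml and can be sketched like this (eth1 is a placeholder interface name):

openio.yml
---
openio_bind_interface: eth1
...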

Manage the data network IP address

If you prefer to define each IP address explicitly instead of using a global interface, you can set openio_bind_address in the host_vars files.

node1.yml
---
openio_bind_address: 172.16.20.1
...

Manage S3 authentication

Set the name, password, and roles of each S3 user in openio.yml.

openio.yml
---
# S3 users
openio_oioswift_users:
  - name: "demo:demo"
    password: "DEMO_PASS"
    roles:
      - admin
  - name: "test:tester"
    password: "testing"
    roles:
      - admin
      - reseller_admin
...
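
Since these credentials become your S3 access keys, a quick pre-deployment sanity check of the users list can help. A minimal sketch in Python; the validation rules here are assumptions for illustration, not something the playbook enforces:

```python
# Users as declared in openio.yml (example values from above)
users = [
    {"name": "demo:demo", "password": "DEMO_PASS", "roles": ["admin"]},
    {"name": "test:tester", "password": "testing",
     "roles": ["admin", "reseller_admin"]},
]

def validate_users(users):
    """Return a list of human-readable problems found in the users list."""
    problems = []
    seen = set()
    for u in users:
        name = u.get("name", "")
        if ":" not in name:
            problems.append(f"{name!r}: expected 'account:user' format")
        if name in seen:
            problems.append(f"{name!r}: duplicate user")
        seen.add(name)
        if len(u.get("password", "")) < 8:
            problems.append(f"{name!r}: password shorter than 8 characters")
        if not u.get("roles"):
            problems.append(f"{name!r}: no role assigned")
    return problems

for problem in validate_users(users):
    print(problem)
```

With the example values above, the check flags the short "testing" password.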

Docker nodes

If you don’t have physical nodes to test our solution, you can spawn some Docker containers with the provided script.

Example:
$ ./spawn_my_lab.sh 3
Replace the node definitions in the file "01_inventory.ini" with the following:
[all]
node1 ansible_host=11ce9e9fecde ansible_user=root ansible_connection=docker
node2 ansible_host=12cd8e2fxdel ansible_user=root ansible_connection=docker
node3 ansible_host=13fe6e4ehier ansible_user=root ansible_connection=docker

Then change the variables in group_vars/openio.yml to match the capacity of your hosts.