Customizing your deployment

Manage NTP configuration

You can configure the time settings in the inventory file.

By default, the deployment doesn’t change your timezone, but it enables the NTP service and sets four NTP servers:

ntp
---
all:
  hosts:
  
  vars:
    ntp_enabled: true
    ntp_manage_config: true
    ntp_manage_timezone: false
    ntp_timezone: "Etc/UTC"
    ntp_area: ""
    ntp_servers:
      - "0{{ ntp_area }}.pool.ntp.org iburst"
      - "1{{ ntp_area }}.pool.ntp.org iburst"
      - "2{{ ntp_area }}.pool.ntp.org iburst"
      - "3{{ ntp_area }}.pool.ntp.org iburst"
    ntp_restrict:
      - "127.0.0.1"
      - "::1"

If needed, you can override these settings, for example to manage the timezone and use country-specific pool servers:

custom ntp
---
all:
  hosts:
  
  vars:
    ntp_enabled: true
    ntp_manage_config: true
    ntp_manage_timezone: true
    ntp_timezone: "Europe/Paris"
    ntp_area: ".fr"
    ntp_servers:
      - "0{{ ntp_area }}.pool.ntp.org iburst"
      - "1{{ ntp_area }}.pool.ntp.org iburst"
      - "2{{ ntp_area }}.pool.ntp.org iburst"
      - "3{{ ntp_area }}.pool.ntp.org iburst"
    ntp_restrict:
      - "127.0.0.1"
      - "::1"

Manage storage volumes

You can customize the storage devices of each node in the host declaration part of the inventory. Each storage device can be used for either data or metadata. To make a storage device available to OpenIO, you first need to partition, format, and mount it. The choice of tools and methods is left to the operator, as long as the resulting configuration doesn’t conflict with the requirements. The resulting mount points and partition/device names are then used in the openio_data_mounts and openio_metadata_mounts variables, as shown below.
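
Device preparation is outside the scope of the deployment playbooks, but as an illustration, a data device could be formatted and mounted with a small Ansible play like the sketch below. The device name (/dev/vdb), mount point (/mnt/data1), and filesystem type (ext4) are placeholders, not requirements.

device preparation (sketch)
  ---
  - hosts: openio
    become: true
    tasks:
      # Create a filesystem on the raw device; pick the fstype you prefer
      - name: Format the data device
        filesystem:
          fstype: ext4
          dev: /dev/vdb

      # Mount it persistently; this path is then listed in openio_data_mounts
      - name: Mount the data device
        mount:
          path: /mnt/data1
          src: /dev/vdb
          fstype: ext4
          state: mounted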

In this example, the nodes have two mounted volumes to store data and one to store metadata:

storage definition
  ---
  all:
    hosts:
      node1:
        ansible_host: IP_ADDRESS_OF_NODE1
        openio_data_mounts:
          - mountpoint: /mnt/data1
            partition: /dev/vdb
          - mountpoint: /mnt/data2
            partition: /dev/vdc
        openio_metadata_mounts:
          - mountpoint: /mnt/metadata1
            partition: /dev/vdd
            meta2_count: 2
      node2:
        ansible_host: IP_ADDRESS_OF_NODE2
        openio_data_mounts:
          - mountpoint: /mnt/data1
            partition: /dev/vdb
          - mountpoint: /mnt/data2
            partition: /dev/vdc
        openio_metadata_mounts:
          - mountpoint: /mnt/metadata1
            partition: /dev/vdd
            meta2_count: 2
      node3:
        ansible_host: IP_ADDRESS_OF_NODE3
        openio_data_mounts:
          - mountpoint: /mnt/data1
            partition: /dev/vdb
          - mountpoint: /mnt/data2
            partition: /dev/vdc
        openio_metadata_mounts:
          - mountpoint: /mnt/metadata1
            partition: /dev/vdd
            meta2_count: 2
    vars:
      ansible_user: root

The meta2_count parameter defines how many meta2 instances you want on the device.

If you want to be able to lose one server (out of 3) and still create new containers, you need at least 3 meta2 instances up. In the example above, each node hosts two meta2 instances, six in total, so losing one node still leaves four up. Without this parameter, you can read data from an existing container, but you can’t create or delete containers.

Manage the SSH connection

If your nodes don’t all have the same SSH user configured, you can define a specific SSH user (or key) for each node. In the examples below, a global user and key are set for all nodes, and node3 overrides them (in Ansible, host variables take precedence over group variables).

global ssh
---
all:
  hosts:
  
  vars:
    ansible_user: my_user
    ansible_ssh_private_key_file: /home/john/.ssh/id_rsa

specific ssh
  ---
  all:
    hosts:
      node1:
        ansible_host: IP_ADDRESS_OF_NODE1
      node2:
        ansible_host: IP_ADDRESS_OF_NODE2
      node3:
        ansible_host: IP_ADDRESS_OF_NODE3
        ansible_user: my_other_user
        ansible_ssh_private_key_file: /home/john/.ssh/id_rsa_2
    vars:
      ansible_user: my_user
      ansible_ssh_private_key_file: /home/john/.ssh/id_rsa

Manage the data network interface

Servers can have several network interfaces. The most common setup is to have one interface for management and another for data, though both roles can also share the same interface.

global interface
  ---
  all:
    
    children:
      openio:
      
      vars:
        openio_bind_interface: bond0
        openio_bind_address: "{{ ansible_bond0.ipv4.address }}"

As with the SSH connection, these settings can be defined per server, for example:
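
The sketch below assumes node3 uses a dedicated eth2 interface for data while the other nodes keep the global bond0 settings; the interface names are placeholders to adapt to your hardware.

specific interface
  ---
  all:
    hosts:
      node3:
        ansible_host: IP_ADDRESS_OF_NODE3
        openio_bind_interface: eth2
        openio_bind_address: "{{ ansible_eth2.ipv4.address }}"
    children:
      openio:
        vars:
          openio_bind_interface: bond0
          openio_bind_address: "{{ ansible_bond0.ipv4.address }}"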

Manage S3 authentication

Set the user name, password, and roles in the inventory file.

S3 users
  ---
  all:
    
    children:
      openio:
      
      vars:
        # S3 users
        openio_oioswift_users:
          - name: "demo:demo"
            password: "DEMO_PASS"
            roles:
              - member

          - name: "test:tester"
            password: "testing"
            roles:
              - member
              - reseller_admin

Change user openio’s UID/GID

You can define the UID and the GID of the openio user in the inventory file.

uid/gid user openio
  ---
  all:
    hosts:
      
    vars:
      openio_user_openio_uid: 120
      openio_group_openio_gid: 220

Proxy

Set your proxy environment variables in the inventory file.

http proxy
---
all:
  hosts:
  
  vars:
    openio_environment:
      http_proxy: http://proxy.example.com:8080
      https_proxy: http://proxy.example.com:8080
      no_proxy: "localhost,172.28.0.2,172.28.0.3,172.28.0.4,172.28.0.5"