Multi-Node Installation

Tip

New! Ultra-high-performance features. Learn more through our subscription plans.

Get in touch with the team or visit the page describing our plans.

Install a three-node on-premises object storage backend, using the deployment tools provided by OpenIO.

Requirements

Hardware

When run as the backend layer, OpenIO SDS is lightweight and requires few resources. The front layer consists of the gateways (OpenStack Swift, Amazon S3), and their services do not require many resources either.

  • CPU: any dual-core at 1 GHz or faster
  • RAM: 2 GB recommended
  • Network: 1 Gb/s NIC

Operating system

OpenIO supports the Linux distributions listed on our Supported Linux Distributions page.

System

  • Root privileges are required (using sudo).
  • SELinux or AppArmor must be disabled.
SELinux
$> sudo sed -i -e 's@^SELINUX=enforcing$@SELINUX=disabled@g' /etc/selinux/config
$> sudo setenforce 0
AppArmor
$> sudo service apparmor stop
$> sudo service apparmor teardown
$> sudo update-rc.d -f apparmor remove
  • All nodes must have different hostnames.
  • All nodes must run a version of Python greater than 3.6.
  • All mounted partitions used for data/metadata must support extended attributes. XFS is recommended.

If the device’s mountpoint is /mnt/data1, you can verify the presence and the type of the partition. In this example, the filesystem is SGI XFS:

$> df /mnt/data1
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/vdb       41931756 1624148  40307608   4% /mnt/data1

$> file -sL /dev/vdb
/dev/vdb: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)
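
You can also confirm that extended attributes actually work on the mount by setting one and reading it back. This is a quick sanity check, not part of the official procedure; setfattr and getfattr come from the attr package, which you may need to install first:

xattr check
$> touch /mnt/data1/xattr_test
$> setfattr -n user.oio.check -v ok /mnt/data1/xattr_test
$> getfattr -n user.oio.check /mnt/data1/xattr_test
# file: mnt/data1/xattr_test
user.oio.check="ok"
$> rm /mnt/data1/xattr_test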
  • The system must be up to date.

If you are running CentOS or Red Hat, keep your system up to date as follows:

RedHat
$> sudo yum update -y
$> sudo reboot

If you are using Ubuntu or Debian, keep your system up to date as follows:

Ubuntu
$> sudo apt update -y
$> sudo apt upgrade -y
$> sudo reboot

Network

  • All nodes must be connected to the same LAN through the specified interface (the first one by default).
  • The firewall must be disabled.
Red Hat - firewalld
$> sudo systemctl stop firewalld.service
$> sudo systemctl disable firewalld.service
Ubuntu - ufw
$> sudo ufw disable
$> sudo systemctl disable ufw.service

Setup

You only need to perform this setup on one of the nodes in the cluster (or your laptop).

  • Install git.
RedHat
$> sudo yum install git -y
Ubuntu
$> sudo apt install git -y
  • Clone the OpenIO ansible playbook deployment repository.
$> git clone https://github.com/open-io/ansible-playbook-openio-deployment.git --branch 19.10 oiosds
$> cd oiosds/products/sds
  • Install Ansible for the current user.
Ansible
$> python3 -m venv openio_venv
$> source openio_venv/bin/activate
$> pip install -r ansible.pip
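
To make sure the environment is ready, you can display the installed Ansible version from inside the virtualenv (a quick sanity check, not part of the official procedure):

ansible check
$> ansible --version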

Architecture

This playbook will deploy a multi-node cluster as shown below:

+-----------------+   +-----------------+   +-----------------+
|     OIOSWIFT    |   |     OIOSWIFT    |   |     OIOSWIFT    |
|      FOR S3     |   |      FOR S3     |   |      FOR S3     |
+-----------------+   +-----------------+   +-----------------+
|      OPENIO     |   |      OPENIO     |   |      OPENIO     |
|       SDS       |   |       SDS       |   |       SDS       |
+-----------------+   +-----------------+   +-----------------+

Installation

First, configure the inventory according to your environment:

  • Change the IP addresses and SSH user in the inventory.yml file.

    ip addresses
    ---
    all:
      hosts:
        node1:
          ansible_host: IP_ADDRESS_OF_NODE1 # Change it with the IP of the first server
        node2:
          ansible_host: IP_ADDRESS_OF_NODE2 # Change it with the IP of the second server
        node3:
          ansible_host: IP_ADDRESS_OF_NODE3 # Change it with the IP of the third server
    
    user ssh
    ---
    all:
      vars:
        ansible_user: root    # Change it accordingly
    
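Both fragments belong to the same inventory.yml. Combined, a minimal inventory looks like this (same placeholders as above):

combined inventory
---
all:
  hosts:
    node1:
      ansible_host: IP_ADDRESS_OF_NODE1
    node2:
      ansible_host: IP_ADDRESS_OF_NODE2
    node3:
      ansible_host: IP_ADDRESS_OF_NODE3
  vars:
    ansible_user: root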

Next, ensure you have SSH access to your nodes:

# generate an ssh key
$> ssh-keygen

# copy the key to all nodes
$> for node in <name-of-remote-server1> <name-of-remote-server2> <name-of-remote-server3>; do ssh-copy-id $node; done

# start an ssh-agent
$> eval "$(ssh-agent -s)"

# add the key to the agent
$> ssh-add ~/.ssh/id_rsa

# test connection without password
$> ssh <name-of-remote-server1>

Then, you can check that everything is configured correctly using this command:

RedHat
$> ansible all -i inventory.yml -bv -m ping
Ubuntu
$> ansible all -i inventory.yml -bv -m ping -e 'ansible_python_interpreter=/usr/bin/python3'
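
If connectivity and privilege escalation are correctly set up, each node should answer with a pong, similar to this (output abbreviated):

node1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}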

Finally, run these commands:

  • To download and install requirements:

    $> ./requirements_install.sh
    
  • To deploy and initialize the cluster:

    $> ./deploy_and_bootstrap.sh
    

Post-installation checks

All the nodes are configured to use openio-cli and aws-cli.

Run this check script on one of the nodes in the cluster: sudo /usr/bin/openio-basic-checks.

Sample output:

basic checks
$> sudo /usr/bin/openio-basic-checks
--------
## OpenIO status.
Check the services.
KEY                         STATUS      PID GROUP
OPENIO-account-0            UP         3394 OPENIO,account,0
OPENIO-beanstalkd-0         UP         3390 OPENIO,beanstalkd,0
OPENIO-conscienceagent-0    UP         4926 OPENIO,conscienceagent,0
OPENIO-ecd-0                UP         3998 OPENIO,ecd,0
OPENIO-memcached-0          UP         4395 OPENIO,memcached,0
OPENIO-meta0-0              UP         4156 OPENIO,meta0,0
OPENIO-meta1-0              UP         4155 OPENIO,meta1,0
OPENIO-meta2-0              UP         3449 OPENIO,meta2,0
OPENIO-meta2-1              UP         3476 OPENIO,meta2,1
OPENIO-oio-blob-indexer-0   UP         3541 OPENIO,oio-blob-indexer,0
OPENIO-oio-blob-indexer-1   UP         3552 OPENIO,oio-blob-indexer,1
OPENIO-oio-blob-rebuilder-0 UP         3580 OPENIO,oio-blob-rebuilder,0
OPENIO-oio-event-agent-0    UP         3601 OPENIO,oio-event-agent,0
OPENIO-oio-event-agent-0.1  UP         3588 OPENIO,oio-event-agent,0
OPENIO-oio-meta2-indexer-0  UP         3503 OPENIO,oio-meta2-indexer,0
OPENIO-oioproxy-0           UP         2466 OPENIO,oioproxy,0
OPENIO-oioswift-0           UP         4814 OPENIO,oioswift,0
OPENIO-rawx-0               UP         3512 OPENIO,rawx,0
OPENIO-rawx-1               UP         3528 OPENIO,rawx,1
OPENIO-rdir-0               UP         3563 OPENIO,rdir,0
OPENIO-rdir-1               UP         3573 OPENIO,rdir,1
OPENIO-redis-0              UP         2166 OPENIO,redis,0
OPENIO-redissentinel-0      UP         2275 OPENIO,redissentinel,0
OPENIO-zookeeper-0          UP         3137 OPENIO,zookeeper,0
Task duration: 3ms
--
Check the cluster.
+------------+-----------------+-----------------+------------------------------------+----------+------------+------+-------+--------+
| Type       | Addr            | Service Id      | Volume                             | Location | Slots      | Up   | Score | Locked |
+------------+-----------------+-----------------+------------------------------------+----------+------------+------+-------+--------+
| account    | 172.28.0.3:6009 | n/a             | n/a                                | node2.0  | account    | True |    99 | False  |
| account    | 172.28.0.4:6009 | n/a             | n/a                                | node3.0  | account    | True |    99 | False  |
| account    | 172.28.0.2:6009 | n/a             | n/a                                | node1.0  | account    | True |    99 | False  |
| beanstalkd | 172.28.0.3:6014 | n/a             | /mnt/metadata1/OPENIO/beanstalkd-0 | node2.0  | beanstalkd | True |    94 | False  |
| beanstalkd | 172.28.0.4:6014 | n/a             | /mnt/metadata1/OPENIO/beanstalkd-0 | node3.0  | beanstalkd | True |    94 | False  |
| beanstalkd | 172.28.0.2:6014 | n/a             | /mnt/metadata1/OPENIO/beanstalkd-0 | node1.0  | beanstalkd | True |    94 | False  |
| meta0      | 172.28.0.3:6001 | n/a             | /mnt/metadata1/OPENIO/meta0-0      | node2.0  | meta0      | True |    99 | False  |
| meta0      | 172.28.0.4:6001 | n/a             | /mnt/metadata1/OPENIO/meta0-0      | node3.0  | meta0      | True |    99 | False  |
| meta0      | 172.28.0.2:6001 | n/a             | /mnt/metadata1/OPENIO/meta0-0      | node1.0  | meta0      | True |    99 | False  |
| meta1      | 172.28.0.3:6110 | n/a             | /mnt/metadata1/OPENIO/meta1-0      | node2.0  | meta1      | True |    95 | False  |
| meta1      | 172.28.0.4:6110 | n/a             | /mnt/metadata1/OPENIO/meta1-0      | node3.0  | meta1      | True |    95 | False  |
| meta1      | 172.28.0.2:6110 | n/a             | /mnt/metadata1/OPENIO/meta1-0      | node1.0  | meta1      | True |    95 | False  |
| meta2      | 172.28.0.3:6121 | n/a             | /mnt/metadata1/OPENIO/meta2-1      | node2.1  | meta2      | True |    95 | False  |
| meta2      | 172.28.0.3:6120 | n/a             | /mnt/metadata1/OPENIO/meta2-0      | node2.0  | meta2      | True |    95 | False  |
| meta2      | 172.28.0.4:6121 | n/a             | /mnt/metadata1/OPENIO/meta2-1      | node3.1  | meta2      | True |    95 | False  |
| meta2      | 172.28.0.4:6120 | n/a             | /mnt/metadata1/OPENIO/meta2-0      | node3.0  | meta2      | True |    95 | False  |
| meta2      | 172.28.0.2:6121 | n/a             | /mnt/metadata1/OPENIO/meta2-1      | node1.1  | meta2      | True |    95 | False  |
| meta2      | 172.28.0.2:6120 | n/a             | /mnt/metadata1/OPENIO/meta2-0      | node1.0  | meta2      | True |    95 | False  |
| oioproxy   | 172.28.0.3:6006 | n/a             | n/a                                | node2.0  | oioproxy   | True |    98 | False  |
| oioproxy   | 172.28.0.4:6006 | n/a             | n/a                                | node3.0  | oioproxy   | True |    98 | False  |
| oioproxy   | 172.28.0.2:6006 | n/a             | n/a                                | node1.0  | oioproxy   | True |    98 | False  |
| oioswift   | 172.28.0.3:6007 | n/a             | n/a                                | node2.0  | oioswift   | True |    99 | False  |
| oioswift   | 172.28.0.4:6007 | n/a             | n/a                                | node3.0  | oioswift   | True |    99 | False  |
| oioswift   | 172.28.0.2:6007 | n/a             | n/a                                | node1.0  | oioswift   | True |    99 | False  |
| rawx       | 172.28.0.3:6201 | 172.28.0.3:6201 | /mnt/data2/OPENIO/rawx-1           | node2.1  | rawx       | True |    95 | False  |
| rawx       | 172.28.0.3:6200 | 172.28.0.3:6200 | /mnt/data1/OPENIO/rawx-0           | node2.0  | rawx       | True |    95 | False  |
| rawx       | 172.28.0.4:6201 | 172.28.0.4:6201 | /mnt/data2/OPENIO/rawx-1           | node3.1  | rawx       | True |    95 | False  |
| rawx       | 172.28.0.4:6200 | 172.28.0.4:6200 | /mnt/data1/OPENIO/rawx-0           | node3.0  | rawx       | True |    95 | False  |
| rawx       | 172.28.0.2:6201 | 172.28.0.2:6201 | /mnt/data2/OPENIO/rawx-1           | node1.1  | rawx       | True |    95 | False  |
| rawx       | 172.28.0.2:6200 | 172.28.0.2:6200 | /mnt/data1/OPENIO/rawx-0           | node1.0  | rawx       | True |    95 | False  |
| rdir       | 172.28.0.3:6300 | n/a             | /mnt/data1/OPENIO/rdir-0           | node2.0  | rdir       | True |    99 | False  |
| rdir       | 172.28.0.3:6301 | n/a             | /mnt/data2/OPENIO/rdir-1           | node2.1  | rdir       | True |    99 | False  |
| rdir       | 172.28.0.4:6301 | n/a             | /mnt/data2/OPENIO/rdir-1           | node3.1  | rdir       | True |    99 | False  |
| rdir       | 172.28.0.4:6300 | n/a             | /mnt/data1/OPENIO/rdir-0           | node3.0  | rdir       | True |    99 | False  |
| rdir       | 172.28.0.2:6300 | n/a             | /mnt/data1/OPENIO/rdir-0           | node1.0  | rdir       | True |    99 | False  |
| rdir       | 172.28.0.2:6301 | n/a             | /mnt/data2/OPENIO/rdir-1           | node1.1  | rdir       | True |    99 | False  |
+------------+-----------------+-----------------+------------------------------------+----------+------------+------+-------+--------+
Task duration: 510ms
--

--------
## OpenIO directory consistency.
directory status.
89290606119 5240 EA83 log INF oio.m0v2 Getting a single META0 entry [0000]
89290606726 5240 EA83 log INF oio.m0v2 (Start of META0 content)
89290607347 5240 EA83 log INF oio.m0v2 (End of META0 content)
89290615850 5241 2A48 log INF oio.m0v2 Getting a single META0 entry [0000]
89290616515 5241 2A48 log INF oio.m0v2 (Start of META0 content)
89290617134 5241 2A48 log INF oio.m0v2 (End of META0 content)
89290627651 5242 7A39 log INF oio.m0v2 Getting a single META0 entry [0000]
89290628237 5242 7A39 log INF oio.m0v2 (Start of META0 content)
89290628833 5242 7A39 log INF oio.m0v2 (End of META0 content)
89290637054 5243 6A23 log INF oio.m0v2 Dumping the whole META0
89290639577 5243 6A23 log INF oio.m0v2 (Start of META0 content)
89291003155 5243 6A23 log INF oio.m0v2 (End of META0 content)
89291015798 5244 BBA3 log INF oio.m0v2 Dumping the whole META0
89291018548 5244 BBA3 log INF oio.m0v2 (Start of META0 content)
89291441238 5244 BBA3 log INF oio.m0v2 (End of META0 content)
89291453452 5245 CA91 log INF oio.m0v2 Dumping the whole META0
89291456169 5245 CA91 log INF oio.m0v2 (Start of META0 content)
89291857313 5245 CA91 log INF oio.m0v2 (End of META0 content)
+--------+--------+
| Status | Errors |
+--------+--------+
| OK     | None   |
+--------+--------+
Task duration: 2127ms
--
reverse directory status.
+--------+--------+
| Status | Errors |
+--------+--------+
| OK     | None   |
+--------+--------+
Task duration: 571ms
--
meta0 status.
+--------+--------+
| Status | Errors |
+--------+--------+
| OK     | None   |
+--------+--------+
Task duration: 543ms
--
meta1 status.
+--------+--------+
| Status | Errors |
+--------+--------+
| OK     | None   |
+--------+--------+
Task duration: 530ms
--

--------
## OpenIO API.
Upload the /etc/passwd file to the bucket MY_CONTAINER of the project MY_ACCOUNT.
+--------+------+----------------------------------+--------+
| Name   | Size | Hash                             | Status |
+--------+------+----------------------------------+--------+
| passwd | 1174 | 07384A10FD0BB72685828849F3D50A43 | Ok     |
+--------+------+----------------------------------+--------+
Task duration: 584ms
--
Get some information about your object.
+-----------------+--------------------------------------------------------------------+
| Field           | Value                                                              |
+-----------------+--------------------------------------------------------------------+
| account         | MY_ACCOUNT                                                         |
| base_name       | 7B1F1716BE955DE2D677B68819836E4F75FD2424F6D22DB60F9F2BB40331A741.1 |
| bytes_usage     | 1.174KB                                                            |
| container       | MY_CONTAINER                                                       |
| ctime           | 1575073015                                                         |
| damaged_objects | 0                                                                  |
| max_versions    | Namespace default                                                  |
| missing_chunks  | 0                                                                  |
| objects         | 1                                                                  |
| quota           | Namespace default                                                  |
| status          | Enabled                                                            |
| storage_policy  | Namespace default                                                  |
+-----------------+--------------------------------------------------------------------+
Task duration: 510ms
--
List object in container.
+--------+------+----------------------------------+------------------+
| Name   | Size | Hash                             |          Version |
+--------+------+----------------------------------+------------------+
| passwd | 1174 | 07384A10FD0BB72685828849F3D50A43 | 1575073015290257 |
+--------+------+----------------------------------+------------------+
Task duration: 541ms
--
Find the services involved for your container.
+-----------------+--------------------------------------------------------------------+
| Field           | Value                                                              |
+-----------------+--------------------------------------------------------------------+
| account         | MY_ACCOUNT                                                         |
| base_name       | 7B1F1716BE955DE2D677B68819836E4F75FD2424F6D22DB60F9F2BB40331A741.1 |
| meta0           |                                                                    |
| meta1           | 172.28.0.2:6110, 172.28.0.3:6110, 172.28.0.4:6110                  |
| meta2           | 172.28.0.2:6120, 172.28.0.3:6121, 172.28.0.4:6120                  |
| meta2.sys.peers | 172.28.0.2:6120, 172.28.0.3:6121, 172.28.0.4:6120                  |
| name            | MY_CONTAINER                                                       |
| status          | Enabled                                                            |
+-----------------+--------------------------------------------------------------------+
Task duration: 528ms
--
Save the data stored in the given object to the '--file' destination.
Task duration: 507ms
--
Compare local file against data from SDS.
OK
Task duration: 2ms
--
Show the account informations.
+-----------------+------------+
| Field           | Value      |
+-----------------+------------+
| account         | MY_ACCOUNT |
| bytes           | 1.174KB    |
| containers      | 1          |
| ctime           | 1575073015 |
| damaged_objects | 0          |
| metadata        | {}         |
| missing_chunks  | 0          |
| objects         | 1          |
+-----------------+------------+
Task duration: 532ms
--
Delete your object.
+--------+---------+
| Name   | Deleted |
+--------+---------+
| passwd | True    |
+--------+---------+
Task duration: 527ms
--
Delete your empty container.
Task duration: 529ms
--

--------
## AWS API.
Create a bucket 'mybucket'.
make_bucket: mybucket
Task duration: 534ms
--
Upload the '/etc/passwd' file to the bucket 'mybucket'.
upload: ../etc/passwd to s3://mybucket/passwd
Task duration: 538ms
--
List your buckets.
2019-11-30 00:17:00    1.1 KiB passwd

Total Objects: 1
  Total Size: 1.1 KiB
Task duration: 456ms
--
Save the data stored in the given object to the given file.
download: s3://mybucket/passwd to ../tmp/passwd.aws
Task duration: 493ms
--
Compare local file against data from SDS.
OK
Task duration: 2ms
--
Delete your object.
delete: s3://mybucket/passwd
Task duration: 477ms
--
Delete your empty bucket.
remove_bucket: mybucket
Task duration: 478ms
--
------
Done !

*** Commands summary ***

*** OpenIO status ***
Check the services                                                                OK
Check the cluster                                                                 OK
*** OpenIO directory consistency ***
directory status                                                                  OK
reverse directory status                                                          OK
meta0 status                                                                      OK
meta1 status                                                                      OK
*** OpenIO API ***
Upload the /etc/passwd file to the bucket MY_CONTAINER of the project MY_ACCOUNT  OK
Get some information about your object                                            OK
List object in container                                                          OK
Find the services involved for your container                                     OK
Save the data stored in the given object to the '--file' destination              OK
Compare local file against data from SDS                                          OK
Show the account informations                                                     OK
Delete your object                                                                OK
Delete your empty container                                                       OK
*** AWS API ***
Create a bucket 'mybucket'                                                        OK
Upload the '/etc/passwd' file to the bucket 'mybucket'                            OK
List your buckets                                                                 OK
Save the data stored in the given object to the given file                        OK
Compare local file against data from SDS                                          OK
Delete your object                                                                OK
Delete your empty bucket                                                          OK
-------------------
Overall check result                                                              OK

++++
AWS S3 summary from (/root/.aws/credentials):
  endpoint: http://172.28.0.2:6007
  region:  us-east-1
  access key:  demo:demo
  secret key:  DEMO_PASS
  ssl: false
  signature_version: s3v4
  path style: true
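
Since each node already holds these credentials (see /root/.aws/credentials above), you can run a quick manual check with aws-cli from any node. This is just a sanity-check sketch, using the endpoint printed in the summary; adjust it to your first node's IP:

aws-cli check
$> sudo aws --endpoint-url http://172.28.0.2:6007 s3 mb s3://sanity-check
make_bucket: sanity-check
$> sudo aws --endpoint-url http://172.28.0.2:6007 s3 rb s3://sanity-check
remove_bucket: sanity-check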

Customizing your deployment

Manage NTP configuration

You can manage time settings in the inventory file.

By default, the deployment doesn’t change your timezone, but it does enable the NTP service and set four NTP servers:

ntp
---
all:
  hosts:
  
  vars:
    ntp_enabled: true
    ntp_manage_config: true
    ntp_manage_timezone: false
    ntp_timezone: "Etc/UTC"
    ntp_area: ""
    ntp_servers:
      - "0{{ ntp_area }}.pool.ntp.org iburst"
      - "1{{ ntp_area }}.pool.ntp.org iburst"
      - "2{{ ntp_area }}.pool.ntp.org iburst"
      - "3{{ ntp_area }}.pool.ntp.org iburst"
    ntp_restrict:
      - "127.0.0.1"
      - "::1"

If needed, you can add your own settings:

custom ntp
---
all:
  hosts:
  
  vars:
    ntp_enabled: true
    ntp_manage_config: true
    ntp_manage_timezone: true
    ntp_timezone: "Europe/Paris"
    ntp_area: ".fr"
    ntp_servers:
      - "0{{ ntp_area }}.pool.ntp.org iburst"
      - "1{{ ntp_area }}.pool.ntp.org iburst"
      - "2{{ ntp_area }}.pool.ntp.org iburst"
      - "3{{ ntp_area }}.pool.ntp.org iburst"
    ntp_restrict:
      - "127.0.0.1"
      - "::1"

Manage storage volumes

You can customize the storage devices of each node in the host declaration section. Each storage device can be used for either data or metadata. To make a storage device available to OpenIO, you must first partition, format, and mount it. The choice of tools and methods is left to the operator, as long as the resulting configuration doesn’t conflict with the requirements. The resulting mount points and partition/device names are then used below in openio_data_mounts and openio_metadata_mounts.

In this example, the nodes have two mounted volumes to store data and one to store metadata:

storage definition
  ---
  all:
    hosts:
      node1:
        ansible_host: IP_ADDRESS_OF_NODE1
        openio_data_mounts:
          - mountpoint: /mnt/data1
            partition: /dev/vdb
          - mountpoint: /mnt/data2
            partition: /dev/vdc
        openio_metadata_mounts:
          - mountpoint: /mnt/metadata1
            partition: /dev/vdd
            meta2_count: 2
      node2:
        ansible_host: IP_ADDRESS_OF_NODE2
        openio_data_mounts:
          - mountpoint: /mnt/data1
            partition: /dev/vdb
          - mountpoint: /mnt/data2
            partition: /dev/vdc
        openio_metadata_mounts:
          - mountpoint: /mnt/metadata1
            partition: /dev/vdd
            meta2_count: 2
      node3:
        ansible_host: IP_ADDRESS_OF_NODE3
        openio_data_mounts:
          - mountpoint: /mnt/data1
            partition: /dev/vdb
          - mountpoint: /mnt/data2
            partition: /dev/vdc
        openio_metadata_mounts:
          - mountpoint: /mnt/metadata1
            partition: /dev/vdd
            meta2_count: 2
    vars:
      ansible_user: root

The meta2_count parameter defines how many meta2 instances you want on the device.

If you want to be able to lose one server (out of three) and still create new containers, you need at least 3 meta2 instances up. For example, with meta2_count: 2 on each of the three nodes above, the cluster runs 6 meta2 instances; losing one node leaves 4, which is still enough. Without this parameter, you can still read data from existing containers, but you can’t create or delete containers.

Manage the ssh connection

If your nodes don’t all have the same ssh user configured, you can define a specific ssh user (or key) for the deployment of each node.

global ssh
---
all:
  hosts:
  
  vars:
    ansible_user: my_user
    ansible_ssh_private_key_file: /home/john/.ssh/id_rsa
specific ssh
  ---
  all:
    hosts:
      node1:
        ansible_host: IP_ADDRESS_OF_NODE1
        
      node2:
        ansible_host: IP_ADDRESS_OF_NODE2
        
      node3:
        ansible_host: IP_ADDRESS_OF_NODE3
        
        ansible_user: my_other_user
        ansible_ssh_private_key_file: /home/john/.ssh/id_rsa_2

    vars:
      ansible_user: my_user
      ansible_ssh_private_key_file: /home/john/.ssh/id_rsa

Manage the data network interface used

The servers may have several network interfaces. The most common setup is one interface for management and another for data, though these two interfaces can be the same.

global interface
  ---
  all:
    
    children:
      openio:
      
      vars:
        openio_bind_interface: bond0
        openio_bind_address: "{{ ansible_bond0.ipv4.address }}"

As with ssh connections, these settings can be defined per server.
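
For example, a per-server override could look like this; eth1 here is a placeholder for the actual data interface of node1:

specific interface
  ---
  all:
    hosts:
      node1:
        ansible_host: IP_ADDRESS_OF_NODE1
        openio_bind_interface: eth1
        openio_bind_address: "{{ ansible_eth1.ipv4.address }}"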

Manage S3 authentication

Set name, password, and role in the inventory file.

S3 users
  ---
  all:
    
    children:
      openio:
      
      vars:
        # S3 users
        openio_oioswift_users:
          - name: "demo:demo"
            password: "DEMO_PASS"
            roles:
              - member

          - name: "test:tester"
            password: "testing"
            roles:
              - member
              - reseller_admin
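
After deployment, you can verify a user’s credentials by overriding the keys on the command line; the endpoint below is the one printed in the post-installation summary, so adjust it to your setup:

s3 user check
$> AWS_ACCESS_KEY_ID='test:tester' AWS_SECRET_ACCESS_KEY='testing' \
   aws --endpoint-url http://172.28.0.2:6007 s3 ls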

Change user openio’s UID/GID

You can define the uid and the gid of the user openio in the inventory file.

uid/gid user openio
  ---
  all:
    hosts:
      
    vars:
      openio_user_openio_uid: 120
      openio_group_openio_gid: 220
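
Once deployed, running id openio on any node should report the configured values, for example:

uid/gid check
$> id openio
uid=120(openio) gid=220(openio) groups=220(openio)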

Proxy

Set your environment variables in the inventory file.

http proxy
---
all:
  hosts:
  
  vars:
    openio_environment:
      http_proxy: http://proxy.example.com:8080
      https_proxy: http://proxy.example.com:8080
      no_proxy: "localhost,172.28.0.2,172.28.0.3,172.28.0.4,172.28.0.5"

Test on Docker

If you don’t have physical nodes to test our solution, you can spawn some Docker containers with docker-compose.

docker-compose up
$> cd oiosds/products/sds
$> source openio_venv/bin/activate
$> pip install docker-compose

$> docker-compose up -d
Creating node1 ... done
Creating node2 ... done
Creating node3 ... done

$> docker-compose ps
Name            Command            State   Ports
------------------------------------------------
node1   /usr/lib/systemd/systemd   Up
node2   /usr/lib/systemd/systemd   Up
node3   /usr/lib/systemd/systemd   Up

Next, replace inventory.yml with the inventory provided for this exercise.

replace inventory.yml
  $> cp inventory_docker-compose.yml inventory.yml

Now, you can deploy.

deploy on containers
  $> ./requirements_install.sh
  $> ./deploy_and_bootstrap.sh

Once the deployment has finished, you can access the S3 gateway with the following settings:

s3 credentials
---
endpoint: 'http://172.28.0.2:6007'
region: 'us-east-1'
access_key: 'demo:demo'
secret_key:  'DEMO_PASS'
ssl: false
signature_version: 's3v4'
path_style: true
...

Finally, you can remove everything.

docker-compose down
  $> docker-compose down --volumes --remove-orphans --rmi all