Multi-Node Installation
Tip
New! Ultra high performance features are available through our subscription plans.
Get in touch with the team or visit the page describing our plans.
Install a three-node, on-premises object storage backend using the deployment tools provided by OpenIO.
Requirements
Hardware
When run as the backend layer, OpenIO SDS is lightweight and requires few resources. The front layer consists of the gateways (OpenStack Swift, Amazon S3), and their services do not require many resources either.
- CPU: any dual-core processor at 1 GHz or faster
- RAM: 2GB recommended
- Network: 1Gb/s NIC
Operating system
As explained on our Supported Linux Distributions page, OpenIO supports the following distributions:
- CentOS 7
- Ubuntu 18.04 (Server), a.k.a. Bionic Beaver
System
- SELinux (on CentOS) or AppArmor (on Ubuntu) must be disabled:

# CentOS: disable SELinux
$> sudo sed -i -e 's@^SELINUX=enforcing$@SELINUX=disabled@g' /etc/selinux/config
$> sudo setenforce 0
$> sudo reboot

# Ubuntu: disable AppArmor
$> echo 'GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT apparmor=0"' | sudo tee /etc/default/grub.d/apparmor.cfg
$> sudo update-grub
$> sudo reboot
- All nodes must have different hostnames.
- All nodes must have a Python version greater than 2.7.
- The node used to run the deployment must have a Python version greater than 3.6.
- All mounted partitions used for data/metadata must support extended attributes. XFS is recommended.
If the device’s mountpoint is /mnt/data1, you can verify the presence and type of this partition. In this example, SGI XFS is the filesystem:
$> df /mnt/data1
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/vdb        41931756 1624148  40307608   4% /mnt/data1

$> file -sL /dev/vdb
/dev/vdb: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)
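To confirm that extended attributes actually work on a mounted volume, you can set and read back a test attribute (a quick sketch; the file name is arbitrary, and the setfattr/getfattr tools come from your distribution's attr package):

# create a test file, attach an extended attribute, read it back, then clean up
$> touch /mnt/data1/xattr_test
$> setfattr -n user.check -v ok /mnt/data1/xattr_test
$> getfattr -n user.check /mnt/data1/xattr_test
# file: mnt/data1/xattr_test
user.check="ok"
$> rm /mnt/data1/xattr_test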
- The system must be up to date.
If you are running CentOS or Red Hat, keep your system up-to-date as follows:
$> sudo yum update -y
$> sudo reboot
If you are using Ubuntu or Debian, keep your system up-to-date as follows:
$> sudo apt update -y
$> sudo apt upgrade -y
$> sudo reboot
Network
- All nodes must be connected to the same LAN through the specified interface (the first one by default).
- The firewall must be disabled.
# CentOS: disable firewalld
$> sudo systemctl stop firewalld.service
$> sudo systemctl disable firewalld.service

# Ubuntu: disable ufw
$> sudo ufw disable
$> sudo systemctl disable ufw.service
Setup
You only need to perform this setup on one of the nodes in the cluster (or your laptop).
- Install git:

# CentOS
$> sudo yum install git -y

# Ubuntu
$> sudo apt install git -y
- Clone the OpenIO ansible playbook deployment repository:

$> git clone https://github.com/open-io/ansible-playbook-openio-deployment.git --branch 20.04 oiosds
$> cd oiosds/products/sds
- Install Ansible for the current user.
$> python3 -m venv openio_venv
$> source openio_venv/bin/activate
$> pip install -r ansible.pip
Architecture
This playbook will deploy a multi-node cluster as shown below:
+-----------------+    +-----------------+    +-----------------+
|    OIOSWIFT     |    |    OIOSWIFT     |    |    OIOSWIFT     |
|     FOR S3      |    |     FOR S3      |    |     FOR S3      |
+-----------------+    +-----------------+    +-----------------+
|     OPENIO      |    |     OPENIO      |    |     OPENIO      |
|       SDS       |    |       SDS       |    |       SDS       |
+-----------------+    +-----------------+    +-----------------+
Installation
First, configure the inventory according to your environment:
Change the IP addresses and SSH user in the inventory.yml file.

---
all:
  hosts:
    node1:
      ansible_host: IP_ADDRESS_OF_NODE1 # Change it with the IP of the first server
    node2:
      ansible_host: IP_ADDRESS_OF_NODE2 # Change it with the IP of the second server
    node3:
      ansible_host: IP_ADDRESS_OF_NODE3 # Change it with the IP of the third server

---
all:
  vars:
    ansible_user: root # Change it accordingly
Next, ensure you have SSH access to your nodes:

# generate a ssh key
$> ssh-keygen

# copy the key on all nodes
$> for node in <name-of-remote-server1> <name-of-remote-server2> <name-of-remote-server3>; do ssh-copy-id $node; done

# start a ssh-agent
$> eval "$(ssh-agent -s)"

# add the key into the agent
$> ssh-add .ssh/id_rsa

# test connection without password
$> ssh <name-of-remote-server1>
Then, you can check that everything is configured correctly using this command:
# CentOS
$> ansible all -i inventory.yml -bv -m ping

# Ubuntu (force the Python 3 interpreter)
$> ansible all -i inventory.yml -bv -m ping -e 'ansible_python_interpreter=/usr/bin/python3'
Finally, run these commands:
To download and install requirements:
$> ./requirements_install.sh
To deploy and initialize the cluster:
$> ./deploy_and_bootstrap.sh
Post-installation checks
All the nodes are configured to use openio-cli and aws-cli.
Run this check script on one of the nodes in the cluster:

$> sudo /usr/bin/openio-basic-checks
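You can also inspect the cluster from any node with the openio CLI; a minimal sketch, assuming the playbook's default namespace OPENIO:

# list all services of the namespace with their scores
$> openio cluster list --oio-ns OPENIO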
Customizing your deployment
Manage NTP configuration
You can configure the time settings in the inventory file.
By default, the deployment does not change your timezone, but it enables the NTP service and sets four NTP servers:
---
all:
  hosts:
    …
  vars:
    ntp_enabled: true
    ntp_manage_config: true
    ntp_manage_timezone: false
    ntp_timezone: "Etc/UTC"
    ntp_area: ""
    ntp_servers:
      - "0{{ ntp_area }}.pool.ntp.org iburst"
      - "1{{ ntp_area }}.pool.ntp.org iburst"
      - "2{{ ntp_area }}.pool.ntp.org iburst"
      - "3{{ ntp_area }}.pool.ntp.org iburst"
    ntp_restrict:
      - "127.0.0.1"
      - "::1"
If needed, you can add your own settings:
---
all:
  hosts:
    …
  vars:
    ntp_enabled: true
    ntp_manage_config: true
    ntp_manage_timezone: true
    ntp_timezone: "Europe/Paris"
    ntp_area: ".fr"
    ntp_servers:
      - "0{{ ntp_area }}.pool.ntp.org iburst"
      - "1{{ ntp_area }}.pool.ntp.org iburst"
      - "2{{ ntp_area }}.pool.ntp.org iburst"
      - "3{{ ntp_area }}.pool.ntp.org iburst"
    ntp_restrict:
      - "127.0.0.1"
      - "::1"
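After deployment, you can verify on each node that time synchronization is effective; a quick check, assuming a systemd-based distribution (ntpq is only available if the classic ntpd daemon was installed):

# show system clock and NTP synchronization status
$> timedatectl

# list the NTP peers actually used (only if ntpd is installed)
$> ntpq -p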
Manage storage volumes
You can customize the storage devices of each node in its host declaration. Each storage device can be used for either data or metadata.
In order to make a storage device available to OpenIO, you need to partition, format and mount it first.
The choice of tools and methods is left to the operator, as long as the resulting configuration doesn’t conflict with the requirements.
The resulting mount point and partition/device names are to be used below in openio_data_mounts and openio_metadata_mounts.
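As a sketch of one possible preparation, assuming a dedicated /dev/vdb device (the device and mount point names are only examples; adapt them to your hardware):

# format the device with XFS and mount it persistently
$> sudo mkfs.xfs -f /dev/vdb
$> sudo mkdir -p /mnt/data1
$> echo '/dev/vdb /mnt/data1 xfs defaults,noatime 0 0' | sudo tee -a /etc/fstab
$> sudo mount /mnt/data1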
In this example, the nodes have two mounted volumes to store data and one to store metadata:
---
all:
  hosts:
    node1:
      ansible_host: IP_ADDRESS_OF_NODE1
      openio_data_mounts:
        - mountpoint: /mnt/data1
          partition: /dev/vdb
        - mountpoint: /mnt/data2
          partition: /dev/vdc
      openio_metadata_mounts:
        - mountpoint: /mnt/metadata1
          partition: /dev/vdd
          meta2_count: 2
    node2:
      ansible_host: IP_ADDRESS_OF_NODE2
      openio_data_mounts:
        - mountpoint: /mnt/data1
          partition: /dev/vdb
        - mountpoint: /mnt/data2
          partition: /dev/vdc
      openio_metadata_mounts:
        - mountpoint: /mnt/metadata1
          partition: /dev/vdd
          meta2_count: 2
    node3:
      ansible_host: IP_ADDRESS_OF_NODE3
      openio_data_mounts:
        - mountpoint: /mnt/data1
          partition: /dev/vdb
        - mountpoint: /mnt/data2
          partition: /dev/vdc
      openio_metadata_mounts:
        - mountpoint: /mnt/metadata1
          partition: /dev/vdd
          meta2_count: 2
  vars:
    ansible_user: root
The meta2_count parameter defines how many meta2 instances you want on the device.
If you want to be able to lose one server (out of 3) and still create new containers, you need at least 3 meta2 instances up. Without this parameter, you can still read data from existing containers, but you can't create or delete containers.
Manage the ssh connection
If your nodes don’t all have the same ssh user configured, you can define a specific ssh user (or key) for the deployment of each node.
---
all:
  hosts:
    …
  vars:
    ansible_user: my_user
    ansible_ssh_private_key_file: /home/john/.ssh/id_rsa

---
all:
  hosts:
    node1:
      ansible_host: IP_ADDRESS_OF_NODE1
      …
    node2:
      ansible_host: IP_ADDRESS_OF_NODE2
      …
    node3:
      ansible_host: IP_ADDRESS_OF_NODE3
      …
      ansible_user: my_other_user
      ansible_ssh_private_key_file: /home/john/.ssh/id_rsa_2
  vars:
    ansible_user: my_user
    ansible_ssh_private_key_file: /home/john/.ssh/id_rsa
Manage the data network interface used
Servers can have several network interfaces. The most common setup is a management interface and a separate data interface, though both roles can share the same interface.
---
all:
  …
  children:
    openio:
      …
      vars:
        openio_bind_interface: bond0
        openio_bind_address: "{{ ansible_bond0.ipv4.address }}"
As with the SSH connection settings, these variables can also be set per server.
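For example, a hypothetical per-host override reusing the same variables (the eth2 interface name is only illustrative):

---
all:
  hosts:
    node1:
      ansible_host: IP_ADDRESS_OF_NODE1
      # use the data interface eth2 on this host only
      openio_bind_interface: eth2
      openio_bind_address: "{{ ansible_eth2.ipv4.address }}"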
Manage S3 authentication
Set name, password, and roles in the inventory file.
---
all:
  …
  children:
    openio:
      …
      vars:
        # S3 users
        openio_oioswift_users:
          - name: "demo:demo"
            password: "DEMO_PASS"
            roles:
              - member
          - name: "test:tester"
            password: "testing"
            roles:
              - member
              - reseller_admin
Change user openio’s UID/GID
You can define the uid and the gid of the user openio in the inventory file.
---
all:
  hosts:
    …
  vars:
    openio_user_openio_uid: 120
    openio_group_openio_gid: 220
Proxy
Set your proxy environment variables in the inventory file.
---
all:
  hosts:
    …
  vars:
    openio_environment:
      http_proxy: http://proxy.example.com:8080
      https_proxy: http://proxy.example.com:8080
      no_proxy: "localhost,172.28.0.2,172.28.0.3,172.28.0.4,172.28.0.5"
Test on Docker
If you don’t have physical nodes to test our solution, you can spawn some Docker containers with docker-compose.
$> cd oiosds/products/sds
$> source openio_venv/bin/activate
$> pip install docker-compose

$> docker-compose up -d
Creating node1 ... done
Creating node2 ... done
Creating node3 ... done

$> docker-compose ps
Name          Command            State   Ports
------------------------------------------------
node1   /usr/lib/systemd/systemd   Up
node2   /usr/lib/systemd/systemd   Up
node3   /usr/lib/systemd/systemd   Up
Next, replace inventory.yml with the inventory provided for this exercise.
$> cp inventory_docker-compose.yml inventory.yml
Now, you can deploy.
$> ./requirements_install.sh
$> ./deploy_and_bootstrap.sh
Once the deployment is finished, you can access the S3 endpoint with these settings:
---
endpoint: 'http://172.28.0.2:6007'
region: 'us-east-1'
access_key: 'demo:demo'
secret_key: 'DEMO_PASS'
ssl: false
signature_version: 's3v4'
path_style: true
...
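For example, with aws-cli (a minimal sketch; the endpoint and credentials come from the settings above, and the bucket name is arbitrary):

# export the demo credentials, then create and list a bucket
$> export AWS_ACCESS_KEY_ID='demo:demo'
$> export AWS_SECRET_ACCESS_KEY='DEMO_PASS'
$> aws --endpoint-url http://172.28.0.2:6007 --region us-east-1 s3 mb s3://mybucket
$> aws --endpoint-url http://172.28.0.2:6007 --region us-east-1 s3 ls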
Finally, you can remove everything.
$> docker-compose down --volumes --remove-orphans --rmi all
Scale out a storage node
Follow the procedure below to add a new node to your cluster.
Edit the inventory.yml:

all:
  …
    node4:
      ansible_host: IP_ADDRESS_OF_NODE4 # Change it with the IP of the fourth server
      openio_data_mounts:
        - mountpoint: /mnt/data1
          partition: /dev/vdb
        - mountpoint: /mnt/data2
          partition: /dev/vdc
      openio_metadata_mounts:
        - mountpoint: /mnt/metadata1
          partition: /dev/vdd
          meta2_count: 2
      meta1_count: 0
all:
  …
    backs:
      hosts:
        node1: {}
        node2: {}
        node3: {}
        node4: {}
    …
    meta2:
      hosts:
        node1: {}
        node2: {}
        node3: {}
        node4: {}
Then, provision services on this node.
$> ./deploy.sh
Log on to the new node and check the deployed services. Notice that the newly added services have a score of 0 and are locked, so they are not yet available.
[root@node4 ~]# openio cluster list meta2 rawx rdir
+-------+-----------------+-----------------+-------------------------------+----------+-------+------+-------+--------+
| Type  | Addr            | Service Id      | Volume                        | Location | Slots | Up   | Score | Locked |
+-------+-----------------+-----------------+-------------------------------+----------+-------+------+-------+--------+
| meta2 | 172.28.0.5:6121 | n/a             | /mnt/metadata1/OPENIO/meta2-1 | node4.1  | meta2 | True |     0 | True   |
| meta2 | 172.28.0.5:6120 | n/a             | /mnt/metadata1/OPENIO/meta2-0 | node4.0  | meta2 | True |     0 | True   |
| meta2 | 172.28.0.4:6121 | n/a             | /mnt/metadata1/OPENIO/meta2-1 | node3.1  | meta2 | True |    91 | False  |
| meta2 | 172.28.0.4:6120 | n/a             | /mnt/metadata1/OPENIO/meta2-0 | node3.0  | meta2 | True |    91 | False  |
| meta2 | 172.28.0.3:6121 | n/a             | /mnt/metadata1/OPENIO/meta2-1 | node2.1  | meta2 | True |    91 | False  |
| meta2 | 172.28.0.3:6120 | n/a             | /mnt/metadata1/OPENIO/meta2-0 | node2.0  | meta2 | True |    91 | False  |
| meta2 | 172.28.0.2:6121 | n/a             | /mnt/metadata1/OPENIO/meta2-1 | node1.1  | meta2 | True |    91 | False  |
| meta2 | 172.28.0.2:6120 | n/a             | /mnt/metadata1/OPENIO/meta2-0 | node1.0  | meta2 | True |    91 | False  |
| rawx  | 172.28.0.5:6201 | 172.28.0.5:6201 | /mnt/data2/OPENIO/rawx-1      | node4.1  | rawx  | True |     0 | True   |
| rawx  | 172.28.0.5:6200 | 172.28.0.5:6200 | /mnt/data1/OPENIO/rawx-0      | node4.0  | rawx  | True |     0 | True   |
| rawx  | 172.28.0.4:6200 | 172.28.0.4:6200 | /mnt/data1/OPENIO/rawx-0      | node3.0  | rawx  | True |    91 | False  |
| rawx  | 172.28.0.3:6200 | 172.28.0.3:6200 | /mnt/data1/OPENIO/rawx-0      | node2.0  | rawx  | True |    91 | False  |
| rawx  | 172.28.0.3:6201 | 172.28.0.3:6201 | /mnt/data2/OPENIO/rawx-1      | node2.1  | rawx  | True |    91 | False  |
| rawx  | 172.28.0.2:6201 | 172.28.0.2:6201 | /mnt/data2/OPENIO/rawx-1      | node1.1  | rawx  | True |    91 | False  |
| rawx  | 172.28.0.2:6200 | 172.28.0.2:6200 | /mnt/data1/OPENIO/rawx-0      | node1.0  | rawx  | True |    91 | False  |
| rawx  | 172.28.0.4:6201 | 172.28.0.4:6201 | /mnt/data2/OPENIO/rawx-1      | node3.1  | rawx  | True |    91 | False  |
| rdir  | 172.28.0.5:6300 | n/a             | /mnt/data1/OPENIO/rdir-0      | node4.0  | rdir  | True |     0 | True   |
| rdir  | 172.28.0.5:6301 | n/a             | /mnt/data2/OPENIO/rdir-1      | node4.1  | rdir  | True |     0 | True   |
| rdir  | 172.28.0.4:6300 | n/a             | /mnt/data1/OPENIO/rdir-0      | node3.0  | rdir  | True |    99 | False  |
| rdir  | 172.28.0.4:6301 | n/a             | /mnt/data2/OPENIO/rdir-1      | node3.1  | rdir  | True |    99 | False  |
| rdir  | 172.28.0.3:6301 | n/a             | /mnt/data2/OPENIO/rdir-1      | node2.1  | rdir  | True |    98 | False  |
| rdir  | 172.28.0.3:6300 | n/a             | /mnt/data1/OPENIO/rdir-0      | node2.0  | rdir  | True |    98 | False  |
| rdir  | 172.28.0.2:6300 | n/a             | /mnt/data1/OPENIO/rdir-0      | node1.0  | rdir  | True |    99 | False  |
| rdir  | 172.28.0.2:6301 | n/a             | /mnt/data2/OPENIO/rdir-1      | node1.1  | rdir  | True |    99 | False  |
+-------+-----------------+-----------------+-------------------------------+----------+-------+------+-------+--------+
Create the rdir assignments for your new rawx/meta2 services.
[root@node4 /]# openio rdir assignments rawx
+-----------------+-----------------+---------------+---------------+
| Rdir            | Rawx            | Rdir location | Rawx location |
+-----------------+-----------------+---------------+---------------+
| 172.28.0.2:6300 | 172.28.0.4:6200 | node1.0       | node3.0       |
| 172.28.0.2:6301 | 172.28.0.3:6200 | node1.1       | node2.0       |
| 172.28.0.3:6300 | 172.28.0.2:6201 | node2.0       | node1.1       |
| 172.28.0.3:6300 | 172.28.0.4:6201 | node2.0       | node3.1       |
| 172.28.0.3:6301 | 172.28.0.2:6200 | node2.1       | node1.0       |
| 172.28.0.4:6301 | 172.28.0.3:6201 | node3.1       | node2.1       |
| n/a             | 172.28.0.5:6200 | None          | node4.0       |
| n/a             | 172.28.0.5:6201 | None          | node4.1       |
+-----------------+-----------------+---------------+---------------+

[root@node4 /]# openio rdir bootstrap rawx
+-----------------+-----------------+---------------+---------------+
| Rdir            | Rawx            | Rdir location | Rawx location |
+-----------------+-----------------+---------------+---------------+
| 172.28.0.2:6300 | 172.28.0.4:6200 | node1.0       | node3.0       |
| 172.28.0.2:6300 | 172.28.0.5:6200 | node1.0       | node4.0       |
| 172.28.0.2:6301 | 172.28.0.3:6200 | node1.1       | node2.0       |
| 172.28.0.2:6301 | 172.28.0.5:6201 | node1.1       | node4.1       |
| 172.28.0.3:6300 | 172.28.0.2:6201 | node2.0       | node1.1       |
| 172.28.0.3:6300 | 172.28.0.4:6201 | node2.0       | node3.1       |
| 172.28.0.3:6301 | 172.28.0.2:6200 | node2.1       | node1.0       |
| 172.28.0.4:6301 | 172.28.0.3:6201 | node3.1       | node2.1       |
+-----------------+-----------------+---------------+---------------+

[root@node4 /]# openio rdir assignments meta2
+-----------------+-----------------+---------------+----------------+
| Rdir            | Meta2           | Rdir location | Meta2 location |
+-----------------+-----------------+---------------+----------------+
| 172.28.0.2:6300 | 172.28.0.4:6121 | node1.0       | node3.1        |
| 172.28.0.2:6301 | 172.28.0.4:6120 | node1.1       | node3.0        |
| 172.28.0.3:6300 | 172.28.0.2:6121 | node2.0       | node1.1        |
| 172.28.0.3:6301 | 172.28.0.2:6120 | node2.1       | node1.0        |
| 172.28.0.4:6300 | 172.28.0.3:6121 | node3.0       | node2.1        |
| 172.28.0.4:6301 | 172.28.0.3:6120 | node3.1       | node2.0        |
| n/a             | 172.28.0.5:6120 | None          | node4.0        |
| n/a             | 172.28.0.5:6121 | None          | node4.1        |
+-----------------+-----------------+---------------+----------------+

[root@node4 /]# openio rdir bootstrap meta2
+-----------------+-----------------+---------------+----------------+
| Rdir            | Meta2           | Rdir location | Meta2 location |
+-----------------+-----------------+---------------+----------------+
| 172.28.0.2:6300 | 172.28.0.4:6121 | node1.0       | node3.1        |
| 172.28.0.2:6301 | 172.28.0.4:6120 | node1.1       | node3.0        |
| 172.28.0.3:6300 | 172.28.0.2:6121 | node2.0       | node1.1        |
| 172.28.0.3:6301 | 172.28.0.2:6120 | node2.1       | node1.0        |
| 172.28.0.3:6301 | 172.28.0.5:6121 | node2.1       | node4.1        |
| 172.28.0.4:6300 | 172.28.0.3:6121 | node3.0       | node2.1        |
| 172.28.0.4:6300 | 172.28.0.5:6120 | node3.0       | node4.0        |
| 172.28.0.4:6301 | 172.28.0.3:6120 | node3.1       | node2.0        |
+-----------------+-----------------+---------------+----------------+
Finally, unlock the services so they can serve requests.
[root@node4 ~]# openio cluster unlockall
+------------+-----------------+----------+
| Type       | Service         | Result   |
+------------+-----------------+----------+
| account    | 172.28.0.5:6009 | unlocked |
| account    | 172.28.0.4:6009 | unlocked |
| account    | 172.28.0.3:6009 | unlocked |
| account    | 172.28.0.2:6009 | unlocked |
| beanstalkd | 172.28.0.5:6014 | unlocked |
| beanstalkd | 172.28.0.4:6014 | unlocked |
| beanstalkd | 172.28.0.3:6014 | unlocked |
| beanstalkd | 172.28.0.2:6014 | unlocked |
| meta0      | 172.28.0.3:6001 | unlocked |
| meta0      | 172.28.0.2:6001 | unlocked |
| meta0      | 172.28.0.4:6001 | unlocked |
| meta1      | 172.28.0.3:6110 | unlocked |
| meta1      | 172.28.0.2:6110 | unlocked |
| meta1      | 172.28.0.4:6110 | unlocked |
| meta2      | 172.28.0.5:6121 | unlocked |
| meta2      | 172.28.0.5:6120 | unlocked |
| meta2      | 172.28.0.4:6121 | unlocked |
| meta2      | 172.28.0.4:6120 | unlocked |
| meta2      | 172.28.0.3:6121 | unlocked |
| meta2      | 172.28.0.3:6120 | unlocked |
| meta2      | 172.28.0.2:6121 | unlocked |
| meta2      | 172.28.0.2:6120 | unlocked |
| oioproxy   | 172.28.0.5:6006 | unlocked |
| oioproxy   | 172.28.0.4:6006 | unlocked |
| oioproxy   | 172.28.0.3:6006 | unlocked |
| oioproxy   | 172.28.0.2:6006 | unlocked |
| oioswift   | 172.28.0.5:6007 | unlocked |
| oioswift   | 172.28.0.4:6007 | unlocked |
| oioswift   | 172.28.0.3:6007 | unlocked |
| oioswift   | 172.28.0.2:6007 | unlocked |
| rawx       | 172.28.0.5:6201 | unlocked |
| rawx       | 172.28.0.5:6200 | unlocked |
| rawx       | 172.28.0.4:6200 | unlocked |
| rawx       | 172.28.0.3:6200 | unlocked |
| rawx       | 172.28.0.3:6201 | unlocked |
| rawx       | 172.28.0.2:6201 | unlocked |
| rawx       | 172.28.0.2:6200 | unlocked |
| rawx       | 172.28.0.4:6201 | unlocked |
| rdir       | 172.28.0.5:6300 | unlocked |
| rdir       | 172.28.0.5:6301 | unlocked |
| rdir       | 172.28.0.4:6300 | unlocked |
| rdir       | 172.28.0.4:6301 | unlocked |
| rdir       | 172.28.0.3:6301 | unlocked |
| rdir       | 172.28.0.3:6300 | unlocked |
| rdir       | 172.28.0.2:6300 | unlocked |
| rdir       | 172.28.0.2:6301 | unlocked |
+------------+-----------------+----------+
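After unlocking, the scores of the new node's services should rise above 0 within a short time; you can confirm this by re-running the listing used earlier:

# the node4 services should now report a non-zero score and Locked = False
[root@node4 ~]# openio cluster list meta2 rawx rdir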