Multi-Node Installation
Install a three-node on-premises object storage backend using the deployment tools provided by OpenIO.
Requirements
Hardware
When run as the backend layer, OpenIO SDS is lightweight and requires few resources. The front layer consists of the gateways (OpenStack Swift, Amazon S3), whose services also require few resources.
- CPU: any dual core at 1 GHz or faster
- RAM: 2 GB recommended
- Network: 1 Gb/s NIC
Operating system
As explained on our Supported Linux Distributions page, OpenIO supports the following distributions:
- CentOS 7
- Ubuntu 16.04 (Server), a.k.a. Xenial Xerus
- Ubuntu 18.04 (Server), a.k.a. Bionic Beaver
System
- Root privileges are required (using sudo).
- SELinux or AppArmor is disabled (this is managed at deployment).
- All nodes must have different hostnames.
- The /var/lib partition must support extended attributes. XFS is recommended.
- The system must be up to date.
Check the presence and type of the /var/lib partition. In this example, SGI XFS is the filesystem:
[root@centos ~]# df /var/lib
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/vda1       41931756 1624148  40307608   4% /
[root@centos ~]# file -sL /dev/vda1
/dev/vda1: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)
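Optionally, you can also confirm that extended attributes are usable on /var/lib. This quick test is not part of the original procedure and assumes the attr package (setfattr/getfattr) is installed:

touch /var/lib/xattr_test
setfattr -n user.test -v openio /var/lib/xattr_test
getfattr -n user.test /var/lib/xattr_test   # should print user.test="openio"
rm -f /var/lib/xattr_test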
If you are running CentOS or Red Hat, keep your system up to date as follows:
RedHat:
sudo yum update -y
sudo reboot
If you are using Ubuntu or Debian, keep your system up-to-date as follows:
Ubuntu:
sudo apt update -y
sudo apt upgrade -y
sudo reboot
Network
- All nodes are connected to the same LAN through the specified interface (the first one by default).
- The firewall is disabled (this is managed at deployment).
Ubuntu:
sudo ufw disable
sudo systemctl disable ufw.service
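As an optional sanity check that is not part of the playbook, you can verify that every node can reach the others over the LAN; the hostnames below are placeholders for your own node names:

for node in <name-of-remote-server1> <name-of-remote-server2> <name-of-remote-server3>; do ping -c 1 $node; done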
Setup
You only need to perform this setup on one of the nodes in the cluster (or your laptop).
- Install Ansible (official guide).
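For reference, one possible way to install Ansible from the distribution repositories is shown below; the official guide documents other methods (such as pip), so treat this as a sketch rather than the required procedure.

RedHat:
sudo yum install -y epel-release
sudo yum install -y ansible
Ubuntu:
sudo apt update
sudo apt install -y ansible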
- Install git and python-netaddr (this is managed at deployment).
RedHat:
sudo yum install git -y
Ubuntu:
sudo apt install git -y
- Clone the OpenIO Ansible playbook deployment repository:
git clone https://github.com/open-io/ansible-playbook-openio-deployment.git --branch 19.04 oiosds
cd oiosds/products/sds
Architecture
This playbook deploys a multi-node cluster as shown below:
+-----------------+   +-----------------+   +-----------------+
|    OIOSWIFT     |   |    OIOSWIFT     |   |    OIOSWIFT     |
|     FOR S3      |   |     FOR S3      |   |     FOR S3      |
+-----------------+   +-----------------+   +-----------------+
|     OPENIO      |   |     OPENIO      |   |     OPENIO      |
|       SDS       |   |       SDS       |   |       SDS       |
+-----------------+   +-----------------+   +-----------------+
Installation
First, configure the inventory according to your environment:
Change the IP addresses and the SSH user in the inventory.yml file.

ip addresses:
---
all:
  hosts:
    node1:
      ansible_host: IP_ADDRESS_OF_NODE1 # Change it with the IP of the first server
    node2:
      ansible_host: IP_ADDRESS_OF_NODE2 # Change it with the IP of the second server
    node3:
      ansible_host: IP_ADDRESS_OF_NODE3 # Change it with the IP of the third server
ssh user:
---
all:
  vars:
    ansible_user: root # Change it accordingly
Next, ensure you have SSH access to your nodes:
# generate a ssh key
$> ssh-keygen

# copy the key on all nodes
$> for node in <name-of-remote-server1> <name-of-remote-server2> <name-of-remote-server3>; do ssh-copy-id $node; done

# start a ssh-agent
$> eval "$(ssh-agent -s)"

# add the key into the agent
$> ssh-add .ssh/id_rsa

# test connection without password
$> ssh <name-of-remote-server1>
Then, you can check that everything is configured correctly using this command:
RedHat:
ansible all -i inventory.yml -bv -m ping
Ubuntu:
ansible all -i inventory.yml -bv -m ping -e 'ansible_python_interpreter=/usr/bin/python3'
Finally, run these commands:
To download and install requirements:
./requirements_install.sh
To deploy and initialize the cluster:
./deploy_and_bootstrap.sh
Post-installation checks
All the nodes are configured to use openio-cli and aws-cli.
Run this check script on one of the nodes in the cluster:
sudo /root/checks.sh
Sample output:
local check:
[root@node3 /]# /root/checks.sh
## OPENIO
 Status of services.
 KEY                           STATUS      PID GROUP
 OPENIO-account-0              UP         3531 OPENIO,account,0
 OPENIO-beanstalkd-0           UP         3492 OPENIO,beanstalkd,0
 OPENIO-conscience-0           UP         3508 OPENIO,conscience,0
 OPENIO-conscienceagent-0      UP         4696 OPENIO,conscienceagent,0
 OPENIO-ecd-0                  UP         4611 OPENIO,ecd,0
 OPENIO-memcached-0            UP         5209 OPENIO,memcached,0
 OPENIO-meta0-0                UP         4858 OPENIO,meta0,0
 OPENIO-meta1-0                UP         4895 OPENIO,meta1,0
 OPENIO-meta2-0                UP         3593 OPENIO,meta2,0
 OPENIO-meta2-1                UP         3632 OPENIO,meta2,1
 OPENIO-oio-blob-indexer-0     UP         4542 OPENIO,oio-blob-indexer,0
 OPENIO-oio-blob-indexer-1     UP         4549 OPENIO,oio-blob-indexer,1
 OPENIO-oio-blob-rebuilder-0   UP         4581 OPENIO,oio-blob-rebuilder,0
 OPENIO-oio-event-agent-0      UP         4595 OPENIO,oio-event-agent,0
 OPENIO-oio-event-agent-0.1    UP         4585 OPENIO,oio-event-agent,0
 OPENIO-oio-meta2-indexer-0    UP         3655 OPENIO,oio-meta2-indexer,0
 OPENIO-oioproxy-0             UP         2698 OPENIO,oioproxy,0
 OPENIO-oioswift-0             UP         5634 OPENIO,oioswift,0
 OPENIO-rawx-0                 UP         4317 OPENIO,rawx,0
 OPENIO-rawx-1                 UP         4460 OPENIO,rawx,1
 OPENIO-rdir-0                 UP         4559 OPENIO,rdir,0
 OPENIO-rdir-1                 UP         4569 OPENIO,rdir,1
 OPENIO-redis-0                UP         2411 OPENIO,redis,0
 OPENIO-redissentinel-0        UP         2508 OPENIO,redissentinel,0
 OPENIO-zookeeper-0            UP         3259 OPENIO,zookeeper,0
-- Display the cluster status.
+------------+-----------------+------------+------------------------------------+----------+------------+------+-------+--------+
| Type       | Addr            | Service Id | Volume                             | Location | Slots      | Up   | Score | Locked |
+------------+-----------------+------------+------------------------------------+----------+------------+------+-------+--------+
| account    | 172.28.0.4:6009 | n/a        | n/a                                | node3.0  | account    | True |    69 | False  |
| account    | 172.28.0.3:6009 | n/a        | n/a                                | node2.0  | account    | True |    69 | False  |
| account    | 172.28.0.2:6009 | n/a        | n/a                                | node1.0  | account    | True |    66 | False  |
| beanstalkd | 172.28.0.4:6014 | n/a        | /mnt/metadata1/OPENIO/beanstalkd-0 | node3.0  | beanstalkd | True |    70 | False  |
| beanstalkd | 172.28.0.3:6014 | n/a        | /mnt/metadata1/OPENIO/beanstalkd-0 | node2.0  | beanstalkd | True |    70 | False  |
| beanstalkd | 172.28.0.2:6014 | n/a        | /mnt/metadata1/OPENIO/beanstalkd-0 | node1.0  | beanstalkd | True |    70 | False  |
| meta0      | 172.28.0.4:6001 | n/a        | /mnt/metadata1/OPENIO/meta0-0      | node3.0  | meta0      | True |    91 | False  |
| meta0      | 172.28.0.3:6001 | n/a        | /mnt/metadata1/OPENIO/meta0-0      | node2.0  | meta0      | True |    91 | False  |
| meta0      | 172.28.0.2:6001 | n/a        | /mnt/metadata1/OPENIO/meta0-0      | node1.0  | meta0      | True |    90 | False  |
| meta1      | 172.28.0.4:6110 | n/a        | /mnt/metadata1/OPENIO/meta1-0      | node3.0  | meta1      | True |    71 | False  |
| meta1      | 172.28.0.3:6110 | n/a        | /mnt/metadata1/OPENIO/meta1-0      | node2.0  | meta1      | True |    71 | False  |
| meta1      | 172.28.0.2:6110 | n/a        | /mnt/metadata1/OPENIO/meta1-0      | node1.0  | meta1      | True |    71 | False  |
| meta2      | 172.28.0.4:6121 | n/a        | /mnt/metadata1/OPENIO/meta2-1      | node3.1  | meta2      | True |    71 | False  |
| meta2      | 172.28.0.4:6120 | n/a        | /mnt/metadata1/OPENIO/meta2-0      | node3.0  | meta2      | True |    71 | False  |
| meta2      | 172.28.0.3:6120 | n/a        | /mnt/metadata1/OPENIO/meta2-0      | node2.0  | meta2      | True |    71 | False  |
| meta2      | 172.28.0.3:6121 | n/a        | /mnt/metadata1/OPENIO/meta2-1      | node2.1  | meta2      | True |    71 | False  |
| meta2      | 172.28.0.2:6120 | n/a        | /mnt/metadata1/OPENIO/meta2-0      | node1.0  | meta2      | True |    71 | False  |
| meta2      | 172.28.0.2:6121 | n/a        | /mnt/metadata1/OPENIO/meta2-1      | node1.1  | meta2      | True |    71 | False  |
| oioproxy   | 172.28.0.4:6006 | n/a        | n/a                                | node3.0  | oioproxy   | True |    68 | False  |
| oioproxy   | 172.28.0.3:6006 | n/a        | n/a                                | node2.0  | oioproxy   | True |    68 | False  |
| oioproxy   | 172.28.0.2:6006 | n/a        | n/a                                | node1.0  | oioproxy   | True |    65 | False  |
| rawx       | 172.28.0.4:6201 | n/a        | /mnt/data2/OPENIO/rawx-1           | node3.1  | rawx       | True |    71 | False  |
| rawx       | 172.28.0.4:6200 | n/a        | /mnt/data1/OPENIO/rawx-0           | node3.0  | rawx       | True |    71 | False  |
| rawx       | 172.28.0.3:6200 | n/a        | /mnt/data1/OPENIO/rawx-0           | node2.0  | rawx       | True |    71 | False  |
| rawx       | 172.28.0.3:6201 | n/a        | /mnt/data2/OPENIO/rawx-1           | node2.1  | rawx       | True |    71 | False  |
| rawx       | 172.28.0.2:6200 | n/a        | /mnt/data1/OPENIO/rawx-0           | node1.0  | rawx       | True |    71 | False  |
| rawx       | 172.28.0.2:6201 | n/a        | /mnt/data2/OPENIO/rawx-1           | node1.1  | rawx       | True |    71 | False  |
| rdir       | 172.28.0.4:6301 | n/a        | /mnt/data2/OPENIO/rdir-1           | node3.1  | rdir       | True |    69 | False  |
| rdir       | 172.28.0.4:6300 | n/a        | /mnt/data1/OPENIO/rdir-0           | node3.0  | rdir       | True |    69 | False  |
| rdir       | 172.28.0.3:6301 | n/a        | /mnt/data2/OPENIO/rdir-1           | node2.1  | rdir       | True |    69 | False  |
| rdir       | 172.28.0.3:6300 | n/a        | /mnt/data1/OPENIO/rdir-0           | node2.0  | rdir       | True |    69 | False  |
| rdir       | 172.28.0.2:6301 | n/a        | /mnt/data2/OPENIO/rdir-1           | node1.1  | rdir       | True |    66 | False  |
| rdir       | 172.28.0.2:6300 | n/a        | /mnt/data1/OPENIO/rdir-0           | node1.0  | rdir       | True |    66 | False  |
+------------+-----------------+------------+------------------------------------+----------+------------+------+-------+--------+
-- Upload the /etc/passwd file to the bucket MY_CONTAINER of the project MY_ACCOUNT.
+--------+------+----------------------------------+--------+
| Name   | Size | Hash                             | Status |
+--------+------+----------------------------------+--------+
| passwd | 1135 | 75B70E178C3EA23671CF9B9C677FED0E | Ok     |
+--------+------+----------------------------------+--------+
-- Get some information about your object.
+-----------------+--------------------------------------------------------------------+
| Field           | Value                                                              |
+-----------------+--------------------------------------------------------------------+
| account         | MY_ACCOUNT                                                         |
| base_name       | 7B1F1716BE955DE2D677B68819836E4F75FD2424F6D22DB60F9F2BB40331A741.1 |
| bytes_usage     | 1.135KB                                                            |
| container       | MY_CONTAINER                                                       |
| ctime           | 1558039094                                                         |
| damaged_objects | 0                                                                  |
| max_versions    | Namespace default                                                  |
| missing_chunks  | 0                                                                  |
| objects         | 1                                                                  |
| quota           | Namespace default                                                  |
| status          | Enabled                                                            |
| storage_policy  | Namespace default                                                  |
+-----------------+--------------------------------------------------------------------+
-- List object in container.
+--------+------+----------------------------------+------------------+
| Name   | Size | Hash                             | Version          |
+--------+------+----------------------------------+------------------+
| passwd | 1135 | 75B70E178C3EA23671CF9B9C677FED0E | 1558039094586565 |
+--------+------+----------------------------------+------------------+
-- Find the services involved for your container.
+-----------------+--------------------------------------------------------------------+
| Field           | Value                                                              |
+-----------------+--------------------------------------------------------------------+
| account         | MY_ACCOUNT                                                         |
| base_name       | 7B1F1716BE955DE2D677B68819836E4F75FD2424F6D22DB60F9F2BB40331A741.1 |
| meta0           | 172.28.0.4:6001, 172.28.0.3:6001, 172.28.0.2:6001                  |
| meta1           | 172.28.0.2:6110, 172.28.0.3:6110, 172.28.0.4:6110                  |
| meta2           | 172.28.0.4:6121, 172.28.0.3:6120, 172.28.0.2:6120                  |
| meta2.sys.peers | 172.28.0.2:6120, 172.28.0.3:6120, 172.28.0.4:6121                  |
| name            | MY_CONTAINER                                                       |
| status          | Enabled                                                            |
+-----------------+--------------------------------------------------------------------+
-- Save the data stored in the given object to the --file destination.
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
-- Show the account informations.
+-----------------+------------+
| Field           | Value      |
+-----------------+------------+
| account         | MY_ACCOUNT |
| bytes           | 1.135KB    |
| containers      | 1          |
| ctime           | 1558039094 |
| damaged_objects | 0          |
| metadata        | {}         |
| missing_chunks  | 0          |
| objects         | 1          |
+-----------------+------------+
-- Delete your object.
+--------+---------+
| Name   | Deleted |
+--------+---------+
| passwd | True    |
+--------+---------+
-- Delete your empty container.
--
------
## AWS
 Create a bucket mybucket.
make_bucket: mybucket
-- Upload the /etc/passwd file to the bucket mybucket.
upload: etc/passwd to s3://mybucket/passwd
-- List your buckets.
2019-05-16 20:38:21    1.1 KiB passwd
Total Objects: 1
   Total Size: 1.1 KiB
-- Save the data stored in the given object to the given file.
download: s3://mybucket/passwd to tmp/passwd.aws
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
-- Delete your object.
delete: s3://mybucket/passwd
-- Delete your empty bucket.
remove_bucket: mybucket
-- Done !
++++
AWS S3 summary:
  endpoint: http://172.28.0.4:6007
  region: us-east-1
  access key: demo:demo
  secret key: DEMO_PASS
  ssl: false
  signature_version: s3v4
  path style: true
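As noted above, every node is configured with the openio and aws command-line clients, so you can also verify the cluster manually. A minimal sketch, assuming the default OPENIO namespace and the demo credentials listed in the AWS S3 summary above:

# list the services known to the OpenIO cluster
openio cluster list --oio-ns OPENIO

# list buckets through the S3 gateway, using the endpoint and keys from the summary
export AWS_ACCESS_KEY_ID=demo:demo
export AWS_SECRET_ACCESS_KEY=DEMO_PASS
aws --endpoint-url http://172.28.0.4:6007 --region us-east-1 s3 ls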
Manual requirements
This deployment is designed to be as simple as possible.
Set openio_manage_os_requirement to false in the inventory file if you wish to manage the OS requirements manually.
manual requirements:
---
all:
  hosts:
    …
  vars:
    openio_manage_os_requirement: false
SELinux and AppArmor
SELinux or AppArmor must be disabled:
SELinux:
sudo sed -i -e 's@^SELINUX=enforcing$@SELINUX=disabled@g' /etc/selinux/config
sudo setenforce 0
sudo systemctl disable selinux.service
AppArmor:
sudo service apparmor stop
sudo service apparmor teardown
sudo update-rc.d -f apparmor remove
Firewall
The firewall must be disabled:
firewalld:
sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service
ufw:
sudo ufw disable
sudo systemctl disable ufw.service
Proxy
Set the proxy environment variables in the inventory file.
http proxy:
---
all:
  hosts:
    …
  vars:
    openio_environment:
      http_proxy: http://proxy.example.com:8080
      https_proxy: http://proxy.bos.example.com:8080
Customizing your deployment
Manage NTP configuration
You can set the time settings in the inventory file.
By default, the deployment does not change your timezone, but it enables the NTP service and sets four NTP servers:
ntp:
---
all:
  hosts:
    …
  vars:
    ntp_enabled: true
    ntp_manage_config: true
    ntp_manage_timezone: false
    ntp_timezone: "Etc/UTC"
    ntp_area: ""
    ntp_servers:
      - "0{{ ntp_area }}.pool.ntp.org iburst"
      - "1{{ ntp_area }}.pool.ntp.org iburst"
      - "2{{ ntp_area }}.pool.ntp.org iburst"
      - "3{{ ntp_area }}.pool.ntp.org iburst"
    ntp_restrict:
      - "127.0.0.1"
      - "::1"
If needed, you can add your own settings:
custom ntp:
---
all:
  hosts:
    …
  vars:
    ntp_enabled: true
    ntp_manage_config: true
    ntp_manage_timezone: true
    ntp_timezone: "Europe/Paris"
    ntp_area: ".fr"
    ntp_servers:
      - "0{{ ntp_area }}.pool.ntp.org iburst"
      - "1{{ ntp_area }}.pool.ntp.org iburst"
      - "2{{ ntp_area }}.pool.ntp.org iburst"
      - "3{{ ntp_area }}.pool.ntp.org iburst"
    ntp_restrict:
      - "127.0.0.1"
      - "::1"
Manage storage volumes
You can customize all storage devices per node in the host declaration section.
In this example, the nodes have two mounted volumes to store data and one to store metadata:
storage definition:
---
all:
  hosts:
    node1:
      ansible_host: IP_ADDRESS_OF_NODE1
      openio_data_mounts:
        - mountpoint: /mnt/data1
          partition: /dev/vdb
        - mountpoint: /mnt/data2
          partition: /dev/vdc
      openio_metadata_mounts:
        - mountpoint: /mnt/metadata1
          partition: /dev/vdd
          meta2_count: 2
    node2:
      ansible_host: IP_ADDRESS_OF_NODE2
      openio_data_mounts:
        - mountpoint: /mnt/data1
          partition: /dev/vdb
        - mountpoint: /mnt/data2
          partition: /dev/vdc
      openio_metadata_mounts:
        - mountpoint: /mnt/metadata1
          partition: /dev/vdd
          meta2_count: 2
    node3:
      ansible_host: IP_ADDRESS_OF_NODE3
      openio_data_mounts:
        - mountpoint: /mnt/data1
          partition: /dev/vdb
        - mountpoint: /mnt/data2
          partition: /dev/vdc
      openio_metadata_mounts:
        - mountpoint: /mnt/metadata1
          partition: /dev/vdd
          meta2_count: 2
  vars:
    ansible_user: root
The meta2_count parameter defines how many meta2 instances are deployed for the device.
If you want to be able to lose one server (out of 3) and still create new containers, you need at least 3 meta2 instances up. Without this parameter, you can still read data from an existing container, but you cannot create or delete containers.
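For example, with meta2_count: 2 on each of the three nodes you end up with six meta2 instances, so losing one node still leaves four of them, which satisfies the minimum of 3. As a rough way to verify this after deployment (assuming the openio CLI and the default OPENIO namespace of this setup), you can list the meta2 services and count how many are up:

openio cluster list meta2 --oio-ns OPENIO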
Manage the ssh connection
If your nodes don’t all have the same ssh user configured, you can define a specific ssh user (or key) for the deployment of each node.
global ssh:
---
all:
  hosts:
    …
  vars:
    ansible_user: my_user
    ansible_ssh_private_key_file: /home/john/.ssh/id_rsa

specific ssh:
---
all:
  hosts:
    node1:
      ansible_host: IP_ADDRESS_OF_NODE1
      …
    node2:
      ansible_host: IP_ADDRESS_OF_NODE2
      …
    node3:
      ansible_host: IP_ADDRESS_OF_NODE3
      …
      ansible_user: my_other_user
      ansible_ssh_private_key_file: /home/john/.ssh/id_rsa_2
  vars:
    ansible_user: my_user
    ansible_ssh_private_key_file: /home/john/.ssh/id_rsa
Manage the data network interface used
Servers can have several network interfaces. The most common setup is one interface for management and another for data, although both roles can share the same interface.
global interface:
---
all:
  …
  children:
    openio:
      …
      vars:
        openio_bind_interface: bond0
        openio_bind_address: "{{ ansible_bond0.ipv4.address }}"
As with the SSH connection, these settings can be defined per server.
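For instance, a per-node override might look like the following sketch, where eth1 is an assumed name for the data interface of node1:

per-node interface:
---
all:
  hosts:
    node1:
      ansible_host: IP_ADDRESS_OF_NODE1
      openio_bind_interface: eth1
      openio_bind_address: "{{ ansible_eth1.ipv4.address }}"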
Manage S3 authentication
Set name, password, and roles in the inventory file.
S3 users:
---
all:
  …
  children:
    openio:
      …
      vars:
        # S3 users
        openio_oioswift_users:
          - name: "demo:demo"
            password: "DEMO_PASS"
            roles:
              - member
          - name: "test:tester"
            password: "testing"
            roles:
              - member
              - reseller_admin
Change user openio’s UID/GID
You can define the UID and GID of the openio user in the inventory file.
uid/gid user openio:
---
all:
  hosts:
    …
  vars:
    openio_user_openio_uid: 120
    openio_group_openio_gid: 220
Test on Docker
If you don't have physical nodes to test our solution, you can spawn some Docker containers with docker-compose.
docker-compose:
$ docker-compose up -d
Creating node1 ... done
Creating node2 ... done
Creating node3 ... done
Then, configure Docker containers in the inventory file.
inventory with docker:
---
all:
  hosts:
    node1:
      ansible_host: node1
      …
    node2:
      ansible_host: node2
      …
    node3:
      ansible_host: node3
      …
  vars:
    ansible_user: root
    ansible_connection: docker

    # Disable some checks
    openio_checks_filter:
      reachability: false
      mountpoint: false

    # use less memory
    openio_account_workers: 1
    openio_oioswift_workers: 1
    namespace_meta1_digits: "1"
    openio_event_agent_workers: 1
    openio_zookeeper_parallel_gc_threads: 1
    openio_zookeeper_memory: "256M"
    openio_minimal_score_for_volume_admin_bootstrap: 5
    openio_minimal_score_for_directory_bootstrap: 5
Next, install the iproute package in the containers.
install iproute in containers:
ansible all -i inventory.yml -a "yum install -y iproute"
Finally, you can check and deploy.
deploy on containers:
ansible all -i inventory.yml -m ping
./requirements_install.sh
./deploy_and_bootstrap.sh