OIOFS Node Installation
Tip
oio-fs is part of our paid plans.
Get in touch with the team or visit the page describing our plans.
Requirements
Licence
- A login and a password provided by OpenIO Support
Hardware
- Storage drive: A storage device for cache
Operating system
- CentOS 7
- Ubuntu 18.04 (Server)
System
- Root privileges are required (using sudo).
- SELinux or AppArmor are disabled (managed at deployment).
SELinux:

sudo sed -i -e 's@^SELINUX=enforcing$@SELINUX=disabled@g' /etc/selinux/config
sudo setenforce 0
sudo systemctl disable selinux.service

AppArmor:

sudo service apparmor stop
sudo service apparmor teardown
sudo update-rc.d -f apparmor remove
- The system must be up to date.
RedHat:

sudo yum update -y
sudo reboot

Ubuntu:

sudo apt update -y
sudo apt upgrade -y
sudo reboot
Network
- This node must be connected to the same LAN as the OpenIO SDS cluster.
Setup
You only need to perform this setup on one of the nodes in the cluster (or your laptop).
- Install git.

RedHat:

$> sudo yum install git -y

Ubuntu:

$> sudo apt install git -y
- Clone the OpenIO ansible playbook deployment repository
$> git clone https://github.com/open-io/ansible-playbook-openio-deployment.git --branch 20.04 oiosds
$> cd oiosds/products/sds
- Install Ansible for the current user.
Ansible:

$> python3 -m venv openio_venv
$> source openio_venv/bin/activate
$> pip install -r ansible.pip

Architecture
This playbook will deploy an oiofs mount connected to a SDS cluster as shown below:
+----------------------+          +---------------------+          +----------------------+
|  Clients             |   NFS /  |  OpenIO FS Server   |          |  OpenIO SDS Servers  |
|  (NFS, CIFS shares)  | <------> |  (oiofs mounts)     | <------> |  (object storage)    |
+----------------------+   CIFS   +---------------------+          +----------------------+

OpenIO FS provides file-oriented access on an object storage backend.
Configuration
Inventory
Replace the inventory.yml file with the inventory named inventory_with_oiofs.yml:

replace inventory.yml:

$> cp inventory_with_oiofs.yml inventory.yml
Fill the inventory according to your environment:
Edit the inventory.yml file and adapt the IP addresses and SSH user:

ip address:

---
all:
  hosts:
    # node for OIOFS
    node4:
      ansible_host: IP_ADDRESS_OF_NODE4 # Change it with the IP of the first server
      ansible_user: root
      openio_data_mounts: []
      openio_metadata_mounts: []
Set your credentials in the inventory file.

oiofs repository credentials:

---
all:
  children:
    openio:
      vars:
        …
        openio_repositories_credentials:
          oiofs:
            user: OIOFS_REPO_USER
            password: OIOFS_REPO_PASSWORD
You can check that everything is configured correctly using this command:
RedHat:

$> ansible all -i inventory.yml -bv -m ping

Ubuntu:

$> ansible all -i inventory.yml -bv -m ping -e 'ansible_python_interpreter=/usr/bin/python'
Mandatory SDS services
An ecd and an oioproxy are required on the oiofs node.
Run this command to deploy them:
prepare oiofs node:

$> ansible-playbook -i inventory.yml \
     -t check,base,ecd,oioproxy,namespace \
     -e "openio_maintenance_mode=false" \
     main.yml
Here is the expected result:
result mandatory services:

[root@node4 /]# cat /etc/oio/sds.conf.d/OPENIO
# OpenIO managed
[OPENIO]
# endpoints
conscience=172.28.0.4:6000
zookeeper=172.28.0.2:6005,172.28.0.4:6005,172.28.0.3:6005
proxy=172.28.0.5:6006
event-agent=beanstalk://172.28.0.5:6014
ecd=172.28.0.5:6017

udp_allowed=yes

ns.meta1_digits=1
ns.storage_policy=THREECOPIES
ns.chunk_size=104857600
ns.service_update_policy=meta2=KEEP|3|1|;rdir=KEEP|1|1|;

sqliterepo.repo.soft_max=1000
sqliterepo.repo.hard_max=1000

[root@node4 /]# gridinit_cmd status
KEY                STATUS      PID GROUP
OPENIO-ecd-0       UP         2190 OPENIO,ecd,0
OPENIO-oioproxy-0  UP         1799 OPENIO,oioproxy,0
OpenIO FS definition
- The following parameters must be set:
  - oiofs_global_mount_directory: defines the directory where the oiofs mounts will be placed.
  - oiofs_cache_device: defines the dedicated block device used for cache.
  - oiofs_cache_folder: defines the cache folder (⚠ it should already be mounted).
  - oiofs_cache_low_watermark and oiofs_cache_high_watermark: define the acceptable cache fill percentages.
mountpoints and cache:

### OIOFS
oiofs:
  hosts:
    node4: {}
  vars:
    oiofs_global_mount_directory: "/mnt"
    oiofs_global_redis_sentinel_servers: "{{ groups[oiofs_global_redis_inventory_groupname] \
      | map('extract', hostvars, ['openio_bind_address']) \
      | map('regex_replace', '$', ':6012') | list }}"

    ## CACHE
    oiofs_cache_device: /dev/vdb1
    oiofs_cache_folder: "{{ oiofs_global_mount_directory }}/cache"
    oiofs_cache_high_watermark: 80
    oiofs_cache_low_watermark: 50

oiofs_redis:
  hosts: {}
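The configuration above expects oiofs_cache_device to be formatted and already mounted on oiofs_cache_folder. A minimal preparation sketch, assuming the /dev/vdb1 device and /mnt/cache folder from the example (adjust both to your own hardware and inventory):

```shell
#!/bin/sh
# Hypothetical values taken from the inventory example above; adapt to your setup.
DEV=/dev/vdb1
CACHE=/mnt/cache
FSTAB_LINE="$DEV $CACHE xfs defaults 0 0"

if [ -b "$DEV" ]; then
    # Format the dedicated cache device and mount it where oiofs expects it.
    sudo mkfs.xfs -f "$DEV"
    sudo mkdir -p "$CACHE"
    sudo mount "$DEV" "$CACHE"
    # Make the mount persistent across reboots.
    echo "$FSTAB_LINE" | sudo tee -a /etc/fstab
else
    echo "$DEV is not a block device on this host; skipping cache setup"
fi
```

The filesystem type is a choice, not a requirement from this guide; any local filesystem suitable for many small files works for the cache.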
- oiofs_mountpoints: defines the OIOFS mounts that will be accessible through Samba and NFS. Each entry supports the following keys:
  - active_mode: defines the management mode of this mount. If you are not on a high-availability deployment, this parameter must be set to true.
  - namespace, account and container: define the SDS context used by OpenIO FS.
  - state: present/absent ensures that the mount is defined and up, or absent.
  - http_server: defines a socket to request statistics about the mount. It must be unique per mountpoint.
mountpoints and cache:

### OIOFS
oiofs:
  vars:
    …
    oiofs_mountpoints:
      - active_mode: true
        namespace: "{{ namespace }}"
        # account/container
        account: MY_ACCOUNT1
        container: MY_CONTAINER1
        state: present
        http_server: 127.0.0.1:6989
        # SDS
        openio_sds_conscience_url: "{{ openio_namespace_conscience_url }}"
        oioproxy_url: "{{ openio_bind_address }}:6006"
        ecd_url: "{{ openio_bind_address }}:6017"
        redis_sentinel_servers: "{{ oiofs_global_redis_sentinel_servers }}"
        redis_sentinel_name: "{{ namespace }}-master-1"
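Once the mount is up, the http_server socket can be queried for per-mount statistics. A quick sketch, assuming the 127.0.0.1:6989 address from the example and a /stats path (the exact endpoint path is an assumption here and may differ between oio-fs versions; check your version's documentation):

```shell
#!/bin/sh
# Address from the example mountpoint above; /stats is an assumed endpoint path.
STATS_URL="http://127.0.0.1:6989/stats"

if command -v curl >/dev/null 2>&1; then
    # --max-time keeps the check short when the mount is not running.
    curl --max-time 2 -s "$STATS_URL" || echo "oiofs stats endpoint not reachable at $STATS_URL"
else
    echo "curl not installed; cannot query $STATS_URL"
fi
```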
Exports
If you need to export the OpenIO FS mount, you must define some additional parameters in oiofs_mountpoints:

- user, group: define the owners of the mount.
- ignore_flush: ignores fsync and fflush calls, so only oio-fs chooses when to flush, not the client.
- auto_retry: if set to true, retries calls (write, read, …) on a full cache instead of returning EAGAIN.
- export: defines the unique type of export for the mount.
- fsid: for NFS, this parameter must be unique.
exports:

### OIOFS
oiofs:
  vars:
    …
    oiofs_mountpoints:
      - active_mode: true
        …
        # EXPORTS
        user: root
        group: root
        ignore_flush: true
        auto_retry: false
        export: nfs
        nfs_exports:
          client: "*"
          options:
            - "rw"
            - "async"
            - "no_root_squash"
            - "fsid=1"
          uid: 0
          gid: 0
Installation
Run this command to deploy the OpenIO FS:
deploy oiofs:

$> ansible-playbook -i inventory.yml main.yml -t fact
$> ansible-playbook -i inventory.yml playbooks/oiofs.yml -e "openio_maintenance_mode=false"
Post-install Checks
The node is now configured and the filesystem is mounted. Run gridinit_cmd status and df -h on the node to verify.
Sample outputs:
check mounts:

[root@node4 /]# gridinit_cmd status
KEY                                                          STATUS      PID GROUP
OPENIO-ecd-0                                                 UP         2128 OPENIO,ecd,0
OPENIO-oiofs-OPENIO-MY_ACCOUNT1-MY_CONTAINER_EXPORTED_NFS    UP         2615 OPENIO,oiofs,OPENIO-MY_ACCOUNT1-MY_CONTAINER_EXPORTED_NFS
OPENIO-oiofs-OPENIO-MY_ACCOUNT1-MY_CONTAINER_EXPORTED_SAMBA  UP         2653 OPENIO,oiofs,OPENIO-MY_ACCOUNT1-MY_CONTAINER_EXPORTED_SAMBA
OPENIO-oioproxy-0                                            UP         1729 OPENIO,oioproxy,0

[root@node4 /]# df -h
Filesystem  Size  Used Avail Use% Mounted on
overlay      60G  8.6G   52G  15% /
tmpfs        64M     0   64M   0% /dev
/dev/vda1    60G  8.6G   52G  15% /etc/hosts
shm          64M     0   64M   0% /dev/shm
tmpfs       7.8G     0  7.8G   0% /sys/fs/cgroup
tmpfs       7.8G  8.1M  7.8G   1% /run
oiofs-fuse   16E     0   16E   0% /mnt/oiofs-OPENIO-MY_ACCOUNT1-MY_CONTAINER_EXPORTED_NFS
oiofs-fuse   16E     0   16E   0% /mnt/oiofs-OPENIO-MY_ACCOUNT1-MY_CONTAINER_EXPORTED_SAMBA

check SDS:

[root@node4 /]# export OIO_NS=OPENIO
[root@node4 /]# cp -r /etc/ /mnt/oiofs-OPENIO-MY_ACCOUNT1-MY_CONTAINER_EXPORTED_NFS/
[root@node4 /]# openio --oio-account MY_ACCOUNT1 container list
+-------------------------------+---------+-------+------------------+------------------------------------------------------------------+
| Name                          | Bytes   | Count | Mtime            | CID                                                              |
+-------------------------------+---------+-------+------------------+------------------------------------------------------------------+
| MY_CONTAINER_EXPORTED_NFS_0   | 9439358 |   284 | 1575144085.8659  | A0C3BD03C85BD2B1810C5168016C2C31A7A81EC0BA2209264F6D9FFF296D5972 |
| MY_CONTAINER_EXPORTED_SAMBA_0 |       0 |     0 | 1575143929.50248 | 159DEBA12D8F4CBB03678DC8EB139B06548D04678941A225E1C14BF98180FB8B |
+-------------------------------+---------+-------+------------------+------------------------------------------------------------------+
[root@node4 /]# openio --oio-account MY_ACCOUNT1 container show MY_CONTAINER_EXPORTED_NFS_0
+-----------------+--------------------------------------------------------------------+
| Field           | Value                                                              |
+-----------------+--------------------------------------------------------------------+
| account         | MY_ACCOUNT1                                                        |
| base_name       | A0C3BD03C85BD2B1810C5168016C2C31A7A81EC0BA2209264F6D9FFF296D5972.1 |
| bytes_usage     | 9.439MB                                                            |
| container       | MY_CONTAINER_EXPORTED_NFS_0                                        |
| ctime           | 1575143927                                                         |
| damaged_objects | 0                                                                  |
| max_versions    | Namespace default                                                  |
| missing_chunks  | 0                                                                  |
| objects         | 284                                                                |
| quota           | Namespace default                                                  |
| status          | Enabled                                                            |
| storage_policy  | Namespace default                                                  |
+-----------------+--------------------------------------------------------------------+
[root@node4 /]# openio --oio-account MY_ACCOUNT1 object list MY_CONTAINER_EXPORTED_NFS_0
+------+-------+----------------------------------+------------------+
| Name | Size  | Hash                             | Version          |
+------+-------+----------------------------------+------------------+
| 100  | 65536 | A5AE49867124AC75F029A9A33AF31BAD | 1575144084612691 |
| 101  | 16384 | 1C5566B67C186B2EE1F58346B104785E | 1575144084621875 |
| 102  |    45 | 84C612BDA6CEB71C9ECC0A06B363F049 | 1575144084628161 |
…
| 96   |  1746 | 643B68A0994AA69649E5B3F13DCF5635 | 1575144084595684 |
| 97   |  1735 | B5EAAEE8A77829325A77CBB613380F90 | 1575144084608830 |
+------+-------+----------------------------------+------------------+

check exports:

[root@node4 /]# showmount -e IP_ADDRESS_OF_NODE4
Export list for IP_ADDRESS_OF_NODE4:
/mnt/oiofs-OPENIO-MY_ACCOUNT1-MY_CONTAINER_EXPORTED_NFS *
[root@node4 /]# smbclient -L IP_ADDRESS_OF_NODE4 -U smbguest
Enter WORKGROUP\smbguest's password:

        Sharename                    Type      Comment
        ---------                    ----      -------
        MY_CONTAINER_EXPORTED_SAMBA  Disk      Samba oiofs
        IPC$                         IPC       IPC Service (Samba 4.8.3)
Reconnecting with SMB1 for workgroup listing.

        Server               Comment
        ---------            -------

        Workgroup            Master
        ---------            -------
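After the checks above succeed, a client machine can consume the NFS export. A sketch of the client side, assuming a hypothetical /mnt/oiofs_nfs target directory (not part of this guide) and the IP_ADDRESS_OF_NODE4 placeholder from the inventory:

```shell
#!/bin/sh
# IP_ADDRESS_OF_NODE4 is the inventory placeholder; replace it with the real IP.
SERVER=IP_ADDRESS_OF_NODE4
EXPORT=/mnt/oiofs-OPENIO-MY_ACCOUNT1-MY_CONTAINER_EXPORTED_NFS
TARGET=/mnt/oiofs_nfs   # hypothetical client-side mountpoint

if showmount -e "$SERVER" >/dev/null 2>&1; then
    # The export list is reachable: create the mountpoint and mount it.
    sudo mkdir -p "$TARGET"
    sudo mount -t nfs "$SERVER:$EXPORT" "$TARGET"
    df -h "$TARGET"
else
    echo "NFS server $SERVER is not reachable from this host; skipping mount"
fi
```

The Samba share can be tested the same way with smbclient, as shown in the check exports output above.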