OIOFS Node Installation
Tip
oio-fs is part of our paid plans.
Get in touch with the team or visit the page describing our plans.
Requirements
Licence
- A login and a password provided by OpenIO Support
Hardware
- Storage drive: a dedicated block device for the oio-fs cache
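To identify the block device you will dedicate to the cache, a simple listing helps (the device name /dev/vdb used later in this guide is only an example and depends on your hardware):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT   # list block devices and pick the one reserved for the oio-fs cache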
Operating system
- CentOS 7
- Ubuntu 16.04 (Server)
- Ubuntu 18.04 (Server)
System
- Root privileges are required (using sudo).
- SELinux or AppArmor is disabled (managed at deployment); a quick check is shown after the update commands below.
- The system must be up to date.
RedHat:
sudo yum update -y
sudo reboot

Ubuntu:
sudo apt update -y
sudo apt upgrade -y
sudo reboot
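If you want to confirm that SELinux (RedHat) or AppArmor (Ubuntu) is effectively disabled before deploying, these standard commands report the current status (shown here only as a quick sanity check):

RedHat:
getenforce        # expected output: Permissive or Disabled

Ubuntu:
sudo aa-status    # reports whether the AppArmor module is loaded and which profiles are enforced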
Network
- This node must be connected to the same LAN as the OpenIO SDS cluster.
Setup
You only need to perform this setup on one of the nodes in the cluster (or your laptop).
- Install Ansible (official guide).
- Install git.

RedHat:
sudo yum install git -y

Ubuntu:
sudo apt install git -y
- Clone the OpenIO ansible playbook deployment repository.
git clone https://github.com/open-io/ansible-playbook-openio-deployment.git --branch 19.04 openio
cd openio/products/sds
Architecture
This playbook will deploy an oiofs mount connected to an SDS cluster, as shown below:
[Diagram: the OpenIO SDS servers are connected to the OpenIO FS server, which provides file-oriented access (CIFS and NFS exports) on top of the object storage backend.]
Configuration
Inventory
Fill the inventory according to your environment:
Edit the inventory.yml file and adapt the IP addresses and SSH user:

ip address
---
all:
  hosts:
    # node for OIOFS
    node4:
      ansible_host: IP_ADDRESS_OF_NODE4 # Change it with the IP of the first server
      ansible_user: root
      openio_data_mounts: []
      openio_metadata_mounts: []
You can check that everything is configured correctly using this command:
RedHat:
ansible all -i inventory.yml -bv -m ping

Ubuntu:
ansible all -i inventory.yml -bv -m ping -e 'ansible_python_interpreter=/usr/bin/python'
Credentials
You can set your credentials in the inventory file.
oiofs repository credentials
---
all:
  children:
    openio:
      vars:
        openio_repositories_credentials:
          oiofs:
            user: OIOFS_REPO_USER
            password: OIOFS_REPO_PASSWORD
Mandatory SDS services
An ecd and an oioproxy are required on the oiofs node. To deploy them, fill the inventory file as below:
oiofs node declaration
---
all:
  hosts:
    …
    node4:
      ansible_host: IP_ADDRESS_OF_NODE4 # Change it with the IP of the fourth server
  vars:
    ansible_user: root
  children:
    openio:
      children:
        …
        oiofs: {}
        oiofs_redis: {}
      vars:
        …
        openio_repositories_credentials:
          oiofs:
            user: OIOFS_REPO_USER
            password: OIOFS_REPO_PASSWORD

    ### SDS
    …
    ecd:
      children:
        backs: {}
      hosts:
        node4: {}
    …
    namespace:
      children:
        openio: {}
      vars:
        …
      hosts:
        node4: {}
    …
    oioproxy:
      children:
        openio: {}
      hosts:
        node4: {}

    ### OIOFS
    oiofs:
      hosts:
        node4: {}
      vars: {}
    oiofs_redis:
      hosts: {}
...
Then, run this command:
prepare oiofs node
ansible-playbook -i inventory.yml \
  -t check,base,ecd,oioproxy,namespace \
  -e "openio_maintenance_mode=false" \
  main.yml
Here is the expected result:
result mandatory services
[root@node4 /]# cat /etc/oio/sds.conf.d/OPENIO
# OpenIO managed
[OPENIO]
# endpoints
conscience=172.28.0.4:6000
zookeeper=172.28.0.2:6005,172.28.0.4:6005,172.28.0.3:6005
proxy=172.28.0.5:6006
event-agent=beanstalk://172.28.0.5:6014
ecd=172.28.0.5:6017

udp_allowed=yes

meta1_digits=1

ns.meta1_digits=1
ns.storage_policy=THREECOPIES
ns.chunk_size=104857600
ns.service_update_policy=meta2=KEEP|3|1|;rdir=KEEP|1|1|;

[root@node4 /]# gridinit_cmd status
KEY               STATUS      PID GROUP
OPENIO-ecd-0      UP         1412 OPENIO,ecd,0
OPENIO-oioproxy-0 UP         1080 OPENIO,oioproxy,0
OpenIO FS definition
The following parameters must be set:
- oiofs_global_mount_directory: defines the directory where the oiofs mounts will be placed.
- oiofs_cache_device: defines the dedicated block device used for cache.
- oiofs_cache_folder: defines the cache folder (⚠ it should already be mounted); see the example after this list.
- oiofs_cache_low_watermark and oiofs_cache_high_watermark: define the acceptable percentage range of cache fill.
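The cache folder is expected to be backed by the dedicated cache device before running the playbook. As a minimal sketch only, assuming /dev/vdb as the cache device, an XFS filesystem, and the default /mnt/oiofs_cache folder (adapt the device, filesystem, and path to your environment):

sudo mkfs.xfs /dev/vdb                       # format the dedicated cache device
sudo mkdir -p /mnt/oiofs_cache               # create the cache folder
sudo mount /dev/vdb /mnt/oiofs_cache         # mount it before running the playbook
echo '/dev/vdb /mnt/oiofs_cache xfs defaults 0 0' | sudo tee -a /etc/fstab   # optional: make the mount persistent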
mountpoints and cache
### OIOFS
oiofs:
  hosts:
    node4: {}
  vars:
    oiofs_global_mount_directory: "/mnt"
    oiofs_global_redis_sentinel_servers: "{{ groups[oiofs_global_redis_inventory_groupname] \
      | map('extract', hostvars, ['openio_bind_address']) \
      | map('regex_replace', '$', ':6012') | list }}"

    ## CACHE
    oiofs_cache_device: /dev/vdb
    oiofs_cache_folder: "{{ oiofs_global_mount_directory }}/oiofs_cache"
    oiofs_cache_high_watermark: 80
    oiofs_cache_low_watermark: 50

oiofs_redis:
  hosts: {}
- oiofs_mountpoints: defines the OIOFS mounts that will be accessible through Samba and NFS.
- active_mode: defines the management mode of this mount. If you are not on a high availability deployment, this parameter must be set to true.
- namespace, account and container: define the SDS context used by OpenIO FS.
- state: present/absent; ensures the mount is defined and up, or absent.
- http_server: defines a socket to request statistics about the mount. Must be unique per mountpoint.
mountpoints and cache
### OIOFS
oiofs:
  vars:
    oiofs_mountpoints:
      - active_mode: true
        namespace: "{{ namespace }}"

        # account/container
        account: MY_ACCOUNT1
        container: MY_CONTAINER1
        state: present
        http_server: 127.0.0.1:6989

        # SDS
        openio_sds_conscience_url: "{{ openio_namespace_conscience_url }}"
        oioproxy_url: "{{ openio_bind_address }}:6006"
        ecd_url: "{{ openio_bind_address }}:6017"
        redis_sentinel_servers: "{{ oiofs_global_redis_sentinel_servers }}"
        redis_sentinel_name: "{{ namespace }}-master-1"
Exports
If you need to export the OpenIO FS mount, you must define some additional parameters in oiofs_mountpoints:
- user, group: define the owners of the mount.
- ignore_flush: allows ignoring fsync and fflush calls, so that only oio-fs chooses when to flush, not the client.
- auto_retry: if set to true, retries calls (write, read, …) on a full cache instead of returning EAGAIN.
- exports: defines the unique type of export for the mount.
- fsid: for NFS, this parameter must be unique.
exports
### OIOFS
oiofs:
  vars:
    oiofs_mountpoints:
      - active_mode: true
        …

        # EXPORTS
        user: root
        group: root
        ignore_flush: true
        auto_retry: false
        exports:
          nfs:
            client: "*"
            options:
              - "rw"
              - "async"
              - "no_root_squash"
              - "fsid=1"
            uid: 0
            gid: 0
Installation
Run this command to deploy the OpenIO FS:
deploy oiofs
ansible-playbook -i inventory.yml playbooks/oiofs.yml -e "openio_maintenance_mode=false"
Post-install Checks
Check that the node is configured and that the filesystem is mounted by running these commands on the node: gridinit_cmd status and df -h.
Sample output:
check mounts
[root@node4 /]# gridinit_cmd status
KEY                                           STATUS      PID GROUP
OPENIO-ecd-0                                  UP         1669 OPENIO,ecd,0
OPENIO-oiofs-OPENIO-MY_ACCOUNT1-MY_CONTAINER1 UP         3194 OPENIO,oiofs,OPENIO-MY_ACCOUNT1-MY_CONTAINER1
OPENIO-oioproxy-0                             UP         1267 OPENIO,oioproxy,0

[root@node4 /]# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          25G   17G  8.9G  65% /
tmpfs            64M     0   64M   0% /dev
/dev/vdb       1014M   33M  982M   4% /mnt/oiofs_cache
/dev/vda1        25G   17G  8.9G  65% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           3.9G  8.1M  3.9G   1% /run
oiofs-fuse       16E  2.5M   16E   1% /mnt/oiofs-OPENIO-MY_ACCOUNT1-MY_CONTAINER1

[root@node4 /]# export OIO_NS=OPENIO
[root@node4 /]# cp -r /etc/ /mnt/oiofs-OPENIO-MY_ACCOUNT1-MY_CONTAINER1/
[root@node4 /]# openio --oio-account MY_ACCOUNT1 container list
+-----------------+---------+-------+
| Name            |   Bytes | Count |
+-----------------+---------+-------+
| MY_CONTAINER1   |       0 |     0 |
| MY_CONTAINER1_0 | 2653340 |   245 |
+-----------------+---------+-------+
[root@node4 /]# openio --oio-account MY_ACCOUNT1 container show MY_CONTAINER1_0
+-----------------+--------------------------------------------------------------------+
| Field           | Value                                                              |
+-----------------+--------------------------------------------------------------------+
| account         | MY_ACCOUNT1                                                        |
| base_name       | 249876D3CCF1C834C0F56006251E56066A60933F2A0A6A6D3E4B0A1FB121DFA5.1 |
| bytes_usage     | 2.653MB                                                            |
| container       | MY_CONTAINER1_0                                                    |
| ctime           | 1558100572                                                         |
| damaged_objects | 0                                                                  |
| max_versions    | Namespace default                                                  |
| missing_chunks  | 0                                                                  |
| objects         | 245                                                                |
| quota           | Namespace default                                                  |
| status          | Enabled                                                            |
| storage_policy  | Namespace default                                                  |
+-----------------+--------------------------------------------------------------------+
[root@node4 /]# openio --oio-account MY_ACCOUNT1 object list MY_CONTAINER1_0
+------+--------+----------------------------------+------------------+
| Name |   Size | Hash                             | Version          |
+------+--------+----------------------------------+------------------+
| 100  |    192 | 20697B6A640CCD785CB8C96AC8C1FF7C | 1558100630655261 |
| 101  |    232 | AC222217925C4552D63A8982A1C2937C | 1558100630699728 |
| 102  |    701 | 2918A1268CD800BBBEB949D1ACA9AD4A | 1558100630701348 |
…
| 97   |    393 | 6038E70C459D5F53686E47BE9C6C8781 | 1558100630602393 |
| 99   |    192 | 20697B6A640CCD785CB8C96AC8C1FF7C | 1558100630647614 |
+------+--------+----------------------------------+------------------+
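If you defined an NFS export, you can also check it from another machine on the same network. This is only an illustration: the exported path is assumed to be the oiofs mountpoint shown above, and IP_ADDRESS_OF_NODE4 is the address used in the inventory; adjust both to your deployment.

showmount -e IP_ADDRESS_OF_NODE4     # list the exports published by the oiofs node
sudo mkdir -p /mnt/test
sudo mount -t nfs IP_ADDRESS_OF_NODE4:/mnt/oiofs-OPENIO-MY_ACCOUNT1-MY_CONTAINER1 /mnt/test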
Customizing your deployment
Export for a specific user
In some cases it may be necessary not to mount volumes as the root user.
Change the inventory file as below:
change user
### OIOFS
oiofs:
  vars:
    my_password: 'foobar'
    openio_users_add:
      - username: openio
        uid: "{{ default_openio_user_openio_uid }}"
        name: openio account
        group: openio
        groups: []
        home_create: true
        shell: /bin/bash
      - username: myuser
        uid: 6000
        name: My guest account
        group: mygroup
        groups: []
        home_create: true
        shell: /bin/bash
        update_password: on_create
        password: "{{ my_password | password_hash('sha512') }}"
    openio_users_groups:
      - groupname: openio
        gid: "{{ default_openio_group_openio_gid }}"
      - groupname: mygroup
        gid: 6000
    oiofs_mountpoints:
      - active_mode: true
        …
        user: myuser
        group: mygroup
        exports:
          nfs:
            …
            uid: 6000
            gid: 6000
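After re-running the deployment, you can verify that the mountpoint is owned by the new user (the path below follows the naming seen in the post-install checks and may differ in your setup):

ls -ld /mnt/oiofs-OPENIO-MY_ACCOUNT1-MY_CONTAINER1         # should report myuser:mygroup
sudo -u myuser touch /mnt/oiofs-OPENIO-MY_ACCOUNT1-MY_CONTAINER1/write_test   # confirm the user can write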