AWS S3 CLI
- Installing AWS-CLI
- Configure your credentials
- Usage
- Create a bucket
- List your buckets
- Upload your files as objects in your bucket
- Download an object in a bucket
- Upload an object from a bucket to another bucket
- Download or upload a whole bucket to host/bucket
- Synchronize Buckets
- Delete Objects and Buckets
- Enable Bucket Versioning
- Set tags on Buckets
- Set tags on objects
This guide provides a few tips to help users get started with the AWS S3 command line client using the OpenIO Swift gateway. By default, the gateway uses the Swift3 middleware to allow access to OpenIO object storage using the Amazon S3 API. The AWS CLI allows you to use the different APIs provided by AWS, including the S3 and S3API ones.
Installing AWS-CLI
The AWS-CLI is an open source tool built on top of the AWS SDK for Python (Boto) that provides commands for interacting with AWS services. With minimal configuration, you can start using all of the functionality provided by the AWS Management Console.
- You need to provide the client with:
- the endpoint of the Swift gateway
- the --no-verify-ssl option to disable SSL verification, since SSL is not enabled by default
Note
Replace localhost with the IP address of your OpenIO Swift proxy.
Install AWS-CLI
To interact with the gateway, aws-cli and awscli-plugin-endpoint need to be installed. The awscli-plugin-endpoint plugin makes it easier to point the AWS CLI at third-party S3 providers.
# pip3 install awscli
# pip3 install awscli-plugin-endpoint
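You can check that the client installed correctly by printing its version:
# aws --version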
Configure the client
You can set specific parameters for the S3 commands:
[plugins]
endpoint = awscli_plugin_endpoint
[default]
region = us-east-1
s3 =
endpoint_url = http://localhost:6007
signature_version = s3v4
max_concurrent_requests = 20
max_queue_size = 100
multipart_threshold = 1GB
# Adjust the multipart_chunksize value according to the size of the files you want to upload.
# The value below allows uploading files up to 10GB (100 requests * 100MB).
# Setting it to 10GB, for example, would allow files up to 1TB.
multipart_chunksize = 100MB
s3api =
endpoint_url = http://localhost:6007
Set the endpoint_url and region corresponding to your OpenIO SDS cluster.
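As a sanity check on the sizing comment above, the maximum object size is simply the chunk size multiplied by the number of part requests; a quick sketch of the arithmetic, using the values from this configuration:

```shell
# Maximum multipart object size = number of part requests * chunk size.
# With the configuration above: 100 requests of 100MB each.
max_requests=100
chunk_mb=100
echo "$(( max_requests * chunk_mb )) MB"   # prints "10000 MB", i.e. 10GB
```

Raising multipart_chunksize to 10GB scales the same calculation up to 1TB.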
Configure your credentials
Client configuration depends on whether you use a TempAuth or Keystone authentication on the Swift proxy.
TempAuth
TempAuth is used primarily in functional test environments and can also be used in other test environments. Although fully functional, it is not recommended for production systems.
To use the AWS command, you need to set your credentials in the ~/.aws/credentials file:
[default]
aws_access_key_id=demo:demo
aws_secret_access_key=DEMO_PASS
Keystone Auth
With the OpenStack Keystone authentication system, you need to obtain a token in order to authenticate, which requires the OpenStack command line interface. Create a file ~/keystonerc_demo exporting the variables needed by the S3 CLI:
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://localhost:5000/v2.0
Note
In OS_AUTH_URL, replace the address with the IP address of your Keystone service.
Source your credentials and get a token:
# . keystonerc_demo
# openstack ec2 credentials create
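The command prints a table containing the generated keys; the layout below is illustrative and the values are placeholders (the exact fields depend on your OpenStack CLI version):
+--------+------------+
| Field  | Value      |
+--------+------------+
| access | ACCESS_KEY |
| secret | SECRET_KEY |
+--------+------------+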
Add your credentials to ~/.aws/credentials to configure the default S3 client. Replace ACCESS_KEY and SECRET_KEY with the values returned by the previous command:
[default]
aws_access_key_id=ACCESS_KEY
aws_secret_access_key=SECRET_KEY
Usage
Create a bucket
Note
If you have not installed the awscli-plugin-endpoint plugin, you can add --endpoint-url http://localhost:6007 to the command line:
# aws --no-verify-ssl s3 mb s3://test1
# aws --no-verify-ssl --endpoint-url http://localhost:6007 s3 mb s3://test1
List your buckets
# aws --no-verify-ssl s3 ls
Upload your files as objects in your bucket
# aws --no-verify-ssl s3 cp /etc/magic s3://test1
By default, the object takes the name of the file, but it can be renamed:
# aws --no-verify-ssl s3 cp /etc/magic s3://test1/foo
Download an object in a bucket
# aws --no-verify-ssl s3 cp s3://test1/magic .
Upload an object from a bucket to another bucket
# aws --no-verify-ssl s3 cp s3://test1/magic s3://test2
Download or upload a whole bucket to host/bucket
# aws --no-verify-ssl s3 cp s3://test1 . --recursive
# aws --no-verify-ssl s3 cp s3://test1 s3://test2 --recursive
Synchronize Buckets
# aws --no-verify-ssl s3 sync . s3://test1
# aws --no-verify-ssl s3 sync s3://test1 s3://test2
Delete Objects and Buckets
Delete an object:
# aws --no-verify-ssl s3 rm s3://test1/magic
Delete all the objects from a bucket:
# aws --no-verify-ssl s3 rm s3://test1 --recursive
Delete a bucket. To be deleted, the bucket must be empty:
# aws --no-verify-ssl s3 rb s3://test1
If the bucket is not empty, use the --force option; this deletes all the objects in the bucket, then deletes the bucket itself:
# aws --no-verify-ssl s3 rb s3://test1 --force
Enable Bucket Versioning
Once enabled, versioning supports the retrieval of objects that have been deleted or overwritten. It is a means of keeping multiple variants of an object in the same bucket. With the feature enabled, you can list the archived versions of an object or permanently delete an archived version.
Note
Once versioning is enabled, you cannot return to an un-versioned state of your bucket.
# aws --no-verify-ssl s3api put-bucket-versioning --bucket test1 --versioning-configuration Status=Enabled
Add a test file to try the feature out:
# aws --no-verify-ssl s3 cp test s3://test1
List the object versions in the bucket:
# aws --no-verify-ssl s3api list-object-versions --bucket test1
which returns:
{
    "Versions": [
        {
            "LastModified": "2017-03-24T22:21:40.000Z",
            "VersionId": "1553466100378762",
            "ETag": "\"166e32f2190ddd841fba3f90b94e95b8\"",
            "StorageClass": "STANDARD",
            "Key": "test",
            "Owner": {
                "DisplayName": "demo:demo",
                "ID": "demo:demo"
            },
            "IsLatest": true,
            "Size": 4027
        },
        {
            "LastModified": "2017-03-24T22:06:19.000Z",
            "VersionId": "1553465179970865",
            "ETag": "\"272913026300e7ae9b5e2d51f138e674\"",
            "StorageClass": "STANDARD",
            "Key": "magic",
            "Owner": {
                "DisplayName": "demo:demo",
                "ID": "demo:demo"
            },
            "IsLatest": true,
            "Size": 111
        }
    ]
}
Add a new version of the test file to the same bucket and list the versions again:
# aws --no-verify-ssl s3 cp test s3://test1
# aws --no-verify-ssl s3api list-object-versions --bucket test1
{
    "Versions": [
        {
            "LastModified": "2017-03-24T22:30:04.000Z",
            "VersionId": "1553466604084401",
            "ETag": "\"dc283b84ee3567c44a1e90a421b05f2e\"",
            "StorageClass": "STANDARD",
            "Key": "test",
            "Owner": {
                "DisplayName": "demo:demo",
                "ID": "demo:demo"
            },
            "IsLatest": true,
            "Size": 4028
        },
        {
            "LastModified": "2017-03-24T22:21:40.000Z",
            "VersionId": "1553466100378762",
            "ETag": "\"166e32f2190ddd841fba3f90b94e95b8\"",
            "StorageClass": "STANDARD",
            "Key": "test",
            "Owner": {
                "DisplayName": "demo:demo",
                "ID": "demo:demo"
            },
            "IsLatest": false,
            "Size": 4027
        },
        {
            "LastModified": "2017-03-24T22:06:19.000Z",
            "VersionId": "1553465179970865",
            "ETag": "\"272913026300e7ae9b5e2d51f138e674\"",
            "StorageClass": "STANDARD",
            "Key": "magic",
            "Owner": {
                "DisplayName": "demo:demo",
                "ID": "demo:demo"
            },
            "IsLatest": true,
            "Size": 111
        }
    ]
}
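An archived version can be downloaded or permanently deleted by passing its VersionId to the s3api commands; for example, using the identifier of the older version of test from the listing above:
# aws --no-verify-ssl s3api get-object --bucket test1 --key test --version-id 1553466100378762 test.old
# aws --no-verify-ssl s3api delete-object --bucket test1 --key test --version-id 1553466100378762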
Set tags on Buckets
PUT a tag set on a bucket:
# aws --no-verify-ssl s3api put-bucket-tagging --bucket test1 --tagging 'TagSet=[{Key=provider,Value=openio}]'
# aws --no-verify-ssl s3api get-bucket-tagging --bucket test1
{
    "TagSet": [
        {
            "Value": "openio",
            "Key": "provider"
        }
    ]
}
DELETE the tag set of a bucket:
# aws --no-verify-ssl s3api delete-bucket-tagging --bucket test1
Set tags on objects
PUT a tag set on an object:
# aws --no-verify-ssl s3api put-object-tagging --bucket test1 --key magic --tagging 'TagSet=[{Key=provider,Value=openio},{Key=service,Value=objectstorage}]'
# aws --no-verify-ssl s3api get-object-tagging --bucket test1 --key magic
{
    "VersionId": "1553465179970865",
    "TagSet": [
        {
            "Value": "openio",
            "Key": "provider"
        },
        {
            "Value": "objectstorage",
            "Key": "service"
        }
    ]
}
DELETE the tag set of an object:
# aws --no-verify-ssl s3api delete-object-tagging --bucket test1 --key magic