Bifrost Cloud wants to make your transition to decentralized storage as frictionless as possible. Instead of building our own API, we made ours compatible with the AWS S3 API, the most widely used API for managing object storage in the cloud. This means that if you are already using the S3 API, all you need to do is point your object storage backend at our endpoint instead of an AWS server. This tutorial will teach you how to get started with the Bifrost Cloud CLI.
The following instructions assume you are in a Linux-based environment.
First, acquire an Access Key ID and Secret Access Key from your account executive or the bifrostcloud portal. Make sure you copy the Secret Access Key and store it in a safe place: for security reasons, it is shown only once.
Follow the official AWS CLI installation instructions for your operating system: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
Once the AWS CLI is available on your system, run:
$ aws configure
It will prompt for the following information:
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [us-east-1]:
Default output format [None]:
Enter the Access Key ID and Secret Access Key you received earlier, and accept the default values for region name and output format.
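Behind the scenes, `aws configure` stores these values in two plain-text files in your home directory. With placeholder values, they look roughly like this:

```ini
# ~/.aws/credentials  (keys below are placeholders, not real credentials)
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = us-east-1
```

If you ever need to rotate keys, you can edit these files directly instead of re-running `aws configure`.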
Next, configure the AWS v4 signature scheme:
$ aws configure set default.s3.signature_version s3v4
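For the curious: v4 signing derives a per-day signing key by chaining HMAC-SHA256 over your secret key, the date, the region, and the service name. The CLI does all of this for you, but the derivation can be sketched with openssl (the secret key and date below are placeholders):

```shell
# Sketch of the SigV4 signing-key derivation the CLI performs internally.
# SECRET and DATE are placeholders, not real credentials.
SECRET="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
DATE="20240115"; REGION="us-east-1"; SERVICE="s3"

# HMAC-SHA256 with a hex-encoded key, printing the hex digest.
hmac_hex() { openssl dgst -sha256 -mac HMAC -macopt "hexkey:$1" | awk '{print $NF}'; }

K_DATE=$(printf '%s' "$DATE" | openssl dgst -sha256 -hmac "AWS4$SECRET" | awk '{print $NF}')
K_REGION=$(printf '%s' "$REGION" | hmac_hex "$K_DATE")
K_SERVICE=$(printf '%s' "$SERVICE" | hmac_hex "$K_REGION")
K_SIGNING=$(printf '%s' "aws4_request" | hmac_hex "$K_SERVICE")
echo "$K_SIGNING"   # 64 hex characters; signs each request for this day/region/service
```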
For the best performance, set the following multipart threshold:
$ aws configure set default.s3.multipart_threshold 64MB
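After these two `configure set` commands, the `[default]` profile in `~/.aws/config` gains a nested `s3` section, roughly:

```ini
[default]
region = us-east-1
s3 =
    signature_version = s3v4
    multipart_threshold = 64MB
```

Any file larger than 64 MB will now be uploaded in parallel chunks rather than as a single request.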
That’s it! You are now all set up. Now let's store some data!
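Every command below targets the Bifrost Cloud endpoint, so a small shell helper saves typing. The function name `bfs3` is our own choice here, not part of any official tooling:

```shell
# Hypothetical convenience wrapper: forwards any s3 subcommand to the
# Bifrost Cloud endpoint so you don't repeat --endpoint-url every time.
BIFROST_ENDPOINT="https://us1-dcs-s3.bifrostcloud.com/"
bfs3() { aws --endpoint-url "$BIFROST_ENDPOINT" s3 "$@"; }
```

With this in your shell profile, `bfs3 ls` is equivalent to the longer `aws --endpoint-url ... s3 ls` form used below.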
$ aws --endpoint-url https://us1-dcs-s3.bifrostcloud.com/ s3 mb s3://[bucket name]
Replace bucket name with the name of the bucket you wish to create.
$ aws --endpoint-url https://us1-dcs-s3.bifrostcloud.com/ s3 ls
This command lists all the buckets you have created so far. Use it to confirm your bucket exists before uploading any files.
$ aws --endpoint-url https://us1-dcs-s3.bifrostcloud.com/ s3 ls s3://[bucket name]
This command lists all objects in a particular bucket. Replace bucket name with the name of the bucket you wish to check.
$ aws --endpoint-url https://us1-dcs-s3.bifrostcloud.com/ s3 cp ~/[local file path] s3://[bucket name]/[object name]
Replace local file path with the path, on your local machine, of the file you wish to upload. Replace bucket name and object name with the destination path on the cloud, where object name is the name you want the file to have in the bucket.
$ aws --endpoint-url https://us1-dcs-s3.bifrostcloud.com/ s3 cp s3://[bucket name]/[object name] ~/[local file path]
Replace local file path with the location on your local machine where you want to save the file. Replace bucket name and object name with the cloud path you wish to download from.
$ aws --endpoint-url https://us1-dcs-s3.bifrostcloud.com/ s3 rm s3://[bucket name]/[object name]
Replace bucket name and object name with the path of the object you wish to remove.
$ aws --endpoint-url https://us1-dcs-s3.bifrostcloud.com/ s3 rb s3://[bucket name]
Replace bucket name with the name of the bucket you wish to remove. Note that a bucket must be empty before it can be removed (or pass --force to delete its contents as well).
For more advanced uses of these commands, please see the official AWS S3 CLI page: https://docs.aws.amazon.com/cli/latest/reference/s3/
Make the migration; it's worth it.