AWS S3 Sync with AWS CLI and Rclone


You can use the AWS CLI to sync files between your local machine and an S3 bucket. The AWS CLI provides a command called `aws s3 sync` that allows you to synchronize files between your local machine and an S3 bucket.

This command can be run from the command line and is useful for automating file transfers between your local machine and S3. For example, you can use it to upload files from your local machine to S3 or download files from S3 to your local machine.

The `aws s3 sync` command takes a source and a destination path; either can be a local directory or an S3 URI, and you can pin the bucket's region with the `--region` flag. You can use the `--delete` option to delete files from the destination directory if they no longer exist in the source directory.
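As a minimal sketch (the bucket name and paths below are placeholders):

```shell
# Upload: mirror the local directory ./data into the bucket
aws s3 sync ./data s3://my-example-bucket/data

# Keep the bucket an exact mirror by also removing files deleted locally
aws s3 sync ./data s3://my-example-bucket/data --delete

# Optionally specify the bucket's region explicitly
aws s3 sync ./data s3://my-example-bucket/data --region us-east-1
```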

Examples and Usage

You can use aws s3 sync to mirror a local directory to an S3 bucket, or vice versa, without replacing files that already exist in the destination. This is especially useful for creating backups, uploading websites to S3 static hosting, downloading the contents of an S3 bucket, and synchronizing two different S3 buckets.


To create a remote backup, you can easily synchronize local directories to S3. Only the files that have changed since the last backup will be copied. This is a great way to ensure your important files are safely stored in the cloud.

You can use the sync command to synchronize two S3 buckets, which can be helpful if you need to make a separate clone or backup of a bucket. This is done by using an S3 URI as both the source and destination paths.
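For example, a bucket-to-bucket sync looks like this (both bucket names are placeholders):

```shell
# Clone one bucket into another; both source and destination are S3 URIs
aws s3 sync s3://source-bucket s3://backup-bucket
```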

Here are some common use cases for aws s3 sync:

  • Creating backups
  • Uploading websites to S3 static hosting
  • Downloading the contents of an S3 bucket
  • Synchronizing two different S3 buckets

To download files from an S3 bucket to a local directory, you can use the same syntax as uploading files to S3. This is done by running the command with the S3 URI as the source and a local directory as the destination.
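A download sync simply reverses the argument order (bucket name is a placeholder):

```shell
# Download: S3 URI as the source, local directory as the destination
aws s3 sync s3://my-example-bucket/data ./data
```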

Options and Settings

You have a lot of options when it comes to customizing the AWS S3 sync process. The --storage-class flag allows you to set the storage class to apply to newly synced files, which determines the performance, pricing, and access frequency restrictions for your files.


You can also specify the source and destination regions using the --source-region and --region flags, providing greater control over the synchronization process. Remember to replace the placeholder values with your actual local file paths and S3 bucket names/paths.

Some other useful options include setting the ACL for the synced files using the --acl flag, and excluding or including specific files or patterns from being synced using the --exclude and --include flags. You can also use the --delete flag to delete files from the destination that don't exist in the source, or the --dryrun flag to simulate the sync operation without making any changes.

Here are some common advanced sync options for AWS CLI:

  • --delete: Delete files from the destination that don't exist in the source.
  • --exclude: Exclude files or patterns from being synced.
  • --include: Re-include files or patterns that an earlier --exclude filtered out.
  • --dryrun: Simulate the sync operation without making any changes.
  • --quiet: Suppress the per-operation output.
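Several of these options combine naturally in one command. A sketch, with placeholder paths and bucket name:

```shell
# Preview a mirrored sync of only .csv files, without changing anything
aws s3 sync ./reports s3://my-example-bucket/reports \
    --exclude "*" --include "*.csv" \
    --delete --dryrun
```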

Cp vs. Sync

aws s3 cp is mainly used to copy individual files, but it can also copy entire folders when the --recursive flag is used.

The cp command copies all files and folders from the source to the destination, even if they already exist in the destination. This means existing files will be overwritten.


On the other hand, aws s3 sync is used to copy only the new or changed files from the source to the destination.

Sync is the better choice when copying large directories that already exist in your S3 bucket, as it improves performance and reduces transfer costs.
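The difference is easy to see side by side (bucket name and paths are placeholders):

```shell
# cp re-uploads everything under ./site, overwriting existing objects
aws s3 cp ./site s3://my-example-bucket/site --recursive

# sync uploads only files that are new or changed since the last run
aws s3 sync ./site s3://my-example-bucket/site
```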

Here's a quick comparison to help you decide which command to use:

  • aws s3 cp: copies everything, overwrites existing destination files, and needs --recursive to copy folders.
  • aws s3 sync: copies only new or changed files, is recursive by default, and can delete extraneous destination files with --delete.

Using Options

Using options can greatly enhance your AWS S3 sync experience. You can customize the behavior of the sync command to fit your specific needs by using various options and flags.

The `--delete` flag allows you to delete files from the destination that don't exist in the source. This is especially useful when you want to exactly mirror the state of a directory.

To include specific files or patterns for synchronization, you can use the `--include` option. For example, to include only .txt files: `--include "*.txt"`. Note that --include only re-includes files that a preceding --exclude filtered out; on its own it has no effect, since all files are included by default.


The `--exclude` option allows you to exclude specific files or patterns from being synchronized. For instance, to exclude all .log files: `--exclude "*.log"`. Quote the pattern so your shell doesn't expand the wildcard before the AWS CLI sees it.
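Putting the two filters together (paths and bucket name are placeholders):

```shell
# Sync only .txt files: exclude everything, then re-include the pattern
aws s3 sync ./docs s3://my-example-bucket/docs --exclude "*" --include "*.txt"

# Sync everything except log files
aws s3 sync ./app s3://my-example-bucket/app --exclude "*.log"
```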

You can also use the `--quiet` option to suppress most of the output and only display errors. This makes the sync operation less verbose.

To set the ACL for synced S3 files, you can pass the desired policy's name to the `--acl` flag. For example, to set the ACL to public-read, you can use: `--acl public-read`.

To set the storage class for synced S3 files, you can use the `--storage-class` flag. For example, to set the storage class to STANDARD_IA for infrequent access, you can use: `--storage-class STANDARD_IA`.

The options above can be combined in a single command. Remember to replace the placeholder values with your actual local file paths and S3 bucket names/paths.
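A sketch combining several of these flags at once (all paths and the bucket name are placeholders):

```shell
# Mirror a public website directory with infrequent-access storage,
# skipping temp files and suppressing per-operation output
aws s3 sync ./public s3://my-example-bucket/public \
    --acl public-read \
    --storage-class STANDARD_IA \
    --exclude "*.tmp" \
    --delete --quiet
```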

Advanced Options

You can customize the synchronization process with various advanced options available for AWS S3 sync. These options can be used to change copy behavior and customize the attributes of created files.


Advanced options let you go beyond average usage patterns and tailor the synchronization process to your specific needs.

Here are some of the common advanced sync options:

  • Exclude files or patterns from being synced with the --exclude option.
  • Re-include files or patterns for syncing with the --include option.
  • Delete files from the destination that don't exist in the source with the --delete option.
  • Preview the sync operation without making any changes with the --dryrun option.
  • Suppress per-operation output with the --quiet option.

Disabling symlink resolution can be a bit tricky, but it's a useful option to know about, especially if you're working with linked paths. You can disable symlink resolution with the --no-follow-symlinks flag.

With this flag set, files and folders behind the linked path won't be uploaded to S3, which helps when following symlinks would pull unwanted content into your bucket.
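For example (path and bucket name are placeholders):

```shell
# Don't follow symlinks; files and folders behind links won't be uploaded
aws s3 sync ./project s3://my-example-bucket/project --no-follow-symlinks
```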

Rclone Advanced Options

Rclone Advanced Options are a game-changer for syncing files and directories to and from various cloud storage providers, including S3.

One of rclone's defining behaviors is that rclone sync deletes files from the destination that don't exist in the source by default; the --delete-before, --delete-during (the default), and --delete-after flags control when those deletions happen. This helps keep your synced folders tidy and up-to-date.


You can also use the --exclude and --include flags to exclude files or patterns from being synced, or include specific files or patterns for syncing. This gives you fine-grained control over what gets synced and what doesn't.

The --dry-run flag is another handy option, which simulates the sync operation without making any changes. This is a great way to test your sync settings without risking any data loss.

Finally, the --quiet flag suppresses output and only displays errors, making it easier to troubleshoot any issues that arise during the sync process.
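A sketch of these rclone options together; "s3remote" is a placeholder for a remote you've configured, and the paths and bucket name are likewise placeholders:

```shell
# Preview what rclone would transfer or delete, excluding log files
rclone sync ./data s3remote:my-example-bucket/data --exclude "*.log" --dry-run

# Run for real, suppressing output except errors
rclone sync ./data s3remote:my-example-bucket/data --exclude "*.log" --quiet
```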

Security and Encryption

Security and encryption are crucial aspects of storing files in S3. Server-side encryption can be enabled using the --sse flag, which accepts AES256 (S3-managed keys) or aws:kms (keys from the AWS Key Management Service).

To encrypt with a specific customer-managed KMS key, pass its ID via the --sse-kms-key-id flag. To supply your own key instead (SSE-C), use the --sse-c flag together with --sse-c-key.

For files stored in S3, you can set the ACL (Access Control List) using the --acl flag. This allows you to control who can access your files.

Here are the encryption options available:

  • --sse AES256: SSE-S3, encryption with Amazon S3-managed keys
  • --sse aws:kms: SSE-KMS, encryption with AWS KMS keys
  • --sse-kms-key-id: selects a specific customer-managed KMS key for SSE-KMS
  • --sse-c with --sse-c-key: SSE-C, encryption with a key you provide

Access Control List Settings


Setting the ACL for synced S3 files is a straightforward process. Pass the desired policy's name to the --acl flag to control access to uploaded files.

S3 offers several predefined ACLs, including private, public-read, public-read-write, and bucket-owner-full-control. These policies cover common use cases.

To set the ACL on newly synced files, simply use the --acl flag with the policy name you want to apply. For example, public-read allows anyone to read the files.

The --acl flag is a convenient way to manage access control for your synced files.
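For example, to make newly synced website files publicly readable (path and bucket name are placeholders):

```shell
aws s3 sync ./site s3://my-example-bucket/site --acl public-read
```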

Server-Side Encryption

Server-Side Encryption is a crucial aspect of securing your data in the cloud. It ensures that your files are encrypted on the server-side, providing an additional layer of security.

You can enable server-side encryption for synced S3 files by using the --sse flag. Passing --sse AES256 (or --sse with no value) uses S3-managed keys, while --sse aws:kms encrypts with a key from the AWS Key Management Service.

To specify a specific KMS key, you can use the --sse-kms-key-id flag. This flag is useful if you have multiple KMS keys and want to use a specific one for encryption.


If you want to provide your own encryption key for specific files (SSE-C), you can use the --sse-c flag, which accepts AES256 as its encryption mode.

When using SSE-C, you must supply the key itself via the --sse-c-key flag. This flag is essential to ensure that your data is encrypted with the correct key.

In short, --sse covers encryption with keys managed by S3 or KMS, while --sse-c and --sse-c-key cover encryption with a key you manage yourself. By using these encryption modes, you can ensure that your data is protected and secure.
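A sketch of the two KMS-free and KMS-based variants (bucket name, paths, and the key ID are placeholders):

```shell
# SSE-S3: encryption with Amazon S3-managed keys
aws s3 sync ./data s3://my-example-bucket/data --sse AES256

# SSE-KMS with a specific customer-managed key
aws s3 sync ./data s3://my-example-bucket/data \
    --sse aws:kms --sse-kms-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
```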

Storage Classes

Storage classes are a crucial aspect of S3, and understanding them can help you save money and ensure your data is stored efficiently.

The default storage class is STANDARD, which covers most regular use cases.

You can directly select the target storage class for the files to copy by using the --storage-class flag. Valid selections include STANDARD, REDUCED_REDUNDANCY, and STANDARD_IA.


The STANDARD_IA class is ideal for infrequently accessed files, as it provides a balance between cost and performance.

Here's a list of valid storage classes you can use:

  • STANDARD - S3 Standard
  • REDUCED_REDUNDANCY - S3 Reduced Redundancy
  • STANDARD_IA - S3 Standard Infrequent Access
  • ONEZONE_IA - S3 One-Zone Infrequent Access
  • INTELLIGENT_TIERING - S3 Intelligent Tiering
  • GLACIER - S3 Glacier
  • DEEP_ARCHIVE - S3 Glacier Deep Archive
  • GLACIER_IR - S3 Glacier Instant Retrieval

It's worth noting that choosing the right storage class depends on your data access, resiliency, and cost requirements.

AWS provides extensive documentation about all storage classes, so be sure to explore your options to save money on large storage requirements.
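For example, to send rarely read files straight into the cheapest archive tier (path and bucket name are placeholders):

```shell
aws s3 sync ./archive s3://my-example-bucket/archive --storage-class DEEP_ARCHIVE
```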

Rclone and CLI Details

Rclone is a powerful tool that can sync files and directories to and from various cloud storage providers, including S3. It's a great alternative to using the AWS CLI for syncing files.

One of the key benefits of rclone is that it's a command-line program, which means you can use it from the terminal to sync files with your S3 bucket. This is similar to using the AWS CLI, which also offers sync functionality.

Here are some key details about rclone and the AWS CLI: both are command-line programs, but rclone supports many cloud storage providers through its own remote configuration, while the AWS CLI is AWS-native and reads credentials from your AWS config files.

With either tool, you can sync your files with your S3 bucket and take advantage of its scalability and reliability.

Using Rclone


Rclone is a versatile command-line program for syncing files and directories to and from various cloud storage providers, including S3, and it offers an alternative to using the AWS CLI for S3 sync. With rclone, you can manage your S3 buckets and objects from the command line.

To use rclone for S3 sync, you'll need to install it first, then configure it to connect to your S3 account.
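The basic workflow looks like this; the remote name "s3remote", the bucket name, and the paths are placeholders:

```shell
# One-time interactive setup: create an S3 remote (named "s3remote" here)
rclone config

# Then sync a local directory into a bucket through that remote
rclone sync ./data s3remote:my-example-bucket/data
```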

CLI Key Details

The AWS CLI provides the same sync functionality as s3cmd, making it a reliable tool for managing S3 buckets.

It can set ACLs, encryption, and other options on the command line, giving you fine-grained control over your S3 data.

The AWS CLI uses the AWS SDK, allowing you to leverage other AWS services if needed.

Credentials are managed via AWS config files, making it easy to switch between different AWS accounts or environments.


Here are some key benefits of using the AWS CLI for syncing S3 buckets:

  • Sync functionality comparable to dedicated tools such as s3cmd
  • ACLs, encryption, and other options settable directly on the command line
  • Built on the AWS SDK, so it integrates with other AWS services
  • Credentials managed via AWS config files

Sync is a powerful tool for managing S3 buckets, and the AWS CLI is a great way to get started.

Dry Run and Simulation

A dry run is a simulation of the sync operation that shows you what actions would be taken without actually performing the sync.

It's useful for testing and verifying your sync command before executing it. This can save you from accidentally deleting files or making other mistakes.

You can use the --dryrun option to run a dry run. This will display the operations that would be performed without actually running them.

This is especially helpful for verifying that your sync command is correct before running it for real.
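For example (path and bucket name are placeholders):

```shell
# Lists the uploads/deletes that WOULD happen; nothing is transferred
aws s3 sync ./data s3://my-example-bucket/data --delete --dryrun
```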

Output and Logging

Output and Logging is crucial for troubleshooting and monitoring your AWS S3 sync operations. This is where you can see what's happening with your sync jobs.


By default, the AWS CLI prints one line per operation performed by s3 sync (upload, download, copy, or delete), so you can follow exactly what a sync job did. The --quiet flag suppresses this per-operation output, and --only-show-errors limits output to failures; for deep troubleshooting, the --debug flag dumps detailed request and response logs from the underlying AWS SDK.

When you run sync with the --delete option, deletions are reported in the same per-operation output, which helps you confirm which files were removed from the destination and clean up files that are no longer needed in your S3 bucket.

Frequently Asked Questions

What does aws S3 Sync do?

AWS S3 Sync copies files and objects between two locations, ensuring they're up-to-date and complete. It can also remove unnecessary files from the target location.

What is the difference between aws S3 CP and aws S3 sync?

aws s3 cp copies all files from the source to the destination, regardless of whether they already exist or have changed, while aws s3 sync copies only new and updated files, preserving what's already current in the destination.

Does aws S3 Sync delete files?

Yes, when the "--delete" option is enabled, AWS S3 Sync deletes files that exist in the destination but not in the source, unless they are excluded by filters.

Wm Kling

Lead Writer

Wm Kling is a seasoned writer with a passion for technology and innovation. With a strong background in software development, Wm brings a unique perspective to his writing, making complex topics accessible to a wide range of readers. Wm's expertise spans the realm of Visual Studio web development, where he has written in-depth articles and guides to help developers navigate the latest tools and technologies.
