Common Causes of AWS S3 Upload Errors



AWS S3 upload errors are often caused by network connectivity problems.

A single PUT upload to S3 is limited to 5 GB, so trying to upload a larger file in one request will fail. Larger objects (up to 5 TB) must be uploaded with multipart upload.

S3 Upload Errors

When connecting to a service that requires a path prefix, the Context property must be set in a custom connection profile.

Writing files to a third-party S3-compatible provider can fail. The provider must support multipart uploads; otherwise you may see the error message "Bucket name is not DNS compatible". Contact your web hosting provider for help.

When creating a new top-level folder in Finder.app, the error "Interoperability failure. Bucket name is not DNS compatible." can appear, because S3 bucket names cannot contain spaces. The workaround is to create the new bucket with Terminal.app instead.

Finder.app: Interoperability Failure Creating New Top-Level Folder

When using Finder.app, you might encounter an interoperability failure when trying to create a new top-level folder in S3. An S3 bucket name cannot contain whitespace.

If, like me, you tend to accept the default "untitled folder" name, you'll run into trouble: the space in the name means creating a new folder with Finder.app fails.

A workaround to this problem is to create a new bucket using the Terminal.app instead of Finder.app. This will help you avoid the interoperability failure.

Make sure to choose a bucket name without whitespace when creating the new bucket, as AWS bucket naming rules require.
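
If you have the AWS CLI installed, a new bucket can be created from Terminal.app with a single command. The bucket name and region below are placeholders; the important point is that the name contains no whitespace:

```
aws s3 mb s3://my-new-bucket --region us-east-1
```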

The Error

If you're trying to upload a file to an S3 compatible service, you may encounter an error. This error occurs when the service doesn't support multipart uploads.


Files under 5 MB upload just fine. They don't require multipart upload, so they're sent as a standard PUT request.

Files over 5 MB may appear to start uploading fine, but only the first part succeeds. When the client sends the second part, the service responds with a 403 Access Denied error.

In the browser's developer tools this typically surfaces as a 403 Access Denied response on the request for the second part.

S3 Upload Security

To keep S3 uploads secure, you should prevent uploads of unencrypted files. Refer to the AWS Security Blog for guidance on how to achieve this.

To make your objects accessible to everyone using a regular web browser, you need to give the group grantee http://acs.amazonaws.com/groups/global/AllUsers read permissions.
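
As an illustration (not code from the article), here is a minimal sketch using the AWS SDK for JavaScript v3 that grants the AllUsers group read access to a single object; the bucket name and key are placeholders:

```js
import { S3Client, PutObjectAclCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Grant read access on one object to the global AllUsers group grantee.
await client.send(new PutObjectAclCommand({
  Bucket: "my-bucket",        // placeholder bucket name
  Key: "public/index.html",   // placeholder object key
  GrantRead: 'uri="http://acs.amazonaws.com/groups/global/AllUsers"',
}));
```

Keep in mind that this only takes effect if the bucket's Object Ownership and Block Public Access settings allow public ACLs.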

Deprecated Path Style Requests

When working with S3 compatible storage, you may encounter issues if the storage only supports path style requests to reference buckets. This is a common problem that can be solved by disabling virtual host style requests.


To connect to such storage, you'll need to download a preconfigured S3 profile that disables virtual host style requests. This profile is specifically designed for S3 compatible storage and can be found online.

Alternatively, you can set the hidden configuration option s3.bucket.virtualhost.disable to true. This will achieve the same result as downloading the preconfigured profile.

If you choose to download the profile, you can find it by searching for the S3 (Deprecated path style requests) profile. This will save you time and effort in setting up your connection.

Here's a summary of the steps to connect using deprecated path style requests:

  • Download the S3 (Deprecated path style requests) profile for preconfigured settings.
  • Alternatively, set the hidden configuration option s3.bucket.virtualhost.disable to true.
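
The profile and hidden option above apply to the transfer client described in this article. If you connect to the same storage programmatically with the AWS SDK for JavaScript, the equivalent setting is forcePathStyle; this sketch assumes a placeholder endpoint and bucket:

```js
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

const client = new S3Client({
  region: "us-east-1",
  endpoint: "https://storage.example.com", // placeholder S3-compatible endpoint
  forcePathStyle: true,                    // path style: https://host/<bucket>/<key>
});

const res = await client.send(new ListObjectsV2Command({ Bucket: "my-bucket" }));
console.log(res.Contents?.map((o) => o.Key));
```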

Prevent Unencrypted File Uploads

To prevent unencrypted file uploads, you can refer to the AWS Security Blog for guidance. This ensures that sensitive data is protected from unauthorized access.
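
The approach described on the AWS Security Blog is a bucket policy that denies any PutObject request that does not ask for server-side encryption. The following is a sketch of that pattern applied with the JavaScript SDK; the bucket name is a placeholder and the policy is the commonly published two-statement version, not code from the article:

```js
import { S3Client, PutBucketPolicyCommand } from "@aws-sdk/client-s3";

const bucket = "my-bucket"; // placeholder bucket name

const policy = {
  Version: "2012-10-17",
  Statement: [
    {
      Sid: "DenyIncorrectEncryptionHeader",
      Effect: "Deny",
      Principal: "*",
      Action: "s3:PutObject",
      Resource: `arn:aws:s3:::${bucket}/*`,
      Condition: { StringNotEquals: { "s3:x-amz-server-side-encryption": "AES256" } },
    },
    {
      Sid: "DenyUnencryptedObjectUploads",
      Effect: "Deny",
      Principal: "*",
      Action: "s3:PutObject",
      Resource: `arn:aws:s3:::${bucket}/*`,
      Condition: { Null: { "s3:x-amz-server-side-encryption": "true" } },
    },
  ],
};

const client = new S3Client({ region: "us-east-1" });
await client.send(new PutBucketPolicyCommand({ Bucket: bucket, Policy: JSON.stringify(policy) }));
```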

Importing the Upload module from the @aws-sdk/lib-storage package in your fileparser.js file allows you to upload files in parts. You'll also need to import the S3Client from @aws-sdk/client-s3.


To configure the upload, you'll need your AWS credentials. Follow the instructions on the AWS website to retrieve your access keys. Check the S3 console to confirm your bucket name and region.

A new instance of the Upload module should be created within the file.open method, configured with options such as client, params, queueSize, and partSize. The client should be the S3Client provided by AWS, with your AWS credentials and bucket region specified.

The params object contains the name of the S3 bucket (Bucket), the Key (file name), ACL, and Body (transform stream). The queueSize and partSize options define the number of parts to be processed simultaneously and the size of each part, respectively.

Here are the key parameters to configure the Upload instance:

  • client: S3Client with AWS credentials and bucket region
  • params: S3 bucket name, file name, ACL, and transform stream
  • queueSize: Number of parts to process simultaneously
  • partSize: Size of each part (minimum 5MB)
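
Putting these options together, here is a minimal sketch of what the upload code might look like. The credentials, region, bucket name, and stream wiring are placeholders, not the article's exact fileparser.js:

```js
import { PassThrough } from "stream";
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";

const s3 = new S3Client({
  region: "us-east-1", // your bucket's region
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
});

export function uploadStream(fileName) {
  const body = new PassThrough(); // transform stream the file parser writes into

  const upload = new Upload({
    client: s3,
    params: {
      Bucket: "my-bucket", // placeholder bucket name
      Key: fileName,       // object key (file name)
      ACL: "private",
      Body: body,
    },
    queueSize: 4,              // parts processed in parallel
    partSize: 5 * 1024 * 1024, // 5 MB minimum part size
  });

  return { body, done: upload.done() };
}
```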

Email Address Grantee

If you enter an email address unknown to AWS, the error message "S3 Error Message. Bad Request. Invalid id." will be displayed.

Entering an email address associated with multiple AWS accounts will result in the error message "Bad Request. The e-mail address you provided is associated with more than one account. Please retry your request using a different identification method or after resolving the ambiguity."

Metadata


You can edit standard HTTP headers and add custom HTTP headers to files to store metadata. This is done by choosing File → Info (macOS ⌘I Windows Alt+Return) → Metadata to edit headers.

Currently, you can only define default headers to be added for uploads using a hidden configuration option. This allows you to add multiple headers separated by a whitespace delimiter.

To add a header, you separate the key and value with an equals sign. For example, you can add an HTTP header for Cache-Control and one named Creator.

Multiple headers are added by separating them with a whitespace delimiter. This is the only way to add default headers for uploads.
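
The exact name of the hidden option depends on the client and is not given here, but the value format described above looks like this (the header names and values are only examples):

```
Cache-Control=public,max-age=86400 Creator=MyUploader
```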

Checksums

AWS verifies files when they are received by comparing them against the SHA-256 checksum sent with the request.

The checksum returned by AWS for the uploaded file is also compared with a checksum computed locally, if this is enabled in Transfers → Checksum → Uploads → Verify checksum.
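
The verification above happens in the transfer client, but the same end-to-end check can be requested programmatically. As a sketch (bucket, key, and file path are placeholders), the JavaScript SDK can compute a SHA-256 checksum and have S3 verify it on receipt:

```js
import { createReadStream, statSync } from "fs";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

await client.send(new PutObjectCommand({
  Bucket: "my-bucket",
  Key: "logs/log1.xml",
  Body: createReadStream("log1.xml"),
  ContentLength: statSync("log1.xml").size, // needed when streaming from disk
  ChecksumAlgorithm: "SHA256",              // SDK computes the digest, S3 verifies it
}));
```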

Entering a user ID unknown to AWS will display the error message "S3 Error Message. Bad Request. Invalid id."

Managing Files


You can upload files to S3 using the AWS CLI, and it's all about knowing the right command, syntax, parameters, and options.

To upload a file to S3, you'll need to provide two arguments (source and destination) to the aws s3 cp command. The source is the file you want to upload, and the destination is the S3 bucket where you want to upload it.

S3 bucket names are always prefixed with s3:// when used with the AWS CLI, so make sure to include that in your command.

To upload the file c:\sync\logs\log1.xml to the root of the atasync1 bucket, you can use the command: aws s3 cp c:\sync\logs\log1.xml s3://atasync1/.

Once you've uploaded the file, you can list the objects at the root of the S3 bucket using the command: aws s3 ls s3://atasync1/.

This will show you the files and folders in the root of the S3 bucket, including the file log1.xml that you just uploaded.


If you want to upload multiple files and folders recursively, you can use the --recursive option with the aws s3 cp command. For example: aws s3 cp c:\sync\logs s3://atasync1/ --recursive.

This will upload the entire directory and all its contents to the S3 bucket.
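
Collected in one place, the commands discussed in this section look like this (the bucket name and paths are the article's examples):

```
# Upload a single file to the root of the bucket
aws s3 cp c:\sync\logs\log1.xml s3://atasync1/

# List the objects at the root of the bucket
aws s3 ls s3://atasync1/

# Upload a directory and all of its contents recursively
aws s3 cp c:\sync\logs s3://atasync1/ --recursive
```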

To upload files to S3 using Node.js, you'll need to use the AWS SDK and the Upload module from the @aws-sdk/lib-storage package. You'll also need to import the S3Client from the @aws-sdk/client-s3 package.

To configure the upload, you'll need to provide the AWS credentials, the bucket name, and the region. You can get the AWS credentials from the AWS website, and the bucket name and region can be found in the S3 console.

Here are the options you'll need to configure the upload:

  • client: The S3Client instance that sends the requests, configured with your AWS credentials and the bucket's region.
  • params: This object contains the name of the S3 bucket (Bucket), the Key (that is, the file name), the access control list (ACL) that defines access to the data, and the Body (that is, the generated transform stream).
  • queueSize: This defines the number of parts to be processed simultaneously. The default is 4.
  • partSize: This defines the size of each part that is processed. The smallest size possible is 5MB.
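
As a usage note, the Upload instance reports progress and completes asynchronously. A short sketch, where upload stands for an instance configured with the options above:

```js
// upload is a hypothetical Upload instance: new Upload({ client, params, queueSize, partSize })
upload.on("httpUploadProgress", (progress) => {
  console.log(`part ${progress.part}: ${progress.loaded}/${progress.total} bytes`);
});

try {
  await upload.done(); // resolves once every part has been uploaded
} catch (err) {
  console.error("Upload failed:", err);
}
```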

By following these steps, you can upload files to S3 using the AWS CLI or Node.js.

Resolving S3 Upload Problems

To resolve S3 upload problems, first check whether the Context property is set. When connecting to a service that requires a path prefix, the Context property must be set in the connection profile.

Make sure the S3-compatible third-party service supports multipart uploads; otherwise errors such as "Bucket name is not DNS compatible" can appear.

Also check the Condition block of your IAM Policy. AWS.S3.upload() only sends the x-amz-acl header with the call to S3.initiateMultipartUpload, so the subsequent PUT requests for the individual parts can fail.

Failed Listing with Custom Path


Connecting to a service with a custom S3 endpoint requires careful configuration. If the service requires a path prefix in every request, you must set the Context property in a custom connection profile.

With the Context property set, the correct path prefix is included in all requests, which prevents failed listings and similar upload issues.
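
The article does not show such a profile. In clients that use XML connection profiles, the relevant entry might look roughly like the sketch below; the hostname and prefix are placeholders, and the exact key names depend on the client:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Protocol</key>
    <string>s3</string>
    <key>Default Hostname</key>
    <string>s3.example.com</string>
    <key>Context</key>
    <string>/tenant-prefix</string>
</dict>
</plist>
```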

A Solution

I spent three days wrestling with an S3 upload issue before I finally figured out the problem. The issue was with the Condition block of the IAM Policy that I was sending through as the Policy param of my AWS.STS.getFederationToken() request. Specifically, AWS.S3.upload() only sends an x-amz-acl header for the first PUT request, which is the call to S3.initiateMultipartUpload.

The x-amz-acl header is not included in the subsequent PUT requests for the individual parts of the upload. My IAM Policy had a condition that I was using to ensure that any upload must have an ACL of 'private'.
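
The original policy text is not reproduced here, but a condition enforcing a 'private' ACL on uploads would look roughly like this sketch (the bucket name is a placeholder):

```json
{
  "Effect": "Allow",
  "Action": ["s3:PutObject"],
  "Resource": "arn:aws:s3:::my-bucket/*",
  "Condition": {
    "StringEquals": { "s3:x-amz-acl": "private" }
  }
}
```

Because the part-upload requests carry no x-amz-acl header at all, a plain StringEquals condition does not match for them, and those requests are denied with 403.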


The solution was to edit the policy I was attaching to the temporary user: move the s3:PutObject permission into its own statement, and adjust the condition so that it only applies when the targeted value actually exists.
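
Again as a sketch rather than the author's exact policy, the reworked s3:PutObject statement might use StringEqualsIfExists, so the ACL condition only applies when the header is actually present:

```json
{
  "Effect": "Allow",
  "Action": ["s3:PutObject"],
  "Resource": "arn:aws:s3:::my-bucket/*",
  "Condition": {
    "StringEqualsIfExists": { "s3:x-amz-acl": "private" }
  }
}
```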

By making this change, I was able to successfully upload files to S3 using multipart uploads.

Synchronizing Deletions

Some applications, like Windows File Explorer, delete files prior to overwriting them, which can result in a delete marker being set in S3.

If you use the sync command without the --delete option, deletions from the source location will not be processed, and the deleted file will remain at the destination S3 location.

To synchronize deletions, add the --delete option to the sync command. In the example, the file Log5.xml was deleted from the source and the command was run with the --delete option.

As a result, Log5.xml is also deleted at the destination S3 location.
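
Using the paths from the earlier examples, the command would look like this:

```
# Mirror local deletions to the S3 destination
aws s3 sync c:\sync\logs s3://atasync1/ --delete
```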
