Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share

node.js - How do I client-side upload a viewable file to Amazon S3?

Let me start off by saying that I'm normally very reluctant to post questions like this, as I always feel there's an answer to everything somewhere on the internet. After spending countless hours looking for an answer to this one, however, I've finally given up.

Assumption

This works:

s3.getSignedUrl('putObject', params);

What am I trying to do?

  1. Upload a file via PUT (from the client-side) to Amazon S3 using the getSignedUrl method
  2. Allow anyone to view the file that was uploaded to S3

Note: If there's an easier way to allow client-side (iPhone) uploads to Amazon S3 with pre-signed URLs (and without exposing credentials client-side), I'm all ears.

Main Problems

  1. When viewing the AWS Management Console, the file uploaded has blank Permissions and Metadata set.
  2. When viewing the uploaded file (i.e. by double clicking the file in AWS Management Console) I get an AccessDenied error.

What have I tried?

Try #1: My original code

In NodeJS I generate a pre-signed URL like so:

var params = {Bucket: mybucket, Key: "test.jpg", Expires: 600};
s3.getSignedUrl('putObject', params, function (err, url){
  console.log(url); // this is the pre-signed URL
});

The pre-signed URL looks something like this:

https://mybucket.s3.amazonaws.com/test.jpg?AWSAccessKeyId=AABFBIAWAEAUKAYGAFAA&Expires=1391069292&Signature=u%2BrqUtt3t6BfKHAlbXcZcTJIOWQ%3D

Now I upload the file via PUT

curl -v -T myimage.jpg https://mybucket.s3.amazonaws.com/test.jpg?AWSAccessKeyId=AABFBIAWAEAUKAYGAFAA&Expires=1391069292&Signature=u%2BrqUtt3t6BfKHAlbXcZcTJIOWQ%3D

PROBLEM
I get the Main Problems listed above.

Try #2: Adding Content-Type and ACL on PUT

I've also tried adding the Content-Type and x-amz-acl in my code by replacing the params like so:

var params = {Bucket: mybucket, Key: "test.jpg", Expires: 600, ACL: "public-read-write", ContentType: "image/jpeg"};

Then I try a good ol' PUT:

curl -v -H "image/jpeg" -T myimage.jpg https://mybucket.s3.amazonaws.com/test.jpg?AWSAccessKeyId=AABFBIAWAEAUKAYGAFAA&Content-Type=image%2Fjpeg&Expires=1391068501&Signature=0yF%2BmzDhyU3g2hr%2BfIcVSnE22rY%3D&x-amz-acl=public-read-write

PROBLEM
My terminal outputs some errors:

-bash: Content-Type=image%2Fjpeg: command not found
-bash: x-amz-acl=public-read-write: command not found

And I also get the Main Problems listed above.
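Edit: the "command not found" lines in Try #2 turn out to be a shell-quoting problem, separate from the S3 permissions issue: the unquoted & characters in the query string terminate the curl command, so bash tries to run Content-Type=image%2Fjpeg and x-amz-acl=public-read-write as their own commands. Quoting the URL (and giving -H the header's name, not just its value) avoids that symptom. A sketch, wrapped in a function here only so the long-expired URL isn't actually requested:

```shell
# Quote the presigned URL so bash does not treat '&' as a command
# separator, and give the -H header its name, not just a value.
upload_image() {
  curl -v -H "Content-Type: image/jpeg" -T myimage.jpg \
    "https://mybucket.s3.amazonaws.com/test.jpg?AWSAccessKeyId=AABFBIAWAEAUKAYGAFAA&Content-Type=image%2Fjpeg&Expires=1391068501&Signature=0yF%2BmzDhyU3g2hr%2BfIcVSnE22rY%3D&x-amz-acl=public-read-write"
}
```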

Try #3: Modifying Bucket Permissions to be public

(All of the items listed below are ticked in the AWS Management Console.)

Grantee: Everyone can [List, Upload/Delete, View Permissions, Edit Permissions]
Grantee: Authenticated Users can [List, Upload/Delete, View Permissions, Edit Permissions]

Bucket Policy

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1390381397000",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}

Try #4: Setting IAM permissions

I set the user policy to be this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}

And I set the AuthenticatedUsers group policy to this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1391063032000",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Try #5: Setting CORS policy

I set the CORS policy to this:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
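For what it's worth, the same CORS rules can also be applied from NodeJS rather than the console, via the SDK's putBucketCors; the actual call is commented out below, the shape of the params object is the point:

```javascript
// The same rules as the XML above, expressed as aws-sdk putBucketCors
// parameters. Assumes an `s3` client is already configured.
var corsParams = {
  Bucket: 'mybucket',
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ['*'],
      AllowedMethods: ['PUT', 'POST', 'DELETE', 'GET'],
      AllowedHeaders: ['*'],
      MaxAgeSeconds: 3000
    }]
  }
};
// s3.putBucketCors(corsParams, function (err, data) { /* ... */ });
```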

And... Now I'm here.


1 Answer


Update

I have bad news. According to the release notes of SDK 2.1.6 at http://aws.amazon.com/releasenotes/1473534964062833:

"The SDK will now throw an error if ContentLength is passed into an Amazon S3 presigned URL (AWS.S3.getSignedUrl()). Passing a ContentLength is not supported by the SDK, since it is not enforced on S3's side given the way the SDK is currently generating these URLs. See GitHub issue #457."

I have found that on some occasions ContentLength must be included (specifically if your client passes it, so that the signatures will match), while on other occasions getSignedUrl complains if you include ContentLength, with a parameter error: "contentlength is not supported in presigned urls". I noticed the behavior would change when I changed the machine making the call. Presumably the other machine connected to a different Amazon server in the farm.

I can only guess why the behavior exists in some cases but not in others. Perhaps not all of Amazon's servers have been fully upgraded? In any case, to handle this problem I now make an attempt using ContentLength, and if that gives me the parameter error, I call getSignedUrl again without it. This is a work-around for this strange SDK behavior.

A little example... not very pretty to look at but you get the idea:

MediaBucketManager.getPutSignedUrl = function ( params, next ) {
    var _self = this;
    _self._s3.getSignedUrl('putObject', params, function ( error, data ) {
        if (error) {
            console.log("An error occurred retrieving a signed url for putObject", error);
            // TODO: build contextual error
            if (error.code == "UnexpectedParameter" && error.message.search("ContentLength") > -1) {
                if (params.ContentLength) delete params.ContentLength;
                // retry with the same (params, next) signature, minus ContentLength
                MediaBucketManager.getPutSignedUrl(params, function ( error, data ) {
                    if (error) {
                        console.log("An error occurred retrieving a signed url for putObject", error);
                    } else {
                        console.log("Retrieved a signed url for putObject:", data);
                        return next(null, data)
                    }
                }); 
            } else {
                return next(error); 
            }
        } else {
            console.log("Retrieved a signed url for putObject:", data);
            return next(null, data);
        }
    });
};

So, the code below is not entirely correct (it will work in some cases but give you the parameter error in others), but it might help you get started.

Old Answer

It seems that (for a signed URL to PUT a file to S3 where there is only a public-read ACL) there are a few headers that will be compared when a PUT request is made to S3. They are compared against what was passed to getSignedUrl:

CacheControl: 'STRING_VALUE',
ContentDisposition: 'STRING_VALUE',
ContentEncoding: 'STRING_VALUE',
ContentLanguage: 'STRING_VALUE',
ContentLength: 0,
ContentMD5: 'STRING_VALUE',
ContentType: 'STRING_VALUE',
Expires: new Date || 'Wed De...'

see the full list here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property

When you call getSignedUrl you pass a 'params' object (fairly clear in the documentation) that includes the Bucket, Key, and Expires data. Here is a (NodeJS) example:

var params = { Bucket:bucket, Key:key, Expires:expires };
s3.getSignedUrl('putObject', params, function ( error, data ) {
    if (error) {
        // handle error
    } else {
        // handle data
    }
});

Less clear is setting the ACL to 'public-read':

var params = { Bucket:bucket, Key:key, Expires:expires, ACL:'public-read' };

Much more obscure is the notion of passing headers that you expect the client, using the signed URL, to send along with the PUT operation to S3:

var params = {
    Bucket:bucket,
    Key:key,
    Expires:expires,
    ACL:'public-read',
    ContentType:'image/png',
    ContentLength:7469
};

In my example above, I have included ContentType and ContentLength because those two headers are sent when using XMLHttpRequest in JavaScript, and Content-Length in particular cannot be changed. I suspect the same is true for other HTTP clients, such as curl, because these are required headers when submitting HTTP requests that include a body of data.

If the client does not include the ContentType and ContentLength of the file when requesting a signed URL, then when it comes time to PUT the file to S3 (with that signed URL), the S3 service will find those headers included with the client's request (because they are required headers), but the signature will not have included them; they will not match, and the operation will fail.

So it appears you will have to know, in advance of making your getSignedUrl call, the content type and content length of the file to be PUT to S3. This wasn't a problem for me because I exposed a REST endpoint that lets our clients request a signed URL just before making the PUT operation to S3. Since the client has access to the file at the moment it is ready to submit, it was trivial for the client to read the file's size and type and request a signed URL with that data from my endpoint.
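A minimal sketch of that flow (the function name and the endpoint it would sit behind are hypothetical, not part of the AWS SDK):

```javascript
// Server side: assemble the getSignedUrl params from the file metadata
// the client reports just before uploading. Bucket/key naming is
// illustrative only.
function buildPutParams(bucket, key, fileType, fileSize, expiresSeconds) {
  return {
    Bucket: bucket,
    Key: key,
    Expires: expiresSeconds,
    ACL: 'public-read',
    ContentType: fileType,   // must equal the Content-Type the client will PUT
    ContentLength: fileSize  // must equal the body length the client will PUT
  };
}

// A REST endpoint would then call
//   s3.getSignedUrl('putObject', buildPutParams(...), callback);
// and return the URL; the client PUTs the file with that same Content-Type.
```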

