
amazon web services - AWS S3 gracefully handle 403 after getSignedUrl expired

I'm trying to gracefully handle the 403 returned when an S3 resource is visited via an expired URL. Currently it returns Amazon's XML error page. I have uploaded a 403.html resource and thought I could redirect to that.

The bucket resources are assets saved and fetched by my app. Following the docs, I configured the bucket for static website hosting and uploaded a 403.html to the bucket root. All public permissions are blocked, except public GET access to the 403.html resource. In the bucket's website settings I set 403.html as the error document. Visiting http://<bucket>.s3-website-us-east-1.amazonaws.com/some-asset.html correctly redirects to http://<bucket>.s3-website-us-east-1.amazonaws.com/403.html.

However, when I use aws-sdk for js/node and call getSignedUrl('getObject', params) to generate the signed URL, it returns a URL on a different host: https://<bucket>.s3.amazonaws.com/. Visiting an expired resource via this URL does not redirect to 403.html. I'm guessing that the different host is the reason it is not redirecting automatically.
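
For reference, the signed URL is generated roughly like this (a minimal sketch using aws-sdk v2 for Node; the bucket name, key, and expiry are placeholders):

const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' });

// Returns a REST-endpoint URL of the form https://<bucket>.s3.amazonaws.com/<key>?X-Amz-...
const url = s3.getSignedUrl('getObject', {
  Bucket: 'my-bucket',     // placeholder bucket name
  Key: 'some-asset.html',  // placeholder object key
  Expires: 60              // lifetime of the link, in seconds
});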

I have also set up a static website routing rule with the condition:

<Condition>
  <HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
</Condition>
<Redirect>
  <ReplaceKeyWith>403.html</ReplaceKeyWith>
</Redirect>
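
For completeness, the same website configuration can be expressed through the SDK I'm already using (a sketch only; the bucket name is a placeholder and the rule mirrors the XML above):

s3.putBucketWebsite({
  Bucket: 'my-bucket',  // placeholder bucket name
  WebsiteConfiguration: {
    IndexDocument: { Suffix: 'index.html' },
    ErrorDocument: { Key: '403.html' },
    RoutingRules: [{
      Condition: { HttpErrorCodeReturnedEquals: '403' },
      Redirect: { ReplaceKeyWith: '403.html' }
    }]
  }
}, (err) => {
  if (err) console.error(err);
});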

Still, that doesn't redirect the signed URLs, so I'm at a loss as to how to gracefully handle these expired URLs. Any help would be greatly appreciated.


1 Answer


S3 buckets have two public-facing interfaces, REST and website. That is the difference between the two hostnames, and it explains the difference in behavior you are seeing.

They have two different feature sets.

feature          REST Endpoint       Website Endpoint
---------------- ------------------- -------------------
Access control   yes                 no, public content only
Error messages   XML                 HTML
Redirection      no                  yes, bucket, rule, and object-level
Request types    all supported       GET and HEAD only
Root of bucket   lists keys          returns index document
SSL              yes                 no

Source: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html

So, as you can see from the table, the REST endpoint supports signed URLs, but not friendly errors, while the website endpoint supports friendly errors, but not signed URLs. The two can't be mixed and matched, so what you're trying to do isn't natively supported by S3.


I have worked around this limitation by passing all requests for the bucket through HAProxy on an EC2 instance, which forwards them on to the bucket's REST endpoint.

When a 403 error is returned, the proxy modifies the XML response body using HAProxy's embedded Lua interpreter, adding this before the <Error> tag:

<?xml-stylesheet type="text/xsl" href="/error.xsl"?>

The file /error.xsl is publicly readable, and uses browser-side XSLT to render a pretty error response.

The proxy also injects a couple of additional tags into the XML, <ProxyTime> and <ProxyHTTPCode>, for use in the output. The resulting XML looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/error.xsl"?>
<Error><ProxyTime>2015-10-13T17:36:01Z</ProxyTime><ProxyHTTPCode>403</ProxyHTTPCode><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>9D3E05D20C1BD6AC</RequestId><HostId>WvdkvIRIDMjfa/1Oi3DGVOTR0hABCDEFGHIJKLMNOPQRSTUVWXYZ+B8thZahg7W/I/ExAmPlEAQ=</HostId></Error>
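
For illustration only (my setup does this inside HAProxy with Lua, not Node): the same rewrite can be sketched as a small Node proxy sitting in front of the REST endpoint and patching only 403 bodies. The hostname and port below are placeholders:

// Minimal sketch of the rewrite, swapped into Node for illustration.
const http = require('http');
const https = require('https');

const BUCKET_HOST = 'my-bucket.s3.amazonaws.com'; // placeholder REST endpoint

http.createServer((clientReq, clientRes) => {
  const proxyReq = https.request(
    { host: BUCKET_HOST, path: clientReq.url, method: clientReq.method, headers: { host: BUCKET_HOST } },
    (s3Res) => {
      if (s3Res.statusCode !== 403) {
        // Pass everything except 403s through untouched.
        clientRes.writeHead(s3Res.statusCode, s3Res.headers);
        s3Res.pipe(clientRes);
        return;
      }
      // Buffer the small XML error body so it can be rewritten.
      let body = '';
      s3Res.on('data', (chunk) => { body += chunk; });
      s3Res.on('end', () => {
        const rewritten = body.replace(
          '<Error>',
          '<?xml-stylesheet type="text/xsl" href="/error.xsl"?>\n' +
          '<Error><ProxyTime>' + new Date().toISOString() + '</ProxyTime>' +
          '<ProxyHTTPCode>403</ProxyHTTPCode>'
        );
        const headers = Object.assign({}, s3Res.headers, {
          'content-length': Buffer.byteLength(rewritten)
        });
        delete headers['transfer-encoding']; // body length changed; avoid conflicting framing
        clientRes.writeHead(403, headers);
        clientRes.end(rewritten);
      });
    }
  );
  clientReq.pipe(proxyReq);
}).listen(8080);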

Then I vary the output shown to the user with XSL tests to determine what error condition S3 has thrown:

<xsl:if test="//Code = 'AccessDenied'">
  <p>It seems we may have provided you with a link to a resource to which you do not have access, or a resource which does not exist, or that our internal security mechanisms were unable to reach consensus on your authorization to view it.</p>
</xsl:if>

And the final result looks like this:

[Screenshot: the styled error page rendered in the browser]

The above is a general "Access Denied" because no credentials were supplied. Here's an example of an expired signature.

[Screenshot: the styled error page for an expired signature]

I don't include the HostId in the output, since it's ugly and noisy; if I ever need it, the proxy has captured and logged it for me, and I can cross-reference it with the request ID.

As a bonus, of course, running the requests through my proxy means I can use my own domain name and my own SSL certificate when serving up bucket content, and I have real-time access logs with no delay. When the proxy is in the same region as the bucket, there is no additional charge for the extra step of data transfer, and I've been very happy with this setup.

