Comments for Upload a File to S3

From @hutualive on Wed Aug 02 2017 02:46:42 GMT+0000 (UTC)

Why does

const uploadedFilename = (this.file)
 ? (await s3Upload(this.file, this.props.userToken)).Location
 : null;

return a URL?

In s3Upload, the return value is just:

return s3.upload({
    Key: filename,
    Body: file,
    ContentType: file.type,
    ACL: 'public-read',
  }).promise();

I do not see a key like “Location” anywhere.

thanks.

From @jayair on Wed Aug 02 2017 22:12:48 GMT+0000 (UTC)

@hutualive These are the AWS SDK docs for the upload method - http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#upload-property. The upload call resolves with an object that includes the Location property we use. Our own s3Upload method simply returns a Promise that will eventually give us that object.
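
For example, here is a minimal sketch (assuming the AWS SDK for JavaScript v2 and a placeholder bucket name) of where Location comes from - upload().promise() resolves to a ManagedUpload.SendData object:

import AWS from "aws-sdk";

const s3 = new AWS.S3();

async function demoUpload(file, filename) {
  // upload() returns a ManagedUpload; .promise() resolves with its SendData
  const data = await s3
    .upload({
      Bucket: "your-uploads-bucket", // placeholder bucket name
      Key: filename,
      Body: file,
      ContentType: file.type,
      ACL: "public-read",
    })
    .promise();

  // data contains Location, Bucket, Key and ETag, e.g.
  // data.Location -> https://your-uploads-bucket.s3.amazonaws.com/<filename>
  return data.Location;
}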

From @michaelcuneo on Thu Aug 10 2017 12:44:10 GMT+0000 (UTC)

I have an odd problem… my DynamoDB table and S3 file upload appear to be updating correctly. If I log in to the AWS console and look in the S3 bucket and the DynamoDB table, I see the proper data. But after a call, the Creating spinner just spins and spins, and eventually I get these errors in my console.

PUT https://hrs-notes-uploads.s3-ap-southeast-2.amazonaws.com/undefined-1502368754577-IMG_0454.jpg net::ERR_CONNECTION_ABORTED
hrs-notes-uploads.s3-ap-southeast-2.amazonaws.com/?max-keys=0:1 GET https://hrs-notes-uploads.s3-ap-southeast-2.amazonaws.com/?max-keys=0 403 (Forbidden)
xhr.js?28e2:81 PUT https://hrs-notes-uploads.s3-ap-southeast-2.amazonaws.com/undefined-1502368754577-IMG_0454.jpg 403 (Forbidden)

No idea what I’ve done wrong.

From @michaelcuneo on Thu Aug 10 2017 13:06:02 GMT+0000 (UTC)

Now I’ve got a new error with seemingly no changes whatsoever. :o

POST https://5qyf9lxnte.execute-api.ap-southeast-2.amazonaws.com/prod/notes 403 ()
notes:1 Fetch API cannot load https://5qyf9lxnte.execute-api.ap-southeast-2.amazonaws.com/prod/notes. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://192.168.0.10:3000' is therefore not allowed access. The response had HTTP status code 403. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.

From @michaelcuneo on Thu Aug 10 2017 13:16:40 GMT+0000 (UTC)

Disregard all that, I did a stupid thing: I circularly tried to push /notes… N.B. don’t do that. :slight_smile:

From @jayair on Thu Aug 10 2017 23:15:08 GMT+0000 (UTC)

@michaelcuneo Glad you figured it out.

From @designpressure on Tue Sep 19 2017 11:50:03 GMT+0000 (UTC)

@fwang I have the same 403 error. I’ve verified my IAM role policy and it is exactly as described, but I still have the problem… what should I check?
I have also verified CORS:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
</CORSConfiguration>

and policy in IAM:

            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::notes-app-api-prod-ZzZZzzzZzzz/${cognito-identity.amazonaws.com:sub}*"
            ]
        },

From @jayair on Tue Sep 19 2017 18:34:32 GMT+0000 (UTC)

@designpressure The CORS block you posted is the default one. The one we use in the tutorial (https://serverless-stack.com/chapters/create-an-s3-bucket-for-file-uploads.html) looks like this:

<CORSConfiguration>
	<CORSRule>
		<AllowedOrigin>*</AllowedOrigin>
		<AllowedMethod>GET</AllowedMethod>
		<AllowedMethod>PUT</AllowedMethod>
		<AllowedMethod>POST</AllowedMethod>
		<AllowedMethod>HEAD</AllowedMethod>
		<MaxAgeSeconds>3000</MaxAgeSeconds>
		<AllowedHeader>*</AllowedHeader>
	</CORSRule>
</CORSConfiguration>

Not sure if you missed it but give that a try.

From @designpressure on Thu Sep 21 2017 06:38:37 GMT+0000 (UTC)

Yeah, that was the problem, now it uploads fine, thanks.

From @QuantumInformation on Thu Oct 05 2017 21:53:18 GMT+0000 (UTC)

Note: if you get an error that says AccessDenied, your policy for the auth role is likely incorrect; for me it was the wrong bucket setting.

From @tommedema on Fri Oct 20 2017 06:23:43 GMT+0000 (UTC)

Talking about security and validation: currently the size-based validation is only running on the frontend:

if (this.file && this.file.size > config.MAX_ATTACHMENT_SIZE) {
    alert("Please pick a file smaller than 5MB");
    return;
}

Obviously this is not something we could rely upon. Is it possible to enforce this in the policy document for S3?

From @jayair on Fri Oct 20 2017 18:43:27 GMT+0000 (UTC)

@tommedema Yeah, we don’t cover this in detail, but I think we could. There is a way to limit this, but not through the policy that you set in the AWS Console. Instead you have to generate a pre-signed POST with a content-length-range condition.

From @tommedema on Sat Oct 21 2017 07:04:05 GMT+0000 (UTC)

@jayair I don’t understand. The headers would still be controlled by the client, and therefore cannot be trusted. The enforcement necessarily has to come from the server-side. Since we don’t have a server, the only way (from my perspective) would be to define it in a policy document. Am I missing something?

From @jayair on Sat Oct 21 2017 23:59:38 GMT+0000 (UTC)

@tommedema It does require an API on the backend to sign these requests. Here is one way the AWS docs show how to do this - https://aws.amazon.com/articles/browser-uploads-to-s3-using-html-post-forms/.
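
For what it’s worth, a rough sketch of what that backend piece could look like with the JavaScript SDK (the bucket name and size limit here are placeholders, not part of the tutorial):

import AWS from "aws-sdk";

const s3 = new AWS.S3();
const MAX_ATTACHMENT_SIZE = 5 * 1024 * 1024; // 5 MB, matching the frontend check

function getPresignedPost(filename) {
  return new Promise((resolve, reject) => {
    s3.createPresignedPost(
      {
        Bucket: "your-uploads-bucket", // placeholder bucket name
        Fields: { key: filename },
        Expires: 300, // the signed policy is only valid for 5 minutes
        Conditions: [
          // S3 itself rejects any upload outside this byte range,
          // regardless of what the client claims
          ["content-length-range", 0, MAX_ATTACHMENT_SIZE],
        ],
      },
      (err, data) => (err ? reject(err) : resolve(data))
    );
  });
}

// data.url and data.fields are what the browser uses to build the
// multipart/form-data POST described in the linked article.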

From @eldadmel on Thu Dec 07 2017 19:48:18 GMT+0000 (UTC)

I have a problem when I try to create a note with a file. I always get an “AccessDenied: Access Denied” alert.
I checked the IAM role and the CORS configuration and I think they are correct. Is there anything else I can check?

From @jayair on Mon Dec 11 2017 11:45:15 GMT+0000 (UTC)

@eldadmel Can you see the full error in the browser console?

From @SpencerGreene on Fri Dec 15 2017 17:53:21 GMT+0000 (UTC)

I’m also seeing “AccessDenied: AccessDenied” alert.
The browser console message is:
Failed to load resource https://sg-serverless-stack-tutorial-01.s3.us-west-2.amazonaws.com/us-west-2%3A403eb53b-5dc4-47c2-9a61-340175902fd1-1513360210560-testfile.txt: the server responded with a status of 403 (Forbidden)

It’s not in the client AFAIK - I tried your client from the GitHub repo and the error is the same.

From @jayair on Sun Dec 17 2017 22:46:45 GMT+0000 (UTC)

@SpencerGreene Check the IAM role for your Identity Pool in this chapter - https://serverless-stack.com/chapters/create-a-cognito-identity-pool.html

We set the permissions for what a client can access.

From @SpencerGreene on Tue Dec 19 2017 06:23:23 GMT+0000 (UTC)

@jayair Thanks - fixed! It looked to my eye like it matched what was in that chapter, but I copy-pasted over what I had just to be sure, and sure enough it started working - so I must have fat-fingered something. (Curious - does AWS remember the history of my IAM role, so I can forensically analyze what I did wrong?)

From @SpencerGreene on Tue Dec 19 2017 08:46:07 GMT+0000 (UTC)

OK, here’s a question: I’m trying to implement the delete attachment “exercise for the reader.” I see that the ‘attachment’ value stored in the database table is the full Location, including the bucket name, whereas the deleteObject API wants the key as its own parameter. Is there a recommended way to strip the bucket name off of the ‘attachment’? I can string-manipulate it, but that makes an assumption about the format of that string, which seems like bad practice. I was looking for an AWS API call that would take the Location and return the Key, but I don’t see such an API.

Another way would be to store the Key instead of the Location; or to add another field to the database and store both. What would you recommend?
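
For context, the string manipulation I had in mind is roughly this (a rough sketch that assumes the stored attachment is a standard virtual-hosted-style S3 URL, with a placeholder bucket name):

import AWS from "aws-sdk";

const s3 = new AWS.S3();

function keyFromLocation(location) {
  // The URL path (minus the leading slash) is the object key
  const url = new URL(location);
  return decodeURIComponent(url.pathname.slice(1));
}

async function deleteAttachment(location) {
  return s3
    .deleteObject({
      Bucket: "your-uploads-bucket", // placeholder bucket name
      Key: keyFromLocation(location),
    })
    .promise();
}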