Configure S3 in Serverless

Link to chapter -

Aren’t we missing 2 things here?

  1. IAM configuration to give access to the attachment bucket

    • Effect: "Allow"
      • s3:GetObject
      • s3:PutObject
      • "Fn::Join": [ "/", [ { "Fn::GetAtt": [ AttachmentsBucket, "Arn" ] }, "private", "*" ] ]
  2. S3 bucket to deploy the front-end
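Put together, that IAM statement in serverless.yml might look roughly like this (a sketch only; the `AttachmentsBucket` resource name and the `private/*` prefix follow the guide’s convention):

```yaml
# Sketch: grant the service's Lambda functions read/write access to
# objects under the private/ prefix of the attachments bucket.
provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:GetObject
        - s3:PutObject
      Resource:
        Fn::Join:
          - "/"
          - - Fn::GetAtt: [AttachmentsBucket, Arn]
            - "private"
            - "*"
```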

The IAM configuration is added in this chapter -

The frontend app in this section uses Netlify instead of S3.

How can we reference the bucket name in our code?

Is there really not a better way to get bucket names? I’d like to have different versions of my bucket for dev and prod and be able to reference it inside my functions.

Bucket names need to be globally unique, so in this case it’s better if AWS generates them.

But your question is a bit more of an advanced topic. You want to be able to “parameterize” your bucket name and use that in a Lambda function. We talk about part of this in the new best practices section of the guide - You’ll notice how we export the bucket name there and it includes the stage name as well.
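For reference, that export might look something like the following in the `resources` section of serverless.yml (a sketch; the resource and export names here are assumptions modeled on the guide, chosen to match the import used in this thread):

```yaml
# Sketch: export the bucket's ARN, namespaced by stage, so it can be
# imported by other services and by function environment variables.
resources:
  Resources:
    AttachmentsBucket:
      Type: AWS::S3::Bucket
      # No BucketName property: bucket names must be globally unique,
      # so we let AWS generate one.
  Outputs:
    AttachmentsBucketArn:
      Value:
        Fn::GetAtt: [AttachmentsBucket, Arn]
      Export:
        Name: ${self:custom.stage}-ExtAttachmentsBucketArn
```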

Finally, to use this exported value in your Lambda function you’ll need to add this to the environment block in your serverless.yml.

      environment:
        bucketName:
          'Fn::ImportValue': ${self:custom.stage}-ExtAttachmentsBucketArn

And then process.env.bucketName will give you this value in your function.
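For example, a function could use it like this (a sketch; the helper name and object key are made up for illustration, and `bucketName` is assumed to be set in the `environment` block as described above):

```javascript
// Sketch: build S3 upload params using the bucket name that
// serverless.yml injects via Fn::ImportValue into process.env.
function buildUploadParams(key, body) {
  return {
    Bucket: process.env.bucketName, // resolved at deploy time per stage
    Key: key,
    Body: body,
  };
}

// A real handler would hand these params to the AWS SDK, e.g.
//   await s3.putObject(buildUploadParams("private/user-id/file.txt", data)).promise();

module.exports = { buildUploadParams };
```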

Thank you jayair. The content has grown quite a bit and I must have missed this chapter.


Yeah we have a new section in the guide.

If I want to allow my users to upload large files, i.e. several GB, what should the MaxAge parameter value be?

If I want to give my users an access token and secret keys that can be used to upload data to a subdirectory, i.e.:

      - Fn::GetAtt: [UserDataS3Bucket, Arn]
      - '/private/'
      - '${}/raw-data'

how do I generate them?

Hmm I’m not entirely sure about this. Is there a doc on uploading large files?