Configure DynamoDB in Serverless

I’m fairly new to this, so I may be wrong. After some debugging, I discovered that the tableName env variable gets assigned the value [object Object], maybe because the ‘Ref’ intrinsic function doesn’t get evaluated (I haven’t played around with Serverless much, but thought this might be helpful).
Anyway, you can resolve the issue by replacing

tableName: Ref NotesTable

with,

tableName: notes-${self:custom.stage}

Hmmm that is weird. If you’ve configured your resources correctly, it should pick it up. The Ref is simply pointing to ${self:custom.stage}-notes.

An issue of this nature has already been posted on the Repo issues: https://github.com/serverless/serverless/issues/3080

Following that issue thread, you can install the serverless-export-env plugin to properly initialize environment variables that use CloudFormation template references.
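For reference, wiring in that plugin is just a dev dependency plus a plugins entry. A sketch (the plugin name comes from the linked issue thread; the rest of the layout is assumed):

```yaml
# serverless.yml (sketch)
# After `npm install --save-dev serverless-export-env`, register the plugin.
# Running `serverless export-env` then writes resolved values, including
# CloudFormation Refs, to a local .env file.
plugins:
  - serverless-export-env
```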

Again, I’m new to this, so take it with a grain of salt: I ran into this problem when using serverless invoke local rather than the serverless-offline plugin. Not sure if it works differently with the plugin.

That’s a really old issue. Which version of Serverless Framework are you using?

What is the best way to share a DynamoDB table across multiple services? According to Amazon, having a single table is ideal, which I can do.

I need to modify different parts of the object, hence different services that interact with the document differently.

How would you setup the resource in a monorepo?

  1. A separate resource service (how does that work in Seed? Is there a race condition?)
  2. Each serverless.yml references and imports the same /resources/dynamoDb.yml configuration?

Additionally, do IAM roles for DynamoDB apply to all requests in the service? Can you restrict delete actions to the /delete path, for example, or require different user credentials for deletes?

Sorry still very new to this.

I talked about the resource sharing in the other thread - Organizing Serverless Projects.

But for the IAM roles, it really depends on the level at which you want to control this. You can use Cognito User Groups. Or you can manage these internally in your code if you have some slightly complicated business logic to resolve permissions.
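If you do want action-level separation at the IAM layer: by default, iamRoleStatements under provider apply to every function in the service. The community serverless-iam-roles-per-function plugin lets you scope permissions to a single function. A sketch (the handler path and table name here are assumptions for illustration, not from this thread):

```yaml
plugins:
  - serverless-iam-roles-per-function

functions:
  delete:
    handler: notes/delete.main
    # Only this function is granted DeleteItem; the other
    # functions' roles can omit it entirely.
    iamRoleStatements:
      - Effect: Allow
        Action:
          - dynamodb:DeleteItem
        Resource:
          - "Fn::GetAtt": [ NotesTable, Arn ]
```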

Is there a good documentation resource on configuring DynamoDB in Serverless’s YAML? I’m checking out the docs here: https://serverless.com/framework/docs/providers/aws/guide/resources/ but they’re not very comprehensive.

Yeah I listed one in this chapter.

It’s a guide dedicated to DynamoDB - https://www.dynamodbguide.com/.

I’m experiencing the same as @acidfabric, and the fix shown by @aaditya-panik is what I had to do to get past it for now. I really dislike the duplicative naming, and I’m sure I’m doing something wrong, but I’m not seeing it.

Some background:

  • I read through part 1 in the past 2 weeks
  • I started with part 2 to create a base project and literally just tonight cloned the repo
  • I have made minor modifications (namely table and field names) to reflect the real-world project I plan on using it for
  • I’m using WebStorm as my IDE

Here’s how I invoke the method:

serverless invoke local --function create --path mocks/create-event.json

Here’s the error that gets returned:

      message: '1 validation error detected: Value \'[object Object]\' at \'tableName\' failed to satisfy constraint: Member must satisfy regular expression pattern: [a-zA-Z0-9_.-]+',
      code: 'ValidationException',
      time: 2018-09-08T01:28:37.270Z,

When I do console.log(process.env.TripsTableName);, I simply get:
[object Object]

Here is my serverless.yml:

service: routinator

# Use the serverless-webpack plugin to transpile ES6
plugins:
  - serverless-webpack
  - serverless-offline

# serverless-webpack configuration
# Enable auto-packing of external modules
custom:
  # Our stage is based on what is passed in when running serverless
  # commands. Or falls back to what we have set in the provider section.
  stage: ${opt:stage, self:provider.stage}
  # Set our DynamoDB throughput for prod and all other non-prod stages.
  tableThroughputs:
    prod: 5
    default: 1
  tableThroughput: ${self:custom.tableThroughputs.${self:custom.stage}, self:custom.tableThroughputs.default}
  # Load our webpack config
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1

  # These environment variables are made available to our functions
  # under process.env.
  environment:
    TripsTableName:
      Ref: TripsTable

  # 'iamRoleStatements' defines the permission policy for the Lambda function.
  # In this case Lambda functions are granted with permissions to access DynamoDB.
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      # Restrict our IAM role permissions to
      # the specific table for the stage
      Resource:
        - "Fn::GetAtt": [ TripsTable, Arn ]

functions:
  # Defines an HTTP API endpoint that calls the main function in create.js
  # - path: url path is /trips
  # - method: POST request
  # - cors: enabled CORS (Cross-Origin Resource Sharing) for browser cross
  #     domain api call
  # - authorizer: authenticate using the AWS IAM role
  create:
    handler: trips/create.main
    events:
      - http:
          path: trips
          method: post
          cors: true
          authorizer: aws_iam

  get:
    # Defines an HTTP API endpoint that calls the main function in get.js
    # - path: url path is /trips/{id}
    # - method: GET request
    handler: trips/get.main
    events:
      - http:
          path: trips/{id}
          method: get
          cors: true
          authorizer: aws_iam

  list:
    # Defines an HTTP API endpoint that calls the main function in list.js
    # - path: url path is /trips
    # - method: GET request
    handler: trips/list.main
    events:
      - http:
          path: trips
          method: get
          cors: true
          authorizer: aws_iam

  update:
    # Defines an HTTP API endpoint that calls the main function in update.js
    # - path: url path is /trips/{id}
    # - method: PUT request
    handler: trips/update.main
    events:
      - http:
          path: trips/{id}
          method: put
          cors: true
          authorizer: aws_iam

  delete:
    # Defines an HTTP API endpoint that calls the main function in delete.js
    # - path: url path is /trips/{id}
    # - method: DELETE request
    handler: trips/delete.main
    events:
      - http:
          path: trips/{id}
          method: delete
          cors: true
          authorizer: aws_iam

# Create our resources with separate CloudFormation templates
resources:
  # DynamoDB
  - ${file(resources/dynamodb-tables.yml)}
  # S3
  - ${file(resources/s3-bucket.yml)}
  # Cognito
  - ${file(resources/cognito-user-pool.yml)}

Here is my resources/dynamodb-tables.yml:

Resources:
  TripsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      # Generate a name based on the stage
      TableName: ${self:custom.stage}-trips
      AttributeDefinitions:
        - AttributeName: userId
          AttributeType: S
        - AttributeName: tripId
          AttributeType: S
      KeySchema:
        - AttributeName: userId
          KeyType: HASH
        - AttributeName: tripId
          KeyType: RANGE
      # Set the capacity based on the stage
      ProvisionedThroughput:
        ReadCapacityUnits: ${self:custom.tableThroughput}
        WriteCapacityUnits: ${self:custom.tableThroughput}

It works when I change serverless.yml to:

  environment:
    TripsTableName: ${self:custom.stage}-trips

Any thoughts?

At a glance, most of your stuff looks okay. Have you deployed yet or are you just running it locally for now?

Also which version of Serverless are you using?

For two or more tables, how would the environment block and the iamRoleStatements Resource definition change?

It would need one environment variable per table, plus more lines in the Resource section:

  # These environment variables are made available to our functions
  # under process.env.
  environment:
    tableName1:
      Ref: NotesTable1
    tableName2:
      Ref: NotesTable2
    tableName3:
      Ref: NotesTable3

  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      # Restrict our IAM role permissions to
      # the specific table for the stage
      Resource:
        - "Fn::GetAtt": [ NotesTable1, Arn ]
        - "Fn::GetAtt": [ NotesTable2, Arn ]
        - "Fn::GetAtt": [ NotesTable3, Arn ]

@jayair, I was using serverless 1.30.3. I just upgraded to 1.31.0.

Same error when running locally. But deployed, it appears to work.

Thanks. You are right, this is an issue. It happens because the Ref: option is for CloudFormation, and when running locally Serverless does not have access to it. Here are a couple of ways around it.

  1. You can reference the resource file directly: tableName: ${self:resources.0.Resources.NotesTable.Properties.TableName}, where 0 is the index of the resources file listed in your serverless.yml.

  2. Create a custom variable for the table name in your serverless.yml and use that in your DynamoDB resource.

We will probably update the tutorial to go with the second option.
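For anyone following along, option 2 would look roughly like this (a sketch using the names from this thread; since no CloudFormation Ref is involved, the variable resolves both locally and when deployed):

```yaml
custom:
  stage: ${opt:stage, self:provider.stage}
  # Single source of truth for the table name
  tableName: ${self:custom.stage}-notes

provider:
  environment:
    # Resolves during local invocation as well as on deploy
    tableName: ${self:custom.tableName}

resources:
  Resources:
    NotesTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.tableName}
```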

Thanks for getting back to me. I like the option for #2 and will likely implement that. #1 just feels a bit clunky.


Why can we remove the following line from libs/dynamodb-lib.js?

AWS.config.update({ region: "us-east-1" });

Without this line, how do Lambda functions know the region of DynamoDB that we want to connect to?

I need to confirm this, but by default the AWS SDK inside a Lambda function defaults to the region the Lambda itself is running in.

We are going to remove this line from Part I of the tutorial as well since it is causing some confusion.
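For what it’s worth, here is a minimal sketch of that behavior (stated as an assumption pending confirmation): the Lambda runtime exports an AWS_REGION environment variable, and the SDK falls back to it when no region is configured explicitly.

```javascript
// Sketch: simulate the Lambda runtime, which sets AWS_REGION for us.
// (Outside Lambda this variable is normally not set, hence the demo assignment.)
process.env.AWS_REGION = "us-east-1";

// The SDK would resolve the region from this variable when
// AWS.config.update({ region: ... }) is omitted.
const region = process.env.AWS_REGION;
console.log(region);
```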

Why can I not rename the secondary key noteId to itemId and simply push to git and redeploy with Seed?

An error occurred: NotesTable - Property AttributeDefinitions is inconsistent with the KeySchema of the table and the secondary indexes.

And therefore the build fails.

I think what is happening here is that because of the index change it’s not able to find what you had previously defined. Can you post what your serverless.yml or resource looked like before/after the change?

Before

Resources:
  NotesTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: ${self:custom.tableName}
      AttributeDefinitions:
        - AttributeName: userId
          AttributeType: S
        - AttributeName: noteId
          AttributeType: S
      KeySchema:
        - AttributeName: userId
          KeyType: HASH
        - AttributeName: noteId
          KeyType: RANGE
      # Set the capacity based on the stage
      ProvisionedThroughput:
        ReadCapacityUnits: ${self:custom.tableThroughput}
        WriteCapacityUnits: ${self:custom.tableThroughput}

After

Resources:
  NotesTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: ${self:custom.tableName}
      AttributeDefinitions:
        - AttributeName: userId
          AttributeType: S
        - AttributeName: itemId
          AttributeType: S
      KeySchema:
        - AttributeName: userId
          KeyType: HASH
        - AttributeName: itemId
          KeyType: RANGE
      # Set the capacity based on the stage
      ProvisionedThroughput:
        ReadCapacityUnits: ${self:custom.tableThroughput}
        WriteCapacityUnits: ${self:custom.tableThroughput}

Error

An error occurred: NotesTable - CloudFormation cannot update a stack when a custom-named resource requires replacing. Rename dev-notes and update the stack again..