I’m fairly new to this, so I may be incorrect. After some debugging, I discovered that the tableName env variable gets assigned the value [object Object], maybe because the ‘Ref’ intrinsic function doesn’t get evaluated (I haven’t played around with Serverless much, but thought this might be helpful).
Anyway, following that issue thread, you can resolve the issue by installing the serverless-export-env plugin to properly initialize all environment variables that use CloudFormation template references.
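As a sketch (assuming the plugin list from the serverless.yml below), you would add it alongside the existing plugins:

```yaml
plugins:
  - serverless-webpack
  - serverless-offline
  # Resolves CloudFormation references (Ref, Fn::GetAtt) in
  # environment variables when running outside AWS
  - serverless-export-env
```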
Again, I’m new to this, so note that I encountered this problem when using serverless invoke local rather than the serverless-offline plugin. I’m not sure if it works differently with the plugin.
What is the best way to share a DynamoDB table across multiple services? According to Amazon, having a single table is ideal, which I can do.
I need to modify different parts of the object, hence different services that interact with the document differently.
How would you set up the resource in a monorepo?
- A separate resource service (how does that work in Seed — is there a race condition?)
- Each serverless.yml referencing and importing the same /resources/dynamoDb.yml configuration?
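A sketch of the second option (the file name is taken from your question): each service’s serverless.yml can include the same resource file. Note, though, that each service deploys its own CloudFormation stack, so two services importing the same file would each create their own table — you generally want exactly one service to own the table and the others to reference it by name or ARN.

```yaml
# In one designated "owner" service's serverless.yml:
resources:
  - ${file(../resources/dynamoDb.yml)}
```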
Additionally, do IAM roles for DynamoDB apply to all requests in the service? Can you restrict delete actions to the /delete path, for example, perhaps requiring different user credentials to delete?
But for the IAM roles, it really depends on the level at which you want to control this. You can use Cognito User Groups, or you can manage permissions internally in your code if you have somewhat complicated business logic to resolve them.
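For context: statements under provider.iamRoleStatements apply to every function in the service. To scope dynamodb:DeleteItem to only the delete function, one option is per-function roles via the serverless-iam-roles-per-function plugin — a sketch, assuming the TripsTable resource from the serverless.yml below:

```yaml
plugins:
  - serverless-iam-roles-per-function

functions:
  delete:
    handler: trips/delete.main
    # This role applies only to the delete function
    iamRoleStatements:
      - Effect: Allow
        Action:
          - dynamodb:DeleteItem
        Resource:
          - "Fn::GetAtt": [ TripsTable, Arn ]
```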
I’m experiencing the same as @acidfabric and the fix shown by @aaditya-panik is what I had to do to get past it for now. I really dislike the duplicative naming and I’m sure I’m doing something wrong, but I’m not seeing it.
Some background:
- I read through Part 1 in the past two weeks
- I started with Part 2 to create a base project, and literally just tonight cloned the repo
- I have made minor modifications to reflect the real-world project I plan on using it for (namely table and field names)
- I’m using WebStorm as my IDE
Here’s how I invoke the method:
```
serverless invoke local --function create --path mocks/create-event.json
```
Here’s the error that gets returned:
```
message: '1 validation error detected: Value \'[object Object]\' at \'tableName\' failed to satisfy constraint: Member must satisfy regular expression pattern: [a-zA-Z0-9_.-]+',
code: 'ValidationException',
time: 2018-09-08T01:28:37.270Z,
```
When I do console.log(process.env.TripsTableName);, I simply get: [object Object]
Here is my serverless.yml:
```yaml
service: routinator

# Use the serverless-webpack plugin to transpile ES6
plugins:
  - serverless-webpack
  - serverless-offline

# serverless-webpack configuration
# Enable auto-packing of external modules
custom:
  # Our stage is based on what is passed in when running serverless
  # commands. Or falls back to what we have set in the provider section.
  stage: ${opt:stage, self:provider.stage}
  # Set our DynamoDB throughput for prod and all other non-prod stages.
  tableThroughputs:
    prod: 5
    default: 1
  tableThroughput: ${self:custom.tableThroughputs.${self:custom.stage}, self:custom.tableThroughputs.default}
  # Load our webpack config
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1
  # These environment variables are made available to our functions
  # under process.env.
  environment:
    TripsTableName:
      Ref: TripsTable
  # 'iamRoleStatements' defines the permission policy for the Lambda function.
  # In this case Lambda functions are granted permissions to access DynamoDB.
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      # Restrict our IAM role permissions to
      # the specific table for the stage
      Resource:
        - "Fn::GetAtt": [ TripsTable, Arn ]

functions:
  # Defines an HTTP API endpoint that calls the main function in create.js
  # - path: url path is /trips
  # - method: POST request
  # - cors: enable CORS (Cross-Origin Resource Sharing) for browser cross
  #   domain api calls
  # - authorizer: authenticate using the AWS IAM role
  create:
    handler: trips/create.main
    events:
      - http:
          path: trips
          method: post
          cors: true
          authorizer: aws_iam
  get:
    # Defines an HTTP API endpoint that calls the main function in get.js
    # - path: url path is /trips/{id}
    # - method: GET request
    handler: trips/get.main
    events:
      - http:
          path: trips/{id}
          method: get
          cors: true
          authorizer: aws_iam
  list:
    # Defines an HTTP API endpoint that calls the main function in list.js
    # - path: url path is /trips
    # - method: GET request
    handler: trips/list.main
    events:
      - http:
          path: trips
          method: get
          cors: true
          authorizer: aws_iam
  update:
    # Defines an HTTP API endpoint that calls the main function in update.js
    # - path: url path is /trips/{id}
    # - method: PUT request
    handler: trips/update.main
    events:
      - http:
          path: trips/{id}
          method: put
          cors: true
          authorizer: aws_iam
  delete:
    # Defines an HTTP API endpoint that calls the main function in delete.js
    # - path: url path is /trips/{id}
    # - method: DELETE request
    handler: trips/delete.main
    events:
      - http:
          path: trips/{id}
          method: delete
          cors: true
          authorizer: aws_iam

# Create our resources with separate CloudFormation templates
resources:
  # DynamoDB
  - ${file(resources/dynamodb-tables.yml)}
  # S3
  - ${file(resources/s3-bucket.yml)}
  # Cognito
  - ${file(resources/cognito-user-pool.yml)}
```
Here is my resources/dynamodb-tables.yml:
```yaml
Resources:
  TripsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      # Generate a name based on the stage
      TableName: ${self:custom.stage}-trips
      AttributeDefinitions:
        - AttributeName: userId
          AttributeType: S
        - AttributeName: tripId
          AttributeType: S
      KeySchema:
        - AttributeName: userId
          KeyType: HASH
        - AttributeName: tripId
          KeyType: RANGE
      # Set the capacity based on the stage
      ProvisionedThroughput:
        ReadCapacityUnits: ${self:custom.tableThroughput}
        WriteCapacityUnits: ${self:custom.tableThroughput}
```
Thanks. You are right, this is an issue. It happens because the Ref: option is for CloudFormation, and when running locally Serverless does not have access to it. Here are a couple of ways around it:
1. Reference the resource file directly: tableName: ${self:resources.0.Resources.NotesTable.Properties.TableName}, where 0 is the index of the resource in the resources list of your serverless.yml.
2. Create a custom variable for the table name in your serverless.yml and use that in your DynamoDB resource.
We will probably update the tutorial to go with the second option.
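A sketch of the second option (tableName is an assumed variable name; the resource file already reads ${self:custom.tableName}):

```yaml
# serverless.yml
custom:
  tableName: ${self:custom.stage}-notes

provider:
  environment:
    # A plain string variable, so invoke local can resolve it
    tableName: ${self:custom.tableName}
```

The DynamoDB resource then uses the same variable:

```yaml
# resources/dynamodb-tables.yml
Resources:
  NotesTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: ${self:custom.tableName}
```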
I think what is happening here is that, because of the index change, it’s not able to find what you had previously defined. Can you post what your serverless.yml or resource looked like before and after the change?
Before

```yaml
Resources:
  NotesTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: ${self:custom.tableName}
      AttributeDefinitions:
        - AttributeName: userId
          AttributeType: S
        - AttributeName: noteId
          AttributeType: S
      KeySchema:
        - AttributeName: userId
          KeyType: HASH
        - AttributeName: noteId
          KeyType: RANGE
      # Set the capacity based on the stage
      ProvisionedThroughput:
        ReadCapacityUnits: ${self:custom.tableThroughput}
        WriteCapacityUnits: ${self:custom.tableThroughput}
```
After
```yaml
Resources:
  NotesTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: ${self:custom.tableName}
      AttributeDefinitions:
        - AttributeName: userId
          AttributeType: S
        - AttributeName: itemId
          AttributeType: S
      KeySchema:
        - AttributeName: userId
          KeyType: HASH
        - AttributeName: itemId
          KeyType: RANGE
      # Set the capacity based on the stage
      ProvisionedThroughput:
        ReadCapacityUnits: ${self:custom.tableThroughput}
        WriteCapacityUnits: ${self:custom.tableThroughput}
```
Error

```
An error occurred: NotesTable - CloudFormation cannot update a stack when a custom-named resource requires replacing. Rename dev-notes and update the stack again.
```
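For context: changing a key attribute (noteId to itemId) forces CloudFormation to replace the table, which it refuses to do in place for a resource with an explicit TableName. As the error suggests, one workaround (destructive — the old table’s data is not migrated) is to change the name so a fresh table is created, sketched here with an assumed suffix:

```yaml
Properties:
  # Bumping the name lets CloudFormation create a replacement table
  TableName: ${self:custom.tableName}-v2
```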