Deploying Multiple Services in Serverless

Link to chapter -

I was excited to see this topic coming up (finally), but was very disappointed at the end.

The most important and most difficult part, orchestrating deployment, was quickly brushed off with a “separate resources out and deploy them manually” suggestion.

I disagree that some base resources almost never change. When doing agile development, new resources are created and old ones mutated almost daily.

In 2018 you’d expect this to be a given; i.e. it should not be necessary for developers to go to DevOps and ask whether a certain resource is available, whether they can (re)deploy it, and what the key/reference to that resource is.

In my opinion you should be able to:

  • have tooling to auto-determine your dependency graph (Lerna comes to mind)
  • auto-deploy all services in the right order, if they have changed
  • have complete rollback support; i.e. if service B fails to deploy and service A was updated before that, both services should roll back
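The deploy-ordering part of that wish list is essentially a topological sort of the service dependency graph. A minimal sketch (service names and the dependency map are hypothetical, just for illustration):

```javascript
// Derive a deploy order from a service dependency graph via depth-first
// topological sort: every service's dependencies come before it.
function deployOrder(graph) {
  const order = [];
  const state = {}; // undefined = unvisited, 1 = visiting, 2 = done

  function visit(name) {
    if (state[name] === 2) return;
    if (state[name] === 1) throw new Error(`Circular dependency at ${name}`);
    state[name] = 1;
    for (const dep of graph[name] || []) visit(dep);
    state[name] = 2;
    order.push(name);
  }

  Object.keys(graph).forEach(visit);
  return order;
}

// Example: two API services that both depend on a shared database service.
const order = deployOrder({
  'notes-api': ['database'],
  'users-api': ['database'],
  'database': [],
});
console.log(order); // → [ 'database', 'notes-api', 'users-api' ]
```

Rollback support would then mean replaying this order in reverse for every service that was updated before the failure.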

Yet none of these issues are covered in depth.


Hey @tommedema

I agree that the orchestration part is the hardest. Though we haven’t come across any robust solutions for this, so it doesn’t make sense to talk about them here.

The bulk of these chapters really just addresses how to use cross-stack references. But your list is something we are working on internally, and it is something we will try to address moving forward.

Btw, feel free to share any resources that you come across on this. I’m sure there are other folks that are looking for more details on this as well.

Thanks Jay for the explanation,

My best bet right now is to wait for serverless/components, but I’ve followed their development and must say that it is going very slowly, with much attention to detail still missing.

Honestly, I believe Serverless is in somewhat of a crisis at the moment, with a lack of attention to scalability. There has been lots of talk for the past 6–12 months but little development.


Yeah I agree. A lot more needs to be done for working with larger Serverless applications. This is something we have been looking at recently. We’ll reach out to you over email to take a deeper look into this.

Let’s set something up Jay.

I think one approach is to pivot, much like a startup.

Have you heard about AWS CDK? Read more about AWS CDK here:

It has some obvious advantages over the Serverless Framework, most notably that you can write your stack in code. E.g. you can programmatically define that a certain stack should be deployed prior to another one, and you can easily pass references from one to another. That is much easier than relying on hardcoded cross-stack references and then remembering whether the updated version of a referenced stack has already been deployed. I believe it has cross-stack rollback support too.
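To make that concrete, here is a rough sketch of what cross-stack wiring looks like in CDK (written against the v2 `aws-cdk-lib` API; the stack, table, and output names are invented for illustration):

```javascript
const cdk = require('aws-cdk-lib');
const dynamodb = require('aws-cdk-lib/aws-dynamodb');

class DatabaseStack extends cdk.Stack {
  constructor(scope, id, props) {
    super(scope, id, props);
    // Expose the table as a property so other stacks can reference it.
    this.table = new dynamodb.Table(this, 'NotesTable', {
      partitionKey: { name: 'userId', type: dynamodb.AttributeType.STRING },
      sortKey: { name: 'noteId', type: dynamodb.AttributeType.STRING },
    });
  }
}

class ApiStack extends cdk.Stack {
  constructor(scope, id, props) {
    super(scope, id, props);
    // Referencing props.table here makes CDK generate the cross-stack
    // export/import for us and deploy DatabaseStack before ApiStack —
    // no hardcoded export names to keep in sync.
    new cdk.CfnOutput(this, 'TableName', { value: props.table.tableName });
  }
}

const app = new cdk.App();
const db = new DatabaseStack(app, 'DatabaseStack');
new ApiStack(app, 'ApiStack', { table: db.table });
```

This is an infrastructure-definition sketch, not a drop-in file; the point is that the dependency between the stacks is derived from an ordinary object reference rather than maintained by hand.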


Oh I haven’t had a chance to play around with CDK yet. I’ll take a look.

I cloned the GitHub repository you mentioned on this page. When I deploy the services in the right order, I get different API endpoints for the notes and users services. This isn’t supposed to happen, right? I did nothing else but clone and deploy. Is there anything else I need to do?
Thank you

What do you mean different endpoints? Can you post the endpoints you are seeing?

I tried again today and got the same endpoints for the users and notes services. I don’t know what I was doing wrong previously. Sorry about this, and thanks!


Would you mind showing an example of the endpoints you deployed and how they should look when done correctly, please?

Just an FYI: your link for Seed in this chapter actually redirects to the landing page for the tutorial.


Good catch! Thanks! Just fixed it.

I am trying to build an app that uses multiple tables. When the client calls the ‘clients’ endpoint, I get a 500 status code back. I am not sure what is going on. I have been through the setup and everything looks good (although I could be missing something). Is it possible for more than one table to use the same endpoint? My deploy output looks like this:

GET -{id}
PUT -{id}
DELETE -{id}
GET -{id}
PUT -{id}
DELETE -{id}

createWorkorder: work-orders-api-dev-createWorkorder
getWorkorder: work-orders-api-dev-getWorkorder
listWorkorders: work-orders-api-dev-listWorkorders
updateWorkorder: work-orders-api-dev-updateWorkorder
deleteWorkorder: work-orders-api-dev-deleteWorkorder
createClient: work-orders-api-dev-createClient
getClient: work-orders-api-dev-getClient
listClients: work-orders-api-dev-listClients
updateClient: work-orders-api-dev-updateClient
deleteClient: work-orders-api-dev-deleteClient

workorders and clients each have their own table, and I have more tables to set up as well. Do these need to be separated out into completely separate services with their own endpoints?
And either way, how would you configure the front end to see the endpoints?

Auth: {...},
Storage: {...},
API: {
  endpoints: [
    {
      name: 'workorders',
      endpoint: config.apiGateway.URL,
      region: config.apiGateway.REGION
    },
    {
      name: 'clients',
      endpoint: config.apiGateway.URL,
      region: config.apiGateway.REGION
    }
  ]
}


client repo:
api repo:

I enabled the logs in Seed, and they showed I was missing the correct id for the request per the schema. I am rearranging the schema to have userId as the HASH key and clientId as the RANGE key, as it was the other way around. Deploying now.


Re-ordering the schema to have userId be the HASH, and clientId be the RANGE fixed it!
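For anyone hitting the same thing, a sketch of why the key order matters: the Key you pass to DynamoDB must match the table’s key schema exactly (the helper name and values below are made up for illustration):

```javascript
// With userId as the partition key (HASH) and clientId as the sort key
// (RANGE), a single-item read has to supply both attributes under those
// exact names. A mismatched key schema makes DynamoDB reject the call,
// which surfaces as the 500 from the Lambda.
function buildGetParams(tableName, userId, clientId) {
  return {
    TableName: tableName,
    Key: {
      userId,   // partition key (HASH)
      clientId, // sort key (RANGE)
    },
  };
}

// This object would then be passed to DynamoDB's get/DocumentClient call.
const params = buildGetParams('clients', 'user-123', 'client-456');
console.log(params.Key); // → { userId: 'user-123', clientId: 'client-456' }
```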


Glad you figured it out!