Link to chapter - https://serverless-stack.com/chapters/configure-cognito-identity-pool-in-serverless.html
Can we use Ref: ApiGatewayRestApi even though we've not defined ApiGatewayRestApi anywhere?
I should probably add a note on this. ApiGatewayRestApi is the logical name that Serverless Framework gives to the API Gateway resource it generates for the APIs defined in your serverless.yml.
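For example, that's what the Fn::Join in the auth role's policy is referencing when it builds the API's ARN. A rough sketch of just the Resource part used in this chapter:

Resource:
  Fn::Join:
    - ''
    - - 'arn:aws:execute-api:'
      - Ref: AWS::Region
      - ':'
      - Ref: AWS::AccountId
      - ':'
      - Ref: ApiGatewayRestApi
      - '/*'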
An Identity Pool seems to require an UnAuth role as well as an Auth Role. How is the UnAuth role handled in a .yml file?
Are you seeing any errors? We don't set the Unauth role. But we have this at the top: AllowUnauthenticatedIdentities: false.
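For reference, the top of the Identity Pool resource looks roughly like this (a sketch of this chapter's cognito-identity-pool resource; the logical names are the tutorial's):

CognitoIdentityPool:
  Type: AWS::Cognito::IdentityPool
  Properties:
    IdentityPoolName: ${self:custom.stage}IdentityPool
    # No unauthenticated (guest) identities, so no UnAuth role gets attached
    AllowUnauthenticatedIdentities: false
    CognitoIdentityProviders:
      - ClientId:
          Ref: CognitoUserPoolClient
        ProviderName:
          Fn::GetAtt: [CognitoUserPool, ProviderName]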
Thanks for the response. I have that line in my yaml file, but when I log in to the console and look at my identity pool, it still tells me that I need to attach an unauthorized role policy. I figured out how to add the unauth policy in the yaml file, but for future reference, is that warning in the AWS console something I can just ignore?
Oh yeah, you can just ignore it. Since we configure our infrastructure as code, you don't need to check the console.
How would I go about allowing unauthenticated read-only access to the files in the s3 attachments bucket? I still want only authenticated users to be able to upload files (as the current code has it), but I also want unauthenticated users to be able to fetch files from the s3 bucket. Thanks in advance.
So the URLs that we generate are publicly accessible AFAIK. Can you give that a try?
Yes, it appears the URLs are publicly accessible. In terms of actually generating those URLs though, I wouldn't want to use the aws-amplify "Storage.vault.get(…)" method (since it involves authentication), right? How would I go about it in an unauthenticated way?
UPDATE:
Got it to work by doing s3 = AWS.S3(…) with credentials I put in an .env file, and then s3.getSignedUrl(…) with the s3 bucket name and file key.
Glad you figured it out. Thanks for the update.
Once I deployed the Automation Serverless Backend API, I checked the AWS console and saw the identity pool error. After reading this post it looks like we can ignore it, correct?
Yeah you should be okay.
Is there a reason that the IdentityPoolName value uses upper camel / Pascal case (IdentityPoolName: ${self:custom.stage}IdentityPool), whereas the UserPoolName is hyphen-separated (UserPoolName: ${self:custom.stage}-user-pool)?
Sorry, I know this is kind of nitpicky, but I was just wondering if it was a style convention for each of these resources.
Hi, first of all, I'm a huge fan of this tutorial, so thank you very much. I'm curious if I can share a cognito-identity-pool among two API Gateways. I created an app using this tutorial, and now, to add more features, I want to add a microservice with a DynamoDB table and APIs to write to it.
For that, I copied the directory of the serverless tutorial backend, stripped out the Cognito and S3 resources, and kept only the new DynamoDB resource and the api-gateway-errors YAML file. When I use aws-api-gateway-cli-test for this new service while specifying the user-pool-id of the old app, I get an error like this:
{ status: 403,
  statusText: 'Forbidden',
  data:
   { message:
      'User: arn:aws:sts::259875073853:assumed-role/medbuddy-api-dev-CognitoAuthRole-1UD5GTTQXS920/CognitoIdentityCredentials is not authorized to perform: execute-api:Invoke on resource: arn:aws:execute-api:us-east-1:********3853:qd9g63v59g/dev/POST/medications' } }
I came to the conclusion that I have to specify somewhere in the new microservice's serverless or CloudFormation files to connect to the existing user-pool-id of the previous stack, but I can't figure out how to do that. I would really appreciate some help. Thank you.
No there isn't really. You can name them any which way you like.
Thanks!
In the original Identity Pool, we allow access to our API Gateway by doing this:
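Roughly, it's this statement in the CognitoAuthRole's policy (sketched from the chapter; your logical IDs may differ):

- Effect: 'Allow'
  Action:
    - 'execute-api:Invoke'
  Resource:
    Fn::Join:
      - ''
      - - 'arn:aws:execute-api:'
        - Ref: AWS::Region
        - ':'
        - Ref: AWS::AccountId
        - ':'
        - Ref: ApiGatewayRestApi
        - '/*'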
You’ll need to do something similar for your new API.
Though I would question if you need a completely new endpoint or if you can add another service but use the same endpoint. Say /notes and /users or something.
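For instance, both sets of functions could live in the same serverless.yml and share one API endpoint, along these lines (a rough sketch; the handler paths and function names are just placeholders):

functions:
  createNote:
    handler: notes/create.main
    events:
      - http:
          path: notes
          method: post
          cors: true
          authorizer: aws_iam
  createUser:
    handler: users/create.main
    events:
      - http:
          path: users
          method: post
          cors: true
          authorizer: aws_iam

Since both paths sit behind the same API Gateway REST API, the existing execute-api:Invoke policy with the '/*' resource covers them.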
I’ve decided adding another endpoint is way easier to work with, thanks for the tip.
One issue I'm having with the Serverless stack, though: every time I make changes to the back end (APIs), I have to redeploy the whole stack and all the existing user pools and databases become obsolete. I can't imagine this being the case in a production environment. How do you transfer all the users and their content onto the new serverless stack? Or should I be modifying my update process so that I don't overwrite the existing stack? (In that case, how do I do that? With seed.run I feel like I can't edit the serverless deploy call, so I'm stuck with rewriting the whole existing stack.)
What do you mean obsolete? If you change the service name in your serverless.yml it'll create a completely new stack. Otherwise it should simply update your existing stack.
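The CloudFormation stack name is derived from the service name and the stage, so something like this (names here are just examples):

service: notes-api

provider:
  name: aws
  stage: dev   # the stack gets deployed as notes-api-dev

As long as the service name and stage stay the same, serverless deploy updates the existing stack in place instead of creating a new one.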
What has been happening to me is that every time I change something in the lambdas, the serverless.yml, or the resources, it tries to create a totally new stack and complains that I have a DynamoDB table or S3 bucket with the same name. After I delete those resources, it creates a completely new stack. What do you think I could be doing wrong that it doesn't update the stack but rewrites it?