Most databases (MySQL, Postgres, etc.) typically work over a persistent connection, using a connection string to reach a server.
I am working with Lambda and DynamoDB, and to my understanding DynamoDB is serverless: there is no persistent connection, just HTTP calls. I can't see anywhere that I point the Lambda at the DynamoDB table (other than in the IAM policy, giving it access to said table). Does it infer the table from the IAM policy, or do I need to point to it elsewhere?
The DynamoDB table name is provided as an input when you make a request. That's how Lambda knows which DynamoDB table to connect to.
Most people use the AWS SDK to connect to DynamoDB. The table name is an input to the client.
If you want to connect by making HTTP calls yourself, you'd be using the low-level API. You still provide the table name in the request; see the low-level API documentation for details.
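For example, a minimal boto3 sketch (the table name, region, and key schema below are placeholders):
import boto3

# The table name travels with every request; there is no connection string.
dynamodb = boto3.client('dynamodb', region_name='us-east-1')
response = dynamodb.get_item(
    TableName='my-table',          # placeholder table name
    Key={'pk': {'S': 'user#123'}}  # placeholder key; must match the table's key schema
)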
You would typically pass the table name to the Lambda as an environment variable*. The table name is then used in the DynamoDB SDK client to perform operations on the table.
* You could also retrieve the table name at runtime from Systems Manager Parameter Store. But because the table name is a fixed value, the environment variable approach is better.
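A minimal sketch of that pattern, assuming the variable is named TABLE_NAME and the table's partition key is pk (both assumptions):
import os
import boto3

# The table name comes from the function's configuration, not a connection string.
TABLE_NAME = os.environ['TABLE_NAME']  # assumed variable name
table = boto3.resource('dynamodb').Table(TABLE_NAME)

def handler(event, context):
    table.put_item(Item={'pk': 'example'})  # placeholder item
    return {'statusCode': 200}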
You're partly correct, in that it infers the account number and permissions needed from the IAM policy attached to your Lambda's execution role.
And you're also correct that DynamoDB is an HTTP/HTTPS-based connection, which uses short-lived connections rather than the long-lived ones you may be more familiar with.
Ultimately it comes down to how you configure your DynamoDB client. It's through the parameters you pass to your SDK's client that Lambda knows how to connect to that table. Two important parameters here are the region name and the table name:
import boto3

table_name = "my-table"  # e.g. read from an environment variable
session = boto3.session.Session(region_name="eu-west-1")
client = session.client('dynamodb')
response = client.scan(
    TableName=table_name
)
Above is an example of a Scan in Python, where the region is set on the session and the table name is passed as a parameter to the scan call.
Most people tend to store their table names as Lambda environment variables, which allows you to set the table's name at creation time, through infrastructure as code for example.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStartedDynamoDB.html
So I got access to the SimilarWeb ranking API from AWS (https://aws.amazon.com/marketplace/pp/prodview-clsj5k4afj4ma?sr=0-1&ref_=beagle&applicationId=AWSMPContessa).
I'm not able to figure out how to pass the authentication or how to make a request to retrieve the ranks for domains.
For example, how would you make the request for this URL in Python?
URL: https://api-fulfill.dataexchange.us-east-1.amazonaws.com/v1/v1/similar-rank/amazon.com/rank
This particular product does not seem to be available any longer. Generally speaking, an AWS IAM principal with the correct IAM permissions can make API calls against AWS Data Exchange for APIs endpoints. The payload of the API call needs to adhere to the OpenAPI spec defined within the DataSet of the product used. The specific API call is 'SendApiAsset'. The easiest way to think about it is to read the boto3 documentation for it, here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dataexchange.html#DataExchange.Client.send_api_asset
Other AWS SDKs have the same call, idiomatic to the specific language.
The managed policy that describes the needed IAM permissions is named AWSDataExchangeSubscriberFullAccess; the Data Exchange-specific permission needed is 'dataexchange:SendApiAsset'.
The awscli way of making the call is described here: https://docs.aws.amazon.com/cli/latest/reference/dataexchange/send-api-asset.html
The required parameters are: asset-id, data-set-id, revision-id. You will likely also need to provide values for: method and body (and perhaps others, depending on the specific API you are calling).
The content of the 'body' parameter needs to adhere to the OpenAPI spec of the actual dataset provided as part of the product.
You can obtain the values for asset-id, data-set-id and revision-id from the AWS Data Exchange service web console describing the product/dataset.
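Putting that together in boto3, a sketch (all IDs, the method, and the path below are placeholders you would copy from the console and the product's OpenAPI spec):
import boto3

client = boto3.client('dataexchange', region_name='us-east-1')

# All three IDs are placeholders taken from the product's dataset description.
response = client.send_api_asset(
    DataSetId='your-data-set-id',
    RevisionId='your-revision-id',
    AssetId='your-asset-id',
    Method='GET',
    Path='/v1/similar-rank/amazon.com/rank',  # must match the OpenAPI spec
)
print(response['Body'])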
We are working on FHIR(Fast Healthcare Interoperability Resources).
We have followed “FHIR works on AWS” and deployed the Cloud Formation template given by AWS in our AWS environment. Following is the template that we have deployed.
https://docs.aws.amazon.com/solutions/latest/fhir-works-on-aws/aws-cloudformation-template.html
Requirement: we want to maintain client-specific/customized IDs as the primary key in the server.
Problem: the server is not allowing us to override or maintain client-specific (customized) IDs as the primary key. In fact, at runtime it generates its own IDs and ignores the IDs provided by us.
Could you please let us know if there is any way to post a FHIR resource with client-specific IDs into the FHIR server (DynamoDB)?
We have observed that by using a "PUT" call (https://hl7.org/fhir/http.html#upsert) we might be able to create the resource with customized IDs as primary keys, but there is a precondition that the "CapabilityStatement.rest.resource.updateCreate" flag be set to "true".
Is there any way to update the "CapabilityStatement.rest.resource.updateCreate" flag through the AWS console or by any manual process?
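For reference, the upsert itself is a plain PUT with the client-supplied id in both the URL and the resource body. A sketch (the endpoint and token are placeholders, and it only works once updateCreate is enabled on the server):
import requests

# Placeholder values: your API Gateway endpoint and a valid Cognito access token.
BASE_URL = 'https://example.execute-api.us-west-2.amazonaws.com/dev'
headers = {'Authorization': 'Bearer <access-token>',
           'Content-Type': 'application/fhir+json'}

# The id in the URL and in the resource body must match.
patient = {'resourceType': 'Patient', 'id': 'client-id-123'}
response = requests.put(f'{BASE_URL}/Patient/client-id-123',
                        json=patient, headers=headers)
print(response.status_code)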
I have to create an app where each user has his own database and can log in on his own subdomain, but all users use the same API endpoints (Lambda functions).
The API is in Node.js, the frontend in Angular 7.
Is it feasible? Can you give me instructions on how to configure AWS for it?
AWS has little role to play here; your Node.js API and Lambda function design will handle this.
I did this a couple of years ago. I used a key called ClientID which is passed into every request. You can use anything as long as it's unique.
In your APIs, when you initialize your database, use that identifier to map your database connection. (This means you have to initialize the database on each request.)
e.g.
User A -> Database A
User B -> Database B
etc...
However, I encountered an issue when doing this:
when you update database A, you have to update database B as well.
(what happens when you have 100 databases?)
But it's doable.
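A sketch of that per-request mapping (in Python for brevity; the header and table naming scheme are assumptions):
import boto3

dynamodb = boto3.resource('dynamodb')

def handler(event, context):
    # The ClientID travels with every request, e.g. as a header or token claim.
    client_id = event['headers']['x-client-id']      # assumed header name
    # Map the client to its own table, initialized per request.
    table = dynamodb.Table(f'app-data-{client_id}')  # assumed naming scheme
    item = table.get_item(Key={'pk': 'profile'}).get('Item')  # placeholder key
    return {'statusCode': 200, 'body': str(item)}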
I have a requirement to build a basic "3 failed login attempts and your account gets locked" functionality. The project uses AWS Cognito for Authentication, and the Cognito PreAuth and PostAuth triggers to run a Lambda function look like they will help here.
So the basic flow is to increment a counter in the PreAuth Lambda, check it and block login there, or reset the counter in the PostAuth Lambda (so successful logins don't end up locking the user out). Essentially it boils down to:
PreAuth Lambda:
    if failed-login-count > LIMIT:
        block login
    else:
        increment failed-login-count

PostAuth Lambda:
    reset failed-login-count to zero
Now at the moment I am using a dedicated DynamoDB table to store the failed-login-count for a given user. This seems to work fine for now.
Then I figured it'd be neater to use a custom attribute in Cognito (using CognitoIdentityServiceProvider.adminUpdateUserAttributes) so I could throw away the DynamoDB table.
However reading https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-dg.pdf the section titled "Configuring User Pool Attributes" states:
Attributes are pieces of information that help you identify individual users, such as name, email, and phone number. Not all information about your users should be stored in attributes. For example, user data that changes frequently, such as usage statistics or game scores, should be kept in a separate data store, such as Amazon Cognito Sync or Amazon DynamoDB.
Given that the counter will change on every single login attempt, the docs would seem to indicate I shouldn't do this...
But can anyone tell me why? Or if there would be some negative consequence of doing so?
As far as I can see, Cognito billing is purely based on storage (i.e. number of users), and not operations, whereas Dynamo charges for read/write/storage.
Could it simply be AWS not wanting people to abuse Cognito as a storage mechanism? Or am I being daft?
We are dealing with a similar problem, and the main reason we decided to store extra attributes in a DB is that Cognito has quotas for all its actions, and "AdminUpdateUserAttributes" is limited to 25 per second.
More information here:
https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html
So if you have a pool with 100k users or more, it can create a bottleneck if you want to update a Cognito user record with every login, etc.
Cognito UserAttributes are meant to store information about the users. This information can then be read from the client using the AWS Cognito SDK, or just by decoding the idToken on the client-side. Every custom attribute you add will be visible on the client-side.
Other downsides of custom attributes:
You only have 25 values to set
They cannot be removed or changed once added to the user pool.
I have personally used custom attributes and the interface to manipulate them is not excellent. But that is just a personal thought.
If you want to store this information and not depend on DynamoDB, you can use Amazon Cognito Sync. Besides the service, it offers a client with great features that you can incorporate into your app.
AWS DynamoDB appears to be your best option; it is commonly used for such use cases. Some of the benefits of using it:
You can store a separate record for each login attempt with as much info as you want, such as IP address, location, user agent, etc. You can also add a datetime that the pre-auth Lambda can use to query by time range, for example failed attempts within the last 30 minutes (see the sketch below).
You don't need to manage cleanup of the table, because you can set a TTL on each DynamoDB record so that it is deleted automatically after a specified time.
You can also archive items in S3.
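A sketch of the pre-auth side of that approach, assuming a table named failed-logins with partition key username, sort key attempt_at, and TTL enabled on an attribute named ttl (all assumptions):
import time
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('failed-logins')  # assumed table name
LIMIT = 3
WINDOW_SECONDS = 30 * 60

def pre_auth_handler(event, context):
    username = event['userName']
    now = int(time.time())
    # Count attempts within the window; TTL removes old records for us.
    recent = table.query(
        KeyConditionExpression=Key('username').eq(username)
        & Key('attempt_at').gte(now - WINDOW_SECONDS)
    )
    if recent['Count'] >= LIMIT:
        raise Exception('Account temporarily locked')  # fails the sign-in
    # Record this attempt; the post-auth trigger would clear these on success.
    table.put_item(Item={'username': username, 'attempt_at': now,
                         'ttl': now + WINDOW_SECONDS})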
Simple question, but I suspect it doesn't have a simple or easy answer. Still, worth asking.
We're creating an implementation for push notifications using AWS, with our web server running on EC2, sending messages to a queue on SQS, which is processed by Lambda and finally sent to SNS to be delivered to the iOS/Android apps.
The question I have is this: is there a way to query SNS endpoints based on the custom user data that you can provide on creation? The only way I see to do this so far is to list all the endpoints in a given platform application and then search through that list for the user data I'm looking for; a more direct approach would be far better.
Why I want to do this is simple: if I could attach a user identifier to these device endpoints, and query based on that, I could completely avoid having to save the ARN to our DynamoDB database. It would save a lot of implementation time and complexity.
Let me know what you guys think, even if what you think is that this idea is impractical and stupid, or if searching through all of them is the best way to go about this!
Cheers!
There isn't the ability to have a "where" clause in ListTopics. I see two possibilities:
Create a new SNS topic per user that has some identifiable id in it. So, for example, the ARN would be something like "arn:aws:sns:us-east-1:123456789:known-prefix-user-id" (see the sketch below). The obvious downside is that you have the potential for a boatload of SNS topics.
Use a service designed for this type of usage like PubNub. Disclaimer - I don't work for PubNub or own stock but have successfully used it in multiple projects. You'll be able to target one or many users this way.
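For the first option, create_topic is idempotent, so a sketch like this both creates and looks up the per-user topic (the name prefix is an assumption):
import boto3

sns = boto3.client('sns')

def topic_arn_for_user(user_id):
    # create_topic returns the existing ARN if the topic already exists.
    response = sns.create_topic(Name=f'known-prefix-{user_id}')  # assumed prefix
    return response['TopicArn']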
According to the AWS documentation, if you try to create a new platform endpoint with the same user data, you should get a response with an exception including the ARN associated with the existing platform endpoint.
It's definitely not ideal, but it would be a roundabout way of querying the user data endpoint attributes via exception.
//Query CustomUserData by exception
CreatePlatformEndpointRequest cpeReq = new CreatePlatformEndpointRequest()
        .withPlatformApplicationArn(applicationArn)
        .withToken("dummyToken")
        .withCustomUserData("username");
CreatePlatformEndpointResult cpeRes = client.createPlatformEndpoint(cpeReq);
You should get an exception containing the ARN if an endpoint with the same CustomUserData exists.
Then you just use that ARN and away you go.