Access current environment name for a NextJS app running on Amplify

I have added a few tables in DynamoDB using the amplify add storage command.
But the tables have a suffix that is the environment name (dev, prod, etc.).
How can I access the environment name in my NextJS backend so I can suffix the DynamoDB table names in my code?
Or is there another way to achieve what I want?

Amplify automatically creates DynamoDB tables (and also AppSync queries, etc.) to match your current Amplify environment. When you create a new environment (e.g. 'prod'), Amplify will automatically create a duplicate set of tables that behave the same as your 'dev' tables. I'm guessing that in your case you won't need to access environment variables.
If you are using AppSync/GraphQL to make calls, then you can use Amplify's built-in dynamic environment features described here: https://docs.amplify.aws/cli-legacy/graphql-transformer/function/#usage
For example, you could set up a custom Lambda function to update your DynamoDB. You could then set up an AppSync call to that Lambda in your schema.graphql file.
There are some cases where you may need to access your environment variables. You can either set them up manually in .env.local, or, possibly easier, check the current domain in your NextJS JavaScript:
const origin =
  typeof window !== "undefined" && window.location.origin
    ? window.location.origin
    : "";
console.log(origin); // "https://dev.<>.amplifyapp.com"
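If you do go the environment-variable route, a minimal sketch of suffixing the table name in NextJS server-side code might look like this (the AMPLIFY_ENV variable name and the Post base table name are hypothetical, and the AWS SDK v2 DocumentClient is just one way to reach DynamoDB):

import { DynamoDB } from "aws-sdk";

// Hypothetical: expose the environment name yourself, e.g. via .env.local
// or an environment variable configured in the Amplify console.
const env = process.env.AMPLIFY_ENV ?? "dev";
const tableName = `Post-${env}`; // e.g. "Post-dev" or "Post-prod"

const documentClient = new DynamoDB.DocumentClient();

// Example server-side helper (API route / getServerSideProps) reading from the
// environment-specific table.
export async function getPost(id: string) {
  const result = await documentClient
    .get({ TableName: tableName, Key: { id } })
    .promise();
  return result.Item;
}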
A better solution would be to follow this Amplify documentation, except I've tried it and it doesn't work.
I've explored each item in the left nav panel and there is no sign of the described Environment Variables section.
It describes accessing/updating env vars there, but apparently you can only find/use this feature if you've connected your Amplify app to GitHub first. (It would have been nice if the docs had clarified this!)

Related

AWS `amplify push` creates DynamoDB table names which contain suffixes

I recently ran amplify init and amplify push to migrate an Amplify application from one account to another. I am on the receiving end of the migration. amplify push seems successful. However, the DynamoDB table names are a bit weird. They carry suffixes in this format: <TableName>-<26-char random string>-staging. The -<26-char random string>-staging part is the same for all tables. Does this mean the application export part went wrong, or is there something that I need to prune / tune before amplify push?

List all LogGroups using CDK

I am quite new to the CDK, but I'm adding a LogQueryWidget to my CloudWatch Dashboard through the CDK, and I need a way to add all LogGroups ending with a suffix to the query.
Is there a way to either loop through all existing LogGroups and find the ones with the correct suffix, or a way to search through LogGroups?
const queryWidget = new LogQueryWidget({
  title: "Error Rate",
  logGroupNames: ['/aws/lambda/someLogGroup'],
  view: LogQueryVisualizationType.TABLE,
  queryLines: [
    'fields #message',
    'filter #message like /(?i)error/'
  ],
})
Is there any way I can make it so that logGroupNames contains all LogGroups that end with a specific suffix?
You cannot do that dynamically (i.e. you can't make this work such that if you add a new LogGroup, the query automatically adjusts) without using something like an AWS Lambda that periodically updates your Log Query.
However, because CDK is just code, there is nothing stopping you from making an AWS SDK API call inside the code to retrieve all the log groups (see https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudWatchLogs.html#describeLogGroups-property) and then populating logGroupNames accordingly.
That way, when the CDK app is synthesized, your code will make an API call to fetch the LogGroups, and the generated CloudFormation will contain the log groups you need. Note that this list will only be updated when you re-synthesize and re-deploy your stack.
Finally, note that there is a limit on how many Log Groups you can query with Logs Insights (20, according to https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html).
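A rough sketch of that approach (not from the original answer): look up the log group names with the AWS SDK for JavaScript v2 while the CDK app is being synthesized, then hand the matching names to the LogQueryWidget. It assumes aws-cdk-lib v2 and that AWS credentials are available when cdk synth runs; the suffix is made up.

import { CloudWatchLogs } from 'aws-sdk';
import { LogQueryWidget, LogQueryVisualizationType } from 'aws-cdk-lib/aws-cloudwatch';

async function logGroupNamesWithSuffix(suffix: string): Promise<string[]> {
  const logs = new CloudWatchLogs();
  const names: string[] = [];
  let nextToken: string | undefined;
  do {
    // describeLogGroups is paginated, so keep following nextToken
    const page = await logs.describeLogGroups({ nextToken }).promise();
    for (const group of page.logGroups ?? []) {
      if (group.logGroupName && group.logGroupName.endsWith(suffix)) {
        names.push(group.logGroupName);
      }
    }
    nextToken = page.nextToken;
  } while (nextToken);
  return names;
}

// Called from an async entry point before the stack is synthesized, e.g.:
// const names = await logGroupNamesWithSuffix('-errors');   // hypothetical suffix
// const queryWidget = new LogQueryWidget({
//   title: 'Error Rate',
//   logGroupNames: names.slice(0, 20), // Logs Insights allows at most 20 log groups per query
//   view: LogQueryVisualizationType.TABLE,
//   queryLines: ['fields #message', 'filter #message like /(?i)error/'],
// });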
If you want to achieve this as part of the deployment instead, you can create a custom resource using the AwsCustomResource and AwsSdkCall classes to make the AWS SDK API call (as mentioned by @Tofig above). You can read data from the API call response as well and act on it as you want.
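For completeness, a rough sketch of that AwsCustomResource variant (assuming aws-cdk-lib v2; the construct id and log group prefix are made up). Note that getResponseField only exposes individual flattened fields of the response as deploy-time tokens, so filtering a whole list by suffix still has to happen elsewhere, for example in a small Lambda.

import { custom_resources as cr } from 'aws-cdk-lib';
import { Construct } from 'constructs';

function lookupLogGroups(scope: Construct): cr.AwsCustomResource {
  // One SDK call, executed by the custom resource's Lambda during deployment
  const describeCall: cr.AwsSdkCall = {
    service: 'CloudWatchLogs',
    action: 'describeLogGroups',
    parameters: { logGroupNamePrefix: '/aws/lambda/' }, // hypothetical prefix
    physicalResourceId: cr.PhysicalResourceId.of('DescribeLogGroupsCall'),
  };

  return new cr.AwsCustomResource(scope, 'DescribeLogGroups', {
    onCreate: describeCall,
    onUpdate: describeCall, // re-run the call on every deployment update
    policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
      resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
    }),
  });
}

// Individual fields of the response can then be referenced as tokens, e.g.:
// const firstName = lookupLogGroups(stack).getResponseField('logGroups.0.logGroupName');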

How to reset Production Amplify DB

I am still in the development stage of a site and I would like to clean the current live DB completely as well as flush the previous schemas, as I am making so many changes to the schemas that I am no longer able to amplify push.
Is this possible with AWS Amplify?
An error occurred during the push operation: Attempting to edit the global secondary index byDivision on the DivisionTable table in the Division stack.
You can simply run:
amplify api remove
amplify api push
so it will remove the API (including the DynamoDB tables). Make sure you first check out the correct environment.
After that you need to add the API once again using:
amplify api add
However, if you want to create a clean environment from scratch, then simply run:
amplify env remove <env_name>
amplify init
This will first remove the environment and then recreate it. It takes a while, but it works fine!

AWS Amplify filter for #searchable annotation

Currently I am using a DynamoDB instance for my social media application. While designing the schema I stuck to the "one table" rule, so I am putting all data in the same table: posts, users, comments, etc. Now I want to make flexible queries for my data. I found out that I could use the #searchable annotation to create an Elasticsearch instance for a table which is annotated with #model.
In my GraphQL schema I only have one #model, since I only have one table. My problem now is that I don't want to make everything in the table searchable, since that would most likely be very expensive. There is some data which doesn't have to be added to the Elasticsearch instance (for example comment-related data). How can I handle this? Do I really have to split my schema into multiple tables to be able to manage the #searchable annotation? Couldn't I decide whether a row should be stored in Elasticsearch with the help of the partition key / primary key, acting like a filter?
The current implementation of the amplify-cli uses a predefined Python Lambda that is added once you add the #searchable directive to one of your models.
The Lambda code cannot be edited and currently there is no option to define a custom Lambda; you can read about it here:
https://github.com/aws-amplify/amplify-cli/issues/1113
https://github.com/aws-amplify/amplify-cli/issues/1022
If you want a custom Lambda where you can filter what goes to the Elasticsearch instance, you can follow the steps described here: https://github.com/aws-amplify/amplify-cli/issues/1113#issuecomment-476193632
The closest you can get is by creating a template in amplify\backend\api\myapiname\stacks\ where you can manage all the resources related to Elasticsearch. A good starting point is to:
1. Add #searchable to one of your models in the schema.graphql
2. Run amplify api gql-compile
3. Copy the generated template from the build folder, \amplify\backend\api\myapiname\build\stacks\SearchableStack.json, to amplify\backend\api\myapiname\stacks\
4. Remove the #searchable directive from the model added in step 1
5. Start editing your new template copied in step 3
6. Add a Lambda and use it in the template as the resolver for the DynamoDB Stream
Using this approach will give you total control of the resources related to the Elasticsearch service, but it will also require you to do it all on your own.
Or just go with creating a table for each model.
Hope it helps
It is now possible to override the generated streaming function code as well.
Thanks to AWS Support for the information provided. I left a message on the related GitHub issue as well: https://github.com/aws-amplify/amplify-category-api/issues/437#issuecomment-1351556948
All you need to do is run:
amplify override api
then edit the generated override.ts and change the code via resources.opensearch.OpenSearchStreamingLambdaFunction.code:
// override.ts as generated by `amplify override api`
import { AmplifyApiGraphQlResourceStackTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyApiGraphQlResourceStackTemplate) {
  resources.opensearch.OpenSearchStreamingLambdaFunction.functionName = 'python_streaming_function';
  resources.opensearch.OpenSearchStreamingLambdaFunction.handler = 'index.lambda_handler';
  resources.opensearch.OpenSearchStreamingLambdaFunction.code = {
    zipFile: `
# python streaming function customized code goes here
`,
  };
}
Resources:
[1] https://docs.amplify.aws/cli/graphql/override/#customize-amplify-generated-resources-for-searchable-opensearch-directive
[2] AWS::Lambda::Function Code - Properties - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html#aws-properties-lambda-function-code-properties

InvalidSignatureException when using boto3 for DynamoDB on AWS

I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine and I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION and then
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to live creds in the env variables and setting the endpoint_url to the DynamoDB on AWS, this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid, as they are used in a different app which talks to the same DynamoDB. I've also tried not using env variables but passing the credentials directly to the method, but the error persisted. Furthermore, to avoid any issues with trailing spaces I've even used the credentials directly in the code. I'm using Python v3.4.4.
Is there maybe a header that also should be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure there are only alphanumeric characters) but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
It's a sign that your system time is out of sync. Maybe you can check your:
1. Time zone
2. Time settings.
If your time settings are off, you should fix them.
"sudo hwclock --hctosys" should do the trick.
Just wanted to point out that when accessing DynamoDB from a C# environment (using the AWS .NET SDK) I ran into this error, and the way I solved it was to create a new pair of AWS access/secret keys.
It worked immediately after I changed those keys in the code.