How to get all log group names - amazon-web-services

I have a lambda that exports all of our log groups to S3, and I am currently using cloudwatchlogs.describeLogGroups to list all of our log groups.
const logGroupsResponse = await cloudwatchlogs.describeLogGroups({ limit: 50 })
The issue is that we have 69 log groups. Is there any way to list the IDs and names of absolutely all log groups in an AWS account? I see it's possible to have up to 1000 log groups.
How come cloudwatchlogs.describeLogGroups only allows a limit of 50, which is very small?

Assuming that you are using AWS JS SDK v2, the describeLogGroups API provides a nextToken in its response and also accepts a nextToken in its request. This token is used for retrieving more than 50 log groups by sending multiple requests. We can use the following pattern to accomplish this:
const AWS = require('aws-sdk');

const cloudwatchlogs = new AWS.CloudWatchLogs({ region: 'us-east-1' });

let nextToken = null;
do {
  const logGroupsResponse = await cloudwatchlogs.describeLogGroups({ limit: 50, nextToken: nextToken }).promise();
  // Do something with the retrieved log groups
  console.log(logGroupsResponse.logGroups.map(group => group.arn));
  // Get the next token. If there are no more log groups, the token will be undefined
  nextToken = logGroupsResponse.nextToken;
} while (nextToken);
We are querying the AWS API in a loop until there are no more log groups left.
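If you are on AWS JS SDK v3 instead, the client packages ship paginator helpers that hide the token handling. A rough equivalent sketch, assuming the @aws-sdk/client-cloudwatch-logs package and its paginateDescribeLogGroups helper:
const { CloudWatchLogsClient, paginateDescribeLogGroups } = require('@aws-sdk/client-cloudwatch-logs');

const client = new CloudWatchLogsClient({ region: 'us-east-1' });
const logGroupNames = [];
// the paginator keeps calling describeLogGroups until no nextToken is returned
for await (const page of paginateDescribeLogGroups({ client }, { limit: 50 })) {
  for (const group of page.logGroups ?? []) {
    logGroupNames.push(group.logGroupName);
  }
}
console.log(logGroupNames);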

Related

Get generated API key from AWS AppSync API created with CDK

I'm trying to access data from my stack where I'm creating an AppSync API. I want to be able to use the generated stack's url and apiKey, but I'm running into issues with them being encoded/tokenized.
In my stack I'm setting some fields to the outputs of the deployed stack:
this.ApiEndpoint = graphAPI.url;
this.Authorization = graphAPI.graphqlApi.apiKey;
When trying to access these properties I get something like ${Token[TOKEN.209]} and not the values.
If I'm trying to resolve the token like so: this.resolve(graphAPI.graphqlApi.apiKey) I instead get { 'Fn::GetAtt': [ 'AppSyncAPIApiDefaultApiKey537321373E', 'ApiKey' ] }.
But I would like to retrieve the key itself as a string, like da2-10lksdkxn4slcrahnf4ka5zpeemq5i.
How would I go about actually extracting the string values for these properties?
The actual values of such Tokens are available only at deploy-time. Before then you can safely pass these token properties between constructs in your CDK code, but they are opaque placeholders until deployed. Depending on your use case, one of these options can help retrieve the deploy-time values:
If you define a CloudFormation Output for a variable, CDK will (apart from creating it in CloudFormation) print its value to the console after cdk deploy, and optionally write it to a JSON file you pass with the --outputs-file flag.
// AppsyncStack.ts
new cdk.CfnOutput(this, 'ApiKey', {
  value: this.api.apiKey ?? 'UNDEFINED',
  exportName: 'api-key',
});

// at deploy-time, if you use a flag: --outputs-file cdk.outputs.json
{
  "AppsyncStack": {
    "ApiKey": "da2-ou5z5di6kjcophixxxxxxxxxx",
    "GraphQlUrl": "https://xxxxxxxxxxxxxxxxx.appsync-api.us-east-1.amazonaws.com/graphql"
  }
}
Alternatively, you can write a script to fetch the data post-deploy using the listGraphqlApis and listApiKeys commands from the appsync JS SDK client. You can run the script locally or, for advanced use cases, wrap the script in a CDK Custom Resource construct for deploy-time integration.
Thanks to @fedonev I was able to extract the API key and url like so:
import { AppSyncClient, ListGraphqlApisCommand, ListApiKeysCommand } from "@aws-sdk/client-appsync";

const client = new AppSyncClient({ region: "eu-north-1" });
const command = new ListGraphqlApisCommand({ maxResults: 1 });
const res = await client.send(command);
if (res.graphqlApis) {
  const apiKeysCommand = new ListApiKeysCommand({
    apiId: res.graphqlApis[0].apiId,
  });
  const apiKeyResponse = await client.send(apiKeysCommand);
  // flatMap and sendSlackMessage are our own helpers (flatMap e.g. from lodash)
  const urls = flatMap(res.graphqlApis[0].uris);
  if (apiKeyResponse.apiKeys && res.graphqlApis[0].uris) {
    sendSlackMessage(urls[1], apiKeyResponse.apiKeys[0].id || "");
  }
}

Uploading an item to an Amazon S3 bucket from React Native with the user's info

I am uploading an image on AWS S3 using React Native with AWS amplify for mobile app development. Many users use my app.
Whenever a user uploads an image to S3 through the mobile app, I also want to capture that user's ID along with the image, so that later I can tell which image on S3 belongs to which user. How can I achieve this?
I am using AWS Auth Cognito for user registration/sign-in. I came to know that whenever a user is registered in AWS Cognito (for the first time), the user gets a unique ID in the pool. Can I pass this user ID along with the image whenever the user uploads one?
Basically I want some form of functionality that lets me track back to the user who uploaded the image to S3. This is because after the image is uploaded to S3, I later want to process it and send the result back ONLY to the user who uploaded it.
You can store the data in S3 in a structure similar to the one below:
users/
  123userId/
    image1.jpeg
    image2.jpeg
  anotherUserId456/
    image1.png
    image2.png
Then, if you need all files from a given user, you can use the ListObjectsV2 API in an S3 Lambda - docs here
// for lambda function
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const objects = await s3.listObjectsV2({
  Bucket: 'BUCKET_NAME',      /* required */
  Prefix: 'users/123userId/', /* limit the listing to one user's folder */
  // other optional parameters: Delimiter, EncodingType: 'url', ExpectedBucketOwner,
  // MaxKeys, ContinuationToken (for paging), RequestPayer: 'requester'
}).promise();

// the listed objects are returned in the Contents array
objects.Contents.forEach(item => {
  console.log(item);
});
Or, if you are using an S3 Lambda trigger, you can parse the userId from the "key" / filename in the received event in the S3 Lambda (in case you used the structure above), as sketched after the sample event below.
{
  "key": "public/users/e1e0858f-2ea1-90f892b68e0c/item.jpg",
  "size": 269582,
  "eTag": "db8aafcca5786b62966073f59152de9d",
  "sequencer": "006068DC0B344DA9E9"
}
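A minimal sketch of that parsing inside the trigger handler, assuming the public/users/<userId>/<filename> layout from above and the standard S3 notification record shape:
exports.handler = async (event) => {
  for (const record of event.Records) {
    // object keys arrive URL-encoded in S3 events
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    // ["public", "users", "<userId>", "<filename>"]
    const [, , userId, filename] = key.split('/');
    console.log(`User ${userId} uploaded ${filename}`);
  }
};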
Another option is to write the "userId" into the metadata of the file that will be uploaded to S3.
You can pass the "sub" property from Cognito's currently logged-in user, so in the S3 Lambda trigger function you will get the userId from the metadata.
import Auth from "@aws-amplify/auth";

const user = await Auth.currentUserInfo();
const userId = user.attributes.sub;

// use userId from Cognito and put it into custom metadata
import { Storage } from "aws-amplify";

const userId = "userIdHere";
const filename = "filename"; // or use uuid()
const ref = `public/users/${userId}/${filename}`;
const response = await Storage.put(ref, blob, {
  contentType: "image/jpeg",
  metadata: { userId: userId },
});
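On the receiving side, the S3 event itself does not include custom metadata, so a sketch like the following (assuming the aws-sdk v2 S3 client) could read it back inside the trigger Lambda; note that S3 returns user metadata with lowercased keys:
// inside the S3 trigger Lambda - a sketch, assuming aws-sdk v2
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  for (const record of event.Records) {
    const head = await s3.headObject({
      Bucket: record.s3.bucket.name,
      Key: decodeURIComponent(record.s3.object.key.replace(/\+/g, ' ')),
    }).promise();
    // user metadata keys come back lowercased, e.g. { userid: "..." }
    console.log('Uploaded by:', head.Metadata.userid);
  }
};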
AWS Amplify can do all of the above automatically (create folder structures, etc.) if you do not need any special structure for how files are stored - docs here.
You only need to configure Storage ('globally' or per action) with the "level" property.
Storage.configure({ level: 'private' });

await Storage.put(ref, blob, {
  contentType: "image/jpeg",
  metadata: { userId: userId },
});

// or set up the level only for a given action
const ref = "userCollection";
await Storage.put(ref, blob, {
  contentType: "image/jpeg",
  metadata: { userId: userId },
  level: "private",
});
So, for example, if you use level "private", the file "124.jpeg" will be stored in S3 at
"private/us-east-1:6419087f-d13e-4581-b72e-7a7b32d7c7c1/userCollection/124.jpeg"
However, as you can see, "us-east-1:6419087f-d13e-4581-b72e-7a7b32d7c7c1" looks different from the "sub" in Cognito (the "sub" property does not contain a region).
The related discussion is here, also with a few workarounds, but basically you need to decide on your own how you will manage user identification in your project (whether you use "sub" everywhere as the userId, or go with the other ID - it is called identityId - and treat that as the userId).
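For reference, a small sketch (assuming Amplify's Auth module) of how the two identifiers can be read on the client:
import { Auth } from "aws-amplify";

// "sub": the user's ID in the Cognito user pool
const user = await Auth.currentUserInfo();
const sub = user.attributes.sub;

// "identityId": the federated identity ID that S3 uses for "private"/"protected" paths
const credentials = await Auth.currentCredentials();
const identityId = credentials.identityId;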
PS: If you are using React Native, I guess you will go with push notifications for sending updates from the backend - if that is the case, I was doing something similar ("moderation control") - so I added another Lambda function, Cognito's Post-Confirmation Lambda, that creates a user in DynamoDB with the ID of Cognito's "sub" property.
Then the user can save the device token needed for push notifications, so when AWS Rekognition finished detection on the image that the user uploaded, I queried DynamoDB and used SNS to send the notification to the end user.
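A minimal sketch of such a Post-Confirmation trigger, assuming a DynamoDB table named "Users" (the table and attribute names here are illustrative):
// Cognito Post-Confirmation trigger - the "Users" table name is an assumption
const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  await dynamo.put({
    TableName: 'Users',
    Item: {
      userId: event.request.userAttributes.sub, // Cognito "sub"
      email: event.request.userAttributes.email,
      createdAt: new Date().toISOString(),
    },
  }).promise();
  // Post-Confirmation triggers must return the event back to Cognito
  return event;
};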

Share multiple DNS domains with multiple AWS accounts by Terraform in AWS Resource Access Manager

I'm forwarding DNS requests sent to a list of internal (on-premise) domains by using AWS Route 53 Resolver. With Terraform, I want to share the rules I created with other accounts of the company, so I have the following:
# I create as many share endpoints as I have domains, so if I have 30 domains, I'll make 30 RAM endpoint shares:
resource "aws_ram_resource_share" "endpoint_share" {
count = length(var.forward_domain)
name = "route53-${var.forward_domain[count.index]}-share"
allow_external_principals = false
}
# Here I share every single endpoint with all the AWS ACcount we have
resource "aws_ram_principal_association" "endpoint_ram_principal" {
count = length(var.resource_share_accounts)
principal = var.resource_share_accounts[count.index]
resource_share_arn = {
for item in aws_ram_resource_share.endpoint_share[*]:
item.arn
}
}
The last block references the arn output of the first one, which is a list.
Now, this last block doesn't work; I don't know how to use multiple counts. When I run this, I get the following error:
Error: Invalid 'for' expression
line 37: Key expression is required when building an object.
Any idea how to make this work?
Terraform version: 0.12.23
Use square brackets in resource_share_arn, like this:
resource_share_arn = [
  for item in aws_ram_resource_share.endpoint_share[*]:
  item.arn
]

I am learning to create AWS Lambdas. I want to create a "chain": S3 -> 4 chained Lambdas -> RDS. I can't get the first Lambda to call the second

I really tried everything. Surprisingly, Google does not have many answers when it comes to this.
When a certain .csv file is uploaded to an S3 bucket, I want to parse it and place the data into an RDS database.
My goal is to learn the lambda serverless technology, this is essentially an exercise. Thus, I over-engineered the hell out of it.
Here is how it goes:
S3 trigger when the .csv is uploaded -> calls the first lambda (this part fully works)
AAA_Thomas_DailyOverframeS3CsvToAnalytics_DownloadCsv downloads the csv from S3 and finishes with essentially the plaintext of the file. It is then supposed to pass it to the next lambda. The way I am trying to do this is by setting the second lambda as the destination. The function works, but the second lambda is never called, and I don't know why.
AAA_Thomas_DailyOverframeS3CsvToAnalytics_ParseCsv gets the plaintext as input and returns a javascript object with the parsed data.
AAA_Thomas_DailyOverframeS3CsvToAnalytics_DecryptRDSPass only connects to KMS, gets the encrypted RDS password, and passes it, along with the data it received as input, to the last lambda.
AAA_Thomas_DailyOverframeS3CsvToAnalytics_PutDataInRds then finally puts the data in RDS.
I created a custom VPC with custom subnets, route tables, gateways, peering connections, etc. I don't know if this is relevant, but function 2. only has access to the S3 endpoint, 3. does not have any internet access whatsoever, 4. is the only one that has normal internet access (it's the only way to connect to KMS), and 5. only has access to the peered VPC that hosts the RDS.
This is the code of the first lambda:
// dependencies
const AWS = require('aws-sdk');
const util = require('util');
const s3 = new AWS.S3();

let region = process.env;

exports.handler = async (event, context, callback) =>
{
    var checkDates = process.env.CheckDates == "false" ? false : true;
    var ret = [];
    var checkFileDate = function(actualFileName)
    {
        if (!checkDates)
            return true;
        var d = new Date();
        var expectedFileName = 'Overframe_-_Analytics_by_Day_Device_' + d.getUTCFullYear() + '-' + (d.getUTCMonth().toString().length == 1 ? "0" + d.getUTCMonth() : d.getUTCMonth()) + '-' + (d.getUTCDate().toString().length == 1 ? "0" + d.getUTCDate() : d.getUTCDate());
        return expectedFileName == actualFileName.substr(0, expectedFileName.length);
    };
    for (var i = 0; i < event.Records.length; ++i)
    {
        var record = event.Records[i];
        try {
            if (record.s3.bucket.name != process.env.S3BucketName)
            {
                console.error('Unexpected notification, unknown bucket: ' + record.s3.bucket.name);
                continue;
            }
            if (!checkFileDate(record.s3.object.key))
            {
                console.error('Unexpected file, or date is not today\'s: ' + record.s3.object.key);
                continue;
            }
            const params = {
                Bucket: record.s3.bucket.name,
                Key: record.s3.object.key
            };
            var csvFile = await s3.getObject(params).promise();
            var allText = csvFile.Body.toString('utf-8');
            console.log('Loaded data:', {Bucket: params.Bucket, Filename: params.Key, Text: allText});
            ret.push(allText);
        } catch (error) {
            console.log("Couldn't download CSV from S3", error);
            return { statusCode: 500, body: error };
        }
    }
    // I've been randomly trying different ways to return the data, none works. The data itself is correct, I checked with console.log()
    const response = {
        statusCode: 200,
        body: { "Records": ret }
    };
    return ret;
};
A screenshot showed how the lambda was set up, especially its destination.
I haven't posted on Stackoverflow in 7 years. That's how desperate I am. Thanks for the help.
Rather than getting each Lambda to call the next one, take a look at AWS's managed service for state machines, Step Functions, which can handle this workflow for you.
By defining inputs and outputs you can pass each function's output on to the next function, with retry logic built in.
If you don't have much experience, AWS has a tutorial on setting up a Step Function by chaining Lambdas.
By using this you also will not need to account for configuration issues such as Lambda timeouts. In addition, it allows your code to be more modular, which makes it easier to test the individual functionality while also isolating issues.
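A minimal sketch of what such a state machine definition (Amazon States Language) could look like for this chain; the function names and ARNs below are shortened placeholders for the four Lambdas:
{
  "StartAt": "DownloadCsv",
  "States": {
    "DownloadCsv": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:DownloadCsv",
      "Next": "ParseCsv"
    },
    "ParseCsv": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:ParseCsv",
      "Next": "DecryptRDSPass"
    },
    "DecryptRDSPass": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:DecryptRDSPass",
      "Next": "PutDataInRds"
    },
    "PutDataInRds": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:PutDataInRds",
      "End": true
    }
  }
}
Each task state receives the previous state's output as its input, which is exactly the hand-off the question is trying to build with destinations.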
The execution roles of all Lambda functions whose destinations include other Lambda functions must have the lambda:InvokeFunction IAM permission in one of their attached IAM policies (see the policy sketch after the list below).
Here's a snippet from Lambda documentation:
To send events to a destination, your function needs additional permissions. Add a policy with the required permissions to your function's execution role. Each destination service requires a different permission, as follows:
Amazon SQS – sqs:SendMessage
Amazon SNS – sns:Publish
Lambda – lambda:InvokeFunction
EventBridge – events:PutEvents
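For the Lambda destination case here, a minimal illustrative policy statement attached to the first function's execution role could look like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:AAA_Thomas_DailyOverframeS3CsvToAnalytics_ParseCsv"
    }
  ]
}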

How to enumerate all SQS queues in an AWS account

How do I list all SQS queues in an AWS account programmatically via the API and .Net SDK?
I am already doing something similar with DynamoDb tables, and that's fairly straightforward - you can page through results using ListTables in a loop until you have them all.
However, the equivalent SQS API endpoint, ListQueues, is different and not as useful: it returns up to 1000 queues, with no option of paging.
Yes, there can be over 1000 queues in my case. I have had a query return exactly 1000 results. It's all in one region, so it's not the same as this question.
You can retrieve SQS queue names from CloudWatch, which supports paging. It will only return queues that are considered active.
An active queue is described as:
A queue is considered active by CloudWatch for up to six hours from the last activity (for example, any API call) on the queue.
Something like this should work:
var client = new AmazonCloudWatchClient(RegionEndpoint.EUWest1);

string nextToken = null;
var results = Enumerable.Empty<string>();
do
{
    var result = client.ListMetrics(new ListMetricsRequest()
    {
        MetricName = "ApproximateAgeOfOldestMessage",
        NextToken = nextToken
    });

    results = results.Concat(
        result
            .Metrics
            .SelectMany(x => x.Dimensions.Where(d => d.Name == "QueueName")
                              .Select(d => d.Value))
    );

    nextToken = result.NextToken;
} while (nextToken != null);