AWS OpsWorks stack attributes cache

I am passing my 3rd-party subscription license key (New Relic) and a few other config variables via the stack settings Custom JSON:
"opsworks_java": {
"datasources": {
"app": "jdbc/myapp"
}
},
"newrelic": {
"license": "2454645aef2e055a1f5fc0e2201f5570bccaa3deb3"
}
After I change the values of these attributes or add new keys, I run the configure recipes on my already existing instances, hoping that they will pick up the new stack settings JSON, but it doesn't work. Am I missing something?
P.S. I am new to OpsWorks, so there is probably a more elegant way to do this.

You may need to run Update Custom Cookbooks on the already existing instances (or do a deploy) via Stack -> Run Command.
Here's an experiment: do new instances read these values correctly and configure themselves with them? If so, Update Custom Cookbooks should help you out.
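If you prefer to trigger this from code rather than the console, here is a rough sketch using the AWS SDK for JavaScript v3 (the stack ID, region, and recipe name are placeholders, so adjust them to your setup):
import { OpsWorksClient, CreateDeploymentCommand } from "@aws-sdk/client-opsworks";

// Use the region that hosts your stack's OpsWorks API endpoint.
const client = new OpsWorksClient({ region: "us-east-1" });

// Push the updated custom cookbooks / stack settings to the existing instances first...
await client.send(new CreateDeploymentCommand({
  StackId: "YOUR-STACK-ID", // placeholder
  Command: { Name: "update_custom_cookbooks" },
}));

// ...then re-run the configure recipes so they read the new Custom JSON.
await client.send(new CreateDeploymentCommand({
  StackId: "YOUR-STACK-ID", // placeholder
  Command: {
    Name: "execute_recipes",
    Args: { recipes: ["your_cookbook::configure"] }, // placeholder recipe name
  },
}));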

Related

List all LogGroups using CDK

I am quite new to the CDK, but I'm adding a LogQueryWidget to my CloudWatch dashboard through the CDK, and I need a way to add all LogGroups ending with a suffix to the query.
Is there a way to either loop through all existing LogGroups and find the ones with the correct suffix, or to search through LogGroups?
const queryWidget = new LogQueryWidget({
  title: "Error Rate",
  logGroupNames: ['/aws/lambda/someLogGroup'],
  view: LogQueryVisualizationType.TABLE,
  queryLines: [
    'fields @message',
    'filter @message like /(?i)error/'
  ],
})
Is there any way I can make logGroupNames contain all LogGroups that end with a specific suffix?
You cannot do that dynamically (i.e. you can't make the query adjust automatically when you add a new LogGroup) without using something like an AWS Lambda function that periodically updates your log query.
However, because CDK is just code, there is nothing stopping you from making an AWS SDK API call inside the code to retrieve all the log groups (see https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudWatchLogs.html#describeLogGroups-property) and then populating logGroupNames accordingly.
That way, when CDK synthesizes, it will make an API call to fetch the LogGroups, and the generated CloudFormation template will contain the log groups you need. Note that this list will only be updated when you re-synthesize and re-deploy your stack.
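As a rough sketch of that approach, assuming the AWS SDK for JavaScript v3 and a hypothetical helper you call at synth time (resolve the promise, e.g. in an async main(), before constructing the widget):
import { CloudWatchLogsClient, paginateDescribeLogGroups } from "@aws-sdk/client-cloudwatch-logs";

// Collect the names of all log groups ending with the given suffix.
// This runs at synth time, so the resulting list is frozen into the template.
async function logGroupNamesWithSuffix(suffix: string): Promise<string[]> {
  const client = new CloudWatchLogsClient({});
  const names: string[] = [];
  for await (const page of paginateDescribeLogGroups({ client }, {})) {
    for (const group of page.logGroups ?? []) {
      const name = group.logGroupName;
      if (name && name.endsWith(suffix)) {
        names.push(name);
      }
    }
  }
  return names;
}

// e.g. logGroupNames: (await logGroupNamesWithSuffix('-suffix')).slice(0, 20)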
Finally, note that there is a limit on how many log groups you can query with CloudWatch Logs Insights (20, according to https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html).
If you want to achieve this as part of the deployment instead, you can create a custom resource using the AwsCustomResource and AwsSdkCall classes to make the AWS SDK API call (as mentioned by @Tofig above). You can read data from the API call response as well and act on it as you want.
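A minimal sketch of that custom-resource idea, assuming aws-cdk-lib with your stack in scope as this (note that a custom resource only exposes individual, indexed response fields, so filtering by suffix would really need your own Lambda-backed custom resource):
import { AwsCustomResource, AwsCustomResourcePolicy, PhysicalResourceId } from 'aws-cdk-lib/custom-resources';

// Calls CloudWatchLogs.describeLogGroups during deployment.
// 'this' is your Stack (or another Construct scope).
const describeLogGroups = new AwsCustomResource(this, 'DescribeLogGroups', {
  onUpdate: {
    service: 'CloudWatchLogs',
    action: 'describeLogGroups',
    parameters: { limit: 50 },
    // A changing physical ID forces the call to run again on every deploy.
    physicalResourceId: PhysicalResourceId.of(Date.now().toString()),
  },
  policy: AwsCustomResourcePolicy.fromSdkCalls({
    resources: AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});

// Individual fields from the response can then be referenced by index, e.g.:
const firstGroupName = describeLogGroups.getResponseField('logGroups.0.logGroupName');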

Export/Outputs that don't exist preventing stack from updating/deleting

Using serverless to deploy to AWS.
I created a Cognito user pool via Serverless, then realised I wanted to change its attributes.
I couldn't deploy because you can't update attributes on an existing user pool.
"No problem - I'll just delete it and make it again" I thought. So I did.
But I had created two Outputs referencing the Client ID and Pool ID, so now I get this:
Export alpha-UserPoolId cannot be deleted as it is in use by alpha-Stack
I can't see any way to remove these references manually via the AWS console.
Anyone know what I can do to remove these dead references?
There's no option to manually remove an Output, and I tried editing the template, but it didn't seem to actually do anything.
Thanks
[EDIT: Check comments for full details on solution]
You have to edit the importing stack so that it no longer relies on these values; afterwards you can remove them (a quick way to find the importing stacks is sketched after the quote below).
As long as there is an Fn::ImportValue referencing the export somewhere, CloudFormation won't let you delete it.
From the docs:
The following restrictions apply to cross-stack references:
...
You can't delete a stack if another stack references one of its outputs.
You can't modify or remove an output value that is referenced by another stack.
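If you're not sure which stacks are importing the export, the CloudFormation ListImports API will tell you. A quick sketch with the AWS SDK for JavaScript v3, using the export name from the error above:
import { CloudFormationClient, ListImportsCommand } from "@aws-sdk/client-cloudformation";

const cfn = new CloudFormationClient({});

// Lists every stack that imports the export; those stacks must drop their
// Fn::ImportValue references before the export (and its stack) can be deleted.
const result = await cfn.send(new ListImportsCommand({ ExportName: "alpha-UserPoolId" }));
console.log(result.Imports); // e.g. [ "alpha-Stack" ]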

Cloudformation: The resource you requested does not exist

I have a CloudFormation stack which has a Lambda function that is mapped as a trigger to an SQS queue.
What happened was that I had to delete the mapping and create it again manually because I wanted to change the batch size. Now when I want to update the mapping, CloudFormation throws an error with the message The resource you requested does not exist.
The resource mapping code looks like this:
"EventSourceMapping":{
"Properties":{
"BatchSize":5,
"Enabled":"true",
"EventSourceArn":{
"Fn::GetAtt":[
"ProcessorQueue",
"Arn"
]
},
"FunctionName":{
"Fn::GetAtt":[
"ProcessorLambda",
"Arn"
]
}
},
"Type":"AWS::Lambda::EventSourceMapping"
}
I know that deleting the mapping CloudFormation created initially and adding it back manually is what's causing the issue. How do I fix this? I cannot push any update now.
Please help.
What you did was, from my perspective, a mistake. When you use CloudFormation you are not supposed to apply changes manually. You can, and maybe that's fine if you don't care about the stack once it is created, but since you are trying to update the stack, that tells me you want to keep it and update it over time.
To narrow down your problem, first let's make clear that the manually-created mapping is out of sync with your CloudFormation stack. So, from a CloudFormation perspective, it doesn't matter whether you keep that mapping or not. I'm wondering what would happen if you kept the manually-created mapping and created a new one from CloudFormation; maybe it would complain, since you would have two mappings for the same (lambda, queue) pair. Try this:
1. Create a change set for your stack in which you completely remove the EventSourceMapping resource from your template. This step basically cleans up the dangling reference. Apply the change set.
2. Then, and this is where I think you may run into some kind of issue, add the EventSourceMapping back to your stack.
If you get an error in step 2, like "this mapping already exists", you will have to remove the manually-created mapping from the console (or via the SDK, as in the sketch below) and then try step 2 again.
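If you'd rather not hunt for the manually-created mapping in the console, a sketch like this with the AWS SDK for JavaScript v3 can find and delete it (the function name is a placeholder; double-check the UUIDs before deleting anything):
import { LambdaClient, ListEventSourceMappingsCommand, DeleteEventSourceMappingCommand } from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});

// Find the event source mapping(s) attached to the processor Lambda...
const { EventSourceMappings } = await lambda.send(new ListEventSourceMappingsCommand({
  FunctionName: "processor-lambda", // placeholder
}));

// ...and delete the hand-made one so CloudFormation can own the resource again.
for (const mapping of EventSourceMappings ?? []) {
  if (!mapping.UUID) continue;
  console.log(mapping.UUID, mapping.EventSourceArn);
  await lambda.send(new DeleteEventSourceMappingCommand({ UUID: mapping.UUID }));
}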
You probably know by now that you should not have removed the resource manually. If you change the template, CloudFormation can update the stack without touching resources that did not change. You can try to recreate the resource with the exact same physical name (https://aws.amazon.com/premiumsupport/knowledge-center/failing-stack-updates-deleted/). The other option is to remove the resource from the template, update, and then add it back and update again - from the same doc.
While the comments above are valid, I find it interesting that no one mentioned a much simpler option: using the SAM commands (sam build / sam deploy). It's understandable that during development and while designing the architecture there may be flaws and situations where manual input in the console is necessary, so this is what I fall back on every time I have a similar issue.
Simply comment out the chunk of code that is causing trouble and run sam build / sam deploy on top of it; the CloudFormation stack will recognize that the resource is no longer in the template and will delete it.
Since the resource is no longer in the architecture anyway (it was removed manually earlier), the update will pass that step and the stack will update successfully.
Then simply uncomment it, make any necessary changes (if any), and deploy again.
Works every time.

Getting error, "Entity doesn't exist in AsyncLocal" when trying to call CreateBatchWrite<T> method of DynamoDBContext object

I have created a DynamoDB table on my dev machine and I'm trying to insert a couple of rows from my .NET Core application using the CreateBatchWrite<T> method of the DynamoDBContext object. I'm able to query the table from the DynamoDB JavaScript shell at the localhost:8000/shell URL, and it returns a row count of 0. But when I try to call the CreateBatchWrite<T> method I get the error, "Entity doesn't exist in AsyncLocal".
Explanation
When using X-Ray, this happens when there is an attempt to create a subsegment without a parent segment. Depending on your setup, when you run a query it might try to create a subsegment, but it fails because there is no parent segment.
This is common when running a Lambda function locally, as the Mock Lambda Test Tool will not create a segment for you the way the actual Lambda environment on AWS does. This can happen in other scenarios too.
More details here: https://github.com/aws/aws-xray-sdk-dotnet/issues/125
Solution
The easiest way to solve this is to disable X-Ray locally (as you probably don't want to generate traces locally):
In appsettings.Development.json add this:
"XRay": {
"DisableXRayTracing": "true",
"UseRuntimeErrors": "false",
"CollectSqlQueries": "false"
}
The important bit is DisableXRayTracing set to true.
Make sure your appsettings.Development.json is set to Copy Always in the properties window. You can do this by including this in your .csproj:
<ItemGroup>
  <None Update="appsettings.Development.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
  <None Update="appsettings.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>
If you really want to trace things locally, then make sure you create a parent segment only when running locally (on AWS this would cause problems, as you would have two parent segments: one created manually by you, another one created by AWS).
Add this line before any DynamoDB API methods are executed:
AWSXRayRecorder.Instance.ContextMissingStrategy = ContextMissingStrategy.LOG_ERROR;
You can find more info in this GitHub discussion: https://github.com/aws/aws-xray-sdk-dotnet/issues/69#issuecomment-482688754
Also, you will need these two using directives:
using Amazon.XRay.Recorder.Core;
using Amazon.XRay.Recorder.Core.Strategies;
If you are tracing requests made with the AWS SDK, the X-Ray SDK attempts to generate a subsegment automatically to represent those requests, such as CreateBatchWrite. However, a subsegment can only be created as the child of an existing segment, so if you have not created a segment beforehand, that "Entity doesn't exist" error will occur.
See these docs for how to create custom segments. Alternatively, if you are developing a web app, the X-Ray SDK can automatically create segments for requests made to your service by adding the configuration described in these docs.

Jenkins Job DSL sshAgent not working correctly

For the Jenkins Job DSL, I am trying to choose specific SSH Agent (plugin) keys for a job (using the sshAgent keyword inside the wrappers context). We have the Jenkins SSH Agent plugin installed and several keys set up (the plugin works; we use it for almost all of our jobs). The Jenkins Job DSL sshAgent command always picks the first key, regardless of whether I specify a different key in our Jenkins setup.
I have tried using just the key name, and also key_name + space + description (just like the dropdowns show). That does not work either -- it still picks the first key.
Is this a known issue? (My searches haven't turned up anything on this yet.)
You need to pass the ID of the credentials to the sshAgent DSL method. To get the ID, install at least version 1.21 of the Credentials Plugin. Then navigate to the credentials you want to use, e.g. if the credentials you want to use are global and called "Your Credentials" go to Jenkins > Credentials > Global credentials (unrestricted) > Your Credentials > Update. Then click the "Advanced..." button to reveal the ID. If you did not specify a custom ID when creating the credentials, it's a UUID like 99add9e9-84d4-408a-b644-9162a93ee3e4. Then use this value in your DSL script.
job('example') {
  wrappers {
    sshAgent('99add9e9-84d4-408a-b644-9162a93ee3e4')
  }
}
It's recommended to use a recognizable custom ID when creating new credentials, e.g. deployment-key. That will lead to readable DSL scripts.
job('example') {
  wrappers {
    sshAgent('deployment-key')
  }
}