Find the OpsWorks deploy user in a recipe - amazon-web-services

I'm trying to figure out how to extract or find the name of the user who's performing the deployment of an app on a given OpsWorks stack. For example, the "Deployments and Commands" section of a stack displays a history of deployments, including who the deploy user was. I'd like to be able to capture that same user from within my recipe.
It doesn't look like it's something I can grab out of the search(:aws_opsworks_app) data bag (unless I'm mistaken). Is there somewhere else I can get this information easily?

It turns out there's a pretty simple way to get it through the search(:aws_opsworks_command) data bag.
:aws_opsworks_command provides an iam_user_arn attribute which can be parsed to extract the deployment user's IAM name. An example iam_user_arn string looks like arn:aws:iam::555555:user/username (the region field is empty in IAM ARNs, hence the double colon).
Example:
# Grab the first command data bag item for the current run
owc = search(:aws_opsworks_command).first
# The last colon-separated segment of the ARN is "user/username"
owc[:iam_user_arn].split(':').last                  # => "user/username"
# Split again on "/" if you only want the bare username
owc[:iam_user_arn].split(':').last.split('/').last  # => "username"
Documentation: https://docs.aws.amazon.com/opsworks/latest/userguide/data-bag-json-command.html

Export/Outputs that don't exist preventing stack from updating/deleting

Using Serverless to deploy to AWS.
I created a Cognito user pool via Serverless, then realised I wanted to change its attributes.
I couldn't deploy because you can't update attributes on an existing user pool.
"No problem - I'll just delete it and make it again," I thought. So I did.
But I had created two Outputs referencing the Client ID and Pool ID, so now I get this:
Export alpha-UserPoolId cannot be deleted as it is in use by alpha-Stack
I can't see any way to remove these references manually via the AWS console.
Does anyone know what I can do to remove these dead references?
There's no option to manually remove an Output, and I tried editing the template but it didn't seem to actually do anything.
Thanks
[EDIT: Check comments for full details on solution]
You have to edit the importing stack so it no longer relies on these values; after that you can remove them.
As long as there is an Fn::ImportValue referencing the export somewhere, CloudFormation won't let you delete it.
From the docs:
The following restrictions apply to cross stack references
...
You can't delete a stack if another stack references one of its outputs.
You can't modify or remove an output value that is referenced by another stack.
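
If you're not sure which stacks are importing the export, you can look them up programmatically. A minimal boto3 sketch (the export name alpha-UserPoolId is taken from the error message above):

import boto3

cfn = boto3.client('cloudformation')

# List every stack that imports the export; these stacks must drop
# their Fn::ImportValue references before the export can be deleted.
paginator = cfn.get_paginator('list_imports')
for page in paginator.paginate(ExportName='alpha-UserPoolId'):
    for stack_name in page['Imports']:
        print(stack_name)

Once no stack imports the value any more, the Output/Export can be removed, or the exporting stack deleted.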

How to embed parameters into AWS Lambda GET request?

So say my request URL to get all the Users at Location 4 is something like
./Location/Users?locationId=4 or ./Location/Users/4
but we would of course prefer to structure the request url like this:
./Location/4/Users
however, every AWS help document and question I can find on here uses the first syntax, so I am unsure how to proceed. It seems like there should be a way to do this, as it is a very common design pattern, but AWS seems to lock you into only being able to append to the ./Users path instead of being able to prepend the argument.
To be clear, the first request syntax is working, but I'm not sure how to adjust it to the more industry-standard way of doing it, since embedding the parameter in the middle of the URL instead of at the end would fundamentally change the Amazon Resource Name.
There's probably something simple that I'm missing here though.
API Gateway indeed supports URLs like Location/{locationId}/Users. You first need to create locationId as a child resource, and then create Users as a child resource under that.
Steps:
Click Location, then go to Actions and click Create Resource.
Fill in the details of your new resource:
Resource Name - give it a meaningful name
Resource Path - {locationId}
Repeat the same steps to add Users under {locationId}.
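
Once the method is wired up with a Lambda proxy integration, the path parameter arrives on the event under pathParameters, keyed by the {name} used in the resource path. A minimal handler sketch (get_users_for_location is a hypothetical lookup, stubbed out here for illustration):

import json

def get_users_for_location(location_id):
    # Stub standing in for a real data-store query
    return [{'userId': 'u1', 'locationId': location_id}]

def lambda_handler(event, context):
    # With a proxy integration, API Gateway passes path parameters
    # through on the event, keyed by the {name} in the resource path
    location_id = event['pathParameters']['locationId']

    users = get_users_for_location(location_id)

    return {
        'statusCode': 200,
        'body': json.dumps(users)
    }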

AWS Amplify filter for #searchable annotation

Currently I am using DynamoDB for my social media application. While designing the schema I stuck to the "one table" rule, so I am putting all data in the same table: posts, users, comments, etc. Now I want to make flexible queries for my data. I found out that I could use the #searchable annotation to create an Elasticsearch instance for a table which is annotated with #model.
In my GraphQL schema I only have one #model, since I only have one table. My problem now is that I don't want to make everything in the table searchable, since that would most likely be very expensive. There is some data which doesn't have to be added to the Elasticsearch instance (for example, comment-related data). How could I handle this? Do I really have to split my schema into multiple tables to be able to manage the #searchable annotation? Couldn't I decide whether a row should be sent to Elasticsearch based on the partition key / primary key, acting like a filter?
The current implementation of the amplify-cli uses a predefined Python Lambda that is added once you add the #searchable directive to one of your models.
The Lambda code can not be edited and currently there is no option to define a custom Lambda; you can read about it here:
https://github.com/aws-amplify/amplify-cli/issues/1113
https://github.com/aws-amplify/amplify-cli/issues/1022
If you want a custom Lambda where you can filter what goes to the Elasticsearch instance, you can follow the steps described here: https://github.com/aws-amplify/amplify-cli/issues/1113#issuecomment-476193632
The closest you can get is by creating a template in amplify\backend\api\myapiname\stacks\ where you can manage all the resources related to Elasticsearch. A good starting point is to:
Add #searchable to one of your models in schema.graphql
Run amplify api gql-compile
Copy the generated template from the build folder, amplify\backend\api\myapiname\build\stacks\SearchableStack.json, to amplify\backend\api\myapiname\stacks\
Remove the #searchable directive from the model added in step 1
Start editing your new template copied in step 3
Add a Lambda and use it in the template as the resolver for the DynamoDB Stream
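The streaming Lambda from the last step is where the filtering can happen. A rough Python sketch, assuming a single-table design where the partition key is a string attribute named PK with type prefixes like POST# and USER# (all names here are hypothetical, and index_document/delete_document are stubs standing in for real OpenSearch/Elasticsearch calls):

# Hypothetical prefixes marking which rows should be indexed
SEARCHABLE_PREFIXES = ('POST#', 'USER#')

def index_document(image):
    pass  # stub: would write the document to the search index

def delete_document(keys):
    pass  # stub: would remove the document from the search index

def lambda_handler(event, context):
    for record in event.get('Records', []):
        keys = record['dynamodb']['Keys']
        pk = keys['PK']['S']  # assumes a string partition key named PK

        # Skip rows we don't want indexed (e.g. comments)
        if not pk.startswith(SEARCHABLE_PREFIXES):
            continue

        if record['eventName'] in ('INSERT', 'MODIFY'):
            index_document(record['dynamodb'].get('NewImage'))
        elif record['eventName'] == 'REMOVE':
            delete_document(keys)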
Using this approach will give you total control of the resources related to the Elasticsearch service, but it will also require you to do it all on your own.
Or, just go with creating a table for each model.
Hope it helps
It is now possible to override the generated streaming function code as well.
Thanks to AWS Support for the information provided; I left a message on the related GitHub issue as well: https://github.com/aws-amplify/amplify-category-api/issues/437#issuecomment-1351556948
All you need is to run
amplify override api
then edit the corresponding override.ts and change the code via resources.opensearch.OpenSearchStreamingLambdaFunction.code:
resources.opensearch.OpenSearchStreamingLambdaFunction.functionName = 'python_streaming_function';
resources.opensearch.OpenSearchStreamingLambdaFunction.handler = 'index.lambda_handler';
resources.opensearch.OpenSearchStreamingLambdaFunction.code = {
  zipFile: `
# python streaming function customized code goes here
`
};
Resources:
[1] https://docs.amplify.aws/cli/graphql/override/#customize-amplify-generated-resources-for-searchable-opensearch-directive
[2] AWS::Lambda::Function Code properties - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html#aws-properties-lambda-function-code-properties

Unable to create AWS key pair using console

I tried to create a new AWS key pair and the option to create one has disappeared.
Does anyone know why?
It would be worth checking the IAM permissions associated with the user who is trying to create the key pair. Contact the administrator (presumably you?) and investigate. I would suggest creating a group with the necessary permissions and adding the user to it.
I performed an experiment and added a Deny policy to my IAM user that prevented me from being able to create a key pair.
I then tried to launch an instance, and the option to create a key pair (in the dialog box you show above) was still available. So the display does not vary according to permissions.
Therefore, something else is causing your situation. I would recommend trying it in a different browser. Also, check the underlying HTML to see whether the option is coded on the web page. Something is causing it to disappear.
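For reference, the Deny from that experiment can be reproduced with an inline policy along these lines (a boto3 sketch; the user and policy names are hypothetical):

import json

import boto3

iam = boto3.client('iam')

# Deny only key pair creation; everything else is governed by the
# user's other policies (an explicit Deny always wins)
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Deny',
        'Action': 'ec2:CreateKeyPair',
        'Resource': '*'
    }]
}

iam.put_user_policy(
    UserName='test-user',             # hypothetical user
    PolicyName='DenyCreateKeyPair',   # hypothetical policy name
    PolicyDocument=json.dumps(policy)
)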

Where are models saved by default?

I've submitted a training job to the cloud using the RESTful API and see in the console logs that it completed successfully. In order to deploy the model and use it for predictions I have saved the final model using tf.train.Saver().save() (according to the how-to guide).
When running locally, I can find the graph files (export-* and export-*.meta) in the working directory. When running on the cloud, however, I don't know where they end up. The API doesn't seem to have a parameter for specifying this, it's not in the bucket with the trainer app, and I can't find any temporary buckets on Cloud Storage created by the job.
When you set up your Cloud ML environment you set up a bucket for this purpose. Have you looked in there?
https://cloud.google.com/ml/docs/how-tos/getting-set-up
Edit (for future record): As Robert mentioned in the comments, you'll want to pass the output location to the job as an argument. A couple of things to be mindful of:
Use a unique output location per job, so one job doesn't clobber the outputs of another.
The recommendation is to specify the parent output path and use it to contain the exported model in a subpath called 'model', as well as organizing other outputs such as checkpoints and summaries within that path. That makes it easier to manage all the outputs.
While not required, I'll also suggest staging the training code in a 'packages' subpath of the output, which helps correlate the source with the outputs it produces.
Finally(!), keep in mind that when you use hyperparameter tuning, you'll need to append the trial id to the output path so the outputs produced by individual runs don't collide.
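
A minimal sketch of that path handling, assuming the output location arrives as an --output-path flag and that the trial id is read from the TF_CONFIG environment variable (the convention Cloud ML uses during hyperparameter tuning):

import argparse
import json
import os

parser = argparse.ArgumentParser()
parser.add_argument('--output-path', required=True)  # e.g. gs://my-bucket/jobs/job42
args = parser.parse_args()

# During hyperparameter tuning, Cloud ML sets TF_CONFIG with a
# task.trial id; append it so each trial writes to its own subpath
tf_config = json.loads(os.environ.get('TF_CONFIG', '{}'))
trial = tf_config.get('task', {}).get('trial', '')
output_path = os.path.join(args.output_path, trial) if trial else args.output_path

# Organize everything under the parent path, as suggested above
model_dir = os.path.join(output_path, 'model')
checkpoint_dir = os.path.join(output_path, 'checkpoints')
summary_dir = os.path.join(output_path, 'summaries')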