Which API action is used to import a Portfolio?

I want to import a Portfolio that's shared with me, but I'm unable to find the associated API action for it (I want to do this programmatically).
I'm looking for the equivalent of that specific API action in both Terraform and CloudFormation (CFN), and I can't seem to find it. Does anyone have an idea?

Terraform does not have a resource for this yet.
CloudFormation allows you to accept a portfolio share through the following resource:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-servicecatalog-acceptedportfolioshare.html
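For reference, the underlying Service Catalog API action is AcceptPortfolioShare. A minimal sketch of the CloudFormation resource, assuming a placeholder portfolio ID:

Resources:
  AcceptedShare:
    Type: AWS::ServiceCatalog::AcceptedPortfolioShare
    Properties:
      # ID of the portfolio that was shared with your account (placeholder value)
      PortfolioId: port-abc123example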

Related

How to solve a "module already exists" error in a pipeline

I'm setting up a pipeline that provisions resources in AWS. Each time I run the pipeline, I get a "module already exists" error. I know the resources I want are already provisioned, but my understanding of Terraform is that if a resource already exists it just skips it and provisions the rest that don't exist yet. How do I make it skip existing modules so the pipeline build doesn't fail?
my understanding of Terraform is that if a resource already exists it just skips it and provisions the rest
Sadly, your understanding is incorrect. TF does not check whether something exists before it provisions resources. By TF's design principles, resources are assumed not to exist yet if they are to be managed by TF.
How do I make it skip existing modules so the pipeline build doesn't fail?
You have to do it manually. Pass some variables to your TF script for conditional creation of resources, as sketched below. TF has no capability to check for the pre-existence of resources unless you do it yourself.
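A minimal sketch of variable-driven conditional creation, assuming a hypothetical S3 bucket resource and flag name:

variable "create_bucket" {
  description = "Set to false when the bucket already exists outside this pipeline"
  type        = bool
  default     = true
}

resource "aws_s3_bucket" "this" {
  # Created only when the flag is true; count = 0 skips the resource entirely.
  count  = var.create_bucket ? 1 : 0
  bucket = "my-example-bucket"
}

You would then pass something like -var="create_bucket=false" on runs against an environment where the bucket is already there.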
Terraform does not skip a resource if it already exists; it throws an error and stops execution.
The best way to deal with this kind of problem is to import the existing resource into your state file.
At the end of each resource page in the official documentation you will find an "Import" section, which usually goes like:
terraform import <resource_address> <resource_id>
Example:
terraform import aws_instance.web i-12345678

Create Infrastructure Documentation from terraform + gitlab-ci system

Our infra pipeline is set up using terraform + gitlab-ci. I've been given the task of documenting the setup: what's implemented and what's left. I'm new to the infra world and finding it hard to come up with a template to start the documentation from.
So far I've thought of having a table of the needed resources, with details on dependencies, the source of each module, additional notes, etc.
If you have a template, can you share it? Any other suggestions are welcome too.
For starters, you could try one or both of the below approaches:
a) create a graph of the Terraform resources using its graph command (see the sketch after this list)
b) group and then list all of your resources for a specific tag using AWS Resource Groups, specifically its Create Resource Group functionality
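For (a), a minimal sketch, assuming Graphviz (the dot binary) is installed:

# Render the dependency graph of the current Terraform configuration as an SVG.
terraform graph | dot -Tsvg > graph.svg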
The way I do documentation is to keep it as simple as possible: explain how it works and how to use it, and provide instructions on how it was set up from scratch, both for reference and as an insurance policy, so that if it's destroyed someone other than the person who set it all up could recreate it.
Since this is just a pipeline, there is probably not much to diagram. I would structure the documentation something like the outline below and add it as part of the README.md, in Confluence, or however your team does documentation.
Summary
1-2 sentences about the work and why it was created.
How the Repo is Structured
An explanation on how the repo is structured and decisions behind why it was structured the way it was.
How To Use
Provide steps on how a user can use the pipeline.
How It Was Created
Provide steps on how it was set up, so anybody can manage it and work on it going forward.

"internal server error" with API gateway and lambda on AWS

There are tons of similar questions both on this site and on the web, which leads me to believe there is something really wrong with AWS's documentation, given the grief this causes so many people.
So, I decided to post the most basic example step by step.
First, we create a new function. It has the default everything; I don't touch a single line of code. (A red error message appeared along the way, but that was just AWS not playing nice with Firefox.) The default code passes the test.
Now I add a trigger, which gives me a link for it. I can go to the API endpoint, https://spy3z1jvu8.execute-api.ap-northeast-1.amazonaws.com/default/test, and it works.
Now the problems start. I open the API gateway that was created and try the default link, https://spy3z1jvu8.execute-api.ap-northeast-1.amazonaws.com, and I get the "internal server error" from the question title.
Most of the people asking similar questions seem to have an issue with the gateway expecting some JSON content, etc., but here is an untouched AWS sample, and the gateway link doesn't work.
The troubleshooting steps say to add logging and troubleshoot it that way, but there is nothing of interest in the logs.
What could be the origin of that problem?
What could be the origin of that problem?
You are correct, this is the AWS console's fault. Specifically, it creates the wrong permission in the Lambda function's resource-based policy, so the default route doesn't work. To fix it, you have to edit the permissions.
Specifically, go to your function's resource-based policy (this is different from the execution role). You should find one policy statement there. Edit it, changing the Source ARN from something like:
arn:aws:execute-api:ffffff:xxxx:api-id/*/*/function-name
to
arn:aws:execute-api:ffffff:xxxx:api-id/*/*
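The same fix can also be scripted with the AWS CLI. A minimal sketch, assuming a hypothetical statement ID and the same placeholder region, account, and API IDs as above (a policy statement can't be edited in place from the CLI, so it is removed and re-added with the broader Source ARN):

# Remove the statement the console created, then re-add it without the
# trailing /function-name segment so the default route is covered too.
aws lambda remove-permission \
    --function-name test \
    --statement-id apigateway-default

aws lambda add-permission \
    --function-name test \
    --statement-id apigateway-default \
    --action lambda:InvokeFunction \
    --principal apigateway.amazonaws.com \
    --source-arn "arn:aws:execute-api:ffffff:xxxx:api-id/*/*"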

How do I know what key-value pairs are available for Deployment Manager?

For example, when I try to figure out what properties I can put into a Deployment Manager config to create a BigQuery table, the best place I've found for parameters and required fields is the REST API docs.
Is there a good reference, either within the gcloud command or in online docs, that is specific to Deployment Manager YAMLs? I would like to be able to look up the required and optional fields for creating GCP resources. Currently it's very difficult to figure out.
From the documentation at: https://cloud.google.com/deployment-manager/docs/configuration/supported-resource-types
You can get a list of the supported resource types by running:
gcloud deployment-manager types list
That said, the YAML reference in the documentation on that page looks pretty complete.
Edit: Refer to this github link for a list of deployment manager examples.
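As a concrete illustration for the BigQuery case, a minimal sketch of a Deployment Manager config; the property names mirror the BigQuery REST API's tables.insert request body, and the dataset and table names are placeholders:

resources:
- name: example-table
  type: bigquery.v2.table
  properties:
    # Path parameter from the underlying tables.insert REST method
    datasetId: example_dataset
    tableReference:
      datasetId: example_dataset
      tableId: example_table
    schema:
      fields:
      - name: id
        type: INTEGER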
If anything you need is not described in the documentation or the example schemas, there is a brute-force workaround.
You can make the API call with the browser's developer console open (F12) and look at the network activity, where your call will be described with all of the used and available properties.
It will not provide any additional information beyond the parameter names themselves, so you will have to infer a parameter's behavior from similar, documented parameters.

Choosing active SES ReceiptRuleSet in CloudFormation / Troposphere

I am creating a ReceiptRuleSet with troposphere like this:
from troposphere.ses import ReceiptRuleSet

ReceiptRuleSet(
    title="SesRuleset",
    RuleSetName="ses-ruleset",
)
However, when I upload the stack with the generated CloudFormation template, the RuleSet appears as inactive in SES.
Does anyone know if there is a way to set the created RuleSet as active without having to interact with the online console or the CLI?
troposphere maintainer here. I don't actually know a ton about SES, but have you included the ReceiptRuleSet in a ReceiptRule? My guess is that if a RuleSet is not used by a Rule, it's probably inactive, since I can't see anything in either CloudFormation or the API that would indicate you can set it to "active".
Unfortunately, this doesn't seem to be supported by CloudFormation. I found the following blog post, which uses a Lambda function making an API call to activate the RuleSet after creation: https://binx.io/blog/2019/11/25/how-to-set-the-active-receipt-rule-set-in-ses-using-cloudformation/
This seemed one moving piece too many for me, so I'm currently activating the RuleSet through the console.
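For reference, the underlying API action that such a Lambda (or a one-off script) would call is SES's SetActiveReceiptRuleSet. A minimal boto3 sketch, assuming the rule set name from the question:

import boto3

# Activate the rule set created by the CloudFormation stack.
ses = boto3.client("ses")
ses.set_active_receipt_rule_set(RuleSetName="ses-ruleset")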