Specify API Gateway id instead of using 'random' id

When deploying an AWS Lambda function (via the Serverless Framework) and exposing it via an HTTPS endpoint in AWS API Gateway... is it possible to construct and set the API Gateway id, and thus determine the first part of the HTTP endpoint for that Lambda function?
When deploying an AWS Lambda function and adding an HTTP event, I currently get a random id as the first part of the hostname in https://klv5e3c8z5.execute-api.eu-west-1.amazonaws.com/v1/fizzbuzz. Each fresh deployment receives a new random 10-character id.
Instead of using that, I would like to determine and set that id myself. (I will make sure that it's sufficiently unique, or deal with endpoint naming collisions myself.)
The reason for this is that in a separate Serverless project I need to use that endpoint (and thus need to know that id). Instead of having it determined by project 1 and then reading/retrieving it in project 2, I want to construct and set the endpoint in project 1 so that I can use the known endpoint in project 2 as well.
(A suggestion was to use a custom domain as an alternative/alias for that endpoint... but if possible I don't want to introduce a new component into the mix, and a solution that does not include Cloud-it-might-take-30-minutes-to-create-a-domain-Front is better :-) )
If this isn't possible, I might need to use the approach described at http://www.goingserverless.com/blog/api-gateway-url, where the endpoint is exposed from one project as a CloudFormation stack output and then read and used in the other project, but that introduces (a little latency and) a dependency when deploying the second project.

The "first hostname" you want to set is called "REST API id" and is generated by API Gateway when creating the API. The API used to create API's in API Gateway doesn't offer the ability to specify the REST API id, so no, there is no way to specify the id.
The reason for that is probably that these ids are used as part of a public-facing domain name. As this domain name doesn't include an identifier for the AWS account it belongs to, the ids have to be globally unique, so AWS generates them to avoid collisions. As AWS puts it (emphasis by me):
For an edge-optimized API, the base URL is of the http[s]://*{restapi-id}*.execute-api.amazonaws.com/stage format, where {restapi-id} is the API's id value generated by API Gateway. You can assign a custom domain name (for example, apis.example.com) as the API's host name and call the API with a base URL of the https://apis.example.com/myApi format.
For the option to create a custom domain name, consider that there is even more complexity associated with it, as you must also provision a matching SSL certificate for the domain. While you can use ACM for that, there is currently the limitation that SSL certificates for CloudFront distributions (which edge-optimized API Gateway APIs use behind the scenes) need to be issued in us-east-1.
The option you already mentioned, exporting the API endpoint as a CloudFormation stack output value and using that exported value in your other stack, would work well. As you noted, that creates a dependency between the two stacks: once you have deployed project 2, which uses the output value from project 1, you can only delete the CloudFormation stack for project 1 after the project 2 stack is either deleted or updated to no longer use the exported value. That can be a feature, but from your description it sounds like it wouldn't be one for your use case.
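As an illustration of that approach, here is a minimal sketch of reading an exported stack output value from the consuming side with boto3. The export name MyServiceEndpoint is a made-up placeholder; you would use whatever name project 1 actually exports.

import boto3

cloudformation = boto3.client("cloudformation")

def get_exported_value(export_name):
    """Return the value of a CloudFormation export, or None if it doesn't exist."""
    paginator = cloudformation.get_paginator("list_exports")
    for page in paginator.paginate():
        for export in page["Exports"]:
            if export["Name"] == export_name:
                return export["Value"]
    return None

# "MyServiceEndpoint" is a hypothetical export name defined by project 1.
endpoint = get_exported_value("MyServiceEndpoint")
print(endpoint)  # e.g. https://<restapi-id>.execute-api.eu-west-1.amazonaws.com/v1

Note that reading the export at runtime like this (rather than importing it with Fn::ImportValue in the consuming template) avoids the hard stack-to-stack dependency, at the cost of an extra lookup.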
Something similar to exported stack output values would be to use some shared storage instead of CloudFormation's exported output value feature. What comes to mind here is the SSM Parameter Store, which offers some integration with CloudFormation. That integration makes it easy to read a parameter from the SSM Parameter Store in the stack of project 2. For writing the value to the Parameter Store in project 1 you'd need to use a custom resource in your CloudFormation template. There is at least one sample implementation for that available on GitHub.
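As a rough sketch of the Parameter Store variant with boto3 (the parameter name /project1/service-endpoint is made up for illustration), project 1 would write the endpoint and project 2 would read it back:

import boto3

ssm = boto3.client("ssm")

# Project 1 (e.g. from a custom resource or a post-deploy script):
# store the deployed endpoint under an agreed-upon parameter name.
ssm.put_parameter(
    Name="/project1/service-endpoint",          # hypothetical parameter name
    Value="https://klv5e3c8z5.execute-api.eu-west-1.amazonaws.com/v1",
    Type="String",
    Overwrite=True,
)

# Project 2: read the endpoint back.
endpoint = ssm.get_parameter(Name="/project1/service-endpoint")["Parameter"]["Value"]
print(endpoint)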
As you can see, there are multiple options available to solve your problem. Which one to choose depends on your project's needs.

Question: "is it possible to construct and set the API Gateway id?"
Answer: No (see the other answer to this question).
I was able to get the service endpoint of project 1 into the serverless.yml file of project 2, though, and from that construct the full URL of the service that I needed. I'm sharing this because it's an alternative solution that also worked in my case.
In the serverless.yml of project 2, you can refer to the service endpoint of project 1 via service_url: "${cf:<service-name>-<stage>.ServiceEndpoint}". Example: "${cf:my-first-service-dev.ServiceEndpoint}".
CloudFormation exposes the ServiceEndpoint output, which contains the full URL, including the API Gateway REST API id.
More information in Serverless Framework documentation: https://serverless.com/framework/docs/providers/aws/guide/variables/#reference-cloudformation-outputs.
It seems that the Serverless Framework adds this ServiceEndpoint as a stack output.
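For reference, the same output can also be read outside of serverless.yml with boto3, which is roughly what the ${cf:...} variable resolves for you. The stack name my-first-service-dev below follows the Serverless <service>-<stage> naming convention and is just an example:

import boto3

cloudformation = boto3.client("cloudformation")

# Stack name follows the Serverless Framework convention <service>-<stage>.
stack = cloudformation.describe_stacks(StackName="my-first-service-dev")["Stacks"][0]

outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
print(outputs["ServiceEndpoint"])  # full URL, including the REST API id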

Related

How to get info about Gcloud logs similar to logs explorer?

I am using the @google-cloud/logging package to get logs from Google Cloud, and it works nicely: you can get logs and events (and query them if needed). But how can I get the same info as the Logs Explorer? I mean the different types of fields that can be queried, etc.:
In that picture you see Log fields such as FUNCTION NAME, which may be a list of values. It seems that @google-cloud/logging can't get this metadata (or field info)? So is it possible to obtain it using some other APIs?
If I understand your question correctly, you're asking how the Logs Viewer determines the values that allow it to present you with the various log fields to filter/refine your log queries.
I suspect (but don't know) that the viewer builds these lists from the properties as it parses the logs. This would suggest that the lists are imperfect and that, e.g., a FUNCTION_NAME would only appear once a log entry including that function's name had been parsed.
There is a way to enumerate definitive lists of GCP resources. This is done using list (or equivalent) methods available in the service-specific libraries (SDKs), e.g. @google-cloud/functions.
The easiest way to understand what functionality is provided by a given Google service is to browse the service using Google's APIs Explorer. Here's Cloud Logging API v2 and here's Cloud Functions API.
You can prove to yourself that there's no method under Cloud Logging that allows enumeration of all of a project's Cloud Functions. But there is a method in Cloud Functions: projects.locations.functions.list. The latter returns a response body that includes a list of functions, each of type CloudFunction with a name.
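As a rough illustration of that projects.locations.functions.list call from Python (using the google-api-python-client discovery client and Application Default Credentials; the project id is a placeholder):

from googleapiclient import discovery

# Build a client for the Cloud Functions REST API (v1); uses Application Default Credentials.
functions_api = discovery.build("cloudfunctions", "v1")

# "my-project" is a placeholder; "-" as the location means "all locations".
parent = "projects/my-project/locations/-"
response = functions_api.projects().locations().functions().list(parent=parent).execute()

# Each entry is a CloudFunction resource with a fully qualified name.
for function in response.get("functions", []):
    print(function["name"])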
Another way to understand how these APIs ("libraries") are used is to add --log-http to any gcloud command to see what API calls are being made by the command.

Available filters for client.get_products function in Boto3

I am trying to develop a Python script that gets different parameters of any AWS service (for EC2, e.g., those parameters would be operating system, billing type, etc.). Where can I find a listing of all the available Filters that can be used with the get_products function in boto3 for each supported service?
Thanks in advance,
Andreas
Actually, there is no direct API or doc available for getting all the attributes. At least I didn't find any.
What you can do is combine various API calls:
First, use DescribeServices: you get the attributes of all the services, or, if you provide a service code, of one particular service. The boto3 call is describe_services, which "returns the metadata for one service or a list of the metadata for all services".
Then use GetAttributeValues to determine the possible values of those attributes. The boto3 call is get_attribute_values.
Finally, depending on the attributes collected in the earlier steps, you can build a filter for get_products, as sketched below.
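Putting those three calls together, a minimal sketch with the boto3 Pricing client might look like this (the Pricing API endpoint is only available in a couple of regions, us-east-1 being one; the EC2 service code, operatingSystem attribute, and instance type are just examples):

import json
import boto3

# The Pricing API endpoint is only available in selected regions (e.g. us-east-1).
pricing = boto3.client("pricing", region_name="us-east-1")

# 1) DescribeServices: list the attribute names usable as filters for a service.
service = pricing.describe_services(ServiceCode="AmazonEC2")["Services"][0]
print(service["AttributeNames"])  # e.g. ['operatingSystem', 'tenancy', ...]

# 2) GetAttributeValues: list the possible values of one attribute.
values = pricing.get_attribute_values(
    ServiceCode="AmazonEC2", AttributeName="operatingSystem"
)
print([v["Value"] for v in values["AttributeValues"]])

# 3) get_products: use attribute/value pairs as TERM_MATCH filters.
products = pricing.get_products(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "t3.micro"},
    ],
)
for price_item in products["PriceList"]:  # each item is a JSON string
    print(json.loads(price_item)["product"]["attributes"]["instanceType"])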

Google Cloud Resource Manager - create projects inside folders

I'm trying to create multiple projects inside my Organisation. My use case is:
1. I want to make an API call that creates a new project.
2. I want to create a new DialogFlow agent (chatbot).
The Dialogflow API looks pretty straightforward. Since it's a backend implementation, I am using service accounts to achieve this.
My problem is that when I'm trying to create a service account, it is always scoped to some project. I spent the whole day trying to give that service account all the access that I could find, but it's still giving me a Forbidden error.
Can someone explain to me whether this is possible and, if so, how I should configure it through the Cloud Console so that I end up with a service account that can create projects (scoped to some folder/project if that makes it easier)?
If the answer is yes - can I create multiple chatbots in one project? And what type of permissions do I need to achieve that?
Thanks!

How to use Google Compute Python API to create custom machine type or instance with GPU?

I am just looking into using GCP for cloud computing. So far I have been using AWS with the boto3 library, and I was trying to use the Google Python client API to launch instances.
So an example I came across was from their docs here. The instance machine type is specified as:
machine_type = "zones/%s/machineTypes/n1-standard-1" % zone
and then it is passed to the configuration as:
config = {
'name': name,
'machineType': machine_type,
....
I wonder how one goes about specifying machines with GPUs, custom RAM, custom processor counts, etc. from the Python API?
The Python API is basically a wrapper around the REST API, so in the example code you are using, the config object is being built using the same schema as would be passed in the insert request.
Reading that document shows that the guestAccelerators structure is the relevant one for GPUs.
Custom RAM and CPUs are more interesting. There is a format for specifying a custom machine type name (you can see it in the gcloud documentation for creating a machine type). The format is:
[GENERATION]custom-[NUMBER_OF_CPUs]-[RAM_IN_MB]
Generation refers to the "n1" or "n2" in the predefined names. For n1, this block is empty; for n2, the prefix is "n2-". That said, experimenting with gcloud seems to indicate that "n1-" as a prefix also works as you would expect.
So, for a 1-CPU n1 machine with 5 GB of RAM, it would be custom-1-5120. This is what you would replace the n1-standard-1 in your example with.
You are, of course, subject to the limits on how a custom machine can be specified, such as the fact that RAM must be a multiple of 256 MB.
Finally, there's a neat little feature at the bottom of the console "create instance" page:
Clicking on the relevant link will show you the exact REST object you need to create the machine you have defined in the console at that very moment, so it can be very useful to see how a particular parameter is used.
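Tying the two together, here is a minimal sketch of an insert request that combines a custom machine type with a GPU, using the google-api-python-client discovery client. The project, zone, instance name, accelerator type, and boot image below are placeholders, and GPU availability varies per zone:

from googleapiclient import discovery

compute = discovery.build("compute", "v1")

project = "my-project"        # placeholder
zone = "us-central1-a"        # placeholder; must offer the chosen GPU type
name = "gpu-custom-instance"  # placeholder

config = {
    "name": name,
    # Custom machine type: 1 vCPU and 5120 MB of RAM (n1 generation).
    "machineType": f"zones/{zone}/machineTypes/custom-1-5120",
    # One GPU; the accelerator type here is just an example.
    "guestAccelerators": [{
        "acceleratorType": f"zones/{zone}/acceleratorTypes/nvidia-tesla-t4",
        "acceleratorCount": 1,
    }],
    # GPU instances cannot live-migrate, so host maintenance must terminate them.
    "scheduling": {"onHostMaintenance": "TERMINATE", "automaticRestart": True},
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-12",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}

operation = compute.instances().insert(project=project, zone=zone, body=config).execute()
print(operation["name"])  # a zone operation you can poll until the VM is ready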
You can create a Compute Engine instance using the Compute Engine API, specifically the insert API request. It accepts a JSON payload in a REST request that describes the desired VM instance. A full specification of the request is found in the docs. It includes:
machineType - specs of different (common) machines including CPUs and memory
disks - specs of disks to be added including size and type
guestAccelerators - specs for GPUs to add
many more options ...
One can also create a template description of the machine structure you want and simplify the creation of an instance by naming the template to use, thereby abstracting the configuration details out of code and into configuration.
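As a rough sketch of that template-based route (the project, zone, and template names below are placeholders, and this assumes the insert method's sourceInstanceTemplate parameter), the call can reference an existing instance template instead of spelling out the full config:

from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# Placeholders: substitute your own project, zone, and instance template.
operation = compute.instances().insert(
    project="my-project",
    zone="us-central1-a",
    sourceInstanceTemplate="global/instanceTemplates/my-template",
    body={"name": "instance-from-template"},  # only the per-instance bits go here
).execute()

print(operation["name"])  # a zone operation you can poll for completion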
Beyond using REST requests (which can be made from Python), you also have the ability to create Compute Engine instances from:
GCP Console - web interface
gcloud - command line (which I suspect can also be driven from within Python)
Deployment Manager - configuration driven deployment which includes Python as a template language
Terraform - popular environment for creating Infrastructure as Code environments

Is it possible to instruct AWS Custom Authorizers to call AWS Lambdas based on Stage Variables?

I am mapping Lambda Integrations like this on API Gateway:
${stageVariables.ENV_CHAR}-somelambda
So I can have d-somelambda, s-somelambda, etc.: several versions for different environments, all deployed simultaneously. This works fine.
BUT, I am using Custom Authorizers, and I have d-authorizer-jwt and d-authorizer-apikey.
When I deploy the API to the DEV stage, it's all ok. But when I deploy to the PROD stage, all Lambda calls dynamically point properly to the p-* Lambdas, except the custom authorizer, which still points to "d" (DEV) and calls the dev backend for the needed validation (it caches, but sometimes checks the database).
Please note that I don't necessarily want to pass the Stage Variables like others are asking; I just want to call the correct Lambda from a proper configuration, like Integration Request offers. If I had to fall back on accessing Stage Variables to solve this, I would need to change my approach and have a single Lambda for all environments that dynamically hits the required backend based on Stage Variables... not that good.
Tks
Solved. It works just as I described. There are some caveats:
a) You need to previously grant access to that Lambda (see the sketch below).
b) You can't test the authorizer in the console due to a UI glitch: it doesn't ask for the stage variable, so you will never reach the Lambda.
c) You need to deploy the API to get the authorizers updated on a particular stage.
I cannot tell why it didn't work on my first attempt.
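Regarding caveat (a), granting the invoke permission is typically done with lambda add-permission; a minimal boto3 sketch might look like the following, where the region, account id, REST API id, and authorizer function name are all placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Placeholders: substitute your region, account id, and REST API id.
region = "eu-west-1"
account_id = "123456789012"
rest_api_id = "abc123defg"

# Allow API Gateway to invoke the stage-specific authorizer Lambda.
lambda_client.add_permission(
    FunctionName="p-authorizer-jwt",                  # hypothetical prod authorizer name
    StatementId="apigateway-invoke-authorizer",
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
    # Restrict the permission to authorizers of this API (any authorizer id).
    SourceArn=f"arn:aws:execute-api:{region}:{account_id}:{rest_api_id}/authorizers/*",
)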