I am calling an external HTTP API URL that redirects 2-3 times. The current pipeline has a shell script that uses curl -L. I am migrating it to GCP Workflows.
I haven't been able to find any parameter that does the same for an http.get call in GCP Workflows. Is that understanding correct?
That would mean putting a loop around every API call in Workflows: inspect the response code and keep looping until I get a 2xx or reach a maximum number of redirects I am happy with (say 10). That seems odd, because an API that doesn't redirect today may start redirecting later, so I would need to add this loop logic to every external API I call.
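For illustration, here is the shape of that loop in Python (a rough sketch of the logic, not Workflows syntax; it assumes the Location header is an absolute URL):

```python
import requests

MAX_REDIRECTS = 10  # the cap mentioned above

def get_following_redirects(url: str) -> requests.Response:
    # Follow Location headers manually until a non-redirect
    # response arrives or the redirect cap is reached.
    for _ in range(MAX_REDIRECTS):
        resp = requests.get(url, allow_redirects=False)
        if resp.is_redirect or resp.is_permanent_redirect:
            url = resp.headers["Location"]  # assumes an absolute URL
            continue
        return resp
    raise RuntimeError("too many redirects")
```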
The only workaround I can think of (if one is even needed) is to create a Cloud Function and use, for example, the Python requests library with allow_redirects=True.
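A minimal sketch of that Cloud Function workaround (the function name and the url query parameter are placeholders of mine):

```python
import requests

def fetch(request):
    # HTTP-triggered Cloud Function: fetch the target URL, letting
    # requests follow redirects (allow_redirects defaults to True for GET).
    target = request.args["url"]
    resp = requests.get(target, allow_redirects=True)
    return (resp.text, resp.status_code)
```

The workflow would then call this function with http.get instead of calling the external API directly.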
I have procedures that are exposed as web services (REST). I need them to be able to parse the request body while ignoring unrecognized fields (those not specified in "rules"). Right now, when a procedure tries to parse something that is not defined among its parameters, it throws an error.
Example:
Some procedure has the following definition:
parm(in:&parm1, in:&parm2, out:&someResponse);
Then we change it to:
parm(in:&parm1, in:&parm2, in:&parm3, out:&someResponse);
The web service is updated on some distributions, but others are still on the old version with two in parameters.
The services that consume these web services on the different app distributions send the body according to the second (latest) definition:
```json
{
  "parm1": "somevalue",
  "parm2": "somevalue",
  "parm3": "somevalue"
}
```
Unfortunately we don't have control over the third party that is consuming our web services, so it would be a lot easier if unused parameters could simply be ignored...
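For what it's worth, the "ignore unrecognized fields" behavior being asked for amounts to this Python sketch (parameter names taken from the example above; purely illustrative, since GeneXus generates the parsing):

```python
import json

KNOWN_PARAMS = {"parm1", "parm2"}  # the old definition, without parm3

def parse_request(body: str) -> dict:
    payload = json.loads(body)
    # Keep only the parameters this version knows about; silently
    # drop anything unrecognized instead of raising an error.
    return {k: v for k, v in payload.items() if k in KNOWN_PARAMS}

print(parse_request('{"parm1": "a", "parm2": "b", "parm3": "c"}'))
# -> {'parm1': 'a', 'parm2': 'b'}
```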
USING GX 16 U11 - Java Generator
Unfortunately there is no way in GeneXus 16 to "catch" the request and do something prior to the object logic. In GeneXus 17 we have the new API object, where you can transform the parameters.
But not everything is lost. Given that you're generating Java, there is an "external" way to do it with servlet Filters. I have used them to log client requests for debugging purposes.
If you don't want to mess with the code, there are also API gateways you could put in front of your API services to redirect the requests to the right service. Bear in mind that I'm not a specialist in this topic; maybe a post on ServerFault would help.
I'm using a Google Cloud Function to run a function off an HTTP trigger. The code itself is non-idempotent, which is expected and desired for us, as it's using an external API.
However, quick, repeated triggers of the cloud function yield the same output (generally while there is one active instance), and we need the output to be different every time the function is triggered.
I'm not sure if this is due to instance caching or something else, but is there any known workaround to ensure that every time the cloud function is HTTP-triggered we get a new output, even when it's triggered seconds apart?
Thanks.
We've created a Google Cloud Function that is essentially an internal API. Is there any way that other internal Google Cloud Functions can talk to the API function without exposing an HTTP endpoint for that function?
We've looked at PubSub, but as far as we can see you can send a request (so to speak) but you can't receive a response.
Ideally, we don't want to expose an HTTP endpoint due to the extra security ramifications, and we are trying to follow a microservice approach so every function is its own entity.
I sympathize with your microservices approach and trying to keep your services independent. You can accomplish this without opening all your functions to HTTP. Chris Richardson describes a similar case on his excellent website microservices.io:
You have applied the Database per Service pattern. Each service has its own database. Some business transactions, however, span multiple services so you need a mechanism to ensure data consistency across services. For example, let's imagine that you are building an e-commerce store where customers have a credit limit. The application must ensure that a new order will not exceed the customer’s credit limit. Since Orders and Customers are in different databases the application cannot simply use a local ACID transaction.
He then goes on:
An e-commerce application that uses this approach would create an order using a choreography-based saga that consists of the following steps:
The Order Service creates an Order in a pending state and publishes an OrderCreated event.
The Customer Service receives the event and attempts to reserve credit for that Order. It publishes either a Credit Reserved event or a CreditLimitExceeded event.
The Order Service receives the event and changes the state of the order to either approved or cancelled.
Basically, instead of a direct function call that returns a value synchronously, the first microservice sends an asynchronous "request event" to the second microservice, which issues a "response event" that the first service picks up. You would use Cloud PubSub to send and receive the messages.
You can read more about this under the Saga pattern on his website.
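A minimal Python sketch of that flow on Cloud Functions and PubSub (the project, topic, and field names here are made up for illustration):

```python
import base64
import json
from google.cloud import pubsub_v1

PROJECT = "my-project"  # hypothetical project id
publisher = pubsub_v1.PublisherClient()

def publish(topic: str, payload: dict) -> None:
    topic_path = publisher.topic_path(PROJECT, topic)
    publisher.publish(topic_path, json.dumps(payload).encode("utf-8")).result()

def create_order(request):
    # "Order Service", HTTP-triggered: create the order in a pending
    # state, then publish the OrderCreated request event instead of
    # calling the other function over HTTP.
    publish("order-created", {"orderId": "123", "state": "pending"})
    return "pending"

def on_order_created(event, context):
    # "Customer Service", deployed as a PubSub-triggered function on the
    # order-created topic: try to reserve credit, then answer with an event.
    order = json.loads(base64.b64decode(event["data"]))
    reserved = True  # stand-in for the real credit check
    publish("credit-reserved" if reserved else "credit-limit-exceeded",
            {"orderId": order["orderId"]})
```

The Order Service would subscribe to the credit-reserved and credit-limit-exceeded topics the same way to pick up the "response event".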
The most straightforward thing to do is wrap your API up into a regular function or object, and deploy that extra code along with each function that needs to use it. You may even wish to fully modularize the code, as you would expect from an npm module.
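For example (hypothetical names), in Python that is just a shared module deployed with each function:

```python
# orders_api.py -- the "internal API" as plain code, packaged and
# deployed alongside every function that needs it.
def create_order(customer_id: str, amount: float) -> dict:
    # ...the logic that would otherwise sit behind an HTTP endpoint...
    return {"customerId": customer_id, "amount": amount, "state": "pending"}
```

```python
# main.py of any function that needs the API: an ordinary import,
# no HTTP endpoint and no extra network hop involved.
from orders_api import create_order

def handler(request):
    return create_order("cust-42", 99.0)
```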
I have an Apache Camel (version 2.15.3) route that is configured as follows (using a mix of XML and Java DSL):
Read a file from one of several folders on an FTP site.
Set a header to indicate which folder it was read from.
Do some processing and auditing.
Synchronously POST to an external REST service (jax-rs 1.1, Glassfish, Java EE 6).
The REST service takes a long time to do its job, 20+ minutes.
Receive the reply.
Do some more processing and auditing.
Write the response to one of several folders on an FTP site.
Use the header set at the start to know which folder to write to.
This is all configured in a single path of chained routes.
The problem is that the connection to the external REST service will time out while the service is still processing. The infrastructure is a bit complex (edge servers, load balancers, Glassfish), and regardless, I don't think increasing the timeout is the right solution.
How can I implement this route such that I avoid timeouts while still meeting all my requirements to (1) write the response to the appropriate FTP folder, (2) audit the transaction, and (3) meet other transaction/context-specific requirements?
I'm relatively new to Camel and REST, so maybe this is easy, but I don't know what Camel and REST tools and techniques to use.
(Questions and suggestions for improvement are welcome.)
Isn't it possible to break the two main steps apart and have two asynchronous operations?
I would do as follows.
Read a file from one of several folders on an FTP site.
Set a header to indicate which folder it was read from.
Save the header, the file name, and other relevant information in a cache. There is a Camel component called camel-cache that is relatively easy to set up, and you can store key-value pairs or any other objects.
Do some processing and auditing. Asynchronously POST to an external REST service (jax-rs 1.1, Glassfish, Java EE 6). Note that we are posting asynchronously here.
Step 2.
Receive the reply.
Look up the reply identifier, i.e. the filename or some other identifier, in the cache to match the reply, and then fetch the header.
Do some more processing and auditing.
Write the response to one of several folders on an FTP site.
This way you don't need to wait, and processing can take 20 minutes or longer. Just set your cache values not to expire for, say, 24 hours.
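The correlation idea, independent of Camel (Python used only for brevity; the function bodies are stand-ins for the real FTP and REST endpoints):

```python
import time

cache = {}  # key -> context needed again when the reply arrives

def post_async(filename):                 # stand-in for the async REST POST
    print(f"POSTed {filename}")

def write_to_ftp(folder, body):           # stand-in for the FTP producer
    print(f"wrote reply to {folder}: {body}")

def on_file_received(filename, source_folder):
    # Step 1: cache the context, then fire the request without waiting.
    cache[filename] = {"folder": source_folder, "stored_at": time.time()}
    post_async(filename)

def on_reply(filename, response_body):
    # Step 2: the reply carries the identifier, so the original context
    # can be recovered and the response written to the right folder.
    context = cache.pop(filename)
    write_to_ftp(context["folder"], response_body)

on_file_received("order-17.xml", "inbox/customer-a")
on_reply("order-17.xml", "<ok/>")
```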
This is a typical asynchronous use case. Can the REST service give you a token id or some other unique id immediately after you hit it?
That way you can have a batch job or some other Camel route pick up this id from a database/cache and hit the REST service again after 20 minutes.
This is the ideal solution I can think of, if the REST service can support it.
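Sketched in Python with hypothetical endpoints (assuming the service hands back a job id on submit and exposes a status resource):

```python
import time
import requests

BASE = "https://rest.example.com"  # hypothetical service

def submit_and_poll(payload: dict) -> dict:
    # Submit the job; the service replies immediately with a token id.
    token = requests.post(f"{BASE}/jobs", json=payload).json()["id"]
    # Poll later (here a simple loop; a batch job or a separate Camel
    # route could do the same) until the work is done.
    while True:
        job = requests.get(f"{BASE}/jobs/{token}").json()
        if job["status"] == "done":
            return job["result"]
        time.sleep(60)
```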
You are right, waiting 20 minutes on a synchronous call is a crazy idea. Also, what is the estimated size of the file/payload you are planning to post to the REST service?
I'd like to define a Lambda. When it receives a POST request, I'd like to make another POST request to an external URI (say, Splunk or Apigee or anything outside of AWS). Is this possible? Does Lambda allow internet access? I googled but did not find a good answer for this one.
Yes, you can run pretty much any code that you would run on a normal EC2 instance. For instance, if you write your Lambda in node.js you can use the request library to make HTTP calls out to other web services. The same is true of Java or Python, as long as you include whatever library you want to use to make the call in your Lambda package. Just make sure you set the Lambda timeout high enough to give your call(s) time to complete.
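For example, a Python Lambda using only the standard library, so there is nothing extra to bundle (the URL is a placeholder):

```python
import json
import urllib.request

def lambda_handler(event, context):
    # Forward the incoming event as a POST body to an external service.
    req = urllib.request.Request(
        "https://external.example.com/endpoint",  # placeholder URL
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return {"statusCode": resp.status, "body": resp.read().decode("utf-8")}
```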
I wrote a blog post that shows a simple example of a Lambda calling out to a weather API (HTTP GET) to get the weather for a zip code and post it in Slack: http://www.ryanray.me/serverless-slack-integrations