Selecting which service should carry out a Google Cloud Tasks handler - google-cloud-platform

I am relatively new to Google Cloud Platform; I am able to create App Engine services and manage databases. I am attempting to create a handler for Google Cloud Tasks (similar to the Node.js sample found in this documentation).
However, the documentation does not clearly address how to connect the deployed service to whatever is enqueuing the requests. I need more than one service in my project (one in Node for the REST layer, and another in Python that processes geospatial data as asynchronous tasks).
My question: When running multiple services, how does Google Cloud Tasks know which service to direct the task towards?
Screenshot below as proof that I am able to add tasks to a queue.

When using App Engine routing for your tasks, they are routed to the "default" service. However, you can override this by defining an AppEngineRouting object (where you select the service, version, and instance) inside the AppEngineHttpRequest field.
The sample shows a task routed to the default service's /log_payload endpoint.
const task = {
  appEngineHttpRequest: {
    httpMethod: 'POST',
    relativeUri: '/log_payload',
  },
};
You can update this to:
const task = {
  appEngineHttpRequest: {
    httpMethod: 'POST',
    relativeUri: '/log_payload',
    appEngineRouting: {
      service: 'non-default-service'
    }
  },
};
Learn more about configuring routes.
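If you end up enqueuing tasks from the Python service as well, the same override looks roughly like this with the google-cloud-tasks client library. This is only a sketch: the project, location, queue, and service names below are placeholders, and it assumes the 2.x version of the library.
# Sketch only: placeholder names throughout, assuming google-cloud-tasks 2.x.
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "my-queue")

task = {
    "app_engine_http_request": {
        "http_method": tasks_v2.HttpMethod.POST,
        "relative_uri": "/log_payload",
        # Route the task to a named service instead of "default".
        "app_engine_routing": {"service": "geospatial-worker"},
        "body": b"hello",
    }
}

response = client.create_task(parent=parent, task=task)
print("Created task", response.name)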

I wonder which "services" you are talking about, because by default it is always the current service. These HTTP requests are dispatched with the HTTP headers HTTP_X_APPENGINE_QUEUENAME and HTTP_X_APPENGINE_TASKNAME, which you can see in the screenshot with sample-tasks and some random numbers. If you want to send tasks to other services, those services will have to have their own task queue(s).
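On the receiving side, the worker service can simply read those headers from the incoming request. A minimal sketch for a Python worker, assuming Flask and a hypothetical /log_payload route:
from flask import Flask, request, abort

app = Flask(__name__)

@app.route('/log_payload', methods=['POST'])
def handle_task():
    # App Engine strips X-AppEngine-* headers from external requests, so their
    # presence indicates the request was dispatched by the task queue service.
    queue_name = request.headers.get('X-AppEngine-QueueName')
    task_name = request.headers.get('X-AppEngine-TaskName')
    if queue_name is None:
        abort(403)
    payload = request.get_data(as_text=True)
    app.logger.info('Task %s from queue %s: %s', task_name, queue_name, payload)
    return '', 200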

Related

Java micro service distributed tracing with Istio

Kubernetes and Istio are already installed in the cluster, and three microservices are deployed as pods. The flow is:
Microservice A to Microservice B => HTTP
Microservice B to Microservice C => via Kafka
Microservice A exposes an HTTP API to the outside.
I assume that when a client hits the ingress, Istio generates a traceId and spanId in the HTTP headers before the request enters Service A.
Are these spanId and traceId propagated to Microservices B and C without using a separate API like Spring Cloud Sleuth?
No, Istio does not propagate tracing headers for you. However, propagation can be configured on the application side without using third-party APIs.
According to Istio documentation:
Istio leverages Envoy’s distributed tracing feature to provide tracing integration out of the box. Specifically, Istio provides options to install various tracing backends and configure proxies to send trace spans to them automatically. See the Zipkin, Jaeger and LightStep task docs about how Istio works with those tracing systems.
Istio documentation also has an example of application side header propagation for the bookinfo demo application:
Trace context propagation
Although Istio proxies are able to automatically send spans, they need some hints to tie together the entire trace. Applications need to propagate the appropriate HTTP headers so that when the proxies send span information, the spans can be correlated correctly into a single trace.
To do this, an application needs to collect and propagate the following headers from the incoming request to any outgoing requests:
x-request-id
x-b3-traceid
x-b3-spanid
x-b3-parentspanid
x-b3-sampled
x-b3-flags
x-ot-span-context
Additionally, tracing integrations based on OpenCensus (e.g. Stackdriver) propagate the following headers:
x-cloud-trace-context
traceparent
grpc-trace-bin
If you look at the sample Python productpage service, for example, you see that the application extracts the required headers from an HTTP request using OpenTracing libraries:
def getForwardHeaders(request):
    headers = {}
    # x-b3-*** headers can be populated using the opentracing span
    span = get_current_span()
    carrier = {}
    tracer.inject(
        span_context=span.context,
        format=Format.HTTP_HEADERS,
        carrier=carrier)
    headers.update(carrier)
    # ...
    incoming_headers = ['x-request-id']
    # ...
    for ihdr in incoming_headers:
        val = request.headers.get(ihdr)
        if val is not None:
            headers[ihdr] = val
    return headers
The reviews application (Java) does something similar:
@GET
@Path("/reviews/{productId}")
public Response bookReviewsById(@PathParam("productId") int productId,
                                @HeaderParam("end-user") String user,
                                @HeaderParam("x-request-id") String xreq,
                                @HeaderParam("x-b3-traceid") String xtraceid,
                                @HeaderParam("x-b3-spanid") String xspanid,
                                @HeaderParam("x-b3-parentspanid") String xparentspanid,
                                @HeaderParam("x-b3-sampled") String xsampled,
                                @HeaderParam("x-b3-flags") String xflags,
                                @HeaderParam("x-ot-span-context") String xotspan) {
    if (ratings_enabled) {
        JsonObject ratingsResponse = getRatings(Integer.toString(productId), user, xreq, xtraceid, xspanid, xparentspanid, xsampled, xflags, xotspan);
        // ...
When you make downstream calls in your applications, make sure to include these headers.
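For instance, in the Python service the collected headers are attached to every outgoing call. A minimal sketch (the URL and the wrapper function name here are illustrative, not taken from the bookinfo source):
import requests

def get_reviews(request, product_id):
    # Attach the forwarded tracing headers so the proxy-generated spans for
    # this outgoing call are stitched into the same trace.
    headers = getForwardHeaders(request)
    url = 'http://reviews:9080/reviews/' + str(product_id)  # illustrative URL
    return requests.get(url, headers=headers, timeout=3)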

Unknown reason for Google Cloud Compute Engine VM shutdown?

One of the VM instances on Google Compute Engine was shut down, with an event log in Stackdriver that contains no IP or actor (user, service, or system) for whoever initiated the event. The instance has onHostMaintenance set to migrate and automaticRestart set to true. This particular instance has migrated during maintenance without error before. The Stackdriver event log looks like:
{
  actor: {
    user: ""
  },
  event_subtype: "compute.instances.stop",
  event_timestamp_us: "1531781734907624",
  event_type: "GCE_API_CALL",
  ip_address: "",
}
The user and ip_address fields are NOT redacted; they have empty values in the actual log.
Is this common? How does one identify the cause of the shutdown in these peculiar cases?
Based on your event type, I dug into the documentation for Activity logs:
Compute Engine API calls - GCE_API_CALL events are API calls that change the state of a resource.
It seems like someone may have used an API call to shut down your VM. Looking at your settings and the logs on the VM instance, it can't be a hostError or a maintenance event: as long as onHostMaintenance is set to migrate and automaticRestart is set to true, your VM will always be migrated to other hardware.
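One way to dig further is to query the compute activity log for stop events and inspect the full entries. A rough sketch with the google-cloud-logging Python client; the project ID is a placeholder, and the log name and filter fields are assumptions based on the entry format shown above:
# Sketch: assumes google-cloud-logging and the legacy compute activity log
# (the same log that produced the GCE_API_CALL entry shown in the question).
from google.cloud import logging

client = logging.Client(project='my-project')  # placeholder project ID
log_filter = (
    'logName="projects/my-project/logs/compute.googleapis.com%2Factivity_log" '
    'AND jsonPayload.event_subtype="compute.instances.stop"'
)

for entry in client.list_entries(filter_=log_filter):
    # Each payload carries actor, event_type, event_subtype, etc.
    print(entry.timestamp, entry.payload)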
Out of curiosity, did you restart your VM? Did it shut down again? How often does it happen?
Seems like App Engine had an outage during that time; see this incident.

GAE service running on Flexible Env. as target of a task queue

According to the Google documentation, a service running in the flexible environment can be the target of a push task:
Outside of the standard environment, you can't add tasks to push queues, but a service running in the flexible environment can be the target of a push task. You can specify this using the target parameter when adding a task to queue or by specifying the default target for the queue in queue.yaml.
However, when I tried to do it I got 404 errors in the flexible service.
That's to be expected, because the endpoint that task queues require (/_ah/queue/deferred) is not defined in the flexible service.
How do I make a flexible service a valid target for task queues?
Do I have to define that endpoint in my code in some way?
Usually, you'll need to write a handler in your worker service to do the processing after receiving a task. In the case of push tasks, the service will send HTTP requests to whatever URL you specify. If no URL is specified, the default URL /_ah/queue/[QUEUE_NAME] will be used.
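A minimal sketch of such a handler for a Python service on the flexible environment, assuming Flask and a hypothetical queue named my-queue:
from flask import Flask, request

app = Flask(__name__)

# The route must match the push URL: either the default
# /_ah/queue/[QUEUE_NAME] or whatever URL you set when enqueuing the task.
@app.route('/_ah/queue/my-queue', methods=['POST'])
def handle_push_task():
    payload = request.get_data(as_text=True)
    # ... do the actual work here ...
    app.logger.info('Processing task payload: %s', payload)
    # A 2xx response marks the task as done; anything else triggers a retry
    # according to the queue's retry configuration.
    return '', 200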
Now, from the endpoint you mention, it seems you are using deferred tasks, which are a somewhat special kind. Please see this thread for a workaround that adds the needed URL handler entry. It mentions Managed VMs, but it should still work.

spring cloud data flow server cloud foundry redirect to https

I have been wrestling with this for a couple of days now. I want to deploy Spring Cloud Data Flow Server for Cloud Foundry to my org's enterprise Pivotal Cloud Foundry instance. My problem is forcing all Data Flow Server web requests to TLS/HTTPS. Here is an example of a configuration I've tried to get this working:
# manifest.yml
---
applications:
- name: gdp-dataflow-server
  buildpack: java_buildpack_offline
  host: dataflow-server
  memory: 2G
  disk_quota: 2G
  instances: 1
  path: spring-cloud-dataflow-server-cloudfoundry-1.2.3.RELEASE.jar
  env:
    SPRING_APPLICATION_NAME: dataflow-server
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL: https://api.system.x.x.io
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_ORG: my-org
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SPACE: my-space
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN: my-domain.io
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_USERNAME: user
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_PASSWORD: pass
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES: dataflow-mq
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_BUILDPACK: java_buildpack_offline
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_TASK_SERVICES: dataflow-db
    SPRING_APPLICATION_JSON: |
      {
        "server": {
          "use-forward-headers": true,
          "tomcat": {
            "remote-ip-header": "x-forwarded-for",
            "protocol-header": "x-forwarded-proto"
          }
        },
        "management": {
          "context-path": "/management",
          "security": {
            "enabled": true
          }
        },
        "security": {
          "require-ssl": true,
          "basic": {
            "enabled": true,
            "realm": "Data Flow Server"
          },
          "user": {
            "name": "dataflow-admin",
            "password": "nimda-wolfatad"
          }
        }
      }
  services:
    - dataflow-db
    - dataflow-redis
Despite the security block in SPRING_APPLICATION_JSON, the Data Flow Server's web endpoints are still accessible via insecure HTTP. How can I force all requests to HTTPS? Do I need to customize my own build of the Data Flow Server for Cloud Foundry? I understand that PCF's proxy is terminating SSL/TLS at the load balancer, but configuring the forward headers should induce Spring Security/Tomcat to behave the way I want, should it not? I must be missing something obvious here, because this seems like a common desire that should not be this difficult.
Thank you.
There's nothing out of the box in Spring Boot proper to enable/disable HTTPS and at the same time intercept and auto-redirect plain HTTP -> HTTPS.
There are several write-ups online on how to create a custom Configuration class that accepts multiple connectors in Spring Boot (see this example).
Spring Cloud Data Flow (SCDF) is a simple Spring Boot application, so all of this applies to the SCDF server as well.
That said, if you intend to enforce HTTPS throughout your application interactions, there is a PCF setting [Disable HTTP traffic to HAProxy] that can be applied as a global override in Elastic Runtime - see the docs. This applies consistently to all applications and is not specific to Spring Boot or SCDF. Even Python or Node or other types of apps can be forced to interact via HTTPS with this setting.

Unable to launch task from a spring cloud data flow stream

I registered my task app in Spring Cloud Data Flow and created a definition for it, and the status shows 'unknown'. I created the stream and am trying to launch the task through task-sink, and I get an error:
java.lang.IllegalStateException: failed to resolve MavenResource:
How do I launch a task from the task-sink? Am I missing something? Any help is appreciated. Another question I have is: how do I access the payload sent via TaskLaunchRequest in my task?
S1 http | step1: transformer-rabbit | log
S2 :S1.step1 > filter --expression=payload.contains('CUSTADDRMODRQ_V15') | task-processor | task-sink
task-sink launches the task identified by the URI in the TaskLaunchRequest. It is looking for the resource, as shown in the log:
OUT Using manager EnhancedLocalRepositoryManager with priority 10.0 for /home/vcap/.m2/repository
OUT Using transporter HttpTransporter with priority 5.0 for https://repo.spring.io/libs-snapshot
and it finally fails.
The task is deployed in our repository and, as mentioned, I registered it and created the definition for it as well.
This is in a CF environment and I am using SCDF server 1.0.0.M4.
In the application.properties for the task-sink I am providing maven.remote.repositories.snapshots.url=**
task create fis-ifx-event-task --definition "fis-event-task"
My goal is to launch the task from the stream.
Thanks for the information. I am in fact using the BUILD-SNAPSHOT, as I am unable to enable tasks in the 1.0.0.M4 version. Here is the one I am using: spring-cloud-dataflow-server-cloudfoundry-1.0.0.BUILD-20160808.144306-116. I am able to register and create task definitions. The status of the task definition shows as 'unknown' even when I am using the sample task module provided by your team. But when I initiate the flow of the stream and task-sink tries to launch the task, it is unable to find the Maven resource. When I create the task definition, does the task module get deployed? I don't see any app in Pivotal Apps Manager. As mentioned earlier, I provided maven.remote.repositories.snapshot.url in the application.properties file for the task-sink application. Another thing I observed is that when I launch the task manually from the Data Flow shell it gives an error CF-UnprocessableEntity(10008): The request is semantically invalid: Unknown field(s): 'staging_disk_in_mb', 'staging_memory_in_mb' and also a message saying 'Source is empty'. Presently the task is supposed to print the timestamp and is not dependent on any input.
TaskProcessor code:
@EnableBinding(Processor.class)
@EnableConfigurationProperties(TaskProcessorProperties.class)
public class TaskProcessor {

    @Autowired
    private TaskProcessorProperties processorProperties;

    public TaskProcessor() {
    }

    @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
    @ELI(level = "info", eventType = ELIEventType.INBOUND)
    public Object setupRequest(String message) {
        Map<String, String> properties = new HashMap<String, String>();
        properties.put("payload", message);
        TaskLaunchRequest request = new TaskLaunchRequest(processorProperties.getUri(), null, properties, null);
        return new GenericMessage<>(request);
    }
}
TaskSink code:
@SpringBootApplication
@EnableTaskLauncher
@EnableBinding(Sink.class)
@EnableConfigurationProperties(TaskSinkProperties.class)
public class FisIfxEventTaskSinkApplication {

    public static void main(String[] args) {
        SpringApplication.run(FisIfxEventTaskSinkApplication.class, args);
    }
}
I provided the stream I am using earlier in the post. The sink is receiving the TaskLaunchRequest with the URI and payload, as you can see here, but it is unable to launch the task.
OUT registering [40, java.io.File] with serializer org.springframework.integration.codec.kryo.FileSerializer
2016-08-10T16:08:55.02-0600 [APP/0]
OUT Launching Task for the following resource TaskLaunchRequest{uri='maven://com.xxx:fis.ifx.event-task:jar:1.0-SNAPSHOT', commandlineArguments=[], environmentProperties={payload={"statusCode":0,"fisTopic":"CustomerDataUpdated","payloadId":"CUSTADDRMODRQ_V15","customerIds":[1597304]}}, deploymentProperties={}}
Before I begin, you have a number of questions here. In the future, it's better to break them up into multiple questions so that they are easier to find by other users and easier to answer. That being said:
A little context on the current state of things
In order to understand how things will work, it's important to understand the current state of things. The current releases of the software involved are:
Pivotal Cloud Foundry (PCF) - 1.7.12. This version is required for any task support.
Spring Cloud Task (SCT) - 1.0.2.RELEASE
Spring Cloud Data Flow CF (SCDF) - 1.0.0.BUILD-SNAPSHOT (current as of the date of this post).
Currently PCF 1.7.12+ has all the capabilities to run tasks. You can create v3 applications (the type of application used to launch a task), run it as a task, etc. However, the tooling around that functionality is not currently complete. There is no support for v3 applications in Apps Manager or the CLI. There is a plugin for the CLI that is more of a dev tool that can be used to help with some functions (it will show you logs, etc), but it is not fully functional and requires a specific version of the CLI to work [1]. This is one of the reasons that the task functionality within PCF is still considered experimental.
Spring Cloud Task is currently GA and supports all the functionality needed to effectively run tasks on CF. However, it's important to note that SCT doesn't handle orchestration so the actual launching of tasks on CF is the responsibility of either the user, or Spring Cloud Data Flow (the easier route).
Spring Cloud Data Flow's Cloud Foundry server implementation currently has functionality to launch tasks on PCF in the latest snapshots. We have validated this against 1.7.12 as well as the development branch of 1.8.
The task workflow within SCDF
Tasks are fundamentally different from stream applications within the context of SCDF. When you create a stream definition, you are given the option to deploy it. What this actually does is download the Spring Boot über-jars and deploy them to PCF as long-running processes. If they go down, PCF will relaunch them as expected, etc.
Tasks, on the other hand, are not deployed. They are launched. The difference is that while you create a task definition, nothing is deployed until you click launch. And when the task completes, the software is shut down and cleaned up. So while a stream definition may have states, it's really a one-to-one relationship between the definition and the deployed software, whereas with a task you can launch a task definition as many times as you want.
Your issues
Reading through your post, I see a few things that you are struggling with. Let me see if I can help:
Task definitions within SCDF and launching them via a stream - When launching a task from a stream, the task registry within SCDF is not used. The sink expects the URL for the resource to be within the TaskLaunchRequest.
Apps Manager and tasks - As mentioned above, there is no support for v3 applications in Apps Manager yet so you won't be able to see your tasks there.
Viewing the logs - In order to debug what's going wrong with launching your task on CF, you're going to want to view the logs. To do so, use the v3 CLI plugin mentioned above to view them. It's important to note that you can only tail live logs with the plugin, not view logs that have previously been rendered. Because of that, when testing, you'll want to tail the logs as soon as the app is created, before it's launched.
Error in SCDF Shell - The error you received from the SCDF shell (CF-UnprocessableEntity(10008):...) leads me to wonder if you have both the correct version of PCF (1.7.12+) and the correct version of the following other libraries:
spring-cloud-deployer-cloudfoundry - The latest snapshots
cf-java-client - 2.0.0.M10+
reactor-core - 3.0.0.RC1+
I hope this helps!
[1] https://github.com/cloudfoundry/v3-cli-plugin
Task support is not available in 1.0.0.M4 release of SCDF's CF-server. In this release, the task commands/REST-APIs should be disabled - see here. And for that reason, you wouldn't see any docs related to Tasks in the 1.0.0.M4 reference guide.
That said, Task support is available/enabled in the BUILD-SNAPSHOT release. If you're building the CF-server locally and pushing it to CF, you can take advantage of the task commands in the shell to create and launch task definitions.