Is the HTTP Plugin supported in Real-time Google Data Fusion pipeline?

I'm trying to create a very basic real-time GDF pipeline where the source is an HTTP Property plugin that retrieves some info from an HTTP endpoint. However, whenever I validate, I get an unhelpful "org.apache.spark.streaming.dstream.DStream" error. Has anyone gotten a real-time source version of the HTTP Property plugin to work in Data Fusion?
I made sure to fill in all the required fields, and I tested the endpoint through a different client, Postman, to confirm that the endpoint itself works.

The HTTP plugin has both a batch source and a streaming source:
https://github.com/data-integrations/http/tree/develop/src/main/java/io/cdap/plugin/http/source
Could you attach the logs of the pipeline studio service and your pipeline JSON?

Related

Using AWS Java SDK 2.0 WebIdentityTokenFileCredentialsProvider gives SdkClientException

I have an application that already works using Kinesis. The application uses AWS Session Credentials, but we are switching to using either AWS Session Credentials or a Web Identity Token (software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider), depending on the deployment environment.
When I add the code to use WebIdentityTokenFileCredentialsProvider, I get the stack trace below. I can't share the code, but rest assured I am setting an HTTP client for Kinesis. However, the stack trace shows that a default HTTP client is being configured by the provider deep within the AWS SDK code. I have no influence over the credentials provider setting the HTTP client, as WebIdentityTokenFileCredentialsProvider doesn't give me a way to tell it not to set a default HTTP client.
I know one option is to create my own implementation of WebIdentityTokenFileCredentialsProvider, but I'd rather not do that.
Question: What else can I do to work around this?
Caused by: software.amazon.awssdk.core.exception.SdkClientException: Multiple HTTP implementations were found on the classpath. To avoid non-deterministic loading implementations, please explicitly provide an HTTP client via the client builders, set the software.amazon.awssdk.http.service.impl system property with the FQCN of the HTTP service to use as the default, or remove all but one HTTP implementation from the classpath
at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:102)
at software.amazon.awssdk.core.internal.http.loader.ClasspathSdkHttpServiceProvider.loadService(ClasspathSdkHttpServiceProvider.java:62)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.Spliterators$ArraySpliterator.tryAdvance(Spliterators.java:1002)
at java.base/java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:129)
at java.base/java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:527)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:513)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:150)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.findFirst(ReferencePipeline.java:647)
at software.amazon.awssdk.core.internal.http.loader.SdkHttpServiceProviderChain.loadService(SdkHttpServiceProviderChain.java:44)
at software.amazon.awssdk.core.internal.http.loader.CachingSdkHttpServiceProvider.loadService(CachingSdkHttpServiceProvider.java:46)
at software.amazon.awssdk.core.internal.http.loader.DefaultSdkHttpClientBuilder.buildWithDefaults(DefaultSdkHttpClientBuilder.java:40)
at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.lambda$resolveSyncHttpClient$7(SdkDefaultClientBuilder.java:343)
at java.base/java.util.Optional.orElseGet(Optional.java:364)
at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.resolveSyncHttpClient(SdkDefaultClientBuilder.java:343)
at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.finalizeSyncConfiguration(SdkDefaultClientBuilder.java:282)
at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.syncClientConfiguration(SdkDefaultClientBuilder.java:178)
at software.amazon.awssdk.services.sts.DefaultStsClientBuilder.buildClient(DefaultStsClientBuilder.java:27)
at software.amazon.awssdk.services.sts.DefaultStsClientBuilder.buildClient(DefaultStsClientBuilder.java:22)
at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder.build(SdkDefaultClientBuilder.java:145)
at software.amazon.awssdk.services.sts.internal.StsWebIdentityCredentialsProviderFactory$StsWebIdentityCredentialsProvider.<init>(StsWebIdentityCredentialsProviderFactory.java:71)
at software.amazon.awssdk.services.sts.internal.StsWebIdentityCredentialsProviderFactory$StsWebIdentityCredentialsProvider.<init>(StsWebIdentityCredentialsProviderFactory.java:55)
at software.amazon.awssdk.services.sts.internal.StsWebIdentityCredentialsProviderFactory.create(StsWebIdentityCredentialsProviderFactory.java:47)
at software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider.<init>(WebIdentityTokenFileCredentialsProvider.java:86)
at software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider.<init>(WebIdentityTokenFileCredentialsProvider.java:46)
at software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider$BuilderImpl.build(WebIdentityTokenFileCredentialsProvider.java:200)
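The exception message itself points to a workaround that avoids reimplementing the provider: pin the default HTTP implementation via the software.amazon.awssdk.http.service.impl system property, so the STS client built internally by the provider resolves its HTTP client deterministically instead of scanning the classpath. A minimal sketch, assuming the Apache client (software.amazon.awssdk:apache-client) is one of the implementations on your classpath:

    import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
    import software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider;

    public class WebIdentityWorkaround {
        public static void main(String[] args) {
            // Pin the HTTP implementation the SDK loads for any client built
            // without an explicit httpClient (such as the provider's internal STS client).
            System.setProperty(
                    "software.amazon.awssdk.http.service.impl",
                    "software.amazon.awssdk.http.apache.ApacheSdkHttpService");

            // The provider can now build its internal STS client deterministically.
            AwsCredentialsProvider provider =
                    WebIdentityTokenFileCredentialsProvider.create();
        }
    }

The same property can be passed as a -D flag at startup instead. Alternatively, if your SDK version includes it, the sts module ships software.amazon.awssdk.services.sts.auth.StsWebIdentityTokenFileCredentialsProvider, whose builder accepts a preconfigured StsClient, so you can supply one with an explicit HTTP client rather than relying on the system property.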

Send HTTP request to specific URL when change occurs in source repository (Google Cloud)

I'm currently trying to set up my application with JetBrains Space. It has a feature that can receive a 'push' notification from a repository in order to update a mirror, but it requires that an HTTP request be sent to a certain address.
The only way I can think of to do this is to have a Cloud Function connected to a Pub/Sub topic that listens for changes in the repository (sketched below), which is a lot of extra overhead.
There must be a simpler way!
Thanks!
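For reference, the Cloud Function approach described above is not much code. A minimal sketch using the Java Functions Framework (com.google.cloud.functions:functions-framework-api); the mirror URL is a hypothetical placeholder for whatever address Space expects, and the function is assumed to be deployed with a trigger on the Pub/Sub topic that Cloud Source Repositories publishes repository events to:

    import com.google.cloud.functions.BackgroundFunction;
    import com.google.cloud.functions.Context;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RepoPushForwarder implements BackgroundFunction<RepoPushForwarder.PubSubMessage> {

        // Hypothetical address that JetBrains Space expects the push notification on.
        private static final String MIRROR_URL = "https://example.jetbrains.space/mirror/hook";

        @Override
        public void accept(PubSubMessage message, Context context) throws Exception {
            // Fire the HTTP request whenever the repository publishes a change event.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(MIRROR_URL))
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.discarding());
        }

        // Minimal shape of the Pub/Sub event payload.
        public static class PubSubMessage {
            public String data;
        }
    }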

How to download a file when the application (MuleSoft 3.9) is deployed on a server?

I have created an application in Mule 3.9 which converts JSON to Excel. I need to deploy it on a server so that it can be used by a larger audience.
The flow uses HTTP Connector --> Transformer --> File Connector.
I need the application to work in such a way that when it is deployed on Pivotal Cloud Foundry (PCF), anyone who sends a request to it via Postman will have the Excel file downloaded to their local machine.
How can I achieve this?
PS: Since not everyone will have access to log in to the server and retrieve the file, getting the Excel sheet onto the requester's local machine is the only way I can think of. Any other suggestions are welcome.
Request: JSON request sent via Postman
Response: converted Excel sheet
There is probably no way the File connector can reach the client's local machine, so I would rule that out. The File connector only has access to the file system of the server on which the application is deployed.
The usual way to do this is to set the file, in this case the Excel payload resulting from the Transformer, as the payload at the end of the flow so that it is returned to Postman as the body of the HTTP response. You may need to set the right content type. Postman can handle a binary response. No file handling is involved.
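Mule 3 flows are configured in XML rather than Java, so the following is only a language-agnostic illustration of the HTTP mechanics described above, written against the JDK's built-in HttpServer: the converted bytes go out as the response body, with a Content-Type for .xlsx and a Content-Disposition header so clients save the response as a file. The buildExcelPlaceholder method is a hypothetical stand-in for the JSON-to-Excel transformer:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class ExcelResponseDemo {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
            server.createContext("/convert", exchange -> {
                // In the Mule flow this would be the Excel payload from the Transformer.
                byte[] excelBytes = buildExcelPlaceholder();
                // Content type for .xlsx; Content-Disposition makes clients save it as a file.
                exchange.getResponseHeaders().set("Content-Type",
                        "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
                exchange.getResponseHeaders().set("Content-Disposition",
                        "attachment; filename=\"report.xlsx\"");
                exchange.sendResponseHeaders(200, excelBytes.length);
                try (OutputStream body = exchange.getResponseBody()) {
                    body.write(excelBytes);
                }
            });
            server.start();
        }

        // Hypothetical stand-in for the JSON-to-Excel conversion step.
        private static byte[] buildExcelPlaceholder() {
            return "not a real workbook".getBytes(StandardCharsets.UTF_8);
        }
    }

In the Mule flow itself, the equivalent is leaving the Excel payload as the message payload at the end of the flow and setting the Content-Type (and, for a forced download, Content-Disposition) on the HTTP listener's response.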

Is it possible to use HTTP transcoding (to gRPC) without Google Cloud Platform? (node-grpc)

Sorry for the basic question (I'm new to gRPC).
Is it possible to use HTTP transcoding without Google Cloud Platform and Endpoints?
(Referring to this article: https://cloud.google.com/endpoints/docs/grpc/transcoding)
I'm currently trying to create a mock application where some sort of frontend with a UI (or a headless browser to begin with) sends HTTP requests to the Extensible Service Proxy, and ESP transcodes each HTTP request to HTTP/2 so that it can be sent as a request to our gRPC services. I think Kubernetes is overkill since we'll only have a few containers (and I'm not too familiar with deploying to it).
I'm trying to just use grpc-node, and want to do the HTTP mapping in ESP.
Can we just add import "google/api/annotations.proto"; to our proto file and get this HTTP mapping functionality?
As mentioned by DazWilkin, your best option would be to use Envoy Proxy, which can perform the gRPC-JSON transcoding itself. Note that importing google/api/annotations.proto only declares the HTTP mappings in your proto file; a proxy such as ESP or Envoy's grpc_json_transcoder filter still has to be given a compiled proto descriptor in order to actually perform the transcoding.
If you are used to using Docker, there is a container of the application available here.
Regards,
Frederic

Simple sender/receiver HL7 messaging service using WSO2

I'm looking for an open-source ESB solution for implementing a messaging service based on the HL7 protocol.
The best candidate seems to be WSO2, so I've just downloaded and installed the latest version (4.8.0).
After installing and configuring the HL7 transport through the Axis repository, I created a Proxy Service according to the documentation (Creating an HL7 Proxy Service).
How can I now test whether the service is correctly implemented, by creating a simple sender/receiver?
Note: I found a tutorial, but on launching the command "ant hl7acceptor" I get the following error: Target "hl7acceptor" does not exist in project "samples".
How can I now test whether the service is correctly implemented, by creating a simple sender/receiver?

Yes, you can write a simple client and server to test this.
For the client, you can use the HAPI test tool to send messages. For the server, write simple server code in Java. Check this sample code
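As an illustration, a minimal HAPI-based receiver along those lines (assuming hapi-base plus a hapi-structures jar on the classpath, and a HAPI version with the generic ReceivingApplication interface) might look like:

    import ca.uhn.hl7v2.DefaultHapiContext;
    import ca.uhn.hl7v2.HL7Exception;
    import ca.uhn.hl7v2.HapiContext;
    import ca.uhn.hl7v2.app.HL7Service;
    import ca.uhn.hl7v2.model.Message;
    import ca.uhn.hl7v2.protocol.ReceivingApplication;
    import ca.uhn.hl7v2.protocol.ReceivingApplicationException;
    import java.io.IOException;
    import java.util.Map;

    public class SimpleHl7Server {
        public static void main(String[] args) throws InterruptedException {
            HapiContext context = new DefaultHapiContext();
            // Listen for MLLP traffic on port 8888 without TLS.
            HL7Service server = context.newServer(8888, false);
            // Accept every message type and trigger event.
            server.registerApplication("*", "*", new ReceivingApplication<Message>() {
                @Override
                public Message processMessage(Message message, Map<String, Object> metadata)
                        throws ReceivingApplicationException, HL7Exception {
                    System.out.println("Received:\n" + message.encode());
                    try {
                        return message.generateACK(); // reply with an acknowledgement
                    } catch (IOException e) {
                        throw new ReceivingApplicationException(e);
                    }
                }

                @Override
                public boolean canProcess(Message message) {
                    return true;
                }
            });
            server.startAndWait();
        }
    }

Point your sender (for example, the HAPI TestPanel) at port 8888 and you should see each message printed and an ACK returned.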