I get a java.lang.ClassNotFoundException: org.apache.flink.api.common.serialization.RuntimeContextInitializationContextAdapters
when running my Apache Flink code locally.
It works fine when I use a local generator for my data stream, but as soon as I switch to an AWS FlinkKinesisConsumer to provide the DataStream, I get the above-mentioned error.
Did I miss adding a dependency?
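For what it's worth, that class lives in flink-core, so this error usually points to a version mismatch between flink-connector-kinesis and the rest of the Flink dependencies on the classpath. A minimal sketch of aligned Maven coordinates, assuming Flink 1.12.2 and Scala 2.12 (both assumptions; adjust to your actual versions):

<!-- Keep the connector and the core Flink artifacts on the same version. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.12</artifactId>
    <version>1.12.2</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kinesis_2.12</artifactId>
    <version>1.12.2</version>
</dependency>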
Installing Superset with docker-compose. App is up and running. When adding a new database using PyAthena connector, error Unexpected error occurred, please check your logs for details happens with no details in the logs.
First, if you are using docker-compose, check whether you have added the driver to the build environment:
echo "PyAthena>1.2.0" >> ./docker/requirements-local.txt
Then rebuild the containers so the driver actually gets installed. If the driver is missing, you will get a "Driver not found" error.
Second, check your URI scheme. It must be of the following form:
awsathena+rest://AKIAXXXX:XXXXXX@athena.{region}.amazonaws.com/{database_name}?s3_staging_dir=s3://{bucket_name_for_results}
If you are missing the query-string part, you may get a mysterious error without any detailed reason.
Also note that PyAthena does not check your AK/SK against the staging bucket.
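Since secret keys often contain characters that are not URL-safe, it can help to build the URI programmatically and URL-encode each part. A minimal sketch in Python; all the credential, region, database, and bucket values below are hypothetical placeholders:

from urllib.parse import quote_plus

# Placeholder values; substitute your own credentials, region,
# database and staging bucket.
uri = (
    "awsathena+rest://{ak}:{sk}@athena.{region}.amazonaws.com/{db}"
    "?s3_staging_dir={staging}"
).format(
    ak=quote_plus("AKIAXXXX"),                       # access key id
    sk=quote_plus("XXXXXX"),                         # secret key, URL-encoded
    region="us-east-1",
    db="default",
    staging=quote_plus("s3://bucket-for-results/"),  # query results location
)
print(uri)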
I am trying to create a Lambda S3 listener leveraging Lambda as a native image. The point is to get the S3 event and then do some work by pulling the file, etc. To get the file I am using the AWS 2.x S3 client as below:
S3Client.builder().build();
This code results in
2020-03-12 19:45:06,205 ERROR [io.qua.ama.lam.run.AmazonLambdaRecorder] (Lambda Thread) Failed to run lambda: software.amazon.awssdk.core.exception.SdkClientException: Unable to load an HTTP implementation from any provider in the chain. You must declare a dependency on an appropriate HTTP implementation or pass in an SdkHttpClient explicitly to the client builder.
To resolve this I added the AWS Apache HTTP client dependency and updated the code to do the following:
SdkHttpClient httpClient = ApacheHttpClient.builder()
        .maxConnections(50)
        .build();

S3Client.builder().httpClient(httpClient).build();
I also had to add:
[
  [
    "org.apache.http.conn.HttpClientConnectionManager",
    "org.apache.http.pool.ConnPoolControl",
    "software.amazon.awssdk.http.apache.internal.conn.Wrapped"
  ]
]
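(For context: that array-of-arrays is GraalVM's dynamic proxy configuration format. Assuming it is saved in a file named proxy-config.json, a hypothetical name here, it would typically be registered with a native-image flag such as:

-H:DynamicProxyConfigurationFiles=proxy-config.json

so that those three interfaces can be proxied at run time.)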
After this I am now getting the following stack trace:
Caused by: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at java.security.cert.PKIXParameters.setTrustAnchors(PKIXParameters.java:200)
at java.security.cert.PKIXParameters.<init>(PKIXParameters.java:120)
at java.security.cert.PKIXBuilderParameters.<init>(PKIXBuilderParameters.java:104)
at sun.security.validator.PKIXValidator.<init>(PKIXValidator.java:86)
... 76 more
I am running Quarkus 1.2.0 on GraalVM 19.3.1. I am building this via Maven and the provided Docker container for Quarkus. I thought the trust store was added by default (in the build command it looks to be included), but am I missing something? Is there another way to get this to run without setting the HttpClient on the S3 client?
There is a PR, under review at the moment, that introduces an AWS S3 extension for both JVM and native modes. The AWS clients are fully Quarkified, meaning they are configured via application.properties and enabled for dependency injection. So stay tuned, as it will most probably be available in Quarkus 1.5.0.
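In the meantime, the trustAnchors error typically means the native image was built without SSL support, so no trust store is available at run time. A minimal sketch of the usual fix, assuming you can rebuild the native image, in application.properties:

quarkus.ssl.native=true

With that property, Quarkus should include the SSL and certificate support the native image needs to make HTTPS calls such as the ones the S3 client performs.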
I POST the following data to https://api.digitalocean.com/v2/droplets
{
  "name": "1470293222",
  "image": "ubuntu-16-04-x64",
  "size": "512mb",
  "user_data": "#!/bin/bash\ncurl http://www.myserver.com",
  "region": "nyc1"
}
This should create a new droplet and run the script in user_data, but no matter what I do, I can't seem to get the script to run.
Strangely, if I launch a droplet from the DigitalOcean console, which appears NOT to use the REST API, the user_data script works fine.
Does anyone have any ideas on how to make a DigitalOcean boot script work?
What response are you receiving when you send that request? Based on the DigitalOcean API documentation for creating a droplet, it appears that your size field is incorrect. Instead of using 512mb, try something like s-1vcpu-1gb.
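For illustration, a hedged sketch of the full request with a corrected size slug; $DO_TOKEN is a placeholder for your API token:

curl -X POST "https://api.digitalocean.com/v2/droplets" \
  -H "Authorization: Bearer $DO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name":"1470293222","image":"ubuntu-16-04-x64","size":"s-1vcpu-1gb","user_data":"#!/bin/bash\ncurl http://www.myserver.com","region":"nyc1"}'

If the size slug is invalid, the API should reply with an explicit error rather than silently ignoring user_data, so checking the response body is the first step.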
Scenario: I'm uploading an image file to the server. When trying to access it using curl, I only get it on every 2nd request.
e.g. curl http://staging.muserver.com/system/assets/images/000/000/test.png
When I get a 200 OK, I can see an ETag header; when it fails, I only see X-Request-Id and X-Runtime.
This error happens only on the Amazon pre-production environment; I cannot reproduce it on my local machine.
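A quick way to confirm the alternating behaviour is to hit the URL a few times in a row and watch only the status codes (the URL is just the one from the question):

for i in 1 2 3 4; do
  curl -s -o /dev/null -w "%{http_code}\n" http://staging.muserver.com/system/assets/images/000/000/test.png
done

If the codes alternate (e.g. 200, 404, 200, 404), the requests are most likely being round-robined across two backends, only one of which has the uploaded file.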
Frederick was right: I hadn't set up a load balancer myself, but EngineYard supplies one by default. Once I disabled it, everything worked fine.
I get the following error: Service Invocation Exception. I am working with version 8.7 of IBM InfoSphere DataStage and QualityStage Designer, using a server job that contains a sequential file, a web service stage, and another sequential file.
Any idea what could be the reason for this error?
Make sure you have chosen the proper DataStage job type and that the stage that operates on the web service is configured properly.
You should also check the DataStage logs to get more information about the root cause of the error.