How to submit a job using an uploaded jar in Apache Flink? - mapreduce

I've already uploaded a jar (generated from a word count Java program) to the Apache Flink web console through an HTTP POST request via curl, and the GET /jars API shows the uploaded jar.
When I try to submit a job using that jar, it throws this error:
Caused by: org.apache.flink.client.program.ProgramInvocationException:
JAR file does not exist
'/tmp/flink-web-8aa36f99-87fb-4fbc-b155-237fd833fc32/:949611ce-345a-4cd5-986b-8ff9b0700852_WordCount.jar'
This is what my POST request looks like:
http://localhost:8081/jars/:949611ce-345a-4cd5-986b-8ff9b0700852_WordCount.jar/run
I followed the official docs for reference. Where am I going wrong? Any help would be appreciated.

Ensure that the jar file actually exists in your temp directory. There is a ':' in the path; is that correct?
I studied the REST API recently and successfully submitted my job with this POST request:
http://host:port/jars/29525e98-3ece-49c1-85d1-5301a5a38900_myjob.jar/run?allowNonRestoredState=false&entry-class=&parallelism=&program-args=&savepointPath=
You can also submit the job via the Flink Dashboard and find the correct URL through Chrome DevTools in the Network tab (or something similar).
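For reference, here is a minimal Python sketch of the same flow against Flink's REST API, using the requests library (the host, port, and jar filename are placeholders, not values confirmed by the question): upload the jar, take the jar id from the upload response, then trigger the run.

import requests

BASE = "http://localhost:8081"  # placeholder JobManager address

# Upload the jar; the multipart form field must be named "jarfile".
with open("WordCount.jar", "rb") as f:
    upload = requests.post(
        f"{BASE}/jars/upload",
        files={"jarfile": ("WordCount.jar", f, "application/x-java-archive")},
    )
upload.raise_for_status()

# The "filename" field of the response holds the stored path; its
# basename is the jar id to use in later calls. Note that a valid
# jar id has no leading ':'.
jar_id = upload.json()["filename"].split("/")[-1]

# Run the job; parameters such as entry-class and parallelism are optional.
run = requests.post(f"{BASE}/jars/{jar_id}/run")
run.raise_for_status()
print(run.json())  # e.g. {"jobid": "..."}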

Related

Provide directory listings using API Gateway as a proxy to S3

I'm trying to host a static site through S3 and API Gateway (using S3 directly isn't an option due to security policy). One of the pages runs a client-side script to pull back a set of files from a specific folder on the server. I've set up the hosting following the Amazon tutorial.
For this to run, my script needs to be able to obtain the list of files for a specific folder.
If I were hosting the site on my own server using Apache, I could rely on the directory listing feature, where a GET on a folder with no index.html returns a file list. The tutorial suggests this should be possible, but I can't seem to get it to work. If I submit a request to a particular {prefix}/{identifier}, I can retrieve the specific file, but sending a request to {prefix}/ returns an error.
Is there a way I can replicate directory listings so my JavaScript can pull the list down and read it, or is the only solution to write a server-side API in Lambda?
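If the Lambda route does turn out to be necessary, a rough sketch could look like the following (the bucket name and the "prefix" query parameter are illustrative assumptions): a proxy-integration handler that returns a JSON list of object keys under a given prefix using boto3.

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-static-site-bucket"  # hypothetical bucket name

def handler(event, context):
    # With API Gateway proxy integration, query parameters arrive here.
    params = event.get("queryStringParameters") or {}
    prefix = params.get("prefix", "")
    # List keys under the prefix and return them as JSON.
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix)
    keys = [obj["Key"] for obj in resp.get("Contents", [])]
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(keys),
    }

The client-side script could then GET this endpoint and parse the returned JSON instead of an Apache-style directory listing.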

Can't put OneSignal service worker JS files in root on GitHub Pages

I have used GitHub Pages to publish my site, and I'm trying to use OneSignal there. But I can't store the SDK files in the root. I'm getting this console error:
Installing service worker failed TypeError: Failed to register a ServiceWorker for scope ('https://username.github.io/') with script ('https://username.github.io/OneSignalSDKWorker.js?appId=<MY_APP_ID>'): A bad HTTP response code (404) was received when fetching the script.
A 404 means the service worker script is not where the SDK thinks it is. Try visiting the URL in your browser (https://username.github.io/OneSignalSDKWorker.js). You should see the script there. If it is not there, you have not successfully hosted the required file.
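If you would rather script that check than click through the browser, a quick Python sketch (the username is a placeholder) might be:

import requests

# Fetch the service worker URL and confirm it returns HTTP 200, not 404.
url = "https://username.github.io/OneSignalSDKWorker.js"
resp = requests.get(url)
if resp.status_code == 200:
    print("Worker script is reachable at the site root.")
else:
    print(f"Got HTTP {resp.status_code}; the file is not hosted at the root.")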

Removing the http_access_yyyy_mm_dd.log file in WSO2 API Manager

I have one problem with API Manager.
I don't want logs of requests and responses in API Manager, because those log files are so big; I have encountered files of 20 GB. I tried to comment out the Catalina access log configuration in repository/conf/tomcat/catalina-server.xml:
<Valve className="org.apache.catalina.valves.AccessLogValve"
directory="${carbon.home}/repository/logs"
prefix="http_access_"
suffix=".log"
pattern="combined"/>
Unfortunately, after commenting out the code above, only the http_access_.log file stopped being created; http_access_yyyy_mm_dd.log was still created and requests were saved in it. I also tried changing the directory of the file above: only http_access_.log was saved in the new directory, while http_access_yyyy_mm_dd.log was still created in the ${carbon.home}/repository/logs directory.
How can I change the configuration of http_access_yyyy_mm_dd.log in WSO2 API Manager?
According to the Apache Tomcat documentation,
The name of the file is composed by concatenation of the configured prefix, timestamp and suffix.
You can simply comment out the code snippet you mentioned, which can be found in <PRODUCT_HOME>\repository\conf\tomcat\catalina-server.xml:
<!--
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="${carbon.home}/repository/logs"
       prefix="http_access_"
       suffix=".log"
       pattern="combined"/>
-->
In short, comment out the above code snippet in the mentioned file and restart the WSO2 APIM server.

Giving a value to a parameter from a file to send it through HTTP POST

I have been struggling with Jenkins lately. I want to send some parameters through HTTP POST, and I know how to do that, but I am saving an HTTP request response to a file in my workspace, and then I want to read that file and send the text I saved previously in a new HTTP request. Does anyone have any idea how I can achieve this?
Thanks in advance!!!
Install the Copy Artifact plugin ("Copy artifacts from another project"), add it in the build steps to store the file in your workspace, and then run a shell script to read the desired content from that file.
If curl works for you, that would be a simple way to send a file's contents as your POST body; see this answer.
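As a rough Python equivalent of that curl suggestion (the filename and target URL are placeholders): read the saved response file from the workspace and send its contents as the body of a new POST.

import requests

# Read the response text saved to the workspace by the earlier step.
with open("previous_response.txt", "r") as f:
    payload = f.read()

# Send the saved text as the body of a new POST request.
resp = requests.post(
    "http://example.com/api/endpoint",  # hypothetical target URL
    data=payload,
    headers={"Content-Type": "text/plain"},
)
print(resp.status_code)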
Jenkins can work with JMeter, and JMeter is a great tool for handling requests and responses; see the tutorial.

H18 Error: Django app Media Upload failing on Heroku

Our Django app is failing media uploads. This has been an off-and-on issue for us for a while; however, for about a week now, it has been consistently failing to upload media. Our media files are stored on S3.
On inspection, the uploaded files were found in the S3 buckets... However, the logs display the message below while the app throws an Application error...
I found this answer on GitHub (https://github.com/benoitc/gunicorn/issues/840):
Hi, we hit this issue in production using Flask + Gunicorn + Heroku and couldn't find a cause or a workaround.
For one particular POST request with POST parameters, the request would fail with an H18 error (sock=backend) in Heroku's router indicating that the server closed the socket when it shouldn't have.
We started decreasing the response size of that failing endpoint until we narrowed it down to around the 13k mark. If we sent less than 13k, the response would always work. If we sent more than 13k, the response would almost always not work.
Code to reproduce this is available at https://github.com/erjiang/gunicorn-issue - just deploy the repo to Heroku as is and follow the instructions in the README.