Hadoop: Mapper logs are not printing in application logs - mapreduce

I have log statements in my mapper, using slf4j/log4j, that are not showing up in the application (port 8090) logs.
Do I have to configure anything for this in Hadoop 2.x? Everything works well in 1.x.
Thanks in advance for your help.

Logging works a bit differently in Hadoop 2.x.
Please follow the steps below:
1. You will find container-log4j.properties inside the hadoop-yarn-server-nodemanager jar file. Extract it, make your custom changes, and update the jar with the command below (a full sketch of this cycle follows these steps):
jar uf /home/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar container-log4j.properties
2. Alternatively, you can make changes in hadoop-config.sh (you can specify custom logger settings in the line below):
HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.root.logger=${HADOOP_ROOT_LOGGER:-INFO,console}"
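For reference, here is a sketch of the full extract/edit/repack cycle for step 1, assuming Hadoop is installed under /home/hadoop; adjust the paths and jar version to your installation:
cd /tmp
# Pull the default config out of the NodeManager jar
jar xf /home/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar container-log4j.properties
# Edit container-log4j.properties here (e.g. raise the root logger to DEBUG)
# Put the modified file back into the jar
jar uf /home/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar container-log4j.properties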

Related

Flink job using JNI on EMR

I am trying to invoke a native library from within a Flink pipeline.
The environment is:
EMR 5.34
Flink 1.13.1
I have built the uber (fat) jar and made sure the .so file is present in the JAR file.
However, I am facing the exception below when starting up the Flink application.
Appreciate any pointers.
Caused by: java.lang.UnsatisfiedLinkError: no <<my native library artifact name>> in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1860)
at java.lang.Runtime.loadLibrary0(Runtime.java:871)
Thank you,
Amit
I was able to resolve this, at least in "Session" mode, by setting the config parameters below in the flink-conf.yaml file.
env.java.opts: "-Djava.library.path=<<path to libraries>>"
containerized.master.env.LD_LIBRARY_PATH: "<<path to libraries>>"
containerized.taskmanager.env.LD_LIBRARY_PATH: "<<path to libraries>>"
You also need to use StreamExecutionEnvironment.registerCachedFile to distribute the extracted files from the JobManager to the TaskManagers involved.
On the driver side:
StreamExecutionEnvironment.getExecutionEnvironment().registerCachedFile("<directory where files are extracted>", "somekey")
Hope this helps anyone looking for an approach to this kind of scenario.
You can then access these cached files and store them in the directory configured in flink-conf.yaml, so that they are on the library path at execution time:
getRuntimeContext().getDistributedCache().getFile("somekey")
To be able to access the RuntimeContext, you need to extend RichMapFunction.
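As a minimal sketch of the worker side (the class name, the "somekey" cache key, the libmylib.so file name, and the JNI entry point are placeholders, not names from the original setup):
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import java.io.File;

public class NativeCallMapper extends RichMapFunction<String, String> {
    @Override
    public void open(Configuration parameters) throws Exception {
        // Resolve the directory registered on the driver under "somekey"
        File libDir = getRuntimeContext().getDistributedCache().getFile("somekey");
        // Load the shared object by absolute path before the first native call
        System.load(new File(libDir, "libmylib.so").getAbsolutePath());
    }

    @Override
    public String map(String value) {
        return callNative(value); // hypothetical JNI entry point
    }

    private native String callNative(String input);
}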
Update:
With all the above changes, when I run the Flink pipeline for the first time, it still complains that the library was not found. I did check the directory into which I extract the distributed cache, and the libraries are there.
Subsequent runs after the first failure succeed. I am not sure why I am seeing this behavior.
Update:
I made sure that the directory where we extract the libraries is readily available when the EMR cluster is created, and it worked like a charm. I created this directory by configuring a bootstrap action.
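A bootstrap action along these lines could pre-create and populate the directory on every node; the bucket name, source path, and target directory here are purely illustrative assumptions:
#!/bin/bash
# Hypothetical EMR bootstrap action: create the native-library directory
# and copy the .so files onto each node before Flink starts
mkdir -p /mnt/native-libs
aws s3 cp s3://my-bucket/native-libs/ /mnt/native-libs/ --recursive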

WSO2 Tomcat deployment logs

I need your help.
How can I get logs in WSO2 EI while a WAR is deploying?
I want to see the process and get errors like in a standalone Tomcat instance.
Is it possible?
I've tried changing the log level in log4j.properties, catalina-server.xml, and many other places, but there is still no result.
Kind Regards,
Kirill
I found the following solution:
In the file /opt/ame-a/repository/conf/log4j.properties
set
log4j.logger.org.apache.catalina=DEBUG
This way you will also be able to implement logging inside your application and write application logs wherever you need; for example, I write mine to
/opt/ame-a/lib/tomcat/logs/monitor.log
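For example, a hypothetical log4j.properties addition that routes the Catalina DEBUG output to that file (the appender name and pattern are illustrative, not from the original setup):
# Send org.apache.catalina DEBUG output to a dedicated file
log4j.logger.org.apache.catalina=DEBUG, TOMCAT_LOG
log4j.appender.TOMCAT_LOG=org.apache.log4j.FileAppender
log4j.appender.TOMCAT_LOG.File=/opt/ame-a/lib/tomcat/logs/monitor.log
log4j.appender.TOMCAT_LOG.layout=org.apache.log4j.PatternLayout
log4j.appender.TOMCAT_LOG.layout.ConversionPattern=%d [%t] %-5p %c - %m%n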
You need to set the log level in the logging-bridge.properties file.
Open the wso2as/repository/conf/etc/logging-bridge.properties file and add:
org.apache.catalina.level=FINEST

Error when adding web service in Script Task in SSIS?

I have a task to send an RS report (event-driven) in SSIS. There is one script task that extracts the report and another script task that sends the email.
In my script task I'm able to add the web reference, reference it in the code, build successfully and save it.
http://{YourReportServerURL}/reportserver/reportexecution2005.asmx
BUT when I exit the script task editor, I get a script error saying "Script contained in the package have compilation errors. Do you want to save changes?"
A red cross appears on the task, and it says "The binary code for the script is not found." I set DelayValidation to true to remove the red cross.
When I execute the package, it fails with the same "The binary code for the script is not found" error.
By the way these are my references:
http://www.travisgan.com/2013/09/ssrs-data-drive-subscriptions-part-3.html
http://www.macaalay.com/2014/04/02/how-to-create-data-driven-report-subscriptions-in-sql-server-standard-version/
I'm using VS 2010. How can I solve this problem? I need your help!
Thanks in advance.
Please take a look at SSIS Package failing because “script task is failing because the script is not precompiled”. Your problem looks similar to the one described in that issue.
Basically, you need to make sure that the VSTA project's target platform is set to MSIL, then build the project and save it. After that, close the VSTA editor and click OK on the Script Task.

Can ember-cli watch and build automatically without running the server?

The title is pretty much my question. I'm serving the dist directory differently and would still like the benefit of auto-builds, but I don't need to run the server. I looked in the docs and the CLI help but didn't see anything specific. I know the CLI help doesn't contain everything, because it doesn't list ember build, which is available.
If I understand correctly, you want the ember build command to watch for changes in the file tree and rebuild on a change?
ember build --watch was implemented a while back and triggers a rebuild when a file changes. I tested it just now and it worked on 0.2.7; I'm not sure which version it came in, though. Let me know if this is not the answer you are looking for.
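For example (the environment and output-path flags are optional additions shown on the assumption you serve dist/ yourself):
# Rebuild on every file change, without serving
ember build --watch

# Optionally pick the environment and output directory explicitly
ember build --watch --environment=development --output-path=dist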

Testing an Apps Script

I have created a script via Drive | New | Apps Script.
Now I want to test it. This must be so fundamental that it should be obvious, but it is not to me.
My script is for a Document. I used the code from this "Translate" example: https://developers.google.com/apps-script/quickstart/docs
I have tried creating a new document and then, from the menu Tools | Script editor, looking for my saved project, but "Open recent projects" is empty.
How do you test/run a saved Apps Script project for a Google Document?
Thank you in advance.
If it was created via Google Drive, you can type the following in the search bar while in Drive (drive.google.com):
Type:Script
This should return all the scripts created manually that are not attached to a specific Sheet/Doc/Form (i.e. you used the Drive tool to create a script, or used the URL script.google.com).
If you created it this way and want to run a function that interacts with a Google Sheet/Doc, see the following example.
SpreadsheetApp.getActiveSpreadsheet()
change to:
SpreadsheetApp.openById('SHEETS ID GOES INBETWEEN THESE BRACKETS').getSheetByName('Sheet1')
The unique ID is in the spreadsheet's URL: https://docs.google.com/spreadsheets/d/UNIQUE-ID-IS-HERE
Alternatively, you could create a new Apps Script using Tools > Script editor.
All tests in Apps Script are run using the '>' (play) icon or 'Run', then selecting your function.
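For example, a minimal test function you could run from the editor (the spreadsheet ID and sheet name are placeholders):
function testStandaloneScript() {
  var sheet = SpreadsheetApp.openById('YOUR-SPREADSHEET-ID').getSheetByName('Sheet1');
  Logger.log(sheet.getRange('A1').getValue()); // output appears in the editor's Logs view
}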
Don't create your script via Drive | New | Apps Script. Open a new Google Doc, open Tools | Script editor..., and write your code in this new Apps Script project.
That should work.
Short answer
Follow the instructions in the tutorial you referenced.
Explanation
The path you followed creates a standalone script project, but the tutorial you referred to is about a script bound to a Google Document. Its code includes triggers and methods that only work in that kind of script project.
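For instance, the quickstart relies on a container-bound trigger like the sketch below; the menu label and 'showSidebar' function name follow the tutorial's pattern but are illustrative, and onOpen only fires for scripts bound to the document:
function onOpen() {
  // Add a custom menu when the bound Document is opened
  DocumentApp.getUi()
      .createMenu('Translate')
      .addItem('Start', 'showSidebar') // 'showSidebar' would be defined elsewhere in the project
      .addToUi();
}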