I am getting this error on my GetHDFS processor and I don't understand why. I set the
Hadoop Configuration Resources, Kerberos Principal, and Kerberos Keytab, and there are files in the path; I just checked via SuperPuTTY and it is a valid path.
Currently the GetHDFS is just linked to a LogAttribute processor, as I am trying to get each step working before moving to the next.
Overall process: GetHDFS -> PutEmail. I am trying to print out a count of the rows of the file at that path (a CSV).
It looks like your core-site.xml has a default filesystem that references a hostname 'gwhdpdevnnha' that is not reachable from where NiFi is running. You can troubleshoot this outside of NiFi by checking whether you can ping that hostname from a terminal on the NiFi host.
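For example, from the NiFi host you could check roughly like this (a sketch; the core-site.xml path is just an example, use whatever file you pointed the Hadoop Configuration Resources property at):

# See which default filesystem (and namenode hostname) NiFi is being pointed at
grep -A1 'fs.defaultFS' /etc/hadoop/conf/core-site.xml

# Check that the hostname from that value resolves and is reachable from the NiFi host
ping -c 3 gwhdpdevnnha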
I just installed a new WebSphere ESB 8.5.5 on Linux CentOS 7.
I did the entire installation as the root user.
Then I did the following steps to create a web service:
1) create server with user wasadmin
2) Generate plugin
3) Propagate plugin
In the last step I get the error:
PLGC0049E: The propagation of the plug-in configuration file failed for the Web server. test2lsoa01-02Node01Cell.XXXXXXXXX-node.IHSWebserver.
Error A problem was encountered transferring the designated file. Make sure the file exists and has correct access permissions.
The file /u01/apps/IBM/WebSphere/profiles/ApplicationServerProfile1/config/cells/test2lsoa01-02Node01Cell/nodes/XXXXX-node/servers/IHSWebserver/plugin-cfg.xml exists.
For testing I gave plugin-cfg.xml chmod 777.
The error still does not go away.
Can someone help?
User wsadmin would be the user attempting to move the file. Ensure that ID can read /u01/apps/IBM/WebSphere/profiles/ApplicationServerProfile1/config/cells/test2lsoa01-02Node01Cell/nodes/XXXXX-node/servers/IHSWebserver/plugin-cfg.xml. There is also a target directory (in the web server installation that plugin-cfg.xml is being moved to): if propagating using node sync, ensure that wsadmin has write access to that target location; if using IHS admin, ensure that the userid/password defined in the web server definition has write access to the target location.
A good test would be to access the source plugin-cfg.xml using the wsadmin userid and attempt to manually move the file to the target location with the appropriate ID (based on whether you use node sync or IHS admin).
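A rough sketch of that manual test from the command line (the target directory under the plugin installation is only an example; use whatever path your web server definition actually points to):

# Switch to the ID that performs the propagation (node sync user or IHS admin user)
su - wsadmin

# Confirm read access to the generated plugin-cfg.xml
ls -l /u01/apps/IBM/WebSphere/profiles/ApplicationServerProfile1/config/cells/test2lsoa01-02Node01Cell/nodes/XXXXX-node/servers/IHSWebserver/plugin-cfg.xml

# Try copying it to the web server's plugin config directory (example target path)
cp /u01/apps/IBM/WebSphere/profiles/ApplicationServerProfile1/config/cells/test2lsoa01-02Node01Cell/nodes/XXXXX-node/servers/IHSWebserver/plugin-cfg.xml /opt/IBM/WebSphere/Plugins/config/IHSWebserver/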
I have an HDP (Hortonworks) 2.5.3 cluster, and MapReduce jobs in YARN are failing with the error:
java.io.IOException: DistCp failure: Job job_1498784032636_0015 has failed:
Application application_1498784032636_0015 failed 2 times due to AM Container for appattempt_1498784032636_0015_000002 exited with exitCode: -1000
For more detailed output, check the application tracking page: http://asterdart0005.labs.teradata.com:8088/cluster/app/application_1498784032636_0015 Then click on links to logs of each attempt.
Diagnostics: Application application_1498784032636_0015 initialization failed (exitCode=255) with output:
main : command provided 0
main : run as user is hdfs
main : requested yarn user is hdfs
Requested user hdfs is banned
Later I googled and it seems the hdfs user is a banned user, per the configuration in the file /etc/hadoop/conf/container-executor.cfg on each node. Here is the content of the file:
yarn.nodemanager.local-dirs=/hadoop/yarn/local
yarn.nodemanager.log-dirs=/hadoop/yarn/log
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred,bin
min.user.id=500
I have modified the file on all nodes (namenode, edge node and data nodes) as below:
yarn.nodemanager.local-dirs=/hadoop/yarn/local
yarn.nodemanager.log-dirs=/hadoop/yarn/log
yarn.nodemanager.linux-container-executor.group=hadoop
#banned.users=hdfs,yarn,mapred,bin
min.user.id=500
Then I restarted all HDFS, YARN and MapReduce2 services through Ambari. After restarting, my jobs fail with the same error, and when I checked the content of /etc/hadoop/conf/container-executor.cfg it looks like it was reset to its initial state, as below:
yarn.nodemanager.local-dirs=/hadoop/yarn/local
yarn.nodemanager.log-dirs=/hadoop/yarn/log
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred,bin
min.user.id=500
Any idea what the solution is here to remove the users from the banned-users list?
The first thing to note is that you cannot comment out the banned.users line; instead, set the correct users in the banned.users list (i.e. if you do not want to ban the hdfs user, change banned.users=hdfs,yarn,mapred,bin to banned.users=yarn,mapred,bin). If you comment out the banned.users list, then hdfs, yarn and mapred will be banned by default anyway.
Another thing: you can follow the steps below to propagate the change to all nodes.
Go to Ambari server node
Modify /var/lib/ambari-server/resources/common-services/YARN/<version>/package/templates/container-executor.cfg.j2 to configure the banned users (see the sketch after these steps).
Restart Ambari server and all Ambari agents
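For example, after the edit the relevant line of the template might look like the sketch below (the exact template contents can differ between Ambari versions, and the final user list depends on which users you actually want to ban), followed by the restarts:

# In container-executor.cfg.j2: keep a banned.users line, but drop hdfs from it
banned.users=yarn,mapred,bin

# Push the change out by restarting Ambari
ambari-server restart      # on the Ambari server node
ambari-agent restart       # on every node running an Ambari agent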
Is there any log rotation in Vora 1.3? After two months of running Vora 1.3 I realized I am almost out of disk space on my nodes because /var/log/vora-manager is around 46 GB. So I had to stop it, delete the logs and restart.
But maybe I missed some setting?
Edit 1: The log file is supposed to be stored in /var/log/vora/vora-manager, not the folder I mentioned above, but I still saw a huge log file there. The /var/log/vora-manager path is also mentioned on line 178 of the control.py script that is supposed to start a vora-manager worker.
You are right -- the vora-manager log file is not written into the standard /var/log/vora directory, instead it is written to /var/log/vora-manager. This has been corrected in Vora 1.4.
The logs should be rotating based on the vora_manager_log_max_file_size variable which is also set in Ambari.
Something must be going wrong when Vora tries to rotate the logs. I propose you search through your log file for the following line and see whether it is followed by some kind of error:
vora.vora-manager-master: [c.740b0d26] : Running['sudo', '-i', '-u',
'root', '/usr/sbin/logrotate', '/etc/logrotate.d/vora-manager-master']
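For example, something like this might show whether the rotation command failed (the file name below is just a placeholder; point it at the large log file you found under /var/log/vora-manager):

grep -n -A 5 'logrotate' /var/log/vora-manager/vora-manager-master.log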
You can also change the verbosity of the logger by setting the vora_manager_log_level config variable in Ambari from INFO to WARNING. Be aware that this will hide the log rotation messages.
I get the following error: Service Invocation Exception. I am working with version 8.7 of IBM InfoSphere DataStage and QualityStage Designer, using a server job in which I have a sequential file, a web service, and a sequential file.
Any idea what could be the reason for this error?
Make sure you have chosen the proper DataStage job type and that the stage that operates on the web service is configured properly.
You should also check the DataStage logs to get more information about the root cause of the error.
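If you prefer the command line over the Director client, the job log can also be summarized with dsjob (a sketch; MyProject and MyServerJob are placeholders for your actual project and job names):

# List the log entries (including fatal errors) for the last run of the job
dsjob -logsum MyProject MyServerJob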
I am trying my hand at WSO2 BAM.
I tried to run the examples (“HTTPD Logs Analysis” or “KPI Monitoring Sample”), but I get the following message after creating the toolbox, when the data is published:
java.io.IOException : Cannot run program in “C:\Program” (in directory “C:\wso2bam-2.3.0”): CreateProcess error=2 , The specified file cannot be found.
Any tips on this issue?
Sometimes on Windows, 'Program Files' causes problems because of the space in the name. Therefore, if the WSO2 BAM distribution is under C:\Program Files, or there is a space anywhere in the distribution path, move it to a location without spaces.
Also check where your Java installation is; it should likewise be in a location whose path contains no spaces.
Checking both should resolve your problem.
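For example, from a Windows command prompt this could look roughly like the following (a sketch; the JDK path is only an illustration of a space-free location, adjust it to your actual installation):

REM Point JAVA_HOME at a JDK installed in a path without spaces
set JAVA_HOME=C:\Java\jdk1.7.0_45

REM Keep the BAM distribution itself in a space-free path and start it from there
cd C:\wso2bam-2.3.0\bin
wso2server.bat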