Spring XD HDFS sink - Error creating bean 'hadoopConfiguration'

I am getting the following exception when deploying a stream with hdfs as the sink in Spring XD.
Error creating bean with name 'hadoopConfiguration': Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/JobConf
I have my spring-xd app running on yarn successfully. Appreciate your help.

The problem is with the siteMapreduceAppClassPath setting in servers.yml. The classpath should include the path to the hadoop-core jar; because that jar is not packaged with the app, deployment fails with NoClassDefFoundError.
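As an illustration, the servers.yml entry might look something like the sketch below. The exact key nesting, property spelling, and jar path depend on your Spring XD version and Hadoop distribution, so treat the values here as placeholders:
# servers.yml (sketch) - the MapReduce app classpath must cover the jar that
# contains org.apache.hadoop.mapred.JobConf (hadoop-core on Hadoop 1.x,
# hadoop-mapreduce-client-core on Hadoop 2.x)
spring:
  yarn:
    siteMapreduceAppClasspath: "/path/to/hadoop/lib/*,/path/to/hadoop-core-<version>.jar"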

Related

How to use AWS_MSK_IAM sasl mechanism with a kafka producer jar?

I have a fat jar called producer that produces messages. I want to use it to produce messages to an MSK Serverless cluster. The jar takes the following arguments:
-topic --num-records --record-size --throughput --producer.config /configLocation/
Since my MSK Serverless cluster uses IAM-based authentication, I have provided the following settings in my producer.config:
bootstrap.servers=boot-url
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
The way this jar usually works is by providing a username and password with the sasl.jaas.config property.
However, with MSK serverless we have to use the IAM role attached to our EC2 instance.
When I execute the jar using
java -jar producer.jar -topic --num-records --record-size --throughput --producer.config /configLocation/
I get the exception
Exception in thread "main" org.apache.kafka.common.config.ConfigException: Invalid value software.amazon.msk.auth.iam.IAMClientCallbackHandler for configuration sasl.client.callback.handler.class: Class software.amazon.msk.auth.iam.IAMClientCallbackHandler could not be found.
I don't understand how to make the producer jar find the class present in the external jar aws-msk-iam-auth-1.1.1-all.jar.
Any help would be much appreciated, Thanks.
I found out that it isn't possible to override the classpath specified in MANIFEST.MF when a jar is run with java -jar, even with command-line options like -cp. What worked in my case was to have the jar's pom include the missing dependency so it gets packaged into the fat jar.
When you use the -jar command-line option to run your program as an
executable JAR, the Java CLASSPATH environment variable is ignored,
and the -cp and -classpath switches are ignored as well. In this case,
you can set your Java classpath in the META-INF/MANIFEST.MF file by
using the Class-Path attribute.
https://javarevisited.blogspot.com/2011/01/how-classpath-work-in-java.html
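A minimal sketch of such a pom entry, assuming the library's usual Maven coordinates (software.amazon.msk:aws-msk-iam-auth) and the 1.1.1 version mentioned above; double-check the coordinates against the artifact you actually use:
<!-- pulls IAMClientCallbackHandler and IAMLoginModule into the fat jar -->
<dependency>
    <groupId>software.amazon.msk</groupId>
    <artifactId>aws-msk-iam-auth</artifactId>
    <version>1.1.1</version>
</dependency>
Once the dependency is packaged into the fat jar (for example via the Maven Shade plugin), the sasl.client.callback.handler.class lookup can resolve the class at runtime.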

Logstash Google Pubsub Input Plugin fails to load file and pull messages

I'm getting this error when trying to run a Logstash pipeline with a configuration that uses google_pubsub, on a Docker container running in my production env:
2021-09-16 19:13:25 FATAL runner:135 - The given configuration is invalid. Reason: Unable to configure plugins: (PluginLoadingError) Couldn't find any input plugin named 'google_pubsub'. Are you sure this is correct? Trying to load the google_pubsub input plugin resulted in this error: Problems loading the requested plugin named google_pubsub of type input. Error: RuntimeError
you might need to reinstall the gem which depends on the missing jar or in case there is Jars.lock then resolve the jars with `lock_jars` command
no such file to load -- com/google/cloud/google-cloud-pubsub/1.37.1/google-cloud-pubsub-1.37.1 (LoadError)
2021-09-16 19:13:25 ERROR Logstash:96 - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
This seems to happen randomly when re-installing the plugin. I thought it was a proxy issue, but I have the Google domain in the whitelist; it might be the wrong one, or I might be missing something. Still, that doesn't explain the random failures.
Also, when I run the pipeline on my machine I get GCP events, but when I run it on a VM no Pub/Sub messages are pulled. Could it be a firewall rule blocking them?
The error message suggests there is a problem in loading the ‘google_pubsub’ input plugin. This error generally occurs when the input Pub/Sub plugin is not installed properly. Kindly ensure that you are installing the Logstash Plugin for Pub/Sub correctly.
For example, installing Logstash Plugin for Pub/Sub in a VM :
sudo -u root sudo -u logstash bin/logstash-plugin install logstash-input-google_pubsub
For a detailed demo refer to this community tutorial.
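If the installation appears to succeed but the plugin still fails to load, one quick sanity check (a sketch, assuming the standard Logstash layout under /usr/share/logstash) is to list the installed plugins with versions and confirm the gem is actually registered:
cd /usr/share/logstash
bin/logstash-plugin list --verbose | grep google_pubsub
# expected: a line such as "logstash-input-google_pubsub (x.y.z)"
If the plugin is missing, or the listed version does not match the jars referenced in the error, reinstalling it as shown above (with the same proxy settings Logstash itself uses) is the first thing to try.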

Deploying a customized JAR in AWS fails with Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS

I am trying to deploy a customized JAR in AWS Elastic MapReduce. I have to read files from S3, and the S3 paths are given as command-line arguments. While the cluster is running, I see the following in the 'Steps' section of the cluster:
Status:FAILED
Reason:Illegal Argument.
Log File:s3://aws-logs-502743756123-us-east-1/elasticmapreduce/j-3U1NGY5JNUBK2/steps/s-O3W3I4RU4NXS/stderr.gz
Details:Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: s3n://****/input, expected: hdfs://ip-172-31-45-130.ec2.internal:8020
JAR location: s3://****/ChainMapperDriver.jar
Main class: None
Arguments: ChainMapperDriver s3://****/input s3://****/output/
Action on failure: Terminate cluster
ChainMapperDriver is the name of the main class.
Do I have to do anything in the Java code that I have written to handle the case when the files are in S3? Your help is greatly appreciated.
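For context, a common source of the "Wrong FS" message is resolving an s3n:// path through the cluster's default FileSystem, which on EMR is HDFS. The following is a hypothetical sketch (not code from the actual ChainMapperDriver) showing a path being resolved against its own filesystem instead:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PathCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path input = new Path(args[0]); // e.g. s3n://bucket/input

        // FileSystem.get(conf) returns the default filesystem (hdfs://...) and
        // throws "Wrong FS" when handed an s3n:// path; asking the Path for its
        // own FileSystem resolves the matching S3 implementation instead.
        FileSystem fs = input.getFileSystem(conf);
        System.out.println(fs.exists(input));
    }
}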

Grails application Spock test failed

I am running Grails 2.2.3 and installed a plugin called "newrelic:1.0-2.18". Now I am getting this error every time I run the project.
| Error Error loading event script from file [/Projects/Front/plugins/cached-resources-1.0/scripts/_Events.groovy] startup failed:
Could not instantiate global transform class org.spockframework.compiler.SpockTransform specified at jar:file:/home/user/.grails/ivy-cache/org.spockframework/spock-core/jars/spock-core-0.6-groovy-1.8.jar!/META-INF/services/org.codehaus.groovy.transform.ASTTransformation because of exception java.lang.reflect.InvocationTargetException
Did you try deleting your .grails folder?
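If you do try clearing the caches, something along these lines (a sketch; paths assume the default locations for Grails 2.2.3) removes the per-version working directory and the cached Spock jars so they are re-resolved on the next run:
rm -rf ~/.grails/2.2.3
rm -rf ~/.grails/ivy-cache/org.spockframework
grails clean
grails refresh-dependencies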

ColdFusion 8 as WAR on JBoss 4.2.3 WEB-INF/flex-config.xml error

I have done this by the book multiple times and have also tried deploying someone else's WAR, but I keep getting the same error. I am running JBoss 4.2.3 and have tried this on multiple installations (of 4.2.3).
I have verified that the supposedly missing file exists.
19:19:15,853 INFO [ContextLoader] Root WebApplicationContext: initialization completed in 54014 ms
19:19:18,172 ERROR [STDERR] javax.servlet.ServletException: The configuration file could not be found at /WEB-INF/cfform/flex-config.xml
19:19:18,174 ERROR [STDERR] at flex.server.j2ee.cache.CacheFilter.setupFlexService(CacheFilter.java:93)
This error results in failure of the WAR to deploy:
--- MBeans waiting for other MBeans ---
ObjectName: jboss.web.deployment:war=cfusion.war,id=611163449
State: FAILED
Reason: org.jboss.deployment.DeploymentException: URL file:/jee/workspace/tools/server/default/deploy/cfusion.war/ deployment failed
Any ideas?
I found a solution. It's insane, but it's worked twice now (on OS X, at least).
1. Copy the WEB-INF directory to your file system root, e.g. cp -R ./cfusion.war/WEB-INF /
2. Start the instance. Everything works.
3. Delete the newly copied /WEB-INF folder from the file system root.
From now on, it will work. Crazy, but there you go.
I've seen this one a million times. It's usually when I forget a file in the WEB-INF of my war. Is flex-config.xml in your WEB-INF?
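One quick way to check (a sketch; the war name comes from the deployment error above) is to list the archive, or the exploded directory, and confirm the file sits where the error expects it:
unzip -l cfusion.war | grep flex-config.xml
# or, for an exploded deployment:
ls cfusion.war/WEB-INF/cfform/flex-config.xml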