Spring Boot 2.0.0.RELEASE on AWS Lambda

I'm trying to make a Spring Boot (Web Starter) app work on AWS Lambda and I'm getting a ClassNotFoundException. I checked MANIFEST.MF and there's a Spring classpath set but not a normal Class-Path entry. I think what's happening is that the Lambda configuration points at a Handler class, but it can't be found because that class actually lives under BOOT-INF.
One thing I think might be a problem is that I have two Maven plugins configured:
spring-boot-maven-plugin
maven-shade-plugin
and I'm wondering if they are conflicting with each other. My shade plugin configuration excludes Tomcat and Undertow, but the shaded jar is the same size as the non-shaded one, which can't be right.
One Stack Overflow answer suggested configuring org.springframework.boot (the spring-boot-maven-plugin) to use the MODULE layout, but MODULE has been removed.
Is there a known workaround for this? I could convert this to a non-"starter" Spring project, but that's a lot of effort and I have a sinking feeling it won't solve anything. In the absence of better ideas, perhaps that's what I will have to do.
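One commonly cited approach (not mentioned in this thread, so treat it as an assumption) is to skip the Boot launcher layout for Lambda entirely: build a flat jar with maven-shade-plugin alone, without spring-boot-maven-plugin repackaging, so classes sit at the jar root instead of BOOT-INF/classes, and front the app with the aws-serverless-java-container-springboot2 adapter. A minimal handler sketch, assuming that dependency and your existing @SpringBootApplication class named Application:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import com.amazonaws.serverless.exceptions.ContainerInitializationException;
import com.amazonaws.serverless.proxy.model.AwsProxyRequest;
import com.amazonaws.serverless.proxy.model.AwsProxyResponse;
import com.amazonaws.serverless.proxy.spring.SpringBootLambdaContainerHandler;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

public class StreamLambdaHandler implements RequestStreamHandler {

    private static final SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            // Boots the Spring Boot app once per Lambda container.
            handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
        } catch (ContainerInitializationException e) {
            throw new RuntimeException("Could not initialize Spring Boot application", e);
        }
    }

    @Override
    public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
        handler.proxyStream(input, output, context);
    }
}

The Lambda Handler setting then points at StreamLambdaHandler, which is on the root classpath of the shaded jar, so the class can be found.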

Related

Disable Liquibase execution at startup using Jetty + Spring (not Spring Boot!)

I'm working on an application developed with Spring 5 (not Spring Boot!) that runs on Jetty. This application has a module that uses the liquibase-maven-plugin.
We generate an image from a Dockerfile (base image jetty:9-jre8), where we add the application (WAR file) to the Jetty application directory.
In some specific environments where I deploy the application, I want to be able to disable that execution.
Is it possible to do so?
I've seen in the Spring Boot documentation that it's possible to do so by setting the property spring.liquibase.enabled (or liquibase.enabled on Spring 4) to false, but that doesn't seem to work:
I've tried defining it in the properties file, as an environment property, and as a Java option (-Dspring.liquibase.enabled=false).
The behavior is the same whether I deploy the container or run the Maven command locally: mvn jetty:run
Do you have any ideas or hints on how to do this?
Thank you in advance.
Well, I just discovered that it's possible to disable the execution of Liquibase by adding the Java option
-Dliquibase.shouldRun=false
For more details see here
I will keep this question anyway, in case someone has the same problem I did.
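Since this is plain Spring (not Boot), the SpringLiquibase bean is typically defined by hand anyway, so another option is to wire the same flag into the bean yourself. A minimal sketch, assuming a liquibase.shouldRun property and a changelog path you'd adjust to your module:

import javax.sql.DataSource;

import liquibase.integration.spring.SpringLiquibase;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;

@Configuration
public class LiquibaseConfig {

    @Bean
    public SpringLiquibase liquibase(DataSource dataSource, Environment env) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:db/changelog/db.changelog-master.xml"); // hypothetical path
        // Honors -Dliquibase.shouldRun=false (system properties are part of the
        // Environment); defaults to true so normal environments still migrate.
        liquibase.setShouldRun(env.getProperty("liquibase.shouldRun", Boolean.class, true));
        return liquibase;
    }
}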

Can I deploy a multi-class Java jar in AWS Lambda, or is a single-class file always recommended in Lambda?

I have an existing Spring Boot app: not a web service but a Kafka client. The issue is that we're structured with the typical Processor -> Service -> DAO layering. The jar is over 50 MB, so it's not a Lambda candidate anyway. I have some doubts: can I deploy the full jar, or should I use Step Functions? All the tutorials show a single-class function. Has anyone tried this (a multi-class jar)? Also, Lambda has now introduced Docker images, which adds more confusion: can I deploy a Docker image? It looks like it's the same under the hood.
My pick is ECS/EKS with Fargate. Basically I'm planning to get rid of the Docker image as well. But it looks like there's no way to host my existing app on Lambda other than refactoring it as a Step Function. Is that correct?
You can deploy the full fat jar with the usual multi-class hierarchy, but it is not recommended because of the cold-start issue, unless you use "provisioned concurrency".
Here are my tips for you:
Keep the multi-class hierarchy, which doesn't have much impact on the jar size anyway; this keeps your code testable. Try to remove Spring if possible and hand-roll a small dependency-injection setup, or use another lightweight framework for that purpose (see the sketch after this list).
Review all your dependencies and remove jars that are not needed. Our own code is usually very small; it's the dependent jars that make the deployable huge.
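As a sketch of the first tip: a multi-class handler with hand-wired construction instead of a Spring context. The class names are hypothetical and mirror the Processor -> Service -> DAO layering from the question:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Hypothetical DAO layer.
class OrderDao {
    String load(String orderId) {
        return "order:" + orderId; // stand-in for a real lookup
    }
}

// Hypothetical service layer, constructor-injected by hand.
class OrderService {
    private final OrderDao dao;

    OrderService(OrderDao dao) {
        this.dao = dao;
    }

    String process(String orderId) {
        return dao.load(orderId).toUpperCase();
    }
}

// The "processor" layer is the Lambda handler itself.
public class OrderHandler implements RequestHandler<String, String> {

    // Wired once per container and reused across warm invocations;
    // no Spring context to bootstrap, so cold starts stay small.
    private final OrderService service = new OrderService(new OrderDao());

    @Override
    public String handleRequest(String orderId, Context context) {
        return service.process(orderId);
    }
}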

Spring Cloud Function - Manual Bean Registration and Loading Configuration Classes

I am currently using Spring Cloud Function 3.0.7.RELEASE with the AWS adapter for Lambda.
We are using limited-scope functional bean registration and understand that this does not include full Spring Boot autoconfiguration. We are okay with this, as we value the speed and the significant reduction in cold-start times.
However, we do have configuration classes that we want to use, and we assume this has to be done manually. What is the best practice for importing these classes?
We tried searching but failed to find documentation on the differences in behavior between the limited-scope context and the Spring Boot application context.
If I understand your question correctly, all you need to do is register those configuration classes manually and the rest will be autowired. There was a small issue with this which may or may not affect you; in any event, it was fixed and will be available in the 3.0.9 release next week.
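For reference, the functional style boils down to an ApplicationContextInitializer that registers beans by hand, which is also the place to register beans your configuration classes would normally provide. A minimal sketch adapted from the Spring Cloud Function docs; MyService is a hypothetical collaborator:

import java.util.function.Function;

import org.springframework.cloud.function.context.FunctionRegistration;
import org.springframework.cloud.function.context.FunctionType;
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.support.GenericApplicationContext;

// Hypothetical bean that a configuration class would normally provide.
class MyService {
    String greet(String name) {
        return "hello, " + name;
    }
}

public class FunctionInitializer implements ApplicationContextInitializer<GenericApplicationContext> {

    @Override
    public void initialize(GenericApplicationContext context) {
        // In the limited functional context, @Configuration classes are not
        // auto-processed, so register what they provide explicitly.
        context.registerBean(MyService.class, MyService::new);

        // The function itself, with its type made explicit for the AWS adapter.
        context.registerBean("uppercase", FunctionRegistration.class,
                () -> new FunctionRegistration<Function<String, String>>(String::toUpperCase)
                        .type(FunctionType.from(String.class).to(String.class)));
    }
}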

Cannot call property getInstance in object [JavaPackage org.wso2.carbon.apimgt.impl.APIManagerAnalyticsConfiguration]. It is not a function, it is "object".

I encounter the following exception in the WSO2 API Manager:
ERROR {JAGGERY.modules.analytics.add.jag} Error occurred while saving Analytics configuration (Cause: Cannot call property getInstance in object [JavaPackage org.wso2.carbon.apimgt.impl.APIManagerAnalyticsConfiguration]. It is not a function, it is "object".) {JAGGERY.modules.analytics.add.jag}
We have no clue what leads to this problem. We are sure that we didn't change the .jag files, but we did replace one class file (within the jar) with our own compiled class and put it back into the jar.
When we switch back to the original jar and restart the server, the problem is still there. Does anyone know what may cause this problem and how to fix it?
This can happen if the APIManagerAnalyticsConfiguration class is not available in the OSGi runtime. The most likely reason is that the corresponding jar is not ACTIVE. You can start the server with -DosgiConsole and check whether that's the case. Here is a guide.
Did you replace a jar in the plugins directory? That's actually not recommended, and it can cause OSGi activation issues too. If you really want to replace a jar, you should patch it by placing the jar inside <APIM_HOME>/repository/components/patches/patch0100/. Here 0100 is an arbitrary number.
We are deploying our own WAR app on the APIM console. It looks like the WAR contains a CXF jar, which conflicts with APIM's own CXF jar and leads to the problem. We simply un-deployed the WAR, and the problem was gone.

Spring Context wildcard in unit tests

We have a project setup that uses Maven profiles quite extensively. We're using Spring, and although our configuration is mostly annotation-based, a few XML configuration files are needed.
These Spring XML config files are pulled in by various different profiles, and in the actual web application they're all put in WEB-INF/spring and loaded with classpath:spring/spring-*.xml. This works fine.
The problem is unit testing: I want to test a variety of different profiles, and Spring seems to have an issue with a wildcard specification like that when the files are spread over several directories.
The easiest solution, I think, would be to specify each config file in the @ContextConfiguration test annotation, but unfortunately if one is missing Spring throws an exception, and there doesn't seem to be a way of turning this off.
The other thing I considered was dumping all the Spring config files into one folder before running the tests, but that seems like a bit of a kludge.
I was just wondering if anyone else had any experience of this problem and any workarounds.
It seems that the Spring guys have thought of this already.
You can use the syntax:
classpath*:spring/spring-*.xml
which seems to work properly: the classpath*: prefix tells Spring to search every classpath root for matching files rather than a single location, so files spread over several directories are all picked up.
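In a test, that means the wildcard goes straight into @ContextConfiguration. A minimal JUnit 4 sketch, assuming the files live under spring/ on the test classpath:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
// classpath*: searches every classpath root, so XML files contributed by
// different Maven profiles and directories are all matched.
@ContextConfiguration(locations = "classpath*:spring/spring-*.xml")
public class WildcardContextTest {

    @Test
    public void contextLoads() {
        // Passing means the matched XML files assemble into a valid context.
    }
}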