How to configure Jetty in spring-boot (easily?) - jetty

By following the tutorial, I could bring up Spring Boot with Jetty running using the following dependencies:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jetty</artifactId>
</dependency>
However, how can I configure the Jetty server itself, for example:
Server threads (queued thread pool)
Server connectors
HTTPS configuration
and all the other configuration options available in Jetty...?
Is there an easy way to do this in:
application.yml?
A configuration class?
Any example would be greatly appreciated.
Many thanks!!

There are some general extension points for servlet containers and also options for plugging Jetty API calls into those, so I assume everything you would want is in reach. General advice can be found in the docs. Jetty hasn't received as much attention yet so there may not be the same options available for declarative configuration as with Tomcat, and for sure it won't have been used much yet. If you would like to help change that, then help is welcome.
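To make that concrete, one of those general extension points is an EmbeddedServletContainerCustomizer bean (Spring Boot 1.x). The following is a minimal sketch under that assumption; the port and context path values are only illustrative:

import org.springframework.boot.context.embedded.ConfigurableEmbeddedServletContainer;
import org.springframework.boot.context.embedded.EmbeddedServletContainerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ContainerConfig {

    // Applies to whichever embedded container is on the classpath (Jetty, given the starters above).
    @Bean
    public EmbeddedServletContainerCustomizer containerCustomizer() {
        return new EmbeddedServletContainerCustomizer() {
            @Override
            public void customize(ConfigurableEmbeddedServletContainer container) {
                container.setPort(9000);           // illustrative port
                container.setContextPath("/home"); // illustrative context path
            }
        };
    }
}

Settings that this generic interface does not expose (thread pool, connectors, SSL at the Jetty level) have to go through the Jetty-specific factory, as the answers below show.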

It is possible to configure Jetty (in part) programmatically, as shown at http://howtodoinjava.com/spring/spring-boot/configure-jetty-server/:
@Bean
public JettyEmbeddedServletContainerFactory jettyEmbeddedServletContainerFactory() {
    JettyEmbeddedServletContainerFactory jettyContainer =
            new JettyEmbeddedServletContainerFactory();
    jettyContainer.setPort(9000);
    jettyContainer.setContextPath("/home");
    return jettyContainer;
}
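For the connector-level settings asked about in the question, the bean above can also be extended with a JettyServerCustomizer, which exposes the raw Jetty Server before it is started. This is a sketch assuming Spring Boot 1.x and Jetty 9; the 30-second idle timeout is just an example value:

import org.eclipse.jetty.server.Connector;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory;
import org.springframework.boot.context.embedded.jetty.JettyServerCustomizer;

@Bean
public JettyEmbeddedServletContainerFactory jettyEmbeddedServletContainerFactory() {
    JettyEmbeddedServletContainerFactory jettyContainer = new JettyEmbeddedServletContainerFactory();
    jettyContainer.setPort(9000);
    jettyContainer.setContextPath("/home");
    jettyContainer.addServerCustomizers(new JettyServerCustomizer() {
        @Override
        public void customize(Server server) {
            // Raw Jetty API: adjust every connector Spring Boot created.
            for (Connector connector : server.getConnectors()) {
                if (connector instanceof ServerConnector) {
                    ((ServerConnector) connector).setIdleTimeout(30000); // example value, in ms
                }
            }
        }
    });
    return jettyContainer;
}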

If anyone is using Spring Boot, you can easily configure this in your application.properties like so:
server.max-http-post-size=n
where n is the maximum size to which you wish to set this property. For example, I use:
server.max-http-post-size=5000000
(In newer Spring Boot versions the Jetty-specific equivalent is server.jetty.max-http-form-post-size; see the property list further down.)

As of 2020, on newer Spring Boot versions, this is what you need to do to configure the Jetty port, context path and thread pool properties. I tested this on Spring Boot 2.1.6, while the documentation I referred to is for version 2.3.3.
Create a server factory bean in a configuration class:
@Bean
public ConfigurableServletWebServerFactory webServerFactory() {
    JettyServletWebServerFactory factory = new JettyServletWebServerFactory();
    factory.setPort(8080);
    factory.setContextPath("/my-app");
    QueuedThreadPool threadPool = new QueuedThreadPool();
    threadPool.setMinThreads(10);
    threadPool.setMaxThreads(100);
    threadPool.setIdleTimeout(60000);
    factory.setThreadPool(threadPool);
    return factory;
}
Following is the link to Spring Docs:
customizing-embedded-containers
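For the HTTPS part of the question, the same Spring Boot 2.x factory can be given an Ssl definition programmatically. Here is a minimal sketch as a variant of the bean above (do not declare both factory beans at once); the port, keystore location and password are placeholders:

import org.springframework.boot.web.embedded.jetty.JettyServletWebServerFactory;
import org.springframework.boot.web.server.Ssl;
import org.springframework.boot.web.servlet.server.ConfigurableServletWebServerFactory;
import org.springframework.context.annotation.Bean;

@Bean
public ConfigurableServletWebServerFactory httpsWebServerFactory() {
    JettyServletWebServerFactory factory = new JettyServletWebServerFactory();
    factory.setPort(8443);                     // placeholder HTTPS port

    Ssl ssl = new Ssl();
    ssl.setKeyStore("classpath:keystore.p12"); // placeholder keystore location
    ssl.setKeyStorePassword("changeit");       // placeholder password
    ssl.setKeyStoreType("PKCS12");
    factory.setSsl(ssl);                       // the factory's port now serves HTTPS

    return factory;
}

The declarative equivalent is the server.ssl.key-store, server.ssl.key-store-password and server.ssl.key-store-type properties in application.properties or application.yml.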

Spring Boot provides the following Jetty-specific configuration through the property file:
server:
  jetty:
    connection-idle-timeout: # Time that the connection can be idle before it is closed
    max-http-form-post-size: # Maximum size of the form content in any HTTP POST request, e.g. 200000B
    accesslog:
      enabled: # Enable access log, e.g. true
      append: # Enable append to log, e.g. true
      custom-format: # Custom log format
      file-date-format: # Date format to place in log file name
      filename: # Log file name; if not specified, logs redirect to "System.err"
      format: # Log format, e.g. ncsa
      ignore-paths: # Request paths that should not be logged
      retention-period: # Number of days before rotated log files are deleted, e.g. 31
    threads:
      acceptors: # Number of acceptor threads to use. When the value is -1 (the default), the number of acceptors is derived from the operating environment
      selectors: # Number of selector threads to use. When the value is -1 (the default), the number of selectors is derived from the operating environment
      min: # Minimum number of threads, e.g. 8
      max: # Maximum number of threads, e.g. 200
      max-queue-capacity: # Maximum capacity of the thread pool's backing queue. A default is computed based on the threading configuration
      idle-timeout: # Maximum thread idle time in milliseconds, e.g. 60000ms
Please refer to the official Spring Boot documentation for more configuration details.

Related

Micronaut and OpenTracing of method calls

We are building a web app using Micronaut (v1.2.0) which will be deployed in a Kubernetes cluster (we are using Istio as the service mesh).
We would like to instrument the critical method calls so that they can generate their own spans within an HTTP request span context. For this we are using the Micronaut OpenTracing support and Jaeger integration.
The following dependencies are included in the pom.xml
...
<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-tracing</artifactId>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>io.jaegertracing</groupId>
    <artifactId>jaeger-thrift</artifactId>
    <scope>runtime</scope>
</dependency>
...
I have implemented a filter method with @ContinueSpan (also tried the same with @NewSpan) as shown below:
@Filter("/**")
public class TraceTestFilter implements HttpServerFilter {

    @Override
    public Publisher<MutableHttpResponse<?>> doFilter(
            HttpRequest<?> request, ServerFilterChain chain) {
        return testMethodTracing(request, chain);
    }

    @ContinueSpan
    public Publisher<MutableHttpResponse<?>> testMethodTracing(
            HttpRequest<?> request, ServerFilterChain chain) {
        // Details omitted here
    }
}
The following is maintained in application-k8s.yml (I also have an application.yml with the same settings):
---
tracing:
  jaeger:
    enabled: true
    sampler:
      probability: 1
    sender:
      agentHost: jaeger-agent.istio-system
      agentPort: 5775
However, we only see the trace entries generated by Istio (the Envoy proxies); we don't see the details of the method calls themselves.
Any ideas as to what could be going wrong here?
Istio has a feature called Distributed Tracing, which enables users to track requests in the mesh as they are distributed across multiple services. It can be used to visualize request latency, serialization and parallelism.
For this to work, Istio uses the Envoy Proxy tracing feature.
You can deploy the Bookinfo application and see how trace context propagation works.
If you have the same issue explained in this ticket, you need to wait for the next release of Micronaut or use the workaround the Micronaut maintainers mention there:
https://github.com/micronaut-projects/micronaut-core/issues/2209

Spring Boot Multipart File Upload Size Inconsistency

I have an endpoint that uploads an image file to the server, then to S3.
When I run on localhost, the MultipartFile byte size is correct, and the upload is successful.
However, the moment I deploy it to my EC2 instance the uploaded file size is incorrect.
Controller code:
@PostMapping("/{id}/photos")
fun addPhotos(@PathVariable("id") id: Long,
              @RequestParam("file") file: MultipartFile,
              jwt: AuthenticationJsonWebToken) = ApiResponse.success(propertyBLL.addPhotos(id, file, jwt))
Within the PropertyBLL.addPhotos method, printing file.size results in the wrong size.
The actual file size is 649305 bytes; however, when uploaded to my prod server it reads as 1189763 bytes.
My production server is an AWS EC2 instance, behind Https.
The Spring application yml files are the same. The only configurations I overrode were the file max size properties.
I'm using Postman to POST the request. I'm passing the body as form-data, with a key named "file".
Again, it works perfectly when running locally.
I did another test where I wrote the uploaded file to the server so I could compare.
Uploaded file's first n bytes in Hex editor:
EFBFBD50 4E470D0A 1A0A0000 000D4948 44520000 03000000 02400802 000000EF BFBDCC96 01000000 0467414D 410000EF BFBDEFBF BD0BEFBF BD610500 00002063 48524D00 007A2600
Original file's first n bytes:
89504E47 0D0A1A0A 0000000D 49484452 00000300 00000240 08020000 00B5CC96 01000000 0467414D 410000B1 8F0BFC61 05000000 20634852 4D00007A 26000080 840000FA 00000080
They both appear to have the text "PNG" in them and also have the ending EXtdate:modify/create markers.
As requested, here are the core contents of addPhotos:
val metadata = ObjectMetadata()
metadata.contentLength = file.size
metadata.contentType = "image/png"
LOGGER.info("Uploading image of size {} bytes, name: {}", file.size, file.originalFilename)
val request = PutObjectRequest(awsProperties.cdnS3Bucket, imageName, file.inputStream, metadata)
awsSdk.putObject(request)
This works when I run the web server locally. imageName is just a custom-built name. There is other code involving Hibernate models, but it is not relevant.
Update
This appears to be HTTPS/API-proxy related. When I hit the EC2 node's HTTP URL directly, it works fine. However, when I go through the API proxy (https://api.thedomain.com), which proxies to the EC2 node, it fails. I will continue down this path.
After more debugging I discovered that when I POST to the EC2 instance directly, everything works as expected. Our primary and public API URL proxies requests through Amazon's API Gateway service, which for some reason converts the data to Base64 instead of just passing through the raw binary data.
I have found documentation on updating the API Gateway to pass through binary data: here.
I am using the Content-Type value of multipart/form-data. Do not forget to also add it in your API Settings where you enable binary support.
I did not have to edit the headers options; additionally, I used the default "Method Request Passthrough" template.
And finally, don't forget to deploy your API changes...
It's now working as expected.
Sorry, but many of the comments make no sense. file.size will return the size of the uploaded file in bytes, NOT the size of the request (which, yes, due to different filters could potentially be enhanced with additional information and increase in size). Spring can't just magically double the size of a PNG file (in your case adding almost another ~600kb of information on top of whatever you've sent). While I'd like to trust that you know what you're doing and the numbers you are giving us are indeed correct, to me, all evidence points to human error... please, double-, triple-, quadruple- check that you're indeed uploading the same file in all scenarios.
How did you get to 649305 bytes in the first place? Who gave you that number? Was it your code, or did you actually look at the file on disk and see how big it was?
The only way compression discussions make any sense in this context is if 649305 bytes is the already-compressed size of the file when running locally (its actual size on disk being 1189763 bytes) and compression is indeed not turned on when deployed to AWS for some reason, so that you receive the full uncompressed file.
We don't even know how you are deploying it... Is it really the same as locally? Are you running a standalone .jar in both cases? Are you deploying a .war to AWS perhaps instead? Are you really running the app in the same container and container version in both cases, or are you perhaps running Tomcat locally and Jetty on AWS? Etc. etc. etc.
Are you sure your Postman request is not messed up and you're not sending something else by accident (or more than you think)?
EDIT:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.sandbox</groupId>
    <artifactId>spring-boot-file-upload</artifactId>
    <version>1.0-SNAPSHOT</version>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.6.RELEASE</version>
    </parent>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
package com.sandbox;

import static org.springframework.http.ResponseEntity.ok;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@SpringBootApplication
public class Application {

    public static void main(final String[] arguments) {
        SpringApplication.run(Application.class, arguments);
    }

    @RestController
    class ImageRestController {

        @PostMapping(path = "/images")
        ResponseEntity<String> upload(@RequestParam(name = "file") final MultipartFile image) {
            return ok("{\"response\": \"Uploaded image having a size of '" + image.getSize() + "' byte(s).\"}");
        }
    }
}
The example is in Java because it was just faster to put together (the environment is a simple Java environment having the standalone .jar deployed - no extra configs or anything, except for the server port being on 5000). Either way, you can try it out yourself by sending POST requests to http://test123456.us-east-1.elasticbeanstalk.com/images
This is my Postman request and the response using the image you've provided.
Everything seems to be looking fine on my AWS EB instance and all the numbers add up as expected. If you are saying your setup is as simple as it sounds then I'm unfortunately just as puzzled as you are. I can only assume that there's more to what you have shared so far (however, I doubt the issue is related to Spring Boot... then it is more likely that it has to do with your AWS configs/setup).
CloudFormation Template snippet for achieving Kenny Cason's solution:
MyApi:
  Type: AWS::Serverless::Api
  Properties:
    BinaryMediaTypes:
      - "multipart/form-data"

Is there a way to catch container-generated STDOUT within embedded Jetty Logback?

The situation is:
-> a homemade container app, using Logback, configured with ConsoleAppender, with different loggers to specify log levels depending on the package:
<logger name="com.mycompany.package1">
    <level value="DEBUG"/>
</logger>
<logger name="com.mycompany.package2">
    <level value="INFO"/>
</logger>
-> an embedded Jetty app, using Logback, configured with RollingFileAppender.
I need both log outputs to be sent to the same rolling file, so I'm trying to catch the container STDOUT within the embedded Jetty app. Is there a way to do that? Or is this the wrong way to go about it?
NOTE: I have access to both logback.xml files for editing.
If you have a Logback configuration going to ConsoleAppender, then don't attempt to catch that output and log it again (you would just create a loop).
Instead, just configure Jetty to use slf4j for its own events and NOT use the RolloverFileOutputStream or the console-capture module (from jetty-home).
The easiest way is to do nothing: the mere existence of slf4j-api-<ver>.jar on the server classpath is sufficient to make Jetty use slf4j to log its own events.
In short, your server classpath needs:
slf4j-api-<ver>.jar
Your logback jars (probably logback-classic-<ver>.jar and logback-core-<ver>.jar)
A ${jetty.base}/resources/ directory on your classpath with 2 files:
a jetty-logging.properties with a single line org.eclipse.jetty.util.log.class=org.eclipse.jetty.util.log.Slf4jLog
your Logback configuration files (e.g. logback.xml)
Make sure you are not using Jetty's RolloverFileOutputStream to capture System.out and/or System.err to a file.

How to configure concurrency in .NET Core Web API?

In the old WCF days, you had control over service concurrency via the MaxConcurrentCalls setting. MaxConcurrentCalls defaulted to 16 concurrent calls, but you could raise or lower that value based upon your needs.
How do you control server-side concurrency in .NET Core Web API? We probably need to limit it in our case, as too many concurrent requests can impede overall server performance.
ASP.NET Core application concurrency is handled by its web server. For example:
Kestrel
var host = new WebHostBuilder()
    .UseKestrel(options => options.ThreadCount = 8)
It is not recommended to set the Kestrel thread count to a large value like 1K due to Kestrel's async-based implementation.
More info: Is Kestrel using a single thread for processing requests like Node.js?
A new Limits property was introduced in ASP.NET Core 2.0 Preview 2.
You can now add limits for the following:
Maximum Client Connections
Maximum Request Body Size
Maximum Request Body Data Rate
For example:
.UseKestrel(options =>
{
    options.Limits.MaxConcurrentConnections = 100;
})
IIS
When Kestrel runs behind a reverse proxy, you can tune the proxy itself. For example, you can configure the IIS application pool in web.config or in aspnet.config:
<configuration>
  <system.web>
    <applicationPool
        maxConcurrentRequestsPerCPU="5000"
        maxConcurrentThreadsPerCPU="0"
        requestQueueLimit="5000" />
  </system.web>
</configuration>
Of course Nginx and Apache have their own concurrency settings.

How to make Jetty load webdefault.xml when it runs in OSGi?

I'm running a Jetty 8.1.12 server within an OSGi container, thanks to jetty-osgi-boot, as explained in the Jetty 8 and Jetty 9 documentation.
I want to configure the default webapp descriptor (etc/webdefault.xml). When I define jetty.home, Jetty picks up etc/jetty.xml but it does not load etc/webdefault.xml.
I do not want to rely on a configuration bundle (through the jetty.home.bundle system property) because I want the config to be easily modifiable.
I do not want to rely on the Jetty-defaultWebXmlFilePath MANIFEST header for the same reason; plus, it would tie my webapp to Jetty.
The jetty-osgi-boot bundle contains a jetty-deployer.xml configuration file with this commented-out chunk:
<!-- Providers of OSGi Apps -->
<Call name="addAppProvider">
  <Arg>
    <New class="org.eclipse.jetty.osgi.boot.OSGiAppProvider">
      <Set name="defaultsDescriptor"><Property name="jetty.home" default="."/>/etc/webdefault.xml</Set>
      ...
which does not work because the OSGiAppProvider class does not exist anymore.
Is there any other way to configure the webdefault.xml file location?
Short answer: I could not get Jetty 8.1.12 to load webdefault.xml under OSGi.
After many hours of googling, source-reading and debugging, I came to these conclusions:
The Jetty-defaultWebXmlFilePath MANIFEST header did not work as expected. It could not resolve a bundle entry path, only an absolute file system path, and an absolute FS path was not a realistic option.
Much of the configuration is hardcoded in ServerInstanceWrapper and the likes of BundleWebAppProvider, so we cannot configure the defaults descriptor location. This location ends up being the default, which is, IIRC, org/eclipse/jetty/webapp/webdefault.xml.
I resorted to patching jetty-osgi so that it can read some configuration and apply it to BundleWebAppProvider. FWIW, this hack is available on GitHub.