I'm using the following tool to run an embedded Redis for unit testing. At the top of my RegistrationService, I create a new RedisTemplate instance:
@Import({RedisConfiguration.class})
@Service
public class RegistrationService {

    RedisTemplate redisTemplate = new RedisTemplate(); // <- new instance

    public String SubmitApplicationOverview(String OverviewRequest) throws IOException {
        ...
        HashMap<String, Object> applicationData = mapper.readValue(OverviewRequest,
                new TypeReference<Map<String, Object>>() {});
        redisTemplate.setHashKeySerializer(new StringRedisSerializer());
        redisTemplate.setHashValueSerializer(new GenericJackson2JsonRedisSerializer());
        UUID Key = UUID.randomUUID();
        redisTemplate.opsForHash().putAll(Key.toString(), applicationData); // <-- ERRORS HERE
        System.out.println("Application saved: " + OverviewRequest);
        return Key.toString();
    }
}
And I'm starting a mock Redis server in my test below:
...
RedisServer redisServer;

@Autowired
RegistrationService registrationService;

@Before
public void setup() {
    redisServer = RedisServer.newRedisServer();
    redisServer.start();
}

@Test
public void testSubmitApplicationOverview() throws IOException {
    String body = "{\n" +
            "  \"VehicleCategory\": \"CAR\",\n" +
            "  \"EmailAddress\": \"email@email.com\"\n" +
            "}";
    String result = registrationService.SubmitApplicationOverview(body);
    Assert.assertEquals("something", result);
}
Redis settings in application.properties
#Redis Settings
spring.redis.cluster.nodes=slave0:6379,slave1:6379
spring.redis.url= redis://jx-staging-redis-ha-master-svc.jx-staging:6379
spring.redis.sentinel.master=mymaster
spring.redis.sentinel.nodes=10.40.2.126:26379,10.40.1.65:26379
spring.redis.database=2
However, I'm getting a java.lang.NullPointerException on the following line in my service under test (registrationService):
redisTemplate.opsForHash().putAll(Key.toString(), applicationData);
According to the [redis-mock][1] documentation, creating an instance like this:
RedisServer.newRedisServer(); // bind to a random port
will bind the instance to a random port, while your code expects Redis on a specific port. You need to specify a port when you create the server by passing a port number, like this:
RedisServer.newRedisServer(8000); // bind to a specific port
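For example, a minimal sketch of the test setup (assuming the library exposes a stop() method and that 6379 matches the port your Spring configuration points at):

RedisServer redisServer;

@Before
public void setup() throws IOException {
    // Bind to the fixed port the application configuration expects
    redisServer = RedisServer.newRedisServer(6379);
    redisServer.start();
}

@After
public void tearDown() {
    // Assumption: the library exposes stop(); frees the port between tests
    redisServer.stop();
}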
My team is building a CDC service with the Debezium embedded connector. For the offset storage we're thinking about using S3 or DynamoDB. Just wondering if anyone here has built something similar to externalize the offset store, and if so, what you chose and why.
We have a Postgres DB as the source. Change Data Capture (CDC) is implemented by Postgres itself (via the pglogical extension). This CDC subsystem of Postgres is responsible for offset management: it maintains a list of CDC clients (aka slots). If your client creates a CDC connection, the DB will resume from the point where that client disconnected before (on the same slot). A new client will create a new slot and start receiving only the CDC records created from that point in time on. So there is no need for us to remember the offsets.
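For illustration, here is roughly how the slot is pinned in an embedded Debezium setup (the property names are from the Debezium Postgres connector; the slot name and plugin choice are just examples):

// Sketch: with a fixed slot.name, a restarting client resumes from where
// that slot left off, so no external offset bookkeeping is needed here.
Properties props = new Properties();
props.setProperty("connector.class", "io.debezium.connector.postgresql.PostgresConnector");
props.setProperty("plugin.name", "pgoutput");        // logical decoding output plugin (example)
props.setProperty("slot.name", "cdc_service_slot");  // example: one slot per CDC client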
Had this challenge recently. You can write a custom class that extends org.apache.kafka.connect.storage.FileOffsetBackingStore or org.apache.kafka.connect.storage.MemoryOffsetBackingStore.
Then ensure the "offset.storage" config is set to the fully qualified name of your custom class.
Below is a sample using Redis (maybe not for production) as a backing store, to give you an idea of how this can work.
package com.sample.cdc.offsetbackingstore
import com.sample.cdc.service.RedisManager
import org.apache.kafka.connect.errors.ConnectException
import org.apache.kafka.connect.runtime.WorkerConfig
import org.apache.kafka.connect.storage.MemoryOffsetBackingStore
import java.io.IOException
import java.nio.ByteBuffer
import java.util.concurrent.Callable
import java.util.concurrent.Future
class RedisOffsetBackingStore : MemoryOffsetBackingStore() {

    lateinit var redisManager: RedisManager
    lateinit var redisHost: String
    lateinit var redisPort: String

    override fun configure(config: WorkerConfig?) {
        super.configure(config)
        // Fail fast if the custom connection settings are missing
        redisHost = config?.getString("custom.config.redis.host")
            ?: throw ConnectException("custom.config.redis.host is not set")
        redisPort = config?.getString("custom.config.redis.port")
            ?: throw ConnectException("custom.config.redis.port is not set")
    }

    // Called by the Debezium engine at some point after configure()
    override fun start() {
        super.start()
        println("Initializing redis manager...")
        redisManager = RedisManager(redisHost, redisPort)
    }

    // Called by the Debezium engine during graceful shutdown
    override fun stop() {
        super.stop()
        println("Disposing redis client resources...")
        if (this::redisManager.isInitialized)
            redisManager.dispose()
    }

    // Called by the Debezium engine's OffsetReader to read offsets
    override fun get(keys: MutableCollection<ByteBuffer>?): Future<MutableMap<ByteBuffer, ByteBuffer?>> {
        // Serve from the in-memory map once it has been warmed up
        if (data.isNotEmpty())
            return super.get(keys)
        return executor.submit(Callable<MutableMap<ByteBuffer, ByteBuffer?>> {
            val result: MutableMap<ByteBuffer, ByteBuffer?> = HashMap()
            keys?.forEach {
                val offsetKey = String(it.array())
                val offsetValue = redisManager.get(offsetKey)
                if (offsetValue.isNotEmpty()) {
                    val buffer = ByteBuffer.wrap(offsetValue.toByteArray())
                    result[it] = buffer
                    data[it] = buffer // warm the in-memory cache
                }
            }
            result
        })
    }

    // Invoked by set() in MemoryOffsetBackingStore to persist offsets
    // during commit or graceful shutdown
    override fun save() {
        try {
            for ((key, value) in data) {
                val offsetKey = String(key!!.array())
                val offsetValue = String(value!!.array())
                redisManager.save(offsetKey, offsetValue)
            }
        } catch (e: IOException) {
            throw ConnectException(e)
        }
    }
}
// Ensure the below config settings are set in your Debezium config:
// "offset.storage": "com.sample.cdc.offsetbackingstore.RedisOffsetBackingStore",
// "custom.config.redis.host": "localhost",
// "custom.config.redis.port": "6379"
Note: In case of multiple standalone embedded Debezium services (for reliability and fault tolerance) with a custom offset backing store, you'll have to provide a way to handle offset race conditions and event deduplication.
I am working on a Quarkus application to act as an operator in an OpenShift/Kubernetes cluster. When writing tests using a kubernetesMockServer, it works fine for REST calls to the developed application, but when the code runs inside an initialization block it fails; in the log I see that the mock server replies with a 404 error:
2020-02-17 11:04:12,148 INFO [okh.moc.MockWebServer] (MockWebServer /127.0.0.1:53048) MockWebServer[57577] received request: GET /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions HTTP/1.1 and responded: HTTP/1.1 404 Client Error
In the test code I have:
@QuarkusTestResource(KubernetesMockServerTestResource.class)
@QuarkusTest
class TestAIRController {

    @MockServer
    KubernetesMockServer mockServer;

    private CustomResourceDefinition crd;
    private CustomResourceDefinitionList crdlist;

    @BeforeEach
    public void before() {
        crd = new CustomResourceDefinitionBuilder()
                .withApiVersion("apiextensions.k8s.io/v1beta1")
                .withNewMetadata().withName("types.openshift.example-cloud.com")
                .endMetadata()
                .withNewSpec()
                .withNewNames()
                .withKind("Type")
                .withPlural("types")
                .endNames()
                .withGroup("openshift.example-cloud.com")
                .withVersion("v1")
                .withScope("Namespaced")
                .endSpec()
                .build();
        crdlist = new CustomResourceDefinitionListBuilder().withItems(crd).build();
        mockServer.expect().get().withPath("/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions")
                .andReturn(200, crdlist)
                .always();
    }

    @Test
    void test() {
        RestAssured.when().get("/dummy").then().body("size()", Is.is(0));
    }
}
The dummy REST endpoint uses the same code for looking up the CRD, and in fact it works fine when running without the class that observes the startup event:
#Path("/dummy")
public class Dummy {
private static final Logger LOGGER =LoggerFactory.getLogger(Dummy.class);
#GET
#Produces(MediaType.APPLICATION_JSON)
public Response listCRDs(){
KubernetesClient oc = new DefaultKubernetesClient();
CustomResourceDefinition crd = oc.customResourceDefinitions()
.list().getItems().stream()
.filter( ob -> ob.getMetadata().getName().equals("types.openshift.example-cloud.com"))
.findFirst().get();
LOGGER.info("CRD NAME is {}", crd.getMetadata().getName());
return Response.ok(new ArrayList<String>()).build();
}
}
Finally, this is an excerpt of the class that observes the startup event:
@ApplicationScoped
public class AIRWatcher {

    private static final Logger LOGGER = LoggerFactory.getLogger(AIRWatcher.class);

    void onStart(@Observes StartupEvent ev) {
        KubernetesClient oc = new DefaultKubernetesClient();
        CustomResourceDefinition crd = oc.customResourceDefinitions()
                .list().getItems().stream()
                .filter(ob -> ob.getMetadata().getName().equals("types.openshift.example-cloud.com"))
                .findFirst().get();
        LOGGER.info("Using {}", crd.getMetadata().getName());
    }
}
It's like the mock server is not yet initialized when the startup event fires. Is there any way to solve this?
The problem is that the Mock Server is only configured to respond right before the test execution, while this code:
void onStart(@Observes StartupEvent ev) {
    KubernetesClient oc = new DefaultKubernetesClient();
    CustomResourceDefinition crd = oc.customResourceDefinitions()
            .list().getItems().stream()
            .filter(ob -> ob.getMetadata().getName().equals("types.openshift.example-cloud.com"))
            .findFirst().get();
    LOGGER.info("Using {}", crd.getMetadata().getName());
}
runs when the application actually comes up (which is before any @BeforeEach runs).
Can you please open an issue on the Quarkus GitHub? This should be something we provide a solution for.
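Until that exists, one possible workaround (just a sketch, not an official Quarkus feature) is to defer the lookup from the startup observer to first use, so it runs after @BeforeEach has stubbed the mock server:

@ApplicationScoped
public class AIRWatcher {

    private volatile CustomResourceDefinition crd;

    // Lazy lookup instead of @Observes StartupEvent: the first caller triggers it,
    // by which time the mock server expectations have been registered
    CustomResourceDefinition getCrd() {
        if (crd == null) {
            KubernetesClient oc = new DefaultKubernetesClient();
            crd = oc.customResourceDefinitions().list().getItems().stream()
                    .filter(ob -> ob.getMetadata().getName().equals("types.openshift.example-cloud.com"))
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException("CRD not found"));
        }
        return crd;
    }
}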
I'm trying to use the HTTP connector that ships with the standard Camunda implementation, with no luck. Every time I run my workflow, the instance simply freezes on that activity. I'm using this class in an execution listener, and the code is this:
import org.apache.ibatis.logging.LogFactory;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.Expression;
import org.camunda.bpm.engine.delegate.JavaDelegate;
import org.camunda.bpm.engine.impl.util.json.JSONObject;
import org.camunda.connect.Connectors;
import org.camunda.connect.ConnectorException;
import org.camunda.connect.httpclient.HttpConnector;
import org.camunda.connect.httpclient.HttpResponse;
import org.camunda.connect.httpclient.impl.HttpConnectorImpl;
import org.camunda.connect.impl.DebugRequestInterceptor;

public class APIAudit implements JavaDelegate {

    static {
        LogFactory.useSlf4jLogging(); // MyBatis
    }

    private static final java.util.logging.Logger LOGGER =
            java.util.logging.Logger.getLogger(Thread.currentThread().getStackTrace()[0].getClassName());

    private Expression tokenField;
    private Expression apiServerField;
    private Expression questionIDField;
    private Expression subjectField;
    private Expression bodyField;

    public void execute(DelegateExecution arg0) throws Exception {
        String tokenValue = (String) tokenField.getValue(arg0);
        String apiServerValue = (String) apiServerField.getValue(arg0);
        String questionIDValue = (String) questionIDField.getValue(arg0);
        String subjectValue = (String) subjectField.getValue(arg0);
        String bodyValue = (String) bodyField.getValue(arg0);

        if (apiServerValue != null) {
            String url = "http://" + apiServerValue + "/v1.0/announcement";
            LOGGER.info("token: " + tokenValue);
            LOGGER.info("apiServer: " + apiServerValue);
            LOGGER.info("questionID: " + questionIDValue);
            LOGGER.info("subject: " + subjectValue);
            LOGGER.info("body: " + bodyValue);
            LOGGER.info("url: " + url);

            JSONObject jsonBody = new JSONObject();
            jsonBody.put("access_token", tokenValue);
            jsonBody.put("source", "SYSTEM");
            jsonBody.put("target", "AUDIT");
            jsonBody.put("tType", "system");
            jsonBody.put("aType", "auditLog");
            jsonBody.put("affectedItem", questionIDValue);
            jsonBody.put("subject", subjectValue);
            jsonBody.put("body", bodyValue);
            jsonBody.put("language", "EN");

            try {
                LOGGER.info("Generating connection");
                HttpConnector http = Connectors.getConnector(HttpConnector.ID);
                LOGGER.info(http.toString());

                DebugRequestInterceptor interceptor = new DebugRequestInterceptor(false);
                http.addRequestInterceptor(interceptor);
                LOGGER.info("JSON Body: " + jsonBody.toString());

                HttpResponse response = http.createRequest()
                        .post()
                        .url(url)
                        .contentType("application/json")
                        .payload(jsonBody.toString())
                        .execute();

                Integer responseCode = response.getStatusCode();
                String responseBody = response.getResponse();
                response.close();
                LOGGER.info("[" + responseCode + "]: " + responseBody);
            } catch (ConnectorException e) {
                LOGGER.severe(e.getMessage());
            }
        } else {
            LOGGER.info("No APISERVER provided");
        }
        LOGGER.info("Exiting");
    }
}
I'm sure the field injection works correctly, since the class prints the correct values. I have also used the http-connector from JavaScript in the same activity with no problem.
I'm using this approach because I need to make two different calls to external REST services in the same task, so any advice would be very welcome.
You need to enable the Connect process engine plugin in your process engine configuration. I'm not sure how you configured the process engine, but make sure to register this plugin: org.camunda.connect.plugin.impl.ConnectProcessEnginePlugin
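For example, if you build the engine programmatically, registering the plugin might look roughly like this (a sketch; with an XML or Spring based engine configuration you would register the same class there instead):

ProcessEngineConfigurationImpl config = (ProcessEngineConfigurationImpl)
        ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration();
// Register the Connect plugin so Connectors.getConnector(HttpConnector.ID) is wired up
config.getProcessEnginePlugins().add(new ConnectProcessEnginePlugin());
ProcessEngine engine = config.buildProcessEngine();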
Also check the following in your dependencies:
Do not add both the connectors-all and http-connector dependencies.
Make sure to check the error logs to see whether you have any class loading problem related to the httpclient classes. I am pretty sure there is a class loading issue with the HTTP client library; make sure to include the correct version of the connectors-all dependency.
I looked at this link: How to write a unit test for a Spring Boot Controller endpoint
I am planning to unit test my Spring Boot controller. I have pasted a method from my controller below. When I use the approach mentioned in the link above, will the call to service.verifyAccount(request) not be made? Are we just testing whether the controller accepts the request in the specified format and returns the response in the specified format, apart from testing the HTTP status codes?
@RequestMapping(value = "verifyAccount", method = RequestMethod.POST, produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<VerifyAccountResponse> verifyAccount(@RequestBody VerifyAccountRequest request) {
    VerifyAccountResponse response = service.verifyAccount(request);
    return new ResponseEntity<VerifyAccountResponse>(response, HttpStatus.OK);
}
You can write a test case using:

@RunWith(SpringJUnit4ClassRunner.class)
// Your Spring configuration class containing the @EnableAutoConfiguration annotation
@SpringApplicationConfiguration(classes = Application.class)
// Makes sure the application starts at a random free port, caches it throughout
// all unit tests, and closes it again at the end.
@IntegrationTest("server.port:0")
@WebAppConfiguration
Make sure you configure the server settings like the port/URL:
#Value("${local.server.port}")
private int port;
private String getBaseUrl() {
return "http://localhost:" + port + "/";
}
Then use the code below:
protected <T> ResponseEntity<T> getResponseEntity(final String requestMappingUrl,
        final Class<T> serviceReturnTypeClass,
        final Map<String, ?> parametersInOrderOfAppearance) {
    // Make a rest template to do the service call
    final TestRestTemplate restTemplate = new TestRestTemplate();
    // Add correct headers, none for this example
    final HttpEntity<String> requestEntity = new HttpEntity<String>(new HttpHeaders());
    // Do a call to the url
    final ResponseEntity<T> entity = restTemplate.exchange(getBaseUrl() + requestMappingUrl,
            HttpMethod.GET, requestEntity, serviceReturnTypeClass, parametersInOrderOfAppearance);
    // Return result
    return entity;
}
@Test
public void getWelcomePage() {
    Map<String, Object> urlVariables = new HashMap<String, Object>();
    ResponseEntity<String> response = getResponseEntity("/index", String.class, urlVariables);
    assertTrue(response.getStatusCode().equals(HttpStatus.OK));
}
When running my embedded Jetty web app launcher, I see the following output on stderr. I just started seeing this after moving my build to Maven 2. Has anyone seen this before?
IDLE SCEP#988057 [d=false,io=1,w=true,rb=false,wb=false],NOT_HANDSHAKING, in/out=0/0 Status = OK HandshakeStatus = NOT_HANDSHAKING
bytesConsumed = 5469 bytesProduced = 5509
It repeats occasionally seemingly at random times.
This seems to be coming from Jetty's NIO support; it appears that Jetty feels it is appropriate to log to stderr when it closes idle connections.
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.checkIdleTimestamp(SelectChannelEndPoint.java:231)
at org.eclipse.jetty.io.nio.SelectorManager$SelectSet$2.run(SelectorManager.java:768)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:436)
For those with similar problems, I overrode System.err with a mock output stream:
public class DebugOutputStream extends OutputStream {

    private Logger s_logger = LoggerFactory.getLogger(DebugOutputStream.class);

    private final OutputStream m_realStream;
    private ByteArrayOutputStream baos = new ByteArrayOutputStream();
    private Pattern m_searchFor;

    public DebugOutputStream(OutputStream realStream, String regex) {
        m_realStream = realStream;
        m_searchFor = Pattern.compile(regex);
    }

    @Override
    public void write(int b) throws IOException {
        // Buffer the current line and log a stack trace when it matches the pattern
        baos.write(b);
        if (m_searchFor.matcher(baos.toString()).matches()) {
            s_logger.info("unwanted output detected", new RuntimeException());
        }
        if (b == '\n') baos.reset();
        // Still forward everything to the real stream
        m_realStream.write(b);
    }
}
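It can be installed early in startup like this (the regex is just an example matching the Jetty message above):

// Wrap the real stderr; matching lines are logged with a stack trace to find the source
System.setErr(new java.io.PrintStream(
        new DebugOutputStream(System.err, ".*IDLE SCEP.*"), true));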