I've used the RabbitMQ Delayed Message Plugin to schedule my messages, and it worked without problems.
Now I want to configure the in-memory MassTransit transport to schedule messages in the development environment.
My code in ASP.NET Core:
Uri schedulerEndpoint = new Uri("queue:scheduler");

services.AddMassTransit(x =>
{
    x.AddMessageScheduler(schedulerEndpoint);
    x.AddConsumer<ScheduleNotificationConsumer>();
    x.UsingInMemory((context, cfg) =>
    {
        cfg.UseMessageScheduler(schedulerEndpoint);
        cfg.ConfigureEndpoints(context);
    });
});
Well, this code doesn't call my consumer after the specified time.
I read this link, and I guess I must use MassTransit.Quartz for in-memory scheduled messages, because it says: the UseInMemoryScheduler method initializes Quartz.NET for standalone in-memory operation.
Is that correct?
If yes, the linked page uses the following code, which has a dependency on RabbitMQ:
services.AddMassTransit(x =>
{
    x.AddMessageScheduler(new Uri("queue:scheduler"));
    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.UseInMemoryScheduler("scheduler");
        cfg.ConfigureEndpoints(context);
    });
});
But I don't want any dependency on RabbitMQ in the development environment.
If you're using the in-memory transport, you can use the new delayed scheduling that was included in 7.1.8. This standardizes the configuration across all transports, and for the first time provides in-memory scheduling without Quartz.NET.
To configure the new delayed scheduler, use:
services.AddMassTransit(x =>
{
    x.AddDelayedMessageScheduler();
    x.UsingInMemory((context, cfg) =>
    {
        cfg.UseDelayedMessageScheduler();
        cfg.ConfigureEndpoints(context);
    });
});
I have a CloudFormation stack that defines a GraphQL API powered by DynamoDB. I would like to run a test script that:
Creates a standard set of fixtures.
Runs various tests, including creating, modifying, and deleting data.
Deletes all of the fixtures and any other objects created during the test.
The “clean way” to do this would be to create a new stage for the tests, but this is extremely time-consuming (in terms of wall-clock time spent waiting for the result).
The “hard way” would be to keep precise track of every DynamoDB record created during the testing process and then delete them afterward one by one (and/or using many batch updates). This would be a huge pain to code, and the likelihood of error is very high.
An intermediate approach would be to use a dedicated pre-existing stage for integration tests, wipe it clean at the end of the tests, and make sure that only one set of tests is running at a time. This would require writing a script to manually clear out the tables, which sounds moderately less tedious and error-prone than the “hard way”.
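A table-wipe script mostly reduces to scanning each table for its keys and deleting them in batches. DynamoDB's BatchWriteItem accepts at most 25 requests per call, so the core of such a script is a chunking step. Below is a minimal sketch of that logic in TypeScript; the `Key` shape is simplified to string attributes, and the helper names are made up for illustration:

```typescript
// Minimal sketch of the batching logic a table-wipe script needs.
// DynamoDB's BatchWriteItem accepts at most 25 requests per call,
// so collected keys must be split into chunks of that size.

type Key = Record<string, string>;

// Split an array of items into chunks of at most `size` elements.
function chunk<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Shape each key as a DeleteRequest entry for BatchWriteItem.
function toDeleteRequests(keys: Key[]): { DeleteRequest: { Key: Key } }[] {
  return keys.map((key) => ({ DeleteRequest: { Key: key } }));
}

// In a real script you would Scan the table projecting only its key
// attributes, then for each chunk call
// batchWriteItem({ RequestItems: { [tableName]: requests } })
// and retry any UnprocessedItems the call returns.
```

The chunking and request-shaping are the only parts worth unit testing; the Scan/BatchWriteItem calls themselves are plain AWS SDK usage.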
Is there an established best practice for this? Are there other approaches I haven't considered?
How long does it take to deploy the stack?
If deployment takes only a small fraction of the time it takes to run the tests, use the "clean way"; otherwise use the intermediate approach of keeping a dedicated test stack already deployed.
You don't have to write scripts.
I actually wrote a testing library for exactly this purpose:
https://github.com/erezrokah/aws-testing-library/blob/master/src/jest/README.md#tohaveitem
Usage example (TypeScript):
import { clearAllItems } from 'aws-testing-library/lib/utils/dynamoDb';
import { invoke } from 'aws-testing-library/lib/utils/lambda';

describe('db service e2e tests', () => {
  const region = 'us-east-1';
  const table = 'db-service-dev';

  beforeEach(async () => {
    await clearAllItems(region, table);
  });

  afterEach(async () => {
    await clearAllItems(region, table);
  });

  test('should create db entry on lambda invoke', async () => {
    const result = await invoke(region, 'db-service-dev-create', {
      body: JSON.stringify({ text: 'from e2e test' }),
    });
    const lambdaItem = JSON.parse(result.body);

    expect.assertions(1);
    await expect({ region, table, timeout: 0 }).toHaveItem(
      { id: lambdaItem.id },
      lambdaItem,
    );
  });
});
If you do write the scripts yourself, you might need to account for eventual consistency and retry, since the data might not be available immediately after a write.
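That retry can be as simple as polling until the check passes, with a short delay between attempts. A generic sketch (the attempt count and delay are arbitrary defaults, and `getItem` in the usage comment is a hypothetical helper):

```typescript
// Generic retry helper for eventually consistent reads: keep calling
// `fn` until it resolves, waiting `delayMs` between failed attempts,
// and give up after `attempts` tries by rethrowing the last error.
async function retry<T>(
  fn: () => Promise<T>,
  attempts = 5,
  delayMs = 200,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}

// Usage sketch: poll until a just-written item becomes readable.
// await retry(async () => {
//   const item = await getItem(region, table, { id });
//   if (!item) throw new Error('not there yet');
//   return item;
// });
```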
I'd simply delete this "test stack" at the end of the test and let CloudFormation clean up DynamoDB for you; check the DeletionPolicy documentation.
You might want to trigger/hook stack deletion from your CI environment, whatever it is. As an example, I found this CodePipeline walkthrough: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-basic-walkthrough.html
I've been working with Ember.js for some time, and now I want to refine my understanding of Ember's nature.
Question 1.
Am I right that the route's model() hook is the best place to write asynchronous code (to fetch all the needed data)? Should I try to place all my network requests (or at least most of them) inside routes' model() hooks?
Question 2.
Component hooks are synchronous by nature. Does that mean it's a bad decision to write async code inside them?
Say I have an async init() hook where some data is calculated and a didRender() hook where I expect to use that data. When Ember calls init(), it returns a Promise, so the rest of its body is moved off the stack into a special queue, and Ember doesn't wait for the event loop to return that code to the stack. So Ember runs the next hooks, and by the time didRender() executes, init() may not be fulfilled yet and the expected data may not exist. Is that right?
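That reading can be reproduced with plain classes outside Ember. A minimal sketch (the class and field names are made up for illustration):

```typescript
// Plain demonstration that an async method returns immediately:
// the code after `await` runs in a later microtask, so a caller that
// proceeds synchronously will not see the data yet.
class Cart {
  products: string[] = [];

  async init(): Promise<void> {
    // Everything before the first `await` runs synchronously...
    await Promise.resolve();
    // ...but this line runs later, after the caller has moved on.
    this.products.push("apple");
  }
}

const cart = new Cart();
cart.init(); // returns a pending Promise; the body has not finished
console.log(cart.products.length); // prints 0, just as didRender() might observe
```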
Question 3.
Service hooks should also be synchronous, because when a service is injected into a component and used, Ember likewise doesn't wait for an async hook to be fulfilled.
Say I have a shopping cart service with a products property. The product ids are stored in localStorage, and I want to fetch those products from the server and set them on the products property.
import Service, { inject as service } from '@ember/service';
import { A } from '@ember/array';

export default Service.extend({
  store: service(),

  init(...args) {
    this._super(...args);
    this.set('products', A());
  },

  async loadProducts() {
    const cartProducts = A();
    const savedCartProducts = localStorage.getItem('cartProducts');
    if (savedCartProducts) {
      const parsedSavedCartProducts = JSON.parse(savedCartProducts);
      const ids = Object.keys(parsedSavedCartProducts);
      if (ids.length > 0) {
        const products = await this.store.query('product', { id: ids });
        products.forEach(p => {
          p.set('cartQuantity', Number(parsedSavedCartProducts[p.id].qty));
          cartProducts.pushObject(p);
        });
      }
    }
    this.products.pushObjects(cartProducts);
  },

  ...
});
If I call loadProducts() from the service's init(), I can't use this.cart.products in controllers/application.js, for example, because the service is ready but the async init() is still executing. So should I call it in the routes/application.js model() hook instead?
Question 4.
If there is some general component that doesn't belong to any particular route, but it needs some data from a server, where should I make the async requests? Or are computed properties and observers the only solutions?
Thanks a lot.
Good questions here, you’re on the right track!
Question 1: yes, the general rule of thumb is that, until you are familiar with things, doing async work in the route hooks (beforeModel, model, and afterModel) makes it easier to reason about what is going on. Eventually you may want to start bending the defaults to suit your custom UI needs, but it's simplest to start with this rule.
Questions 2-4: you're asking several related questions about async code, so I'll answer them more broadly.
First off, services can do async work or not, but unless you call those methods from a hook (like a route hook) you are responsible for handling the asynchronous results. For instance, the Ember Data store is a service that returns data asynchronously.
But if you hit cases where you do need to do async in a component, the recommended addon to help with that is Ember Concurrency: http://ember-concurrency.com/docs/introduction/
Ember Concurrency helps solve many of the async bugs and edge cases that you haven't hit yet, but will if you start writing async code in components. So I'd highly recommend learning more about it.
Good luck!
I have a function in an Angular 2 service that I would like to test.
service.ts
upload() {
  let file = new Transfer();
  file.upload(myfile).then(() => {
    // my callback
  });
}
I would like to mock Transfer in my test using Jasmine. I tried this in my
service.spec.ts
with import { TransferMock as Transfer } from '../mocks/mocks', but it is not working. This is how my test is instantiated:
describe('authentication service', () => {
  beforeEach(() => {
    auth = new Service(<any>new HttpMock(), <any>new StorageMock());
  });

  it('initialize authentication', () => {
    expect(auth).not.toEqual(null);
    auth.upload('file'); // it fails here
  });
});
edit
Transfer is not injected into the service; only one function uses Transfer, so not injecting it can reduce the initial loading time of the app, I guess (I would be happy to hear other opinions). So I would like to know if there is any way to mock it when it's constructed this way.
edit
Although I accepted Martin's answer because it is the best practice, it has one issue that can occur with Ionic Native plugins: if the plugin doesn't have browser support, it can fail. In this case it happened when I injected it, with the error FileTransfer is not defined. So I am back again, looking for suggestions.
In order to provide a mock for a class in a test, you need to inject the class in your implementation.
In your NgModule, add Transfer to your providers. Then simply inject it into your service.
Then in your test you can use { provide: Transfer, useClass: TransferMock } in your TestBed providers.
Update
The primary purpose of Dependency Injection is to make code testable and to allow mocking, faking, and stubbing of services.
Update
With Dependency Injection you can configure a different set of providers for different environments.
For example, if you run your application both in the browser and in a native mobile environment, you can swap out your configuration.
In your module you could have something like this:
let TRANSFER_PROVIDER: any;

if (environment.browser) {
  TRANSFER_PROVIDER = Transfer;
} else {
  TRANSFER_PROVIDER = { provide: Transfer, useClass: NativeTransfer };
}

...

providers: [ TRANSFER_PROVIDER ]
NativeTransfer could be a simple stub that does nothing but prevent errors, or it could let the user know that this feature is not supported in their browser.
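The environment-based swap can be sketched without Angular at all; the point is simply that the consumer asks a registry for a token instead of constructing the class itself, so tests and different environments can register different implementations. The `TransferMock` and `NativeTransfer` names echo the thread above, but this registry is a made-up illustration, not Angular's injector:

```typescript
// Framework-free sketch of the provider-swap idea: consumers look up a
// token in a registry, so tests and other environments can register
// different implementations without touching consumer code.
interface Transfer {
  upload(file: string): Promise<string>;
}

class NativeTransfer implements Transfer {
  async upload(file: string): Promise<string> {
    return `uploaded ${file} natively`;
  }
}

class TransferMock implements Transfer {
  async upload(file: string): Promise<string> {
    return `mock upload of ${file}`;
  }
}

// A tiny "injector": a map from token to implementation.
const providers = new Map<string, Transfer>();

function provide(token: string, impl: Transfer): void {
  providers.set(token, impl);
}

function inject(token: string): Transfer {
  const impl = providers.get(token);
  if (!impl) throw new Error(`no provider for ${token}`);
  return impl;
}

// Production wiring vs. test wiring is a one-line difference:
provide("Transfer", new NativeTransfer());
// provide("Transfer", new TransferMock()); // <- in the test setup instead
```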
So my acceptance test keeps destroying itself before my promise finishes. I'm aware that I need to wrap my promise in the Ember run loop, but I just can't get it to work. Here's how my component looks:
export default Ember.Component.extend({
  store: Ember.inject.service(),

  didReceiveAttrs() {
    this.handleSearchQueryChange();
  },

  /**
   * Search based on the searchQuery
   */
  handleSearchQueryChange() {
    this.get('store').query('animals', {
      orderBy: 'name',
      startAt: this.attrs.searchQuery
    }).then(searchResults => {
      this.set('searchResults', searchResults);
    });
  }
});
I've already tried wrapping this.handleSearchQueryChange(), this.get('store').query..., and this.set('searchResults', searchResults) in a run loop, but the acceptance test still doesn't wait for store.query to finish.
One thing to note: this store query performs a request against a live Firebase back-end.
I'm currently using Pretender to mock the data and work around the issue, but I'd like to solve it through Ember.run as well. Anyone care to provide a solution?
It sounds like your problem may have the same cause as the errors I've been seeing.
tl;dr
To work around this issue, I've been using a custom test waiter. You can install it with ember install ember-cli-test-model-waiter (for Ember v2.0+), and it should just make your test work without any further setup (if not, please file a bug).
Longer answer:
The root cause of this problem is that the ember testing system doesn't know how to handle Firebase's asynchronicity. With most adapters, this isn't a problem, because the testing system instruments AJAX calls and ensures they have completed before proceeding, but this doesn't work with Firebase's websockets communication.
The custom test waiter I mentioned above works by waiting for all models to resolve before proceeding with the test, and so should work with any non-AJAX adapter.
I'm having trouble testing reactive code that consumes a Task-based service. In my class under test, I consume the task and use ToObservable to do reactive-y things with it.
public void Method()
{
    _svc.MyTaskServiceMethod().ToObservable().Select(....) // pipe it elsewhere and do interesting things
}
Now in a unit test I'm testing some timing (using Moq for the service):
svcMock.Setup(x => x.MyTaskServiceMethod()).Returns(() =>
    Observable.Return("VALUE", testScheduler)
        .Delay(TimeSpan.FromMilliseconds(100), testScheduler)
        .ToTask()
);
The problem is that, despite using the test scheduler in the Return/Delay calls, the task itself still completes on a separate thread. I can see this by adding a couple of console writes of the current managed thread id to the code.
svcMock.Setup(x => x.MyServiceMethod()).Returns(() =>
{
    var task = Observable.Return("VALUE", testScheduler)
        .Delay(TimeSpan.FromMilliseconds(1000), testScheduler)
        .Do(x => { Console.WriteLine(Thread.CurrentThread.ManagedThreadId.ToString() + " Obs"); })
        .ToTask();

    task.ContinueWith((_) =>
    {
        Console.WriteLine(Thread.CurrentThread.ManagedThreadId.ToString() + " Task");
    });

    return task;
});
The Do(..) callback executes on the primary testing thread, and happens exactly when I expect after a testScheduler.AdvanceBy(..) call.
The task continuation, however, still happens on a separate thread and basically doesn't execute until after the body of the unit test has finished. So in the body of my target, nothing ever really gets pushed through my task.ToObservable() observable.
Task continuations will use a task pool thread by default, so your continuation escapes the control of the test scheduler. If you specify the option TaskContinuationOptions.ExecuteSynchronously, it will use the same thread and the result will be posted to the observable as desired:
task.ContinueWith((_) =>
{
    Console.WriteLine(Thread.CurrentThread.ManagedThreadId.ToString() + " Task");
}, TaskContinuationOptions.ExecuteSynchronously);
Addendum
You may find this related discussion on the Rx site quite illuminating on the subject of concurrency in TPL -> Rx transitions, and ToObservable() in particular.
Some time ago I co-authored a unit test library based on NUnit to help with precisely with Rx and TPL testing. For that we built a Test TPL scheduler to force all TPL tasks to run without concurrency. You can see the relevant code here: https://github.com/Testeroids/Testeroids/blob/master/solution/src/app/Testeroids/TplTestPlatformHelper.cs#L87