We are implementing a DynamoDB data store. Our unit tests already cover the Web.API, service, and repository layers. However, I have not found any mocking frameworks/unit-test support for DynamoDB on .NET Core, only instructions for running the local DynamoDB (https://aws.amazon.com/blogs/aws/amazon-dynamodb-libraries-mappers-and-mock-implementations-galore/). Amazon indicates that it's best to run a local instance (DynamoDB Local), but this is not ideal for us, since we have Azure DevOps pipelines in place. It also causes other issues, like deleting tables for each test run, making sure the local DB instance is running, etc.
I was thinking of extending DynamoDBClient, DynamoDBContext, etc. and then using these to simulate the store, but this seems like a lot of work and I am not sure it is worthwhile. Can anyone suggest an easier alternative, or a framework they have come across or built, for unit testing DynamoDB code in .NET Core? We really don't want to go down the path of running the local DynamoDB on our build server.
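Rather than extending DynamoDBClient itself, the usual approach is to hide it behind your own repository interface and stub that in tests. A minimal sketch of the idea, written in Python with `unittest.mock` for brevity (in .NET Core the equivalent would be mocking `IAmazonDynamoDB` or `IDynamoDBContext` with Moq or NSubstitute; the repository and table names here are illustrative):

```python
from unittest import mock

# Repository that depends on an injected DynamoDB-style client
# (interface modeled on the low-level GetItem call; names are made up).
class UserRepository:
    def __init__(self, client):
        self.client = client

    def get_user(self, user_id):
        resp = self.client.get_item(
            TableName="users", Key={"id": {"S": user_id}})
        item = resp.get("Item")
        return item["name"]["S"] if item else None

# In a unit test, the client is a Mock -- no local DynamoDB required:
client = mock.Mock()
client.get_item.return_value = {"Item": {"name": {"S": "Ada"}}}
repo = UserRepository(client)
assert repo.get_user("42") == "Ada"
client.get_item.assert_called_once()
```

Because the repository only sees the injected client, the same tests run identically in a CI pipeline with no database at all.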
Thanks in advance.
I developed an API using Flask + PostgreSQL, and the user registration process was handled by that infrastructure. I then decided to change the infrastructure altogether: NestJS + DynamoDB + Cognito. In other words, I decided to go serverless!
However, I am finding it quite hard to understand how to organize my project into development, staging and production environments.
To sum up, I have two questions:
How to perform unit testing in serverless environments?
How to create a development mode for testing and developing features locally?
I will give an example for each question in order to clarify what my doubts are.
Example for (1)
In the first architecture (Flask + PostgreSQL) I created a script that would wipe the entire local database and create mock data, so that I could execute functions and test the API. But since I have no local environment, should I replicate the entire production environment in AWS in order to test it?
Example for (2)
Usually when I was testing new features in the API I would erase and recreate the mock data, and my coworkers would do the same on their local machines. In other words, each developer had a local system that was theirs to be "screwed" haha, without disturbing other developers' work.
How could I do something similar in a DynamoDB / serverless architecture?
If your tests involve a relatively small amount of data being written to and read from the table, you can run them against the real AWS, just using a different table (and probably a different account!) from your production table. The test table can be created and deleted in a matter of seconds.
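For the per-developer isolation the question describes, one common convention (a sketch; the naming scheme and environment variables are invented for illustration) is to derive the table name from the environment and developer, so each person gets disposable tables that never collide:

```python
import os

def table_name(base: str) -> str:
    """Per-environment table names; dev tables also carry the developer's
    username so local experiments don't collide (convention is made up)."""
    env = os.environ.get("APP_ENV", "dev")
    if env == "dev":
        user = os.environ.get("USER", "local")
        return f"{base}-dev-{user}"
    return f"{base}-{env}"

# e.g. "bookings-dev-alice" on Alice's machine,
#      "bookings-staging" / "bookings-prod" in shared environments
```

Each developer (or CI run) can then create, fill, and drop their own tables without touching anyone else's data.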
However, if the tests involve a more substantial amount of data, testing this way can become expensive and slow, in which case you might prefer to have a local version of DynamoDB running on your own machine and connect to that in your tests. As far as I know, you have two options for a locally installed DynamoDB:
Amazon provides DynamoDB Local. This is a (sort of) DynamoDB-compatible database that cannot be used in any real deployment (it is too slow, has no high availability, etc.) but is good enough to write tests against.
ScyllaDB provides Alternator, an open-source DynamoDB-compatible database. This is a full-fledged database, with high performance, persistence and high-availability - so you can use it even for high-throughput test workloads, and you can also decide to use it for production, not just for testing (full disclosure - I'm one of the Alternator developers).
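Whichever local option you pick, the only change the application code needs is a switchable endpoint. A minimal sketch (the environment-variable name is an assumption; with boto3 you would pass the result as `endpoint_url`, and `None` means "use the real AWS service"):

```python
import os

def dynamodb_endpoint():
    """Endpoint URL for the DynamoDB client, or None for real AWS.

    Tests export DYNAMODB_ENDPOINT (e.g. http://localhost:8000 for
    DynamoDB Local, or wherever Alternator listens); production
    leaves it unset so the SDK resolves the real service endpoint.
    """
    return os.environ.get("DYNAMODB_ENDPOINT")

# Wiring (boto3 shown as an example client; illustrative, not executed here):
#   client = boto3.client("dynamodb", endpoint_url=dynamodb_endpoint())
```

The same pattern works in any SDK that lets you override the service URL, which both DynamoDB Local and Alternator rely on.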
I have installed CakePHP 3 using directions from this tutorial:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-cakephp-tutorial.html
It is working perfectly, and the installation was actually quite easy. PHP, CakePHP and MySQL are working, and I also noticed that the newest AWS SDK as a whole is installed in the vendor directory. So I am fully set to use DynamoDB as a data source too. You might ask why I would use DynamoDB when I am already using MySQL/MariaDB: we have an application already in production that uses DynamoDB, and we should be able to write an admin application on top of DynamoDB using CakePHP. This is not a technical decision; it comes from the business side.
I found a good tutorial by StarTutorial on using DynamoDB as a session handler in CakePHP 3:
https://www.startutorial.com/articles/view/using-amazon-dynamodb-as-session-handler-in-cakephp-3
Well, it can't be a long way from there to using DynamoDB for putting data, getting data and doing scans, can it? Do you have any simple example of how to do it: how to write data to DynamoDB, or do a scan?
I have also read the article:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.PHP.html
and this is working fine, no problem. But I would like to use all the advantages of CakePHP 3: templating, security and so on, thousands of hours saved by well-written code, and a very fast start coding, for example, an admin console :)
Thank you,
You could create a Lambda function (in case you want to go serverless) or any other microservice to abstract communication with your DynamoDB. This will definitely simplify your PHP code. You may call Lambda functions directly (via API Gateway), or post messages to SQS for better decoupling. I would recommend using SQS: you'll need some kind of microservice anyway to consume messages and deal with your DynamoDB in a CQRS fashion. Hope it helps!
Thank you for your answer. I was looking for an example of how to use the AWS SDK for DynamoDB without adding more complexity to this environment than it already has. Your way, I would have to create yet another layer instead of using the SDK that already exists. Can you please give a working example of how the AWS SDK is used from CakePHP 3, so that it can use DynamoDB as a data source for its applications without losing its own capabilities (MVC, security, etc.)?
Thank you,
After some hard debugging and a few bugs fixed, I was able to get it working using only the AWS SDK in CakePHP 3.
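For reference, the low-level put/scan calls the question asks about take attribute-value maps, where every value is wrapped in a type descriptor. A sketch in Python syntax (the PHP SDK's `DynamoDbClient->putItem()` and `->scan()` accept request arrays with these same shapes; the table and attribute names are invented):

```python
# An item in the low-level DynamoDB wire format: every attribute carries
# a type descriptor ("S" string, "N" number, "BOOL", ...).
item = {
    "id":    {"S": "42"},
    "title": {"S": "Hello DynamoDB"},
    "views": {"N": "7"},          # numbers are transmitted as strings
}

# PutItem request shape (illustrative):
put_request = {"TableName": "articles", "Item": item}

# Scan request shape with a filter (illustrative):
scan_request = {
    "TableName": "articles",
    "FilterExpression": "views > :min",
    "ExpressionAttributeValues": {":min": {"N": "5"}},
}
```

Wrapping these calls in a CakePHP table/behavior class keeps the MVC layer intact while the SDK does the transport.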
I have a booking app that can deal with both local and remote API bookings. Our logic for (e.g.) pricing and availability follows two very different pathways. We obviously need to test both.
But running regular tests against a remote API is slow. The test environment provided manages a response in 2-17 seconds. That is not feasible for my pre_commit tests. Even if they sped it up, it's never going to be fast, and it will always require a connection to pass.
But I still need to test our internal logic for API bookings.
Is there some way that, within a test runner, I can spin up a little webserver (quite separate from the Django website) that serves a reference copy of their API? I could then plug that into the models we're dealing with and query against it locally, at speed.
What's the best way to handle this?
Again, I need to stress that this reference API should not be part of the actual website, unless there's a way of adding views that only apply at test time. I'm looking for clean solutions. The API calls are pretty simple. I'm not looking for verification or anything like that here, just that bookings made against an API are priced correctly internally, handle availability issues, etc.
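Something like this is what I'm imagining, using only the standard library: a throwaway server started inside the test run on a random free port (the route and payload here are made up, not the real API):

```python
import http.server
import json
import threading
import urllib.request

class FakeBookingAPI(http.server.BaseHTTPRequestHandler):
    """Minimal stand-in for the remote booking API (responses invented)."""
    def do_GET(self):
        body = json.dumps({"available": True, "price": 99.5}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def start_fake_api():
    # Port 0 asks the OS for any free port; serve in a background thread.
    server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), FakeBookingAPI)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Usage in a test: point the model's API base URL at this port.
server = start_fake_api()
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/availability") as resp:
    data = json.loads(resp.read())
server.shutdown()
```

The server lives only for the duration of the test, so nothing leaks into the actual Django site.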
For your testing purposes you can mock the API call functions.
you can see more here:
https://williambert.online/2011/07/how-to-unit-testing-in-django-with-mocking-and-patching/
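A minimal sketch of that mocking idea with `unittest.mock` (the function and field names are illustrative, not taken from the actual booking app):

```python
from unittest import mock

# Hypothetical pricing function that normally hits the slow remote API:
def quote_total(api_client, booking_id):
    data = api_client.get_quote(booking_id)   # 2-17 s against the real API
    return round(data["price"] * (1 + data["tax_rate"]), 2)

# In tests, the client is replaced by a Mock, so no network is involved:
fake_api = mock.Mock()
fake_api.get_quote.return_value = {"price": 100.0, "tax_rate": 0.2}

assert quote_total(fake_api, "b-1") == 120.0
fake_api.get_quote.assert_called_once_with("b-1")
```

This exercises your internal pricing logic at full speed; the Mock records the call so you can also verify what was sent to the API.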
I'm creating a web application and decided to use a microservices approach. Could you please tell me the best (or at least a common) approach to organizing database access from all the web services (login, comments, etc.)? Is it good practice to create a DAO web service and use only it to read/write values in the application's database, or should each web service have its own DAO layer?
Each microservice should be a full-fledged application with all necessary layers (which doesn't mean there cannot be shared code between microservices, but they have to run in separate processes).
Besides, it is often recommended that each microservice have its own database. See http://microservices.io/patterns/data/database-per-service.html and https://www.nginx.com/blog/microservices-at-netflix-architectural-best-practices/. Therefore, I don't really see the point of a web service that would only act as a data-access facade.
Microservices are great, but it is not good to start with too many microservices right away. If you have doubts about how to define the boundaries between microservices in your application, start with a monolith (all the while keeping the code clean, with good object-oriented design, well-designed layers and interfaces). When the application reaches a more mature state, you will more easily see the right places to split it into independently deployable services.
The key is to keep together things that should really be coupled. When we try to decouple everything from everything, we end up creating too many layers of interfaces, and this slows us down.
I think it's not a good approach.
DB operations are critical in any process, so they belong in the DAO layer inside the microservice; why wouldn't you want to implement them there?
Using a shared data service, you lose control, and if you have to change the process logic you have to change the DAO service (affecting all the services).
In my opinion it is not a good idea.
I think that using services to expose data from a database is ideal because of the flexibility it provides. Developing a REST service to expose some or all of your data makes that data consumable directly by the UI via AJAX, or by other services which can process it and generate new information. These consumers do not need to implement a DAO and can be written in any language. While a REST service over your entire database is probably not a microservice, a case could be made for breaking it down: read-only services for Students, Professors and Classes exposed on the school web site(s), with separate Create, Update and Delete (CUD) services available only to the Registrar's office desktop applications.
For example, building a service that exposes a statistical value over the data protects the data from examination by a user/program that only needs the statistic, without requiring the service to implement an entire DAO for the components of that statistic. Full-function databases like SQL Server or Oracle provide a lot of functionality that application developers can use, including complex queries (using indexes), statistics, and set operations on data.
Having a database service is a completely valid pattern. In fact, this is one of the key examples of where to start to export aspects of a monolith to a micro service in the Building Microservices book.
How to organize your code around such an idea is a different issue. Yes, from the DB client programmer's standpoint, having the same DAO layer on each DB client makes a lot of sense.
The DAO pattern may be suitable for binding your DB to the one programming language that you use. But then you need to ask yourself why you are exposing your database as a web service if all access to it will be mediated by the same DAO infrastructure. Or are you going to create one DAO binding for each client programming language?
If all database clients are going to be written in the same programming language, then are you sure you really need to wrap your DB as a microservice? After all, the DB is usually already a remote service with a well-defined network protocol optimized to transfer data quickly and reliably. Why add HTTP on top of it? What do you expect to gain from such added complexity?
Another problem with the DAO pattern is that the DAO structure does not necessarily follow the evolution of the web service. The web service may evolve in ways that do not break old clients, and different clients may use different features of the microservice. In that case you are no longer sharing the same DAO layer structure on each client.
Make sure you are not doing RPC-style programming over web services; it does not make much sense, because you would basically be throwing away one of the key advantages of microservices: the decoupling between service and client.
I am currently writing integration/functional tests for my system. Part of the functionality is to hit a web service via http (another system I run).
How should I set up a test instance of the web service to allow for good functional testing? I want to have my system run against this service with live production data.
Should the web service be an independent instance that always has the live production data that I reload manually (maybe reset every time I start an instance of it)?
Should the web service be set up and torn down with every test?
What are some common practices to deal with situations like this?
First of all, please make sure you know the difference between functional testing and integration testing. You can do pretty good functional testing without the major efforts required by integration testing (instantiating web services, accessing a database). Basically, the mocking technique works pretty well even to simulate data-layer responses and web-service behaviour (I believe details like HTTP as the transport can be ignored for most test cases).
For such integration testing I would suggest having a separate SIT (system integration testing) environment which includes a separate web service and database as well.
Should the web service be an independent instance that always has the live production data
that I reload manually (maybe reset every time I start an instance of it)?
Yep, it should be completely separate, but the data can be manually generated/prepared. For instance, you can prepare a set of data that allows testing some predefined test cases: test data sets which are deployed to the SIT DB instance before the actual test run and then cleaned up in the test TearDown.
Should the web service be set up and torn down with every test?
Yep, tests should be isolated from each other, so they must not affect one another in any way.
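The setup/teardown-per-test discipline described above can be sketched with `unittest` (the data set is an in-memory stand-in invented for the example; a SIT run would load it into the test database instead):

```python
import unittest

class BookingServiceTest(unittest.TestCase):
    def setUp(self):
        # Deploy a fresh, predefined data set before every test.
        self.db = {"bookings": [{"id": 1, "status": "confirmed"}]}

    def tearDown(self):
        # Clean up so no test sees another test's leftovers.
        self.db.clear()

    def test_booking_is_confirmed(self):
        self.assertEqual(self.db["bookings"][0]["status"], "confirmed")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(BookingServiceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because every test rebuilds its own data in `setUp` and wipes it in `tearDown`, tests can run in any order without interfering with each other.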