Local cloud stack for Azure similar to LocalStack for AWS?

Is there a mocking framework for Azure similar to LocalStack for AWS? Please understand that I am not looking for an SDK mock but a resource stack mock.
So much so that I could replace the configuration of my local Azure stack with actual Azure resources in my project and the functionality would remain just the same, quite like how it works with LocalStack.
I have found Azure Cloud Fabric to come closest to this, but it is tightly coupled with Visual Studio IDE.

Although there is no equivalent of LocalStack for Azure, Microsoft publishes three emulators you can run locally to help with integration testing:
Azure Functions Core Tools, a local version of the Azure Functions Runtime, allowing you to execute your Azure functions locally without deploying them.
Azure Storage Emulator, a local emulator of Azure Storage (a connection sketch follows this list).
Cosmos DB Emulator, a local emulator of Cosmos DB.
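For a feel of how little test code has to change, here is a minimal TypeScript sketch (using the @azure/storage-blob package; the container and blob names are made up) that points the SDK at the local emulator via the development-storage connection string instead of a real account:

```typescript
// Hypothetical smoke test against the local storage emulator / Azurite.
// "UseDevelopmentStorage=true" is the shortcut connection string the emulator accepts.
import { BlobServiceClient } from "@azure/storage-blob";

async function uploadSample(): Promise<void> {
  const service = BlobServiceClient.fromConnectionString("UseDevelopmentStorage=true");
  const container = service.getContainerClient("test-fixtures"); // container name is made up
  await container.createIfNotExists();

  const blob = container.getBlockBlobClient("hello.txt");
  const body = "hello from the emulator";
  await blob.upload(body, Buffer.byteLength(body));
  console.log(`Uploaded ${blob.name} to ${container.containerName}`);
}

uploadSample().catch((err) => {
  console.error("Is the emulator running?", err);
  process.exit(1);
});
```

Swapping in a real storage account's connection string is the only change needed to run the same code against Azure.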
The above three emulators can get you a lot of integration test coverage. However, since Azure Functions, AWS Lambda and most modern web stacks (even non-serverless ones) have moved towards consuming services rather than just software modules, the only way to have complete parity between the integration test and production environments is to automate the creation and tear-down of real, paid-for services.
A recipe for End to End/Integration testing on Azure:
Use Azure DevOps Pipelines to automate the entire CI process.
Add tasks to the pipeline for creating and tearing down (real) test fixture resources with persistent state (databases, file storage, etc.) using the Azure command-line tools.
Provide the test application access to real, stateless services (such as Azure Cognitive Services etc.) as you would for production.
Use Azure Variable Groups to store names, connection strings etc. for the test fixture resources. You can store a different set for production in a different group, allowing easy switching between them in YAML for different stages. These variables can also be templated in their own YAML file.
Use the Azure Functions Core Tools emulator to host and run the functions within the CI agent rather than deploying them, with a unit test framework sending them requests (a minimal sketch follows this list). The functions will use the non-emulated services stood up as test fixtures.
Alternatively, create a deploy-for-test stage that publishes the API for real, then write API tests that make raw HTTP requests, or use this as a backend for Selenium WebDriver testing of a UI/frontend.
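As a rough illustration of the Core Tools step (assumptions: the function app is started with `func start`, which serves HTTP triggers on http://localhost:7071/api/<name> by default; the function name "CreateOrder", its request and its response shape are invented for this sketch):

```typescript
// Hypothetical integration test against a function hosted locally by the Core Tools.
// Uses Node 18+ global fetch and the built-in assert module, no test framework required.
import assert from "node:assert/strict";

async function testCreateOrder(): Promise<void> {
  const res = await fetch("http://localhost:7071/api/CreateOrder", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // The function itself talks to the real (non-emulated) fixture resources that the
    // pipeline stood up and exposed via the variable group, exactly as in production.
    body: JSON.stringify({ sku: "ABC-123", quantity: 2 }),
  });

  assert.equal(res.status, 201);
  const order = (await res.json()) as { id?: string };
  assert.ok(order.id, "expected the function to return the id of the stored order");
}

testCreateOrder()
  .then(() => console.log("CreateOrder integration test passed"))
  .catch((err) => {
    console.error(err);
    process.exit(1);
  });
```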
The above approach relies on real services rather than emulated ones, so you are testing something that is pretty close to what you deploy to production. It will incur usage fees each time you run your tests. If this is a problem, run unit tests and emulator-based integration tests first in the pipeline, and add a human check or a separate pipeline for this level of testing, which you only perform before pushing to production.
Azure deployment slots may also be worth looking up.

There is now Azurite (https://github.com/azure/azurite), which also provides a Docker image:
https://hub.docker.com/_/microsoft-azure-storage-azurite

Related

Using cloud functions vs cloud run as webhook for dialogflow

I don't know much about web development and cloud computing. From what I've read, when using Cloud Functions as the webhook service for Dialogflow, you are limited to writing code in just one source file. I would like to create a really complex Dialogflow agent, so it would be handy to have an organized code structure to make development easier.
I've recently discovered Cloud Run, which seems like it can also handle webhook requests and makes it possible to develop a complex code structure.
I don't want to use Cloud Run just because it is inconvenient to write everything in one file, but on the other hand it would be strange to have a Cloud Function with a single file containing thousands of lines of code.
Is it possible to have multiple files in a single cloud function?
Is cloud run suitable for my problem? (create a complex dialogflow agent)
Is it possible to have multiple files in a single cloud function?
Yes. When you deploy to Google Cloud Functions you create a bundle with all your source files or have it pull from a source repository.
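For illustration, a hypothetical two-file layout for an HTTP-triggered function (the file names, intent name and helper are made up; Cloud Functions for Node hands HTTP functions Express-style request/response objects):

```typescript
// index.ts -- only the exported entry point has to live in the main file; the bundle
// you deploy (or the repository you point Cloud Functions at) can contain any number
// of modules. "./intents/order" and the intent name are invented for this sketch.
import type { Request, Response } from "express";
import { handleOrderIntent } from "./intents/order";

export const dialogflowWebhook = (req: Request, res: Response): void => {
  const intent = req.body?.queryResult?.intent?.displayName;
  if (intent === "order.create") {
    res.json(handleOrderIntent());
    return;
  }
  res.json({ fulfillmentText: `No handler for intent "${intent}".` });
};

// intents/order.ts -- an ordinary module, bundled and deployed together with index.ts
export function handleOrderIntent(): { fulfillmentText: string } {
  return { fulfillmentText: "Your order has been placed." };
}
```

When you deploy the whole folder, both files ship together; only the entry point has to be exported from the main module.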
But Dialogflow only allows index.js and package.json in the Built-In Editor
For simplicity, the built-in code editor only allows you to edit those two files. But the built-in editor is mostly just meant for basic testing. If you're doing serious coding, you probably already have an environment you prefer to use to code and deploy that code.
Is Cloud Run suitable?
Certainly. The biggest thing Cloud Run will get you is complete control over your runtime environment, since you're specifying the details of that environment in addition to the code.
The biggest downside, however, is that you also have to determine the details of that environment. Cloud Functions provides an HTTPS server without you having to worry about those details, as long as the rest of the environment is suitable. (A minimal sketch of such a server appears at the end of this answer.)
What other options do I have?
Anywhere you want! Dialogflow only requires that your webhook:
Be at a public address (i.e. one that Google can resolve and reach)
Run an HTTPS server at that address with a non-self-signed certificate
During testing, it is common to run it on your own machine via a tunnel such as ngrok, but this isn't a good idea in production. If you're already familiar with running an HTTPS server in another environment, and you wish to continue using that environment, you should be fine.
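To make those requirements concrete, here is a minimal sketch of a self-hosted webhook (Express is an arbitrary choice; the route, intent handling and reply text are made up). The same code runs unchanged on Cloud Run, which injects the PORT environment variable and terminates HTTPS in front of your container; anywhere else, you are responsible for putting a valid certificate in front of it:

```typescript
// Minimal Express webhook (route, intent handling and reply text are made up).
// Cloud Run injects PORT and terminates HTTPS in front of the container; elsewhere
// you must provide a valid (non-self-signed) certificate yourself.
import express from "express";

const app = express();
app.use(express.json());

app.post("/webhook", (req, res) => {
  const intent = req.body?.queryResult?.intent?.displayName ?? "unknown";
  // fulfillmentText is the simplest field Dialogflow accepts in a webhook response.
  res.json({ fulfillmentText: `You reached the webhook for intent "${intent}".` });
});

const port = Number(process.env.PORT ?? 8080);
app.listen(port, () => console.log(`Webhook listening on port ${port}`));
```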

How do I deploy a Play application on Google Cloud

This is my first time deploying an application. I have some idea about it but I am not sure if it is correct. How do I go about deploying a Play application on Google Cloud?
1) I have created a package using the dist command. I now have the zip file on my local PC. https://www.playframework.com/documentation/2.5.x/Deploying
2) Do I first need to create a compute resource on GCP? What configuration shall I use for the VM? My app is still in the test phase, so there are no external users at the moment.
3) I suppose Play uses the Netty web server. So do I need to install Netty on the compute resource? I have looked online a bit but can't find a resource on how to deploy an application on Netty.
deploy an application on netty
Netty is not a web server or application server, but an IO framework which can be used to build web servers or any high-performance IO application.
If you really want to use Netty, you need to write an HTTP server yourself, or just use an HTTP framework built on Netty.
If you want to build an application using netty, have a look at the examples on https://github.com/netty/netty/tree/4.1/example/src/main/java/io/netty/example/
Deploying a container to the Cloud using Google Cloud Platform and Kubernetes Engine
Kubernetes is a way of orchestrating containers in the Cloud, enabling you to do things like auto-scale, fast deploys and manage running versions of containers. You simply create a container and upload it to a container repository. In this example I used Google’s Container Registry, it’s really simple to use and works brilliantly with their Kubernetes implementation.
Following this tutorial might help you with this:
https://medium.com/beyond/deploying-a-container-to-the-cloud-using-google-cloud-platform-and-kubernetes-engine-10d8ee3aba86

How can I automate the end-to-end testing of my serverless web app?

So my app stack looks like this in prod:
Backend: AWS API Gateway + Lambda + DynamoDB + ElastiCache(redis)
Backend - algo: Long running process - dockerized Java app running on ECS (Fargate)
Frontend: Angular app, served from S3
I'd like to use https://www.cypress.io/ for end-to-end testing and I'd like to use https://circleci.com/ for my build server.
How do I go about creating an environment to allow the end-to-end tests to run?
Options:
1) Use Terraform to script the infrastructure and create/tear down a whole environment every time we run the end-to-end tests. This sounds like a huge overhead in terms of spin up time. Also the environment creation and setup being fully scripted sounds like a lot of work!
2) Create a dedicated, long lived environment that we deploy to incrementally. This sounds like it'll get messy - not ideal for a place to run tests.
3) Make it so we can run the environment locally. So perhaps use AWS SAM or something like this project https://github.com/gertjvr/serverless-plugin-simulate
That last option may also answer the question of the local dev environment setup; however, everything that mocks serverless tech locally seems to be in beta, and I'm concerned that if I go down that road I might hit some issues after investing a lot of time...
"Also the environment creation and setup being fully scripted sounds like a lot of work" - it is. its also the correct thing to do. it allows you to not only version your code but the environments that the code runs in. automating your deployment is more than just your code. i'd recommend this.
You can use the Serverless Framework to define your app as Infrastructure as Code and create tests (a sketch follows the links below):
https://serverless.com
https://serverless.com/framework/docs/providers/aws/guide/testing
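In the spirit of that testing guide, a hedged sketch of unit-testing a Lambda handler without deploying it (the handler name, event shape and greeting logic are invented; keeping business logic out of the handler is the part that matters):

```typescript
// handler.ts -- keep business logic separate from the Lambda plumbing so it can be
// unit-tested without deploying. Names and event shape are simplified for illustration.
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export function greet(name: string): string {
  return `Hello, ${name}!`;
}

export async function handler(event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> {
  const name = event.queryStringParameters?.name ?? "world";
  return { statusCode: 200, body: JSON.stringify({ message: greet(name) }) };
}

// handler.test.ts -- invoke the handler directly with a fabricated event.
export async function testHandler(): Promise<void> {
  const event = { queryStringParameters: { name: "Ada" } } as unknown as APIGatewayProxyEvent;
  const result = await handler(event);
  if (result.statusCode !== 200 || !result.body.includes("Ada")) {
    throw new Error("unexpected handler response");
  }
}
```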
On my side, I split my testing strategy as below:
Api:
- Unit test: use your favorite framework for your language.
- Integration test: it depends on your infrastructure-as-code choice; if you use SAM or the Serverless Framework, you will be able to inject events directly into your function locally. If you want to cover integrations such as DynamoDB or S3, consider using LocalStack (https://github.com/localstack/localstack) to emulate those services (see the sketch after this list).
Front:
- For that part, I always mock API requests using stubs and only test the front-end part (the API part is already tested above). Then you can use Cypress or another framework.
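For the LocalStack part mentioned above, a minimal TypeScript sketch of switching the AWS SDK between LocalStack and the real endpoints (the STAGE flag, table name, region and dummy credentials are placeholders; http://localhost:4566 is LocalStack's default edge endpoint):

```typescript
// Sketch: point the AWS SDK at LocalStack during integration tests and at AWS otherwise.
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

const isLocal = process.env.STAGE === "local"; // hypothetical flag set by the test runner

const dynamo = new DynamoDBClient(
  isLocal
    ? {
        endpoint: "http://localhost:4566",
        region: "us-east-1",
        credentials: { accessKeyId: "test", secretAccessKey: "test" },
      }
    : {} // in AWS, fall back to the default endpoint and IAM credentials
);

export async function saveResult(id: string, payload: string): Promise<void> {
  await dynamo.send(
    new PutItemCommand({
      TableName: "e2e-results", // placeholder table name
      Item: { id: { S: id }, payload: { S: payload } },
    })
  );
}
```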
How about using endly, an e2e and automation runner?
It allows you to build testing workflows that automate build, deployment and data population, and that validate data stores (NoSQL: DynamoDB, Firebase; SQL: MySQL, BigQuery, PostgreSQL, etc.), logs (CloudWatch) and message buses (SNS, SQS, Cloud Pub/Sub), as well as triggering the backend or sending HTTP requests.
You can find some Lambda and Cloud Function examples here.
Or some more production-grade projects with e2e tests:
storage mirror
data ingestion
data sync

Usage of Cloud Foundry Spaces in the development chain

I am currently evaluating the possibility of introducing a private Java PaaS cloud. So far I am quite excited about the whole solution, especially combining Cloud Foundry with OpenStack.
What I am wondering though, is how this can be combined with development. I obviously want the developer to run the developed code on the cloud and no longer on his unmanaged workstation.
Is it possible to do the following:
The developer develops his application code on the local host OS. A virtual machine is used to build and run the application. I have seen this with Vagrant and liked it a lot. Ideally, the local Vagrant box is a Cloud Foundry space.
If the developer is OK with his code, he should push his application out of the local VM to a developer-specific acceptance space run by Cloud Foundry on the network. Here the application runs in a more production-like environment and automated acceptance/disaster recovery tests can be executed.
If the developer decides this is OK and merges his changes to the trunk (SVN/Git), a CI tool should deploy the application to the "global" test, acceptance and production spaces.
I assume the last point is no problem. I just cannot find a way, how the first steps can be achieved.
Any ideas?
Are you actually looking for a complete CF deployment on top of OpenStack?
That can be achieved using a BOSH Cloud Foundry deployment for OpenStack.
http://docs.cloudfoundry.com/docs/running/deploying-cf/openstack/
You can have different spaces in the CF deployment (test, production, etc.) and move applications from one space to another after testing is done.

Best practices (unit) testing Windows Azure

Within a short time period I'm going to start a project based on Windows Azure, and I was wondering what the experiences are with testing Windows Azure projects (in continuous integration, with a TFS build server), eventually using TDD.
Some things I was wondering:
Do you use mocking (in your own written wrapper class)?
Do you use the storage emulator?
Do you deploy the services to Azure and run the tests from the build server against the cloud? (What about costs?)
Thanks in advance!
The same good practices for writing unit tests for applications outside of Windows Azure apply. If you have an external dependency to what you are actually testing, that dependency should be mocked and injected for your granular unit test.
For example, when I'm using Windows Azure Storage Queues I will have an interface that I use to interact with the queue itself, so in my code consuming the queue service I can mock the subsystem using the interface and use dependency injection to inject the mock. This removes the necessity to actually deal with the emulator during unit tests. For the most part the actual concrete implementation of the code working with the queue is not much more than a very thin wrapper.
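A sketch of that wrapper-plus-mock pattern, written in TypeScript for brevity (the interface, service and queue names are invented; the same shape works in C# with an interface and your IoC container of choice):

```typescript
// The thin wrapper interface your code depends on instead of the storage SDK.
interface IMessageQueue {
  enqueue(message: string): Promise<void>;
}

// Code under test only sees the interface, so no emulator is needed in unit tests.
class OrderService {
  constructor(private readonly queue: IMessageQueue) {}

  async placeOrder(orderId: string): Promise<void> {
    await this.queue.enqueue(JSON.stringify({ orderId }));
  }
}

// Hand-rolled mock used in the unit test; a mocking library works just as well.
class FakeQueue implements IMessageQueue {
  public messages: string[] = [];
  async enqueue(message: string): Promise<void> {
    this.messages.push(message);
  }
}

export async function testPlaceOrderEnqueuesMessage(): Promise<void> {
  const queue = new FakeQueue();
  await new OrderService(queue).placeOrder("42");
  if (queue.messages.length !== 1) {
    throw new Error("expected exactly one queued message");
  }
}
```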
I personally don't shoot for 100% test coverage, so I may not have direct unit tests that utilize the concrete implementation of the wrappers. In many cases I try to have integration tests that will exercise these wrappers and exercise multiple aspects of the system working together. In some cases I can run the integration tests in the emulator (for Storage operations for example), but in some cases they simply have to be run with access to the Windows Azure environment (in the case of usage of ACS or Service Bus).
Ideally you'd like a set of scripts that can be run to spin up a minimal set of test servers in Azure, deploy your solution, and exercise the integration tests that can't be done on premises; then collect the results and have the script shut everything down (or optionally leave it running if you need that). Run the integration test suite that uses these scripts often enough to detect issues, but you certainly don't need to run it every time you check something in unless you are happy running the test environment all the time. If you are okay with the cost of a semi-permanent test environment running in Azure, just make sure the scripts do an update deployment rather than a delete-and-redeploy to cut the cost down a bit (the savings would be relative to how often the deploy occurs).
I believe this question is a very subjective one as you're likely to get several different opinions.