How can I publish a serverless application on AWS faster? - amazon-web-services

I am not sure whether this question makes sense, but if there is any way to solve this issue it will save me a lot of time.
I have an ASP.NET Core application that includes many client-side libraries like jQuery, Modernizr, etc. All of them are stored in the lib folder inside wwwroot.
When I publish to AWS (with the AWS Toolkit) it starts zipping and uploading to the server as usual.
The problem is that zipping all of the libraries takes a lot of time. These libraries never change during the project; I only change some pages or classes.
Is there any way to exclude some folders from the zip so I can publish faster?

You can add this to your AWS serverless template to exclude unwanted files and folders from the deployment package.
package:
  exclude:
    - scripts/**
    - dynamodb/tables/**
    - policies/**
    - dynamodb/seeds/**
If you are using a CI/CD pipeline, you can have the build step run a script from the project's root folder to resolve packages and handle the rest. Please refer to the relevant documentation.

Related

How do I put an AWS Amplify project into CodeCommit?

I am just starting to use AWS Amplify but can't figure out how you are supposed to commit the project to a source code repository so that others can work on the same project.
I created a React serverless project 'web_app', built a few APIs and a simple front-end application, and now want to commit this to CodeCommit so it can be accessed by others.
Things get a bit confusing here because, for CI/CD, it seems one should create a repository for the front-end application - usually the source files are in the 'web_app/src' folder.
But Amplify seems to have already created a git repository at the 'web_app' folder level, so am I supposed to create a CodeCommit repository, push the 'web_app' local repo to that remote, and then separately create another repository for the front end in order to use the CI/CD functions in AWS?
For some reason, whenever I try to push anything to AWS CodeCommit I get a 403 error.
OK - I'll answer this myself.
You just commit the entire project to a repo in CodeCommit. The project folder contains both the backend and the frontend code. The frontend code is usually in the /src folder and the backend code (CloudFormation files) is usually in the amplify folder.
Once you have the CodeCommit repo set up, you can use the Amplify Console or the amplify-cli to create a new backend or frontend environment. Amplify is smart enough to know where to find the backend and frontend code.
Bear in mind that the backend amplify-cli code creates a bunch of files that are placed in the frontend folder (/src), including the GraphQL mutations and queries that will be used in the frontend code.
If you have set up CI/CD then any 'git push' will result in a new build for the environment you are in. You can modify the build script to include or exclude rebuilding the backend - I think by default it will rebuild the backend if there are changes.
You can also manually rebuild the backend by using the amplify-cli 'amplify push' command.
Take care, because things can get out of sync and it seems old files can be left lying around that cause problems. Fortunately it doesn't take long to delete and rebuild an entire environment. Of course you may have to back up and reload your data first. Having some scripts to automatically load any seed data for development or testing is useful.
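For example, a tiny seed-loading script along those lines might look like the sketch below (the table name, the seed file, and the assumption that the backend stores data in DynamoDB are all hypothetical):
# seed_dev_data.py - load seed items into a DynamoDB table (illustrative only)
import json
import boto3

def load_seed(table_name, seed_file):
    table = boto3.resource("dynamodb").Table(table_name)
    with open(seed_file) as f:
        items = json.load(f)                # a JSON array of item dicts
    with table.batch_writer() as batch:     # batches the writes for you
        for item in items:
            batch.put_item(Item=item)

if __name__ == "__main__":
    load_seed("Todo-dev", "seed/todos.json")   # hypothetical table and file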
There is a lot of documentation out there but a lot of it seems to be quite confusing.

Best way to publicly download a full folder with Amazon?

I'm building a launcher (in C#) that downloads a full game or app. The app can be very large (e.g. 5 GB) and I need to download it with the correct folder hierarchy, so the same launcher can check whether the user has the correct app or whether it needs to be repaired or updated.
I'm trying to do that with Amazon S3 and CloudFront, but it seems that I can only get individual objects and not the full folder of the app.
I have also stored the folder on an EC2 instance, and that works fine, but it seems EC2 is not designed for this, so downloads are extremely slow.
Is there any Amazon service for doing that?
Have you considered zipping the files first? It solves a lot of issues, e.g. folder structure and compression, and it works great from S3 and CloudFront. It's a common solution for this use case.
You can do this in your application with the DownloadDirectory method of the TransferUtility class in the .NET SDK.
You can read more about the DownloadDirectory method here. By default I believe it only downloads objects in the root path, so don't forget to do it recursively for sub-folders if necessary.
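If you end up scripting this outside the .NET SDK, a rough boto3 (Python) sketch of the same idea, listing every object under a prefix and downloading it while preserving the folder structure, might look like this (the bucket name and prefix are hypothetical):
import os
import boto3

def download_prefix(bucket, prefix, dest_dir):
    # Download every object under 'prefix' into dest_dir, keeping the folder structure.
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if key.endswith("/"):           # skip "folder" placeholder objects
                continue
            local_path = os.path.join(dest_dir, os.path.relpath(key, prefix))
            os.makedirs(os.path.dirname(local_path) or ".", exist_ok=True)
            s3.download_file(bucket, key, local_path)

download_prefix("my-game-bucket", "releases/1.0/", "./game")   # hypothetical names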

Using Serverless framework on monorepo of multiple services with shared code

I have a pythonic serverless project on AWS, in which several services are contained in a single repository (a monorepo) which looks like:
/
serverless.yml
/service1
lambda_handler.py
/service2
lambda_handler.py
/general
__init__.py
utils.py
'general' is a package that is shared between different services, and therefore we must use a single 'serverless.yml' file in the root directory (otherwise it won't be deployed).
We have two difficulties:
A single 'serverless.yml' may be too messy and hard to maintain, and it prevents us from using global configuration (which may be quite useful).
Deploying a single service is complicated. I guess that the 'package' feature may help, but I'm not quite sure how to use it right.
Any advice or best practices for this case?
It's better to use an individual serverless.yml file for each service. To use the shared code, you can either:
Convert the shared code into a library and use it as a dependency, installed via a package manager for each individual service just like any other library (a rough packaging sketch is shown below). This is useful because updating the version of the common code won't affect the other services.
Keep the shared code in a different repository and pull it into each individual service as a git submodule.
For more information, refer to the article Can we share code between microservices, which I originally wrote with serverless in mind.
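Following the library approach, a minimal sketch (all names here are illustrative) would move the shared code into its own installable package that each service lists in its requirements file:
# Illustrative layout for the shared code as its own installable package:
#
#   general-lib/
#       setup.py
#       general/
#           __init__.py
#           utils.py

# general-lib/setup.py
from setuptools import setup, find_packages

setup(
    name="general",            # hypothetical package name
    version="0.1.0",
    packages=find_packages(),  # picks up the 'general' package
)
Each service can then pin general==0.1.0 in its own requirements.txt (installed from a local path, a private index, or a git URL pointing at a tag), so bumping the shared code for one service does not force redeploying the others.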

Source code structure: deploying Go with Subpackages

When there is just one go repository and it imports only public dependencies, deploying to (for example, a Docker container on AWS) is extremely straightforward.
However, I have a question about how to use subpackages with go.
Suppose we have a monorepo with 3 packages.
/src
- /appA
- /appB
- /someSharedDep
How are deployments typically built so that you deploy appA and someSharedDep to one server and appB and someSharedDep to another server?
I imagine there needs to be some creative employments of our friend the GOPATH, but some help on the topic would be appreciated.
Bonus points if we're talking about an elastic beanstalk deployment.
Suspicions
I have some thoughts on how to approach the problem now (and I'll add more or submit an answer if this becomes more complete).
Use vendoring, this means you have the source code of all your dependencies checked into your own repo. It doesn't sound good (esp. if you're used to NodeJS) but it works.
Now that you use vendoring, vendor your own submodules into the ./vendor folder in the application. Yes you will have copies of the same code in two places but whatever.
Create automated scripts to help manage some of these things. I'm still looking for tools that make vendoring more convenient.
Some problems I still face:
When vendoring, sometimes the dependency has files that have a main() function or declare package main, usually in an ./example subfolder. These have to be removed manually. I don't like editing the working source of someone else's project though!

Django: pre-deployment

Question 1:
I am about to deploy my first Django website and I was wondering what tools are recommended for gathering all your Django files.
For example, I don't need my Sass and CoffeeScript files; I just want the compiled CSS and JS files. I also want to use the correct production settings file.
Question 2:
Do I put these deployment-ready files into their own version control repository? I guess the advantage is that you can easily roll back changes?
Question 3:
Do I run my tests before gathering the files or before deploying?
Shell scripts could be a solution, but maybe there is a better way? I looked at Jenkins/Hudson, but that seems more like a tool that sits on top of the tools I am looking for.
For questions one and two, I'd recommend using a version control system for this. I'm sure you're already using some sort of version control, so you can just say which branch of your repository you would like to deploy. And yes, this makes rollbacks incredibly easy. Probably the most popular method for Django deployment is to package your files using git, and then deploy these files and run any deployment scripts using fabric.
Using git, packaging your files from your local repository would look something like:
git archive --format=tar HEAD | gzip > my_repo.tar.gz
Alternatively, you can first push your changes to a GitHub repository, and then have your deployment script clone the repository onto your production server.
For your third question, if you use this version control method for packaging your files, then just make sure when you are testing you have the deployment branch checked out.
I'll typically use Fabric for deploying most Django projects:
http://docs.fabfile.org/en/1.0.0/?redir
It has a decent api for communicating with remote servers and it's all in Python – bonus!
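As a rough illustration (the host name and paths are hypothetical, and this assumes the Fabric 1.x API linked above), a minimal fabfile tying the git-archive packaging from the previous answer to a remote deploy might look like:
# fabfile.py - minimal deployment sketch (hypothetical host/paths, Fabric 1.x API)
from fabric.api import env, local, put, run, cd

env.hosts = ["deploy@prod.example.com"]    # hypothetical production host

def test():
    # run the Django test suite locally; a non-zero exit aborts the fab run
    local("python manage.py test")

def pack():
    # package the currently checked-out branch, as shown in the previous answer
    local("git archive --format=tar HEAD | gzip > my_repo.tar.gz")

def deploy():
    test()
    pack()
    put("my_repo.tar.gz", "/tmp/my_repo.tar.gz")
    with cd("/srv/my_app"):                # hypothetical deployment directory
        run("tar xzf /tmp/my_repo.tar.gz")
        run("pip install -r requirements.txt")
        run("python manage.py collectstatic --noinput")
        run("touch wsgi.py")               # or restart your app server however you prefer
Running fab deploy then tests, packages, uploads, and unpacks in one step; because the test task runs first, a failing test suite stops the deploy before anything is uploaded.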
You don't need to store your concatenated media files in a separate repo. They're only needed for production. In that case I've found libraries like django-mediasync and django-compress to be useful. They both provide template tags/settings that can concatenate and cache your static files for you depending on the DEBUG setting/environments (production vs development).
You can run your tests whenever. Some people run them as a version control hook to prevent broken code from being checked in, or during deployment so that a test failure stops the deploy.