I am just starting to use AWS Amplify but can't figure out how you are supposed to commit the project to a source code repository so that others can work on the same project.
I created a React serverless project 'web_app', have created a few APIs and a simple front-end application, and now want to commit this to CodeCommit so it can be accessed by others.
Things get a bit confusing here, because for CI/CD it seems one should create a repository for the front-end application - usually the source files are in the 'web_app/src' folder.
But Amplify seems to have already created a git repository at the 'web_app' folder level, so am I supposed to create a CodeCommit repository, push the 'web_app' local repo to that remote, and then separately create another repository for the front end in order to use the CI/CD functions in AWS?
For some reason, whenever I try to push anything to AWS CodeCommit I get a 403 error.
OK - I'll answer this myself.
You just commit the entire project to a repo in CodeCommit. The project folder contains both the backend and the frontend code. The frontend code is usually in the /src folder and the backend code (CloudFormation files) is usually in the amplify folder.
Once you have the CodeCommit repo set up, you can use the Amplify Console or the amplify-cli to create a new backend or frontend environment. Amplify is smart enough to know where to find the backend and frontend code.
Bear in mind that the amplify-cli backend tooling creates a bunch of files that are placed in the frontend folder (/src), including the GraphQL mutations and queries that will be used in the frontend code.
If you have set up CI/CD then any 'git push' will result in a new build for the environment you are in. You can modify the build script to include or exclude rebuilding the backend - I think by default it will rebuild the backend if there are changes.
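For reference, a minimal amplify.yml of the kind the Amplify Console uses might look like the sketch below; the npm commands and the 'build' output folder are assumptions based on a typical React setup, so adjust them to match your project.

version: 1
backend:
  phases:
    build:
      commands:
        # Build and push the Amplify backend (the CloudFormation stacks in the amplify folder)
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        # Install front-end dependencies
        - npm ci
    build:
      commands:
        # Produce the production bundle
        - npm run build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*

Dropping the backend section (or its amplifyPush command) is one way to keep a CI/CD build from touching the backend and only redeploy the frontend.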
You can also manually rebuild the backend by using the amplify-cli 'amplify push' command.
Take care, because things can get out of sync and it seems old files can be left lying around that cause problems. Fortunately it doesn't take long to delete and rebuild an entire environment. Of course you may have to back up and reload your data first. Having some scripts to automatically load any seed data for development or testing is useful.
There is a lot of documentation out there but a lot of it seems to be quite confusing.
Is there any recommended method to create and deploy the Apigee API Proxy Bundle via a CI/CD pipeline (I'm using Azure DevOps)?
I want to avoid excessive API Proxy Bundles being created and deployed when there are no changes. I've already tested, and I see that identical bundles still create a new revision.
So far, my own solution is to write a PowerShell script to use apigeecli to download the current bundle and compare it against the apiproxy that I have locally in my repo. If it differs, I create and deploy a new API Proxy Bundle.
Has anyone seen anything better?
I have mainly automated with GitLab, but I will share my ideas, which may help with your specific case.
So we use version control to manage our Apigee repos. I have set up a GitLab pipeline that checks the diff any time we push to the repository, and we only redeploy the proxy to Apigee if there are changes. Normally when the pipeline is triggered, we check whether there are any changes to target servers, proxies and shared flows, and if changes are detected, we check the deployed revision and environments.
Through my deployment script, I am able to get a list of these changes and pass them to the pipeline as a CHANGES variable. This means that only the modified proxies will be deployed.
In my pipeline I can do something like git diff --name-only $CI_COMMIT_SHA..$CI_COMMIT_BEFORE_SHA > changes.txt and pass the contents of that file into CHANGES for deployment.
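As a rough sketch, the job in .gitlab-ci.yml could look something like the following; the apiproxies/ folder layout, the deploy.sh script and the branch name are placeholders for whatever your repository actually uses.

deploy_proxies:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
  script:
    # List the files touched by this push
    - git diff --name-only $CI_COMMIT_BEFORE_SHA..$CI_COMMIT_SHA > changes.txt
    # Reduce the file list to the names of the proxies that changed ('|| true' keeps the job alive when nothing matches)
    - CHANGES=$(grep '^apiproxies/' changes.txt | cut -d '/' -f 2 | sort -u || true)
    # Hand the list to the deployment script, which bundles and deploys only those proxies
    - ./deploy.sh "$CHANGES"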
I am deploying a Django application using the following steps:
Push updates to Git
Log into AWS
Pull updates from Git
The issue I am having is with my production.py settings file. I have it in my .gitignore so it does not get uploaded to GitHub, for security reasons. This, of course, means it is not available when I pull updates onto my server.
What is a good approach for making this file available to my app on the server without having to upload it to GitHub, where it would be exposed?
It is definitely a good idea not to check secrets into your repository. However, there's nothing wrong with checking in configuration that is not secret if it's an intrinsic part of your application.
In large scale deployments, typically one sets configuration using a tool for that purpose like Puppet, so that all the pieces that need to be aware of a particular application's configuration can be generated from one source. Similarly, secrets are usually handled using a secret store like Vault and injected into the environment when the process starts.
If you're just running a single server, it's probably fine to adjust your configuration or application to read secrets from the environment (or possibly a separate file) and set those values on the server. You can then include other configuration settings (secrets excluded) as a file in the repository. If you need more flexibility later, you can pick up other tools at that point.
I want to automate the deployment of a Cloud Function into a production project through Cloud Build, where the source files live in a Cloud Source Repository in the DEV project. How can I ensure that the moment I push code to the production branch of the DEV project's Cloud Source Repository, the Cloud Function gets created in the production project?
If I understand correctly, you are trying to trigger a build from a repository stored in another project.
This is not possible; build triggers must be in the same project as the repository.
I think my answer will help here: How to pass API parameters to GCP cloud build triggers
Basically what Claudio recommended: use the examples there to build your steps. I believe what you want is a trigger that fires when you push changes to the DEV project's production branch; when that trigger runs, you then add a step to either run the Cloud Function or use the REST API to trigger the build by its ID. See my example above.
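As one possible sketch, the cloudbuild.yaml attached to that trigger in the DEV project could deploy the function straight into the production project; this assumes the DEV project's Cloud Build service account has been granted the necessary Cloud Functions deployment roles in production, and the function name, region, runtime and project ID below are placeholders.

steps:
  # Deploy the function into the production project instead of the project running the build
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - functions
      - deploy
      - my-function
      - '--project=my-prod-project'
      - '--region=us-central1'
      - '--runtime=python39'
      - '--trigger-http'
      - '--source=.'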
I do not know whether this question makes sense, but if there is any way to solve this issue it will save me a lot of time.
I have an ASP.NET Core application that uses many client-side libraries like jQuery, Modernizr, etc. All of them are stored in the lib folder inside the wwwroot folder.
When I start publishing to AWS (with the AWS Toolkit) it starts zipping and publishing to the server as usual.
The problem is that zipping all of the libraries takes a lot of time. These libraries do not change during the project; I only change some pages or classes.
Is there any way to skip zipping some folders so publishing is faster?
You can add this to your AWS serverless template to exclude unwanted folders from the bundle.
package:
  exclude:
    - scripts/**
    - dynamodb/tables/**
    - policies/**
    - dynamodb/seeds/**
If you are using a CI/CD methodology, you can ask the code builder to run a script from the root of your repository that handles your package resolution and so on. Please refer to the documentation.
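For example, if the builder is AWS CodeBuild, a minimal buildspec.yml in the repository root could call such a script; the restore-packages.sh name and the dotnet publish output path are assumptions, not part of the original setup.

version: 0.2
phases:
  install:
    commands:
      # Restore client-side packages (npm, LibMan, or whatever the project uses)
      - ./scripts/restore-packages.sh
  build:
    commands:
      # Publish the ASP.NET Core app into a clean output folder
      - dotnet publish -c Release -o ./publish
artifacts:
  files:
    - 'publish/**/*'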
Question 1:
I am about to deploy my first Django website, and I was wondering what tools are recommended for gathering all your Django files.
For example, I don't need my Sass and CoffeeScript files; I just want the compiled CSS and JS files. I also want to use the correct production settings file.
Question 2:
Do I put these files ready for deployment into their own version control repository? I guess the advantage is that you can easily roll back changes?
Question 3:
Do I run my tests before gathering the files or before deploying?
Shell scripts could be a solution, but maybe there is a better way? I looked at Jenkins/Hudson, but that seems more like a tool that sits on top of the tools I am looking for.
For questions one and two, I'd recommend using a version control system for this. I'm sure you're already using some sort of version control, so you can just say which branch of your repository you would like to deploy. And yes, this makes rollbacks incredibly easy. Probably the most popular method for Django deployment is to package your files using git, and then deploy these files and run any deployment scripts using fabric.
Using git, packaging your files using your local repository would look something like:
git archive --format=tar HEAD | gzip > my_repo.tar.gz
Alternatively, you can first push your changes to a GitHub repository, and then have your deployment script clone the repository onto your production server.
For your third question, if you use this version control method for packaging your files, then just make sure when you are testing you have the deployment branch checked out.
I'll typically use Fabric for deploying most Django projects:
http://docs.fabfile.org/en/1.0.0/?redir
It has a decent API for communicating with remote servers, and it's all in Python - bonus!
You don't need to store your concatenated media files in a separate repo. They're only needed for production. In that case I've found libraries like django-mediasync and django-compress to be useful. They both provide template tags/settings that can concatenate and cache your static files for you depending on the DEBUG setting/environments (production vs development).
You can run your tests whenever you like. Some people run them as a version control hook to prevent broken code from being checked in, or during deployment, stopping the deployment if the tests fail.