By default, Elastic Beanstalk uses the Dockerrun.aws.json file to deploy. We have a requirement to keep multiple Dockerrun files, such as Dockerrun.aws.json and Dockerrun-staging.aws.json.
Is it possible to make sure the deployment uses the Dockerrun-staging.aws.json file instead of Dockerrun.aws.json?
I had the same problem and could not find any documentation on how to point the deploy at a different file. I ended up creating separate folders for staging and production and then executing eb deploy from the target environment's folder:
:docker-run$ tree
.
├── production
│   └── Dockerrun.aws.json
└── staging
    └── Dockerrun.aws.json
2 directories, 2 files
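Deploying is then just a matter of running the EB CLI from the folder that holds the Dockerrun.aws.json you want, assuming each folder has been set up with eb init. A rough sketch (the environment names are only placeholders):

cd staging
eb deploy my-app-staging        # bundles staging/Dockerrun.aws.json
cd ../production
eb deploy my-app-production     # bundles production/Dockerrun.aws.json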
I'm managing a repo that maintains common utilities. It holds my common utils and a Jenkinsfile, and all the dependencies are managed by Poetry, so I have some TOML files to manage. We keep adding new utils as we go.
.
├── Jenkinsfile
├── README.md
├── common_utils
│   ├── __init__.py
│   ├── aws.py
│   ├── sftp.py
├── pre-commit.yaml
├── poetry.lock
└── pyproject.toml
When I push my code to Git, my Jenkinsfile packages it and publishes it to AWS CodeArtifact. But every time I push, I have to manually update the TOML version locally before I push to the dev branch, and then pull a new branch from dev and update the version again before pushing to the master branch; otherwise I get a conflict on the TOML version in CodeArtifact.
I can't modify the Jenkinsfile, because even though bumping the version there would solve the conflict, the version in the source code wouldn't be modified, which means I would still have to update all the versions by hand. I'm not sure if pre-commit will help, because I don't want to bump the version every time I push to a feature branch.
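To be concrete, the manual step I keep repeating on every branch is just a version bump in pyproject.toml, which could equally be done with Poetry's own command (the patch rule here is only an example):

poetry version patch   # rewrites the version field in pyproject.toml, e.g. 0.3.1 -> 0.3.2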
Is there a known way to handle this or is going with a release branch the only option?
I'm quite new to the Amplify function world. I've been struggling to deploy my Golang function, connected to a DynamoDB stream. I am able to run my Lambda successfully by manually uploading a .zip I created myself after building the binary with GOARCH=amd64 GOOS=linux go build src/index.go (I develop on a Mac), but when I use the Amplify CLI tools I am not able to deploy my function.
This is the folder structure of my function, myfunction:
+ myfunction
├── amplify.state
├── custom-policies.json
├── dist
│   └── latest-build.zip
├── function-parameters.json
├── go.mod
├── go.sum
├── parameters.json
├── src
│   ├── event.json
│   └── index.go
└── tinkusercreate-cloudformation-template.json
The problem is that I can't use the amplify function build command: it looks like it creates a .zip file containing my source file index.go (not the compiled binary), so the Lambda, regardless of the handler I set, seems unable to run it. I get errors like
fork/exec /var/task/index.go: exec format error: PathError null
or
fork/exec /var/task/index: no such file or directory: PathError null
depending on the handler I set.
Is there a way to make amplify function build work for a Golang Lambda? I would like to be able to run amplify function build myfunction successfully, so that I can then deliver a working deployment with amplify push to my target environment.
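For reference, this is roughly the manual workflow that does work for me today; the idea of dropping the result into dist/latest-build.zip so that amplify push ships it is my own assumption, not documented Amplify behavior:

# Cross-compile for the Lambda runtime (run from the myfunction folder).
GOOS=linux GOARCH=amd64 go build -o index src/index.go
# Zip just the binary, the same way I upload it by hand today.
zip dist/latest-build.zip index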
I am developing a serverless data pipeline on AWS. Compared to the Serverless framework, Terraform has better support for services like Glue.
The good thing about Serverless is that you can pass the --stage argument when deploying. This allows creating an isolated stack on AWS. When developing new features for our data pipeline, I can deploy my current state of the code like
serverless deploy --stage my-new-feature
This allows me to run an isolated integration test on the AWS account I share with my colleagues. Is this possible using Terraform?
Did you have a look at workspaces? https://www.terraform.io/docs/state/workspaces.html
Terraform manages resources by way of state.
If a resource already exists in the state file and Terraform doesn't detect any drift between the configuration, the state, and the provider (e.g. something was changed in the AWS console or by another tool), then it will show that there are no changes. If it does detect some form of drift, then a plan will show you what changes it needs to make to bring the existing state of things in the provider in line with what is defined in the Terraform code.
Separating state between different environments
If you want to have multiple environments or even other resources that are separate from each other and not managed by the same Terraform action (such as a plan, apply or destroy) then you want to separate these into different state files.
One way to do this is to separate your Terraform code by environment and use a state file matching the directory structure of your code base. A simple example might look something like this:
terraform/
├── production
│   ├── main.tf -> ../stacks/main.tf
│   └── terraform.tfvars
├── stacks
│   └── main.tf
└── staging
    ├── main.tf -> ../stacks/main.tf
    └── terraform.tfvars
stacks/main.tf
variable "environment" {}
resource "aws_lambda_function" "foo" {
  function_name = "foo-${var.environment}"
  # ...
}
production/terraform.tfvars
environment = "production"
staging/terraform.tfvars
environment = "staging"
This uses symlinks so that staging and production are kept in line in code with the only changes being introduced by the terraform.tfvars file. In this case it changes the Lambda function's name to include the environment.
This is what I generally recommend for static environments as it's much clearer from looking at the code/directory structure which environments exist.
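Day to day this is driven by running Terraform from the directory of the environment you want to touch, for example (these are plain Terraform commands; how state is stored per directory, locally or via a backend key, is up to you):

cd terraform/staging
terraform init    # this directory gets its own state
terraform plan
terraform apply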
Dynamic environments
However, if you have more dynamic environments, such as per feature branch, then it's not going to work to hard code the environment name directly in your terraform.tfvars file.
In this case I would recommend something like this:
terraform/
├── production
│   ├── main.tf -> ../stacks/main.tf
│   └── terraform.tfvars
├── review
│   ├── main.tf -> ../stacks/main.tf
│   └── terraform.tfvars
├── stacks
│   └── main.tf
└── staging
    ├── main.tf -> ../stacks/main.tf
    └── terraform.tfvars
This works the same way, but I would omit the environment variable from the review structure so that it's set interactively or via CI environment variables (e.g. export TF_VAR_environment=${TRAVIS_BRANCH} when running in Travis CI; adapt this to whatever CI system you use).
Keeping state separate between review environments on different branches
This only gets you halfway there, though, because if you were just using the default workspace, another person running Terraform against a different branch would see that Terraform wants to destroy/update any resources that have already been created.
Workspaces provide an option for separating state in a more dynamic way and also allow you to interpolate the workspace name into Terraform code:
resource "aws_instance" "example" {
  tags = {
    Name = "web - ${terraform.workspace}"
  }
  # ... other arguments
}
Instead the review environments will need to create or use a dynamic workspace that is scoped for that branch only. You can do this by running the following command:
terraform workspace new [NAME]
If the workspace already exists then you should instead use the following command:
terraform workspace select [NAME]
In CI you can use the same environment variables as before to automatically use the branch name as your workspace name.
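A minimal CI sketch of that, reusing the Travis variable from earlier (swap it for your CI system's equivalent):

export TF_VAR_environment="${TRAVIS_BRANCH}"
terraform workspace select "${TRAVIS_BRANCH}" || terraform workspace new "${TRAVIS_BRANCH}"
terraform plan
terraform apply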
I have a file provisioner configured in my Packer template JSON:
"provisioners": [{
"type": "file",
"source": "packer/api-clients/database.yml",
"destination": "/tmp/api-clients-database.yml"
},
The configuration above doesn't work when I try to build an AMI on AWS; it always says:
Bad source 'packer/api-clients/database.yml': stat packer/api-clients/database.yml: no such file or directory
If I do this:
"source": "api-clients/database.yml",
It works like a charm. But I must have all my Packer files inside a packer folder within my app folder for organization purposes.
What am I doing wrong?
My app folder is like this:
api_v1
├── template.json
├── app
│   ├── bin
│   ├── config
│   ├── packer
│   │   ├── api-clients
│   │   │   └── database.yml
│   ├── lib
│   ├── log
│   ├── ...
It seems to have something to do with relative vs. absolute paths in Packer, but I couldn't figure out what is wrong...
Thanks in advance,
Since the path doesn't start with a /, it's a relative path. Relative paths are resolved against the current working directory when executing packer build.
With source packer/api-clients/database.yml you have to run packer from the app directory, i.e.
packer build ../template.json
With source api-clients/database.yml you have to run packer from the packer directory, i.e.
packer build ../../template.json
For more info see Packer documentation - File provisioner: source.
It is, as you have surmised, a path thing.
You don't say from which folder you are calling Packer or what the calling command is, nor, when you have it working with "source": "api-clients/database.yml", whether you have moved the api-clients folder or are running Packer from that location.
If your folder structure will always look that way, then to avoid confusion use a full path for the source; it will always work no matter where you run Packer from,
eg
/api_v1/app/packer/api-clients/database.yml
If you must use relative paths, then make sure that the source path is always relative to the folder from which Packer is run.
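For example, if you settle on always running Packer from the api_v1 project root (my choice of a fixed working directory here, not something stated in the question), the source would be written relative to that root:

# template.json would then use "source": "app/packer/api-clients/database.yml"
packer build template.json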
I'm new to Go.
I am trying to deploy a simple web project to EB without success.
I would like to deploy a project with the following local structure to Amazon EB:
$GOPATH
├── bin
├── pkg
└── src
    ├── github.com
    │   ├── AstralinkIO
    │   │   └── api-server <-- project/repository root
    │   │       ├── bin
    │   │       ├── cmd <-- main package
    │   │       ├── pkg
    │   │       ├── static
    │   │       └── vendor
But I'm not sure how to do that; when the build runs, Amazon treats api-server as the $GOPATH, and of course the import paths are then broken.
I read that most of the time it's best to keep all repos under the same workspace, but it makes deployment harder.
I'm using a Procfile and a Buildfile to customize the output path, but I can't find a solution for the dependencies.
What is the best way to deploy such project to EB?
A long time has passed since I used Beanstalk, so I'm a bit rusty on the details, but the basic idea is as follows. AWS Beanstalk support for Go is a bit odd by design. It basically extracts your source files into a folder on the server, declares that folder as the GOPATH, and tries to build your application assuming that your main package is at the root of your GOPATH, which is not a standard layout for Go projects. So your options are:
1) Package your whole GOPATH as the "source bundle" for Beanstalk. Then you should be able to write a build.sh script that changes GOPATH and builds it your way, and call build.sh from your Buildfile (see the sketch at the end of this answer).
2) Change your main package to be a regular package (e.g. github.com/AstralinkIO/api-server/cmd). Then create an application.go file at the root of your GOPATH (yes, outside of src, while all actual packages are in src as they should be). Your application.go will become your "package main" and will only contain a main function (which will call your current Main function from github.com/AstralinkIO/api-server/cmd). Should do the trick. Though your mileage might vary.
3) A somewhat easier option is to use the Docker-based Go platform instead. It still builds your Go application on the server with mostly the same issues as above, but it's better documented, and being able to test it locally helps a lot with getting the configuration and build right. It will also give you some insight into how Beanstalk builds Go applications, which helps with options 1 and 2. I used this option myself until I moved to plain EC2 instances, and I still use the skills gained from it to build my current app releases using Docker.
4) Your best option, though (in my humble opinion), is to build your app yourself and package it as a ready-to-run binary file. See the second bullet point paragraph here.
Well, whichever option you choose - good luck!
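As a rough sketch of the build.sh mentioned in option 1 (the bundle layout and the output path bin/application are assumptions on my part, so adjust them to your Procfile):

#!/usr/bin/env bash
set -euo pipefail
# Beanstalk extracts the source bundle into the current folder; treat that folder as the GOPATH.
export GOPATH="$(pwd)"
# Build the actual main package and emit the binary the Procfile points at.
go build -o bin/application github.com/AstralinkIO/api-server/cmd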