Packer file provisioner doesn't copy - amazon-web-services

I have a file provisioner configured in my Packer template JSON:
"provisioners": [{
"type": "file",
"source": "packer/api-clients/database.yml",
"destination": "/tmp/api-clients-database.yml"
},
The provisioner above doesn't work when I'm trying to build an AMI on Amazon AWS; it always fails with:
Bad source 'packer/api-clients/database.yml': stat packer/api-clients/database.yml: no such file or directory
If I do this:
"source": "api-clients/database.yml",
It works like a charm. But I need to keep all my Packer files inside a packer folder within my app folder for organization purposes.
What am I doing wrong?
My app folder is like this:
api_v1
├── template.json
├── app
│   ├── bin
│   ├── config
│   ├── packer
│   │   ├── api-clients
│   │   │   └── database.yml
│   ├── lib
│   ├── log
│   ├── ...
It seems to have something to do with relative vs. absolute paths in Packer, but I couldn't figure out what is wrong...
Thanks in advance,

Since the path doesn't start with a /, it is a relative path. Relative paths are resolved against the current working directory at the time you execute packer build.
With source packer/api-clients/database.yml you have to run packer from the app directory, i.e.
packer build ../template.json
With source api-clients/database.yml you have to run packer from the packer directory, i.e.
packer build ../../template.json
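A third option, sketched here from the tree shown in the question, is to keep running packer from the api_v1 directory (where template.json lives) and make the source relative to that directory:
"provisioners": [{
  "type": "file",
  "source": "app/packer/api-clients/database.yml",
  "destination": "/tmp/api-clients-database.yml"
}],
and then run:
packer build template.json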
For more info see Packer documentation - File provisioner: source.

It is, as you have surmised, a path issue.
You don't say which folder you are calling packer from or what the exact command is, nor, when it works with "source": "api-clients/database.yml", whether you moved the api-clients folder or simply ran packer from that location.
If your folder structure will always look that way, then to avoid confusion use a full path for the source; it will always work no matter where you run packer from, e.g.
/api_v1/app/packer/api-clients/database.yml
If you must use relative paths, make sure that the source path is always relative to the folder in which packer is run.
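For example, a minimal sketch of the provisioner with a full path, assuming the project is checked out at /api_v1:
"provisioners": [{
  "type": "file",
  "source": "/api_v1/app/packer/api-clients/database.yml",
  "destination": "/tmp/api-clients-database.yml"
}],
This resolves the same file regardless of the directory packer build is run from.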

Related

Amplify functions deployment not working for Golang runtime

I'm quite new to the Amplify function world. I've been struggling to deploy my Golang function, which is connected to a DynamoDB stream. I am able to run my lambda successfully by manually uploading a .zip I created myself after building the binary with GOARCH=amd64 GOOS=linux go build src/index.go (I develop on a Mac), but when I use the Amplify CLI tools I am not able to deploy my function.
This is the folder structure of my function, myfunction:
myfunction
├── amplify.state
├── custom-policies.json
├── dist
│   └── latest-build.zip
├── function-parameters.json
├── go.mod
├── go.sum
├── parameters.json
├── src
│   ├── event.json
│   └── index.go
└── tinkusercreate-cloudformation-template.json
The problem is that I can't use the amplify function build command, since it looks like it creates a .zip file containing my source file index.go (not the binary), so the lambda, regardless of the handler I set, doesn't seem able to run it from that source. I get errors like
fork/exec /var/task/index.go: exec format error: PathError null or
fork/exec /var/task/index: no such file or directory: PathError null
depending on the handler I set.
Is there a way to make Amplify's function build work for a Golang lambda? I would like to successfully run amplify function build myfunction so that I can deliver a working deployment to my target environment with amplify push.
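For reference, a minimal sketch of the manual build the question describes (the output name and zip path are assumptions, not something the Amplify CLI requires):
# cross-compile the handler for Lambda's Linux environment (run on macOS)
GOOS=linux GOARCH=amd64 go build -o index src/index.go
# package the resulting binary, not the source file
zip dist/latest-build.zip index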

How to use variables from other modules in Terraform: Adding Host Project id to the Service Projects. (GCP)

My infrastructure is composed of a Host Project and several Service Projects that use its Shared VPC.
I have refactored the .tf files of my infrastructure as follows:
├── env
│   ├── dev
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── pre
│   └── pro
├── host
│   ├── main.tf
│   ├── outputs.tf
│   ├── terraform.tfvars
│   └── variables.tf
└── modules
    ├── compute
    ├── network
    └── projects
The order of creation of the infrastructure is:
terraform apply in /host
terraform apply in /env/dev (for instance)
In the main.tf of the host directory I have created the VPC and enabled Shared VPC hosting:
# Creation of the hosted network
resource "google_compute_network" "shared_network" {
  name                    = var.network_name
  auto_create_subnetworks = false
  project                 = google_compute_shared_vpc_host_project.host_project.project
  mtu                     = "1460"
}

# Enable shared VPC hosting in the host project.
resource "google_compute_shared_vpc_host_project" "host_project" {
  project    = google_project.host_project.project_id
  depends_on = [google_project_service.host_project]
}
The issue comes when I have to refer to the Shared VPC network in the Service Projects.
In the main.tf from env/dev/ I have set the following:
resource "google_compute_shared_vpc_service_project" "service_project_1" {
host_project = google_project.host_project.project_id
service_project = google_project.service_project_1.project_id
depends_on = [
google_compute_shared_vpc_host_project.host_project,
google_project_service.service_project_1,
]
}
QUESTION
How do I refer to the Host Project ID from another directory in the Service Project?
What I have tried so far
I have thought of using Output Values and Data Sources:
In host/outputs.tf I declared the Project ID as an output:
output "project_id" {
value = google_project.host_project.project_id
}
But then I end up not knowing how to consume this output in my env/dev/main.tf.
I have also thought of Data Sources, fetching the Host Project ID in env/dev/main.tf. But in order to fetch it I would need its name (which defeats the purpose of providing it programmatically if I have to hardcode it).
What should I try next? What am I missing?
The files under the env/dev folder can't see anything above them, only any modules they reference.
You could refactor the host folder into a module to allow access to its outputs... but that adds the risk that the host will be destroyed whenever you destroy a dev environment.
I would try running terraform output -raw project_id after creating the host and piping it to a text file or an environment variable, then using that as the input for a new "host_project" or similar variable in the env/dev deployment.
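A minimal sketch of that approach, assuming a new host_project_id variable in the env/dev configuration (the names are illustrative):
# in host/, after terraform apply:
export TF_VAR_host_project_id="$(terraform output -raw project_id)"
Then in env/dev:
# env/dev/variables.tf
variable "host_project_id" {
  type        = string
  description = "Project ID of the Shared VPC host project"
}

# env/dev/main.tf
resource "google_compute_shared_vpc_service_project" "service_project_1" {
  host_project    = var.host_project_id
  service_project = google_project.service_project_1.project_id
}
Terraform picks up the TF_VAR_host_project_id environment variable automatically when you run terraform apply in env/dev.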

How do I run unit tests from smaller applications inside of a parent directory?

I'm working in a repository with multiple smaller applications for various lambdas. I'd like to be able to run cargo test from the top level directory, but I can't seem to find a way to get this to work since the files aren't nested within a top level src directory.
├── cloudformation
├── apps
│   ├── app1
│   │   └── src
│   └── app2
│       └── src
└── otherStuff
Ideally I could run cargo test from the top level and it would dig into apps and run tests from the src directory nested within each individual app. Is there a way to accomplish this?
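One common way to do this (a sketch, assuming each app already has its own Cargo.toml) is a Cargo workspace: put a workspace-only Cargo.toml at the repository root that lists the apps as members.
# Cargo.toml at the top level (no [package] section, workspace only)
[workspace]
members = [
    "apps/app1",
    "apps/app2",
]
Running cargo test from the top-level directory will then build and test every member of the workspace.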

AWS CodeBuild buildspec.yml get all files and subfolders recursively

I'm trying to use AWS CodeBuild to get all files and subfolders inside a nested public folder and deploy them to an S3 bucket using CodePipeline. I was able to hook them all together, but I'm struggling to configure the buildspec.yml file to get the output I want.
My folder structure:
<path>/public/
├── 404.html
├── css
│   ├── ...
├── fonts
│   ├── bootstrap
│   │   ├── ...
│   ├── icomoon
│   │   ├── icomoon
│   │   │   ├── ...
│   └── simple-line-icons
│       └── ...
├── images
│   ├── ...
├── index.html
├── index.xml
├── js
│   ├── ...
└── tags
    └── index.xml
I need to put everything (including the subfolders) inside the public folder into the root of an S3 bucket.
So far I've tried following the docs here, here and here. I've tried using:
**/* to get everything inside the public folder recursively, but then the S3 bucket keeps the path to the folder, so index.html is not in the root.
discard-paths: yes to remove the path to the public folder, but then all files end up flat in the root of the S3 bucket and no sub-folder structure is kept.
base-directory: as described here.
artifacts:
  secondary-artifacts:
    artifact1:
      files:
        - directory/file
    artifact2:
      files:
        - directory/file2
The above was meant to keep the folder structure, but my build failed.
I also tried different combinations of all the syntaxes above, but my build just failed.
TL;DR You probably don't need "discard-paths: yes". Drop it and instead use "base-directory" together with the "**/*" glob pattern. Read on for the reasoning why.
So yeah, as it usually happens, the culprit was pretty trivial. In my case the directory structure looked like this:
dist/
|-- favicon.ico
|-- index.html
|-- js
| `-- app.js
`-- revision.txt
So in order for the artifact to contain everything from the "dist" directory whilst preserving the nested directories structure, the buildspec had to look like this:
artifacts:
  files:
    - '**/*'
  base-directory: 'dist'
Essentially "base-directory" parameter behaves as unix's "cd" command, so at first CodeBuild jumps to that directory and only after that resolves the glob pattern, in this case - **/*. If you also happen to use "discard-paths" then what happens is that, presumably, CodeBuild will change current directory to "dist", resolve the files structure correctly but then remove all path prefixes from the resolved paths, thus the result will be a plain directory structure with no nesting.
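To illustrate with the "dist" layout above, the artifact contents end up roughly like this (paths shown for illustration only):
# base-directory: 'dist' with files: '**/*'
favicon.ico
index.html
js/app.js
revision.txt

# the same, but with discard-paths: yes added
favicon.ico
index.html
app.js
revision.txt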
Hope that helps!
I've figured out a way to work around it. I don't think it's the best way, but it works. I hope you find this solution useful. buildspec.yml:
version: 0.2
phases:
  build:
    commands:
      - mkdir build-output
      - cp -R <path>/public/ build-output
  post_build:
    commands:
      - mv build-output/**/* ./
      - mv build-output/* ./
      - rm -R build-output *.yml LICENSE README* .git*
artifacts:
  files:
    - '**/*'
In words:
I copy everything inside the nested <path>/public folder out to a folder called ./build-output.
Then in post_build I move everything out of folder build-output.
Delete the files pulled from my GitHub repo that aren't needed to host the static website in the S3 bucket.
Then I get the result I want: all files inside public in the root of the S3 bucket with the right folder tree.
Update:
You can also use my buildspec.yml example file here. Since the example is out of the scope of this question, I won't paste the code here.
I explain it in detail here.
I was running into a similar issue, and after many permutations the syntax below worked for me.
artifacts:
  files:
    - './**/*'
  base-directory: public
name: websitename
This worked for me.
/
  buildspec.yml
  /public
    index.html
    /js
    /img
  appspec.yml
  /src
In buildspec.yml:
artifacts:
  files:
    - 'public/*'
  discard-paths: yes
For those who are facing this issue while using CodeBuild in CodePipeline and have already tried the correct configuration for the buildspec.yml file: the culprit for me was in the deploy stage, where I had set the Source artifact as the Input artifact instead of the Build artifact.

How to deploy a Go web application in Beanstalk with custom project folder structure

I'm new to Go.
I am trying to deploy a simple web project to EB without success.
I would like to deploy a project with the following local structure to Amazon EB:
$GOPATH
├── bin
├── pkg
└── src
    └── github.com
        └── AstralinkIO
            └── api-server    <-- project/repository root
                ├── bin
                ├── cmd       <-- main package
                ├── pkg
                ├── static
                └── vendor
But I'm not sure how to do that; when running the build command, Amazon treats api-server as the $GOPATH, and of course the import paths break.
I read that most of the time it's best to keep all repos under the same workspace, but that makes deployment harder.
I'm using a Procfile and a Buildfile to customize the output path, but I can't find a solution for the dependencies.
What is the best way to deploy such project to EB?
A long time has passed since I used Beanstalk, so I'm a bit rusty on the details, but the basic idea is as follows. AWS Beanstalk support for Go is a bit odd by design: it basically extracts your source files into a folder on the server, declares that folder as the GOPATH, and tries to build your application assuming that your main package is at the root of the GOPATH, which is not a standard layout for Go projects. So your options are:
1) Package your whole GOPATH as the "source bundle" for Beanstalk. Then you should be able to write a build.sh script that changes GOPATH and builds the application your way, and call build.sh from your Buildfile (see the sketch after this list).
2) Change your main package to be a regular package (e.g. github.com/AstralinkIO/api-server/cmd). Then create an application.go file at the root of your GOPATH (yes, outside of src, while all actual packages stay in src as they should). That application.go becomes your "package main" and contains only a main function (which calls your current Main function from github.com/AstralinkIO/api-server/cmd). That should do the trick, though your mileage may vary.
3) A slightly easier option is to use the Docker-based Go platform instead. It still builds your Go application on the server with mostly the same issues as above, but it is better documented, and being able to test it locally helps a lot with getting the configuration and build right. It will also give you some insight into how Beanstalk builds Go applications, which helps with options 1 and 2. I used this option myself until I moved to plain EC2 instances, and I still use the skills gained from it to build my current app releases with Docker.
4) Your best option, though (in my humble opinion), is to build your app yourself and package it as a ready-to-run binary file. See the second bullet point paragraph here.
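As a rough sketch of option 1 (the file names follow the Beanstalk Buildfile/Procfile conventions; the paths and binary name are assumptions based on the layout in the question):
# Buildfile
make: ./build.sh

# build.sh
#!/usr/bin/env bash
set -e
# treat the extracted source bundle as the GOPATH and build the real main package
export GOPATH="$(pwd)"
go build -o bin/application github.com/AstralinkIO/api-server/cmd

# Procfile
web: bin/application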
Well, whichever option you choose - good luck!